Comprehensive audit of organizational readiness for AI adoption across People, Processes, and Technology
Critical pillar A2: IP/Rights/Legal Readiness scored 1.83/5.0 across departments. Primary HAI score capped at 68 (from potential 76) to prevent masking of material risk. This is intentional: high operational readiness cannot justify weak legal/IP governance.
| Rank | Department | Human Score | AI Score | HAI Average | Status | Key Gaps |
|---|---|---|---|---|---|---|
| 1 | Marketing | 78 | 71 | 74.5 | Balanced Leader | IP/Legal (1.9), Security (2.6) |
| 2 | Operations | 75 | 68 | 71.5 | Balanced Leader | IP/Legal (2.0) |
| 3 | Distribution | 72 | 65 | 68.5 | Balanced Leader | IP/Legal (1.8), Data Asset (3.4) |
| 4 | Finance | 70 | 62 | 66 | Balanced Leader | IP/Legal (2.2), Integration (3.0) |
| 5 | PR | 68 | 59 | 63.5 | Human-Strong | IP/Legal (1.7), Data Asset (2.8), Security (2.2) |
| 6 | Production | 65 | 58 | 61.5 | Human-Strong | IP/Legal (1.6), Workflow Integration (2.7) |
| 7 | Sales | 62 | 52 | 57 | At-Risk | IP/Legal (1.5), Change Readiness (2.5), Tools (2.4) |
| 8 | Talent | 58 | 48 | 53 | At-Risk | IP/Legal (2.1), Skills (3.0), Measurement (2.6) |
The HAI Index uses an additive (averaging) methodology to ensure transparency: executives can see exactly which pillars moved the score. However, an additive model alone can mask critical risk: a department could score 85 overall while having catastrophic IP/Legal or security vulnerabilities.
The Floor Rule enforces that if any of three critical pillars (A2: IP/Legal, A5: Security, H5: Ethical Use) score below 2.0, the overall score is capped (default: 59) or penalized. This ensures governance risks are never hidden by operational strength.
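The additive index and Floor Rule described above can be sketched as follows. This is a minimal illustration, not the dashboard's actual implementation: pillar IDs, the rescaling from 0-5 to 0-100, and the cap-only variant of the rule (the text also allows a penalty variant) are assumptions.

```python
# Sketch of the HAI scoring logic described above (assumed structure).
# Pillar scores are 0-5; the additive index averages them and rescales to 0-100.
CRITICAL_PILLARS = {"A2", "A5", "H5"}   # IP/Legal, Security, Ethical Use
FLOOR_THRESHOLD = 2.0                   # pillar score that triggers the Floor Rule
FLOOR_CAP = 59                          # default cap when the rule fires

def hai_score(pillars: dict[str, float]) -> float:
    """pillars maps pillar IDs (e.g. 'H1'..'H6', 'A1'..'A6') to 0-5 scores."""
    additive = sum(pillars.values()) / len(pillars) * 20  # rescale 0-5 -> 0-100
    # Cap the score if any critical pillar falls below the threshold.
    if any(pillars[p] < FLOOR_THRESHOLD for p in CRITICAL_PILLARS & pillars.keys()):
        return min(additive, FLOOR_CAP)
    return additive
```

With this cap-only variant, a department averaging well above 59 but scoring 1.83 on A2 would be pinned at 59, which is how the rule keeps governance risk visible.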
Beyond the primary HAI Index, we compute a Synergy Score to highlight imbalance between Human and AI readiness.
A Synergy Score of 46 indicates moderate imbalance: while Human readiness (72) is reasonably strong, AI readiness (64) lags, creating friction. Strong departments such as Marketing show better balance (synergy ~56), while Talent shows a significant lag (~28).
Audit and document all AI training data rights, content licensing, and compliance requirements. Current score (1.83) is blocking overall readiness. Target: 3.0+ within 90 days.
Current avg 2.4/5.0. Implement redaction/masking, audit trails, and secure environments for AI tool usage. Engage InfoSec early. Target: 3.5+.
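To make the redaction/masking recommendation concrete, here is an illustrative sketch of a minimal regex-based redaction pass applied to text before it is sent to external AI tools. The patterns are examples only, not a complete DLP solution; production use should rely on a vetted tool such as those named in the tool-stack recommendations elsewhere in this report.

```python
# Illustrative only: minimal regex redaction for prompts sent to AI tools.
# Patterns are simplified examples and will not catch all PII formats.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a bracketed label, e.g. [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A pass like this, logged alongside the original prompt in an audit trail, also feeds the A5 (Security) pillar evidence.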
The two at-risk departments, Sales and Talent, both score in the 48-58 range. Design targeted 30-60-90 day upskilling (see People & Capability page). Include change leadership and AI workflow training.
Select a department to view pillar breakdown, gaps, and targeted recommendations
12 pillars × 8 departments. Red = Critical, Orange = At Risk, Yellow = Moderate, Green = Strong, Dark Green = Leader
(The pillar-by-department heatmap, 12 pillars across Marketing, PR, Talent, Distribution, Production, Operations, Sales, and Finance, renders only in the interactive dashboard; its values are not reproduced in this export.)
Role clusters, training needs, and 30-60-90 day enablement roadmap
Each role cluster is mapped to hard skills, soft skills, virtues, values, and a personalized learning development plan. Below are sample profiles for key roles across departments.
| Role Cluster | Department(s) | Current AI Fluency | Critical Skills Gap | 30-Day Focus | 60-90-Day Focus |
|---|---|---|---|---|---|
| Marketing Operations | Marketing, Operations | Intermediate | AI prompt ops, automation testing | ChatGPT/Claude workflows; campaign automation | Multi-model orchestration; GenAI ROI tracking |
| Content Creator | Marketing, PR | Beginner | AI ideation, image/video generation, editing | Midjourney, Runway ML, D-ID basics | Creative workflow integration; brand consistency at scale |
| Talent Manager | Talent, HR | Minimal | AI interviewing, skills assessment, consent frameworks | Talent screening AI; bias audit; consent protocols | Talent marketplace integration; ethical AI governance |
| Sales Operations | Sales | Minimal | Workflow automation, lead scoring, funnel ops | CRM automation 101; lead intelligence tools | Revenue intelligence; predictive analytics |
| Data/Compliance Officer | Operations, Finance | Beginner | AI audit trails, data governance, risk frameworks | AI risk assessment; compliance checklist | Third-party vendor AI audits; policy framework |
| Production Lead | Production | Limited | AI-assisted editing, effects, sound design | AI tools overview; workflow testing | Integrated production stack; IP safeguarding in AI workflows |
Coursera, LinkedIn Learning, Udemy courses on AI fundamentals, prompt engineering, specific tools. Allows flexibility; employees learn at own pace.
Weekly 60-min cohorts by role cluster. Hands-on experimentation, shared templates, vulnerability-safe space. Led by internal champions.
Weekly 30-min coaching sprints with a GFE-trained guide. Personalized troubleshooting, mindset support, accountability.
Apply learning in real campaigns/workflows. Real ROI, real feedback loops, rapid iteration. On-the-job training accelerates mastery.
10 high-impact workflows mapped for risk, dependencies, and mitigation
Multi-dept: Marketing, Legal, Talent (for creator disclosures)
Multi-dept: Talent, Legal, Operations
Multi-dept: Distribution, Legal, Production
4 workflows pose high risk but can be managed with controls and monitoring. See full Risk Matrix in downloadable CSV export.
Risk: Data privacy (prospect PII in AI systems), Score transparency, Bias (favors certain customer profiles)
Risk: Quality gates (AI outputs may not meet broadcast standards), Tool reliability (crashes, data loss), Attribution (credit to AI tools)
Risk: Model transparency (execs unsure how AI arrived at forecast), Data leakage (financial confidentiality), Audit trail (regulators require explainability)
Risk: Brand voice misalignment (AI tone doesn't match brand), Crisis response (AI auto-replies to sensitive posts), Misinformation (AI amplifies false narratives)
Current inventory, gaps, redundancies, and minimum viable secure stack recommendation
| Category | Current Tools | Department(s) | Usage Level | Integration Status | Known Gaps |
|---|---|---|---|---|---|
| Ideation & Brainstorming | ChatGPT (free/premium mix), Perplexity, Claude | Marketing, Production, Content | Active | Manual copy-paste | No enterprise licensing; audit trail missing |
| Content Writing & Copywriting | ChatGPT, Jasper (some), Grammarly Business | Marketing, PR | Active | Partial (Grammarly → Google Docs) | Jasper underused; brand voice inconsistency |
| Image & Visual Generation | Midjourney, DALL-E, Adobe Firefly, Canva | Marketing, Production, Design | Active | Manual exports | No version control; rights tracking unclear |
| Video & Editing | Adobe Premiere, DaVinci Resolve, RunwayML (pilot), D-ID (pilot) | Production, Marketing | Limited | None | AI editing tools not integrated into production pipeline |
| Automation & Workflow | Zapier, Make, IFTTT, HubSpot workflows | Marketing, Sales, Operations | Moderate | Partial | Redundant tools (Zapier + Make); no orchestration layer |
| CRM & Sales Ops | HubSpot, Salesforce (limited), Pipedrive (pilot) | Sales, Operations | Inconsistent | Poor data sync | Multiple CRMs; no single source of truth |
| Analytics & Measurement | Google Analytics 4, Tableau (underused), Looker Studio | Marketing, Operations, Finance | Basic | Manual data pulls | No real-time dashboards; Excel still primary tool |
| Data Privacy & Security | None (no AI-specific compliance tools) | Operations, Legal | Missing | N/A | No audit trails for AI tool usage; no DLP |
Tool to track AI tool usage, inputs, outputs, and compliance flags. Recommendation: Lakehouse AI or Humane Intelligence.
Monitor clipboard, file uploads, and prompt inputs. Recommendation: Nightfall, Forcepoint, or integrated InfoSec solution.
Centralized prompt library, version control, and A/B testing. Recommendation: LangChain + PromptFlow or Anthropic/OpenAI Prompt API.
Move from consumer ChatGPT to API-based access or a managed enterprise platform (Vertex AI, Azure OpenAI). Ensures audit trails, data retention control.
Consolidate: Standardize on Claude (via API) + reserve ChatGPT for consumer testing. Cost savings: $1,200/year.
Consolidate: Standardize on Zapier (broader integrations). Migrate Make workflows. Cost savings: $600/year.
Consolidate: Choose HubSpot (best for mid-market, AI-native). Migrate Salesforce/Pipedrive. Cost savings: $3,000/year.
Consolidate: Move to Looker Studio (free) + light Tableau for complex modeling. Cost savings: $8,000/year.
A lean, integrated set of tools that covers ideation, content creation, automation, and security for departments. Prioritize enterprise features (audit trails, data residency, SSO).
Breakdown: Claude API ($2K), Midjourney ($600), Runway ($2.4K), Zapier ($12K), HubSpot ($24K), n8n (self-hosted), Humanize ($10K), Nightfall ($15K), Google Workspace ($20K), training & setup ($93.6K).
Savings vs. ad-hoc consolidation: ~$40K/year.
Prioritized action plan to move HAI from 68 → 75+. Focus: IP/Legal, Security, and Team Enablement.
This roadmap is organized in three phases: Now (Days 1–30), Next (Days 31–60), Later (Days 61–90). Each item includes owner, effort estimate (S/M/L), impact, and which pillar it improves. Success metric: HAI score improves to 75+ and critical pillars (A2, A5, H5) reach 2.8+.
Customize weights, critical pillars, floor rule, and import/export data
Default: 59. If any critical pillar <2.0, HAI capped at this value regardless of average.
Score below this value triggers floor rule. Default: 2.0.
Default: All pillars weighted equally (1/12 each). Adjust to prioritize certain pillars.
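The weight-adjustment setting above can be expressed as a standard weighted average. This is a hedged sketch of that computation (function and parameter names are assumptions); with the default equal weights of 1/12 each, it reduces to the plain additive index.

```python
# Sketch of the configurable-weights variant of the HAI index (names assumed).
def weighted_hai(pillars: dict[str, float], weights: dict[str, float]) -> float:
    """pillars: pillar ID -> 0-5 score; weights: pillar ID -> relative weight.

    Weights are normalized, so only their ratios matter; equal weights
    reproduce the simple average used by the default configuration.
    """
    total_weight = sum(weights[p] for p in pillars)
    weighted = sum(pillars[p] * weights[p] for p in pillars)
    return weighted / total_weight * 20  # rescale 0-5 to 0-100
```

For example, doubling the weight on A2 relative to the others pulls the overall score toward the IP/Legal reading without needing the Floor Rule to fire.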
Upload a CSV with columns: Department, Respondent, Pillar, Score (0-5), Timestamp. Download template CSV
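Ingesting the CSV described above amounts to averaging each pillar's score per department across respondents. The sketch below assumes the template's score column is named simply `Score`; adjust if the actual header differs.

```python
# Sketch of ingesting the assessment CSV (column names assumed from the
# template description: Department, Respondent, Pillar, Score, Timestamp).
import csv
from collections import defaultdict

def pillar_averages(rows):
    """Average each pillar's 0-5 score per department across respondents."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for row in rows:
        key = (row["Department"], row["Pillar"])
        sums[key] += float(row["Score"])
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

def load_scores(path):
    with open(path, newline="") as f:
        return pillar_averages(csv.DictReader(f))
```

The resulting `(Department, Pillar)` averages are exactly the 0-5 inputs the index formula consumes.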
This dashboard is currently loaded with realistic synthetic data representing a typical Film/Media organization. All scores, workflows, and recommendations are contextual and actionable.
The Human × AI Readiness Index (HAI) is a production-ready diagnostic tool designed for Film/Media organizations assessing AI adoption maturity.
Framework: 12 pillars (6 Human, 6 AI) scored 0-5, computed via an additive index with an optional floor rule to prevent masking critical risks. The Floor Rule is a key differentiator: it ensures that governance gaps (IP/Legal, Security, Ethical Use) cannot be hidden by operational strength.
Design Principles: Transparency (every score has a formula), Stability (incomplete data doesn't break the model), and Actionability (every finding links to a concrete 90-day action).
Use Case: Executives and department heads use this to identify readiness gaps, prioritize upskilling, manage workflow risks, and allocate AI investment strategically over quarters. Recur quarterly to track progress and adapt roadmaps.