A scored self-assessment across six dimensions of enterprise AI preparedness

Scoring: 0 = Not in place | 1 = Partial / being planned | 2 = Fully implemented
DIMENSION 1: AI TOOL INVENTORY & GOVERNANCE
☐ We have a documented list of all AI tools currently approved for business use.
☐ We know which AI tools employees are currently using (approved and unapproved).
☐ A formal process exists for requesting and approving new AI tools before adoption.
☐ Our approved AI tools have been vetted for data handling practices and retention policies.
☐ We distinguish between enterprise/business versions and consumer versions of AI tools.
☐ Unapproved AI tool use is addressed in employee policy.
Section score: ___ / 12
DIMENSION 2: DATA CLASSIFICATION & PROTECTION
☐ We have a data classification scheme that defines what categories of information exist in our business.
☐ Our AI policy specifies which data categories may and may not be submitted to AI tools.
☐ Employees know which data is off-limits for AI tools and can give an example.
☐ Client and patient data is specifically called out as restricted from consumer AI tools.
☐ We have reviewed whether any current AI tool use involves regulated data (HIPAA, PCI-DSS, etc.).
☐ Business Associate Agreements or Data Processing Agreements are in place for AI tools that handle regulated data.
Section score: ___ / 12
DIMENSION 3: SECURITY CONTROLS FOR AI USE
☐ Multi-factor authentication is in place on all AI platforms used in our business.
☐ Access to sensitive AI tools is restricted to employees who need them.
☐ We have addressed the shadow AI risk — monitoring or controls exist to detect unapproved AI tool use.
☐ AI-generated phishing and social engineering are covered in our security awareness training.
☐ Our incident response plan addresses AI-specific incidents (data leakage via AI tool, AI-assisted attack, etc.).
Section score: ___ / 10
DIMENSION 4: OUTPUT VERIFICATION & QUALITY CONTROL
☐ Employees understand that AI output requires human review before use in client-facing work.
☐ We have defined verification requirements by task type (what level of review is required for what outputs).
☐ AI-generated content used externally is reviewed for accuracy before distribution.
☐ AI-generated legal, medical, or financial information is verified against authoritative sources.
☐ We document verification for high-stakes AI-assisted work.
Section score: ___ / 10
DIMENSION 5: COMPLIANCE & LEGAL POSTURE
☐ We have assessed which regulatory frameworks apply to our AI use (HIPAA, CCPA, EEOC guidance, etc.).
☐ Our AI use in employment decisions (hiring, performance review) has been reviewed for discrimination risk.
☐ We have reviewed AI-related provisions in our client contracts and vendor agreements.
☐ Our cyber insurance policy has been reviewed for AI-related coverage and exclusions.
☐ We are aware of Arizona’s data protection obligations and how they apply to AI-processed data.
Section score: ___ / 10
DIMENSION 6: TRAINING & CULTURE
☐ All employees have received training on our AI acceptable use policy.
☐ Training covers what data may and may not be submitted to AI tools.
☐ Training covers the risk of AI hallucinations and verification requirements.
☐ Training covers AI-powered phishing and social engineering.
☐ Leadership has communicated a clear position on AI — employees know what is encouraged, permitted, and prohibited.
☐ AI governance is reviewed and updated at least annually.
Section score: ___ / 12
Score Interpretation
| Total Score | Maturity Level | Recommended Next Step |
| --- | --- | --- |
| 55–66 | AI-Ready | Maintain and iterate. Focus on edge cases and emerging tools. |
| 40–54 | Developing | Governance framework exists but has meaningful gaps. Prioritize data classification and training. |
| 22–39 | Early Stage | Foundational elements missing. Start with approved tool list, data policy, and training. |
| 0–21 | Unprepared | Significant exposure. Immediate action required before broad AI deployment. |
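For readers who want to automate the tally, the interpretation above can be expressed as a small function. This is only a sketch: the thresholds come directly from the table, while the section scores in the example are hypothetical.

```python
def maturity_level(total: int) -> str:
    """Map a total assessment score (0-66) to the maturity level
    defined in the Score Interpretation table."""
    if not 0 <= total <= 66:
        raise ValueError("total must be between 0 and 66")
    if total >= 55:
        return "AI-Ready"
    if total >= 40:
        return "Developing"
    if total >= 22:
        return "Early Stage"
    return "Unprepared"

# Example: sum the six section scores (hypothetical values),
# then classify the result.
section_scores = [10, 8, 6, 7, 6, 9]
total = sum(section_scores)            # 46
print(total, maturity_level(total))    # prints "46 Developing"
```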
Priority Action Matrix
Based on typical assessment results, here is the sequence of actions that delivers the most risk reduction in the shortest time:
| Priority | Action | Time to Implement | Risk Reduction |
| --- | --- | --- | --- |
| 1 | Establish approved AI tool list + prohibit consumer AI for business data | 1–2 days | HIGH — eliminates most shadow AI exposure |
| 2 | Define data classification: what can/cannot go into AI tools | 2–3 days | HIGH — prevents regulated data exposure |
| 3 | Distribute AI acceptable use policy to all employees | 1 day | HIGH — establishes accountability baseline |
| 4 | Conduct AI-specific security awareness training | 1–2 hours per employee | MEDIUM — addresses phishing and social engineering |
| 5 | Review AI tools for BAA/DPA requirements | 1–2 weeks | HIGH for regulated industries |
| 6 | Update incident response plan for AI scenarios | 2–3 days | MEDIUM — ensures response capability |
| 7 | Establish AI tool approval workflow | 1 day | MEDIUM — prevents future shadow AI |
| 8 | Review cyber insurance for AI coverage | 30 min with broker | MEDIUM — ensures coverage alignment |
AEGITz FLOW includes a full AI governance implementation program for Phoenix businesses. If your assessment reveals significant gaps, we can close them with a structured 30–60 day engagement. Contact us at aegitz.com.



