What to do when AI goes wrong in your business

AI Incident Type 1: Data Exposure via AI Tool
Scenario: An employee submits restricted data (client information, patient records, financial data, proprietary information) to an AI tool in violation of policy, or an approved AI tool experiences a data breach.
IMMEDIATE ACTIONS
1. Identify the tool, the employee, and the data submitted. What specifically was entered?
2. Determine whether the AI tool has a data deletion capability. Contact the vendor immediately to request deletion of submitted data.
3. Document the incident with timestamps — what was submitted, when, by whom, to which tool.
4. Determine whether the tool retains data or uses it for training. Review the vendor’s privacy policy and your data processing agreement.
5. Assess whether the exposed data constitutes a breach under Arizona ARS § 18-552 or applicable compliance frameworks (HIPAA, PCI-DSS).
6. Engage legal counsel if regulated data is involved or if breach notification may be required.
NOTIFICATION CONSIDERATIONS
• If HIPAA-covered data was submitted: engage breach counsel immediately; HIPAA’s 60-day notification clock may have started.
• If personal information per ARS § 18-552 was involved: Arizona’s 45-day notification clock may have started.
• If client proprietary information was involved: review client contract for data breach notification requirements.
• If the vendor’s own systems were breached: the vendor is responsible for notification, but verify that it actually occurs.
REMEDIATION
• Revoke the employee’s access to the AI tool pending review.
• Conduct employee training or disciplinary action per HR policy.
• Update AI acceptable use policy if the incident revealed a gap.
• Consider technical controls that prevent future data submission (data loss prevention tools, network filtering).
• Review whether other employees may have used the same tool in the same way.
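As an illustration of the technical controls mentioned above, a data loss prevention layer typically scans outbound text for restricted-data patterns before it reaches an external AI tool. The patterns and field names below are illustrative assumptions only, not a substitute for a vendor DLP product:

```python
import re

# Illustrative patterns only -- a real DLP deployment would use vendor-maintained rules.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US Social Security number
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # 13-16 digit card-like number
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),  # hypothetical record-number format
}

def flag_restricted_content(text: str) -> list[str]:
    """Return the names of any restricted-data patterns found in text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```

A check like this could sit in a browser extension, proxy, or pre-submission hook; the point is that the block happens before the data leaves your network, not after.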
AI Incident Type 2: AI-Generated Misinformation Distributed to Clients
Scenario: AI-generated content containing factual errors, fabricated citations, incorrect legal or financial information, or other inaccuracies was sent to clients or published externally before being verified.
IMMEDIATE ACTIONS
1. Identify every client, platform, or channel that received the inaccurate content.
2. Preserve the original AI output and all related communications.
3. Do not delete or alter records — this may be relevant to legal or insurance proceedings.
4. Consult legal counsel to assess liability exposure before communicating with affected clients.
5. Draft a correction or clarification with legal review before sending.
CLIENT COMMUNICATION
The communication to affected clients should:
• Acknowledge the error directly without excessive qualification.
• Clearly state what the correct information is.
• Not characterize the error in terms that create additional legal exposure (avoid admissions of negligence before legal review).
• Provide a contact for follow-up questions.
In professional services (legal, financial, medical), incorrect information distributed to clients may trigger malpractice reporting obligations. Confirm with legal counsel before communicating.
REMEDIATION
• Implement or strengthen AI output verification requirements.
• Identify whether the employee followed the existing verification policy. If yes, the policy is insufficient. If no, address the compliance gap.
• Add the specific error type to training content so employees recognize similar failures.
• If the error type is systemic, review all AI-generated content distributed in the prior 30 days for similar issues.
AI Incident Type 3: AI-Assisted Social Engineering / BEC Attack
Scenario: Employees received sophisticated AI-generated phishing, voice cloning, or deepfake video fraud targeting your business. Funds may have been transferred or credentials compromised.
IMMEDIATE ACTIONS — FINANCIAL FRAUD
1. Contact your bank immediately if funds were transferred and request a wire recall. Time is critical: transfers processed more than 24 hours ago are typically unrecoverable.
2. Contact your cyber insurance carrier’s emergency line.
3. File an FBI IC3 complaint at ic3.gov. For large-dollar wire fraud, contact the FBI Phoenix Field Office directly: (623) 466-1999.
4. Preserve all communications, call logs, and voicemails related to the fraud.
5. Do not delete anything — evidence is required for insurance claims and law enforcement.
IMMEDIATE ACTIONS — CREDENTIAL COMPROMISE
1. Reset all potentially compromised credentials immediately.
2. Revoke and reissue sessions for all accounts associated with compromised credentials.
3. Enable or verify MFA on all affected accounts.
4. Review system logs for unauthorized access in the period since the phishing event.
5. Engage IT forensics if the scope of access is unclear.
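The log review step above can be sketched as a small helper that flags successful logins from unfamiliar addresses after the phishing event. The log format and the known-IP allowlist here are assumptions for illustration; adapt both to whatever your identity provider actually emits:

```python
from datetime import datetime, timezone

# Assumed log line format (illustrative):
#   "2024-05-01T14:02:11+00:00 user@example.com 203.0.113.7 LOGIN_OK"
KNOWN_IPS = {"198.51.100.10", "198.51.100.11"}  # office/VPN egress addresses -- assumption

def suspicious_logins(log_lines, since: datetime):
    """Return (timestamp, user, ip) for successful logins after `since`
    that came from an IP outside the known set."""
    hits = []
    for line in log_lines:
        ts_str, user, ip, event = line.split()
        ts = datetime.fromisoformat(ts_str)
        if event == "LOGIN_OK" and ts >= since and ip not in KNOWN_IPS:
            hits.append((ts, user, ip))
    return hits
```

Anything this flags is a starting point for the forensics engagement, not proof of compromise on its own.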
REMEDIATION
• Implement call-back verification for all payment instruction changes if not already in place.
• Update security awareness training to cover the specific AI attack vector used.
• Conduct simulated AI phishing exercises if technically feasible.
• Review whether technical controls (email authentication, AI-powered email filtering) would have caught the attack.
AI Incident Type 4: AI Tool Vendor Breach
Scenario: An AI tool your business uses is breached by a third party, and data your employees submitted to that tool may have been exposed.
IMMEDIATE ACTIONS
1. Obtain all available information from the vendor: what data was accessed, over what time period, and what the vendor knows so far.
2. Do NOT wait for the vendor’s investigation to complete before taking protective action. Begin your own assessment.
3. Identify all employees who used the affected tool and what data they submitted.
4. Assess whether the submitted data constitutes regulated personal information or confidential client data.
5. Engage legal counsel to assess breach notification obligations under Arizona law and applicable compliance frameworks.
6. Suspend use of the affected tool pending the vendor’s response.
VENDOR MANAGEMENT
• Request written confirmation of: what data was accessed, what the vendor is doing to remediate, what notification the vendor will provide.
• Review your Data Processing Agreement (if you have one) for vendor breach notification obligations.
• Determine whether the vendor’s breach constitutes a covered incident under your cyber insurance policy.
• Make a documented decision about whether to continue using the tool after remediation, and what conditions must be met.
Post-Incident Documentation Template
Complete within 30 days of any AI incident:
Incident type and brief description: ______________________________
Date and time of incident / discovery: ______________________________
Data or systems involved: ______________________________
Individuals / clients / systems affected: ______________________________
Immediate actions taken (with dates): ______________________________
Notifications made (who, when, method): ______________________________
Regulatory reporting required? Completed? ______________________________
Insurance claim filed? Claim number: ______________________________
Root cause analysis: ______________________________
Policy or control gaps identified: ______________________________
Remediation actions implemented: ______________________________
New controls or policy changes made: ______________________________
Training updates required: ______________________________
Incident closed date: ______________________________
Reviewed by: ______________________________
AEGITz FLOW clients receive AI incident response support as part of their engagement — including vendor contact management, data handling review, and insurance documentation assistance. Contact us at aegitz.com.



