AI Incident Management

November 10, 2025

AI incidents are different from traditional outages. The system can be “up” and returning 200s while producing wrong answers. Detection is harder, causes are more ambiguous, and fixes look less like restarts and more like prompt adjustments, model rollbacks, and data refreshes. Effective AI incident management adapts standard incident practices to these realities.

Here’s how to handle AI incidents effectively.

AI Incident Characteristics

How AI Fails

ai_failure_modes:
  quality_degradation:
    description: "System works but outputs are wrong"
    detection: "Hard—may need human review"
    example: "Hallucinating facts, wrong tone"

  model_regression:
    description: "Model update causes problems"
    detection: "Evaluation suite catches"
    example: "New model version performs worse"

  context_issues:
    description: "Retrieval or context problems"
    detection: "Relevance metrics"
    example: "Wrong documents retrieved, stale data"

  prompt_injection:
    description: "Adversarial manipulation"
    detection: "Input/output monitoring"
    example: "User manipulates model behavior"

Detection Challenges

detection_challenges:
  no_clear_errors:  # see the sampling sketch after this block
    - System returns 200 OK
    - Response looks plausible
    - User may not report

  subjective_quality:
    - "Wrong" is context-dependent
    - Different users have different standards
    - Edge cases are ambiguous

  delayed_impact:
    - Bad advice acted on later
    - Cumulative errors
    - Reputation damage over time
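
Because a healthy-looking 200 response tells you nothing about correctness, one common countermeasure is to sample a small slice of production traffic into a human review queue, so quality drift surfaces even when users never complain. A minimal sketch, assuming an asyncio-style review_queue sink:

import random

REVIEW_SAMPLE_RATE = 0.01  # send ~1% of production traffic to reviewers

async def maybe_enqueue_for_review(request, response, review_queue) -> None:
    """Randomly sample responses for human quality grading; status
    codes cannot tell you whether the answer was actually right."""
    if random.random() < REVIEW_SAMPLE_RATE:
        await review_queue.put({
            "request": request,
            "response": response,
            "reason": "random_sample",
        })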

Incident Response Process

Detection

from dataclasses import dataclass
from typing import Any

@dataclass
class Signal:
    """A detection signal with optional supporting detail."""
    type: str
    detail: Any = None

class AIIncidentDetector:
    """Detect AI-specific incidents from quality, safety, anomaly, and
    user-feedback signals. The evaluator, checker, detector, and
    threshold attributes are injected dependencies."""

    async def monitor(self, request: Request, response: Response) -> None:
        signals: list[Signal] = []

        # Quality signals: score the response against the request
        quality_score = await self.quality_evaluator.score(
            request, response
        )
        if quality_score < self.quality_threshold:
            signals.append(Signal("low_quality", quality_score))

        # Safety signals: flagged content is never acceptable
        safety_check = await self.safety_checker.check(response)
        if safety_check.flagged:
            signals.append(Signal("safety_violation", safety_check))

        # Anomaly signals: statistical outliers in request/response patterns
        if await self.anomaly_detector.is_anomalous(request, response):
            signals.append(Signal("anomaly_detected"))

        # User feedback signals: explicit negative ratings
        if request.user_feedback and request.user_feedback.negative:
            signals.append(Signal("negative_feedback"))

        # Open an incident when the alerting policy is met
        if self._should_alert(signals):
            await self.create_incident(signals, request, response)

    def _should_alert(self, signals: list[Signal]) -> bool:
        # Safety violations alert immediately
        if any(s.type == "safety_violation" for s in signals):
            return True

        # Two or more independent signals suggest a real problem
        return len(signals) >= 2
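
In a request handler, detection can run off the hot path so it never adds user-facing latency. A hypothetical wiring, where model_pipeline and detector are assumed instances:

import asyncio

async def handle(request: Request) -> Response:
    response = await model_pipeline.run(request)
    # Fire-and-forget: detection must never block or fail the user request
    asyncio.create_task(detector.monitor(request, response))
    return response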

Response Process

ai_incident_response:
  step_1_detect:
    - Automated monitoring catches issue
    - User report received
    - Quality metrics alert

  step_2_assess:
    - Severity determination
    - Scope assessment
    - User impact evaluation

  step_3_contain:
    - Rollback if possible
    - Feature flag disable
    - Traffic routing change
    - Communication to users

  step_4_investigate:  # sketched in code after the containment section
    - Review affected requests
    - Check for model changes
    - Analyze retrieval quality
    - Look for prompt issues

  step_5_fix:
    - Prompt adjustment
    - Model rollback
    - Data refresh
    - Enhanced guardrails

  step_6_review:
    - Postmortem
    - Detection improvement
    - Prevention measures

Containment Actions

class AIIncidentContainment:
    """Containment actions for AI incidents. Severity maps to
    progressively less disruptive interventions."""

    async def contain(self, incident: Incident) -> ContainmentResult:
        if incident.severity == "critical":
            # Disable the feature entirely: correctness over availability
            await self.feature_flags.disable(incident.feature)
            return ContainmentResult(action="feature_disabled")

        if incident.severity == "high":
            # Roll back to the last known-good model version
            await self.rollback_model(incident.feature)
            return ContainmentResult(action="rolled_back")

        if incident.severity == "medium":
            # Route outputs through human review before they reach users
            await self.enable_review_mode(incident.feature)
            return ContainmentResult(action="review_mode")

        # Low severity: keep serving, but watch closely
        await self.increase_monitoring(incident.feature)
        return ContainmentResult(action="monitoring")

Postmortem Practices

ai_postmortem:
  unique_questions:
    - What was the AI doing wrong?
    - How did we detect it?
    - Why didn't we catch it sooner?
    - What evaluation gaps existed?
    - Were there warning signs in metrics?

  action_items:
    - Add regression test for this case  # sketched below
    - Improve detection for this failure mode
    - Update evaluation criteria
    - Enhance monitoring
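
The first action item is usually the highest-leverage one: pin the incident's failing examples into the evaluation suite so the same behavior can never ship silently again. A pytest-style sketch with made-up entries; generate is a stand-in for your inference call:

import pytest

# Regression cases pinned from postmortems:
# (incident_id, prompt, required_substring) -- entries are illustrative
REGRESSION_CASES = [
    ("INC-0041", "What is the refund window?", "30 days"),
]

@pytest.mark.parametrize("incident_id,prompt,required", REGRESSION_CASES)
def test_incident_regression(incident_id, prompt, required):
    output = generate(prompt)  # stand-in for your inference call
    assert required in output, f"regression of {incident_id}"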

Prevention

ai_incident_prevention:
  proactive_monitoring:
    - Quality metrics dashboards
    - Drift detection
    - User feedback tracking

  testing:
    - Comprehensive evaluation suite
    - Model update testing
    - Adversarial testing

  guardrails:  # composed into one gate in the sketch below
    - Output filtering
    - Confidence thresholds
    - Human review for edge cases
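
The three guardrail bullets compose into a single output gate: hard-filter unsafe content, auto-serve confident answers, and route the uncertain middle to human review. A minimal sketch; content_filter, review_queue, and the fallback helpers are assumptions:

AUTO_SERVE_THRESHOLD = 0.8  # tune per feature from eval data

async def guarded_respond(request, response):
    """Output gate combining filtering, confidence thresholds,
    and human review for edge cases."""
    if await content_filter.blocked(response):       # output filtering
        return fallback_response(request)
    if response.confidence >= AUTO_SERVE_THRESHOLD:  # confident: serve directly
        return response
    await review_queue.put((request, response))      # edge case: human review
    return pending_response(request)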

Key Takeaways

Prepare for AI incidents before they happen: build detection that goes beyond status codes, keep containment actions ready for each severity, and feed every incident back into your evaluation suite and guardrails. They will happen.