
California SB 53: Catastrophic AI Risk Definitions and What They Mean for Your Compliance Program

Target Audience: Compliance Officers, General Counsel, CISOs (California operations)
Category: Regulatory / State Law
Evidence Tier: Secondary Verified (California legislative text, analysis)
Confidence Level: High

Executive Summary

California SB 53, enacted in 2025 and effective in 2026, establishes legal definitions for “critical harm” and “catastrophic harm” caused by AI systems. These definitions trigger disclosure, reporting, and safety obligations for AI developers and deployers operating in California—one of the world’s largest AI markets.

This article provides:

  • Definitions of critical vs. catastrophic harm
  • Which AI systems are in scope
  • Disclosure and reporting obligations
  • Interaction with the EU AI Act and other frameworks


What SB 53 Does

SB 53 (Chaptered September 2025, effective 2026) amends California’s existing AI liability framework to define specific harm thresholds that trigger legal obligations.

| Provision | Summary |
| --- | --- |
| Critical Harm Definition | Harm that is severe but not widespread, e.g., significant financial loss, serious injury, substantial privacy violation |
| Catastrophic Harm Definition | Harm that is both severe and widespread, e.g., mass casualty events, systemic economic disruption, critical infrastructure failure |
| Disclosure Obligations | Developers must disclose known catastrophic risk capabilities to state authorities |
| Safety Requirements | AI systems capable of causing catastrophic harm must implement specified safeguards |
| Reporting Triggers | Incidents meeting critical or catastrophic thresholds require reporting |

Definitions: Critical vs. Catastrophic Harm

| Harm Level | Legal Definition (SB 53) | Examples |
| --- | --- | --- |
| Critical Harm | Harm that is severe in magnitude but limited in scope | Single fatality, serious injury requiring hospitalization, >$10M financial loss to an entity, large-scale PII breach |
| Catastrophic Harm | Harm that is both severe in magnitude and widespread in scope | Mass casualty event (>10 fatalities), critical infrastructure failure affecting >100,000 people, systemic economic disruption (>$1B aggregate loss) |

Triggering Events

| Event | Critical Harm | Catastrophic Harm |
| --- | --- | --- |
| Physical injury | Serious injury, 1 fatality → ☑ | >10 fatalities → ☑ |
| Financial loss | >$10M single entity → ☑ | >$1B aggregate → ☑ |
| Privacy violation | Large-scale PII breach → ☑ | Systemic identity compromise → ☑ |
| Infrastructure | Service disruption >24h → ☑ | Critical infrastructure failure → ☑ |
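The threshold logic above can be sketched as a simple classifier. This is an illustrative aid for triage tooling, not a legal determination; the function name, parameters, and the mapping of table rows to conditions are assumptions for this sketch, using only the numeric thresholds cited in the article.

```python
from enum import Enum


class HarmLevel(Enum):
    NONE = "none"
    CRITICAL = "critical"
    CATASTROPHIC = "catastrophic"


def classify_harm(fatalities: int = 0,
                  serious_injuries: int = 0,
                  entity_loss_usd: float = 0.0,
                  aggregate_loss_usd: float = 0.0,
                  infrastructure_outage_hours: float = 0.0,
                  people_affected: int = 0) -> HarmLevel:
    """Classify an incident against the SB 53 thresholds in the tables above.

    Illustrative only: a real determination requires legal analysis, and an
    incident can satisfy several rows at once.
    """
    # Catastrophic: severe AND widespread in scope.
    if fatalities > 10:
        return HarmLevel.CATASTROPHIC
    if aggregate_loss_usd > 1_000_000_000:
        return HarmLevel.CATASTROPHIC
    if people_affected > 100_000 and infrastructure_outage_hours > 0:
        return HarmLevel.CATASTROPHIC
    # Critical: severe in magnitude but limited in scope.
    if fatalities >= 1 or serious_injuries >= 1:
        return HarmLevel.CRITICAL
    if entity_loss_usd > 10_000_000:
        return HarmLevel.CRITICAL
    if infrastructure_outage_hours > 24:
        return HarmLevel.CRITICAL
    return HarmLevel.NONE
```

In practice a tool like this is most useful for flagging incidents that need escalation to counsel, not for closing them out.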

Which AI Systems Are In Scope?

Scope Determined by Capability, Not Intent

SB 53 applies to any AI system that:

  • Is deployed or made available in California, and
  • Has the technical capability to cause critical or catastrophic harm

Scope applies regardless of whether harm was intended or has actually occurred.

Capability Thresholds

| Capability | In Scope? |
| --- | --- |
| AI system that can autonomously control critical infrastructure | ☑ Yes (catastrophic potential) |
| AI system that can generate bio-weapons recipes | ☑ Yes (catastrophic potential) |
| AI system that can execute large-scale financial fraud | ☑ Yes (critical potential) |
| AI system that can cause serious injury via physical actions | ☑ Yes (critical potential) |
| General-purpose chatbot with no autonomous actions | ☐ No (below threshold) |
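Because scope turns on capability plus California availability, the scoping test reduces to a small predicate. A minimal sketch, assuming these three boolean inputs come from your own system inventory (the function and parameter names are illustrative, not statutory terms):

```python
def in_scope(deployed_in_california: bool,
             capable_of_critical_harm: bool,
             capable_of_catastrophic_harm: bool) -> bool:
    """SB 53 scoping sketch: California availability plus harm *capability*.

    Intent and actual occurrence of harm are deliberately absent from the
    inputs, mirroring the capability-not-intent framing described above.
    """
    return deployed_in_california and (capable_of_critical_harm
                                       or capable_of_catastrophic_harm)
```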

Compute Thresholds (Indicative)

While SB 53 does not set fixed compute thresholds (deliberately, the law focuses on capabilities, not FLOPs), California regulators have signaled interest in treating models trained with more than 10^25 FLOPs of compute as presumptively capable of catastrophic harm.
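As a screening heuristic only, that regulator-signaled figure could be encoded as a presumption flag. The constant and function below are assumptions for this sketch; the statute itself sets no compute threshold, so a model below this level can still be in scope if it has the capability.

```python
# Regulator-signaled indicator (per the discussion above), NOT a statutory threshold.
PRESUMPTIVE_CATASTROPHIC_FLOPS = 1e25


def presumptively_catastrophic_capable(training_flops: float) -> bool:
    """Indicative compute screen: flags models for closer capability review.

    A False result does not exclude a system from SB 53 scope; capability,
    not compute, is the legal test.
    """
    return training_flops >= PRESUMPTIVE_CATASTROPHIC_FLOPS
```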


Compliance Obligations

For AI Developers

| Obligation | Requirement | Deadline |
| --- | --- | --- |
| Catastrophic risk disclosure | Notify the California Office of AI Safety of catastrophic-capable systems | Within 30 days of capability determination |
| Safety case documentation | Document safeguards sufficient to prevent catastrophic harm | Prior to deployment |
| Third-party assessment | Catastrophic-capable systems require an independent safety audit | Annually |
| Incident reporting | Any critical or catastrophic harm incident | Within 72 hours |
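The two clock-driven deadlines above (30 days for disclosure, 72 hours for incident reports) are easy to compute mechanically; a compliance calendar sketch, assuming the trigger timestamps are captured by your own tooling (function names are illustrative):

```python
from datetime import datetime, timedelta


def disclosure_deadline(capability_determined: datetime) -> datetime:
    """30 days from the capability determination (developer disclosure row above)."""
    return capability_determined + timedelta(days=30)


def incident_report_deadline(incident_detected: datetime) -> datetime:
    """72 hours from detection of a critical or catastrophic harm incident."""
    return incident_detected + timedelta(hours=72)
```

Tracking the trigger event (determination or detection) is the hard part; the deadline arithmetic should never be done by hand.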

For AI Deployers (Enterprises Using AI)

| Obligation | Requirement | Applicability |
| --- | --- | --- |
| Risk assessment | Evaluate whether deployed AI can cause critical/catastrophic harm | All AI systems |
| Safeguard implementation | Controls to prevent harm escalation | If the risk assessment indicates potential |
| Monitoring | Continuous monitoring for harm indicators | High-risk deployments |
| Reporting | Notify the developer if critical/catastrophic harm occurs | Always |

Disclosure Requirements: What to Report

For Catastrophic-Capable Systems

| Information Required | Detail Level |
| --- | --- |
| System description | Architecture, capabilities, intended use |
| Capability assessment | Technical basis for the catastrophic determination |
| Safety measures | Technical and organizational safeguards |
| Testing results | Red teaming, adversarial testing, boundary testing |
| Incident history | Any prior critical or catastrophic incidents |
| Access controls | Who can deploy, modify, or use the system |

For Critical Harm Incidents

| Information Required | Detail Level |
| --- | --- |
| Incident description | What happened, when, scope |
| Harm assessment | Classification (critical vs. catastrophic) |
| Root cause | Why the system caused harm |
| Remediation | Steps taken to prevent recurrence |
| Notification | Affected parties and regulators notified |
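A report template that mirrors the required fields helps catch incomplete filings before the 72-hour clock runs out. A minimal sketch: the field names below are illustrative, not the regulator's filing schema, which the Office of AI Safety has yet to publish.

```python
from dataclasses import dataclass, field


@dataclass
class IncidentReport:
    """Fields mirror the 'For Critical Harm Incidents' table above."""
    description: str              # what happened, when, scope
    harm_classification: str      # "critical" or "catastrophic"
    root_cause: str               # why the system caused harm
    remediation: str              # steps taken to prevent recurrence
    notified_parties: list = field(default_factory=list)

    def missing_fields(self) -> list:
        """Return required text fields still empty, for a pre-submission check."""
        required = ("description", "harm_classification",
                    "root_cause", "remediation")
        return [name for name in required if not getattr(self, name)]
```

A pre-submission hook can then block filing until `missing_fields()` returns an empty list.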

Interaction with Other Frameworks

| Framework | SB 53 Interaction |
| --- | --- |
| EU AI Act | SB 53's catastrophic harm aligns with the EU's prohibited practices (Art. 5), and its incident reporting parallels the serious-incident reporting required for high-risk systems (Art. 73). Organizations complying with the EU AI Act will find substantial overlap with SB 53. |
| NIST AI RMF | SB 53's safety case requirements map to the MANAGE function (risk management). |
| Colorado AI Act | Colorado focuses on algorithmic discrimination; SB 53 focuses on catastrophic harm. Distinct but overlapping for high-risk systems. |
| GDPR | Critical/catastrophic harm incidents involving personal data likely qualify as breaches requiring 72-hour notification under Art. 33. |

📌 Notably Absent

SB 53’s definitions of critical and catastrophic harm are legally novel—there is no body of case law interpreting these terms. Regulators and courts will define the boundaries over time.

No enforcement actions have been issued under SB 53 as of April 2026 (law became effective in 2026, early in enforcement cycle). However, California’s Office of AI Safety is actively developing implementing regulations.

The disclosure obligations attach to capability, not occurrence: developers must disclose catastrophic-capable systems even if no harm has ever materialized. This is an affirmative obligation, not a reactive one.


Actionable Compliance Roadmap

Phase 1: Assessment (Weeks 1-4)

| Action | Owner |
| --- | --- |
| Inventory all AI systems deployed or available in California | Compliance + AI Team |
| Assess each system for critical/catastrophic harm capability | AI Risk Assessment |
| Document the capability assessment methodology | Legal + Technical |
| Identify systems requiring disclosure | CCO |

Phase 2: Disclosure and Safeguards (Weeks 5-8)

| Action | Owner |
| --- | --- |
| Prepare catastrophic capability disclosures (if applicable) | Legal + AI Team |
| Submit disclosures to the California Office of AI Safety | CCO |
| Implement or verify safeguards for catastrophic-capable systems | Security |
| Establish incident reporting procedures for critical/catastrophic harm | Incident Response |

Phase 3: Ongoing (Continuous)

| Action | Owner |
| --- | --- |
| Annual third-party safety audit (catastrophic-capable systems) | Audit + Security |
| Monitor California implementing regulations | Legal |
| Update disclosures when capabilities change | AI Team |
| Test incident reporting procedures via tabletop exercises | IR Team |

The Bottom Line for California Operators

California SB 53 creates a novel regulatory regime based on AI capability rather than actual harm. Organizations developing or deploying advanced AI systems in California must:

  1. Assess capability – Can your AI cause critical or catastrophic harm?
  2. Disclose proactively – Don't wait for harm to occur; disclose capability as soon as it is determined
  3. Implement safeguards – Controls proportionate to capability
  4. Report incidents – 72-hour timeline for critical/catastrophic harm

The most important action: conduct a good-faith capability assessment for every AI system. Document your methodology. If you determine a system is not catastrophic-capable, keep the assessment as audit evidence.

