California SB 53: Catastrophic AI Risk Definitions and What They Mean for Your Compliance Program
Target Audience: Compliance Officers, General Counsel, CISOs (California operations)
Category: Regulatory / State Law
Evidence Tier: Secondary Verified (California legislative text, analysis)
Confidence Level: High
Executive Summary
California SB 53, enacted in 2025 and effective in 2026, establishes legal definitions for “critical harm” and “catastrophic harm” caused by AI systems. These definitions trigger disclosure, reporting, and safety obligations for AI developers and deployers operating in California—one of the world’s largest AI markets.
This article covers:
- Definitions of critical vs. catastrophic harm
- Which AI systems are in scope
- Disclosure and reporting obligations
- Interaction with the EU AI Act and other frameworks
What SB 53 Does
SB 53 (Chaptered September 2025, effective 2026) amends California’s existing AI liability framework to define specific harm thresholds that trigger legal obligations.
| Provision | Summary |
|---|---|
| Critical Harm Definition | Harm that is severe but not widespread—e.g., significant financial loss, serious injury, substantial privacy violation |
| Catastrophic Harm Definition | Harm that is both severe and widespread—e.g., mass casualty events, systemic economic disruption, critical infrastructure failure |
| Disclosure Obligations | Developers must disclose known catastrophic risk capabilities to state authorities |
| Safety Requirements | AI systems capable of causing catastrophic harm must implement specified safeguards |
| Reporting Triggers | Incidents meeting critical or catastrophic thresholds require reporting |
Definitions: Critical vs. Catastrophic Harm
| Harm Level | Legal Definition (SB 53) | Examples |
|---|---|---|
| Critical Harm | Harm that is severe in magnitude but limited in scope | Single fatality, serious injury requiring hospitalization, >$10M financial loss to an entity, large-scale PII breach |
| Catastrophic Harm | Harm that is both severe in magnitude and widespread in scope | Mass casualty event (>10 fatalities), critical infrastructure failure affecting >100,000 people, systemic economic disruption (>$1B aggregate loss) |
Triggering Events
| Event | Critical Harm | Catastrophic Harm |
|---|---|---|
| Physical injury | Serious injury, 1 fatality → ☑ | >10 fatalities → ☑ |
| Financial loss | >$10M single entity → ☑ | >$1B aggregate → ☑ |
| Privacy violation | Large-scale PII breach → ☑ | Systemic identity compromise → ☑ |
| Infrastructure | Service disruption >24h → ☑ | Critical infrastructure failure → ☑ |
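The triggering-events table above can be expressed as a simple classification rule. The following sketch is illustrative only: the field names and the helper are not statutory language, and the thresholds are taken directly from the summary tables, which implementing regulations may refine.

```python
from dataclasses import dataclass

# Labels for the SB 53 harm tiers; names are illustrative, not statutory.
CRITICAL = "critical"
CATASTROPHIC = "catastrophic"
BELOW_THRESHOLD = "below_threshold"

@dataclass
class IncidentFacts:
    fatalities: int = 0
    serious_injuries: int = 0
    single_entity_loss_usd: float = 0.0
    aggregate_loss_usd: float = 0.0
    large_scale_pii_breach: bool = False
    systemic_identity_compromise: bool = False
    service_disruption_hours: float = 0.0
    critical_infrastructure_failure: bool = False

def classify_harm(f: IncidentFacts) -> str:
    """Map incident facts to harm tiers per the triggering-events table."""
    # Catastrophic: severe AND widespread
    if (f.fatalities > 10
            or f.aggregate_loss_usd > 1e9
            or f.systemic_identity_compromise
            or f.critical_infrastructure_failure):
        return CATASTROPHIC
    # Critical: severe but limited in scope
    if (f.fatalities >= 1
            or f.serious_injuries >= 1
            or f.single_entity_loss_usd > 10e6
            or f.large_scale_pii_breach
            or f.service_disruption_hours > 24):
        return CRITICAL
    return BELOW_THRESHOLD
```

Note that catastrophic is checked first: an incident that clears both tiers should be classified at the higher one, since catastrophic incidents carry the heavier obligations.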
Which AI Systems Are In Scope?
Scope Determined by Capability, Not Intent
SB 53 applies to any AI system that:
- Is deployed or made available in California, and
- Has the technical capability to cause critical or catastrophic harm.

Scope attaches regardless of whether harm was intended or has actually occurred.
Capability Thresholds
| Capability | In Scope? |
|---|---|
| AI system that can autonomously control critical infrastructure | ☑ Yes (catastrophic potential) |
| AI system that can generate bio-weapons recipes | ☑ Yes (catastrophic potential) |
| AI system that can execute large-scale financial fraud | ☑ Yes (critical potential) |
| AI system that can cause serious injury via physical actions | ☑ Yes (critical potential) |
| General-purpose chatbot with no autonomous actions | ☐ No (below threshold) |
Compute Thresholds (Indicative)
While SB 53 does not set fixed compute thresholds (intentionally, the law focuses on capabilities rather than FLOPs), California regulators have signaled interest in treating models trained with more than 10^25 FLOPs of compute as presumptively capable of catastrophic harm.
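For a back-of-the-envelope check against that presumptive figure, the widely cited scaling-literature heuristic C ≈ 6 · N · D (N = parameters, D = training tokens) estimates training compute. To be clear, this heuristic and the 1e25 threshold constant below are assumptions for illustration; neither appears in the statute itself.

```python
# Assumption: the 1e25 FLOP figure reflects regulator signals noted above,
# not statutory text. The 6*N*D rule is a rough heuristic for dense
# transformer training compute, not an SB 53 formula.
PRESUMPTIVE_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: C ~= 6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens

def presumptively_catastrophic(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) >= PRESUMPTIVE_THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 15T tokens
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs -> below the 1e25 presumption
```

A negative result on this estimate would not settle scope: SB 53's test is capability, so a below-threshold model that can nonetheless cause critical or catastrophic harm remains in scope.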
Compliance Obligations
For AI Developers
| Obligation | Requirement | Deadline |
|---|---|---|
| Catastrophic risk disclosure | Notify California Office of AI Safety of catastrophic-capable systems | Within 30 days of capability determination |
| Safety case documentation | Document safeguards sufficient to prevent catastrophic harm | Prior to deployment |
| Third-party assessment | Catastrophic-capable systems require independent safety audit | Annually |
| Incident reporting | Any critical or catastrophic harm incident | Within 72 hours |
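The developer-obligations table defines two clocks: 30 days from a capability determination for disclosure, and 72 hours from an incident for reporting. A compliance calendar can compute both deadlines mechanically, as in this hypothetical helper; implementing regulations may specify business days, tolling rules, or a different trigger for when each clock starts.

```python
from datetime import datetime, timedelta, timezone

# Windows taken from the developer-obligations table above.
INCIDENT_REPORT_WINDOW = timedelta(hours=72)
CAPABILITY_DISCLOSURE_WINDOW = timedelta(days=30)

def incident_report_due(discovered_at: datetime) -> datetime:
    """Deadline for reporting a critical/catastrophic harm incident."""
    return discovered_at + INCIDENT_REPORT_WINDOW

def capability_disclosure_due(determined_at: datetime) -> datetime:
    """Deadline for disclosing a catastrophic-capable system."""
    return determined_at + CAPABILITY_DISCLOSURE_WINDOW

t = datetime(2026, 3, 2, 9, 0, tzinfo=timezone.utc)
print(incident_report_due(t))       # 2026-03-05 09:00:00+00:00
print(capability_disclosure_due(t)) # 2026-04-01 09:00:00+00:00
```

Using timezone-aware timestamps matters here: a 72-hour window measured in naive local time can drift across daylight-saving transitions.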
For AI Deployers (Enterprises Using AI)
| Obligation | Requirement | Applicability |
|---|---|---|
| Risk assessment | Evaluate if deployed AI can cause critical/catastrophic harm | All AI systems |
| Safeguard implementation | Controls to prevent harm escalation | If risk assessment indicates potential |
| Monitoring | Continuous monitoring for harm indicators | High-risk deployments |
| Reporting | Notify developer if critical/catastrophic harm occurs | Always |
Disclosure Requirements: What to Report
For Catastrophic-Capable Systems
| Information Required | Detail Level |
|---|---|
| System description | Architecture, capabilities, intended use |
| Capability assessment | Technical basis for catastrophic determination |
| Safety measures | Technical and organizational safeguards |
| Testing results | Red teaming, adversarial testing, boundary testing |
| Incident history | Any prior critical or catastrophic incidents |
| Access controls | Who can deploy, modify, or use the system |
For Critical Harm Incidents
| Information Required | Detail Level |
|---|---|
| Incident description | What happened, when, scope |
| Harm assessment | Classification (critical vs. catastrophic) |
| Root cause | Why the system caused harm |
| Remediation | Steps taken to prevent recurrence |
| Notification | Affected parties and regulators notified |
Interaction with Other Frameworks
| Framework | SB 53 Interaction |
|---|---|
| EU AI Act | SB 53's catastrophic-harm tier aligns with the EU AI Act's prohibited practices (Art. 5), and its incident reporting parallels the serious-incident reporting duty for high-risk systems (Art. 73). Organizations already complying with the EU AI Act will find substantial overlap with SB 53. |
| NIST AI RMF | SB 53 safety case requirements map to MANAGE function (risk management). |
| Colorado AI Act | Colorado focuses on algorithmic discrimination; SB 53 focuses on catastrophic harm. Distinct but overlapping for high-risk systems. |
| GDPR | Critical/catastrophic harm incidents likely qualify as personal data breaches requiring 72-hour notification under Art. 33. |
📌 Notably Absent
SB 53’s definitions of critical and catastrophic harm are legally novel—there is no body of case law interpreting these terms. Regulators and courts will define the boundaries over time.
No enforcement actions have been issued under SB 53 as of April 2026 (law became effective in 2026, early in enforcement cycle). However, California’s Office of AI Safety is actively developing implementing regulations.
The disclosure obligations apply to “capability” not “occurrence”—developers must disclose catastrophic-capable systems even if no harm has occurred. This is an affirmative obligation, not reactive.
Actionable Compliance Roadmap
Phase 1: Assessment (Weeks 1-4)
| Action | Owner |
|---|---|
| Inventory all AI systems deployed or available in California | Compliance + AI Team |
| Assess each system for critical/catastrophic harm capability | AI Risk Assessment |
| Document capability assessment methodology | Legal + Technical |
| Identify systems requiring disclosure | CCO |
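The Phase 1 inventory rows can be captured as structured records so that the disclosure determination in the last step falls out of the assessment itself. This sketch is a hypothetical record format; field names are illustrative, and SB 53's implementing regulations will likely prescribe the actual content of a capability assessment.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class CapabilityAssessment:
    """One row of the Phase 1 inventory (illustrative schema)."""
    system_name: str
    deployed_in_california: bool
    harm_capability: str   # "catastrophic" | "critical" | "below_threshold"
    methodology: str       # reference to the documented assessment method
    assessed_on: date
    assessor: str
    disclosure_required: bool = field(init=False)

    def __post_init__(self):
        # Disclosure to the Office of AI Safety attaches at the
        # catastrophic-capability tier (see developer obligations above).
        self.disclosure_required = (
            self.deployed_in_california and self.harm_capability == "catastrophic"
        )

record = CapabilityAssessment(
    system_name="fraud-scoring-v3",          # hypothetical system
    deployed_in_california=True,
    harm_capability="critical",
    methodology="internal-risk-rubric-v2",   # hypothetical method reference
    assessed_on=date(2026, 2, 1),
    assessor="AI Risk Team",
)
print(json.dumps(asdict(record), default=str, indent=2))
```

Keeping these records serializable (here, as JSON) supports the article's closing advice: even a "not catastrophic-capable" determination should be retained as audit evidence, with its methodology reference intact.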
Phase 2: Disclosure and Safeguards (Weeks 5-8)
| Action | Owner |
|---|---|
| Prepare catastrophic capability disclosures (if applicable) | Legal + AI Team |
| Submit disclosures to California Office of AI Safety | CCO |
| Implement or verify safeguards for catastrophic-capable systems | Security |
| Establish incident reporting procedures for critical/catastrophic harm | Incident Response |
Phase 3: Ongoing (Continuous)
| Action | Owner |
|---|---|
| Annual third-party safety audit (catastrophic-capable systems) | Audit + Security |
| Monitor California implementing regulations | Legal |
| Update disclosures when capabilities change | AI Team |
| Test incident reporting procedures via tabletop | IR Team |
The Bottom Line for California Operators
California SB 53 creates a novel regulatory regime based on AI capability rather than actual harm. Organizations developing or deploying advanced AI systems in California must:
- Assess capability – Can your AI cause critical or catastrophic harm?
- Disclose proactively – Don’t wait for harm to occur; disclose capability
- Implement safeguards – Controls proportionate to capability
- Report incidents – 72-hour timeline for critical/catastrophic harm
The most important action: conduct a good-faith capability assessment for every AI system. Document your methodology. If you determine a system is not catastrophic-capable, keep the assessment as audit evidence.
