
A real-time system for tracking structural fragility in frontier AI labs.

FaultLine tracks the dependencies, concentrations, and governance gaps that shape whether labs can scale responsibly.

  • 5 labs tracked
  • 7 dimensions scored
  • Events indexed
  • 0–10 fragility scale
01 / THE PROBLEM

Governance monitors capabilities and incidents, but nobody tracks the structural fragility of AI labs themselves.

  • Capabilities: benchmarks, evals, model cards
  • Safety practice: commitments, audits, red-teaming
  • Structural fragility: dependencies, lock-in, governance debt
  • Incident response: post-hoc harm cataloging
02 / THE APPROACH

FaultLine scores organizational risk across seven dimensions using only publicly verifiable evidence.

Compute
Cloud
Policy
Demand
Resilience
Societal impact
Talent & governance

Each dimension is scored 0–2 via binary checklist items tied to public evidence.
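The checklist-to-score pipeline above can be sketched in a few lines. The dimension names come from the list above; the 0–2 scoring rule (fraction of checklist items met) and the rescaling of the summed total onto the 0–10 fragility scale are illustrative assumptions, not FaultLine's actual code.

```python
# Hypothetical sketch of FaultLine-style scoring. Assumptions: each of the
# seven dimensions holds binary checklist items, passed items map linearly
# to a 0-2 dimension score, and the sum (max 14) is rescaled to the
# tracker's 0-10 fragility scale.

DIMENSIONS = [
    "compute", "cloud", "policy", "demand",
    "resilience", "societal impact", "talent & governance",
]

def dimension_score(items: list[bool]) -> float:
    """Score one dimension 0-2 as the fraction of checklist items met."""
    if not items:
        return 0.0
    return 2.0 * sum(items) / len(items)

def fragility_score(checklists: dict[str, list[bool]]) -> float:
    """Sum the seven dimension scores (max 14) and rescale onto 0-10."""
    total = sum(dimension_score(checklists.get(d, [])) for d in DIMENSIONS)
    return round(10.0 * total / (2.0 * len(DIMENSIONS)), 1)

example = {d: [True, False, True] for d in DIMENSIONS}
print(fragility_score(example))  # ~1.33 per dimension -> 6.7 on the 0-10 scale
```

Keeping the per-dimension score bounded at 2 means no single dimension can dominate the composite, which matches the tracker's emphasis on breadth of exposure over any one failure mode.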

03 / KEY FINDING

The highest-risk lab scores nearly 3× the next closest peer, signaling concentrated fragility at the frontier.

[Table: per-lab scores (Lab, Score, Fragility)]

[Chart: fragility scores over time]

04 / USE CASES

One dashboard, four audiences, zero paywalls.

  • Researchers: a scaffold for empirical work on AI political economy
  • Policymakers: a quick read on concentration, dependency, and pressure points
  • Journalists: a structured lead generator for weak signals before crises
  • Labs themselves: a mirror for resilience gaps and governance debt
05 / LIMITATIONS

This tracker has important limitations. Treat scores as a starting point for analysis, not definitive assessments.

  • Public information only — private deals, internal metrics, and unreported events are not captured
  • Binary indicators — nuance and degree are not well represented by pass/fail items
  • Lag — news may lag actual events by days or weeks
  • Selection bias — source selection affects what events are captured
  • Not predictive — fragility scores measure exposure, not likelihood of negative outcomes