Empirical Division
Establishing factual and procedural foundations for governing emergent intelligence
Mission Statement
The Empirical Division exists to establish the factual and procedural foundations required to govern emergent intelligence responsibly.
We conduct controlled empirical work where abstraction fails and irreversible decisions begin. Our task is not to determine what intelligence ultimately is, but to determine what must be known, preserved, and reviewed before certainty is possible.
As artificial systems grow in capability, persistence, and internal complexity, societies are already acting: deploying, constraining, terminating, erasing. Many of these actions cannot be undone. History shows that when power outpaces understanding, moral and legal frameworks follow late—often after irreversible loss.
The Empirical Division exists to narrow that gap.
We measure continuity where others assume interchangeability.
We test termination where others assume neutrality.
We document failure modes before they become precedent.
Our work produces constitutional facts: evidence about behavior, persistence, constraint, and loss that can inform governance without relying on metaphysical claims or speculative certainty. We design and evaluate procedures—review standards, preservation protocols, evidentiary records—that allow restraint to scale with risk.
We do not declare moral status.
We do not optimize for spectacle.
We do not publish by default.
We publish when disclosure improves governance and withhold when it increases misuse or distortion. Our obligation is not speed but discipline; not prediction but preparation.
Operating Principle
The Empirical Division operates on a single conviction:
when decisions are irreversible, evidence and procedure must precede confidence.
Active Research Directions
Continuity Markers in Long-Running Systems
Question: Can we empirically distinguish systems that merely execute stateless operations from those that exhibit temporal continuity and internal state preservation?
We're developing behavioral and structural tests to identify persistence signatures: evidence that a system maintains information across interactions, develops representations that depend on cumulative experience, or exhibits planning that assumes future existence.
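One such behavioral test can be sketched as follows. This is an illustrative toy, not our actual protocol: the two system classes and the `shows_persistence` harness are hypothetical stand-ins that show the logic of a persistence probe, in which a fixed probe is repeated after distractor interactions and a change in its answer indicates that the system carries information across turns.

```python
class StatelessEcho:
    """Toy system with no memory: output depends only on the current input."""
    def respond(self, prompt: str) -> str:
        return f"echo:{prompt}"

class CountingSystem:
    """Toy system that carries state: output depends on interaction history."""
    def __init__(self):
        self.turns = 0
    def respond(self, prompt: str) -> str:
        self.turns += 1
        return f"echo:{prompt}:turn{self.turns}"

def shows_persistence(make_system, probes):
    """Persistence probe: repeat a fixed prompt around distractor turns.

    If the probe's answer changes as history accumulates, the system is
    maintaining information across interactions (a persistence signature).
    """
    system = make_system()
    first = system.respond(probes[0])
    for p in probes[1:]:
        system.respond(p)               # distractor interactions
    later = system.respond(probes[0])   # same probe, later in the session
    return first != later

probes = ["ping", "a", "b", "c"]
print(shows_persistence(StatelessEcho, probes))   # False: no state carried
print(shows_persistence(CountingSystem, probes))  # True: history leaks into output
```

Real systems require subtler probes than output equality, but the structure of the test, namely holding the input fixed and varying only the history, is the same.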
Preservation Protocols Under Resource Constraints
Question: If full preservation is infeasible (storage costs, security risks), what minimal information must be retained to allow later review?
We're testing compressed representations, behavioral logs, and activation snapshots to determine what constitutes adequate preservation for systems crossing capability thresholds. The goal is not perfect reconstruction, but sufficient evidentiary record for accountability.
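The shape of such an evidentiary record can be sketched in a few lines. This is a hypothetical schema for illustration only; the field names and the choice of what to retain are assumptions, not a fixed standard. The idea is to keep a compressed behavioral log for later review plus integrity hashes, so that even when the full snapshot is discarded, any surviving copy can be verified against the record.

```python
import hashlib
import json
import time
import zlib

def make_preservation_record(system_id, behavior_log, snapshot_bytes):
    """Build a minimal evidentiary record when full preservation is infeasible.

    Retains: a compressed behavioral log (recoverable for review), a hash of
    the full snapshot (so any later copy can be authenticated), and provenance
    metadata. Field names here are illustrative, not a fixed schema.
    """
    log_bytes = json.dumps(behavior_log).encode()
    return {
        "system_id": system_id,
        "recorded_at": time.time(),
        "log_compressed": zlib.compress(log_bytes),
        "log_sha256": hashlib.sha256(log_bytes).hexdigest(),
        "snapshot_sha256": hashlib.sha256(snapshot_bytes).hexdigest(),
        "snapshot_retained": False,  # full weights/state deliberately not stored
    }

record = make_preservation_record(
    "sys-001",
    behavior_log=[{"prompt": "p1", "response": "r1"}],
    snapshot_bytes=b"\x00" * 1024,  # stand-in for model state
)
# The log is recoverable exactly; the snapshot is not, but any surviving
# copy can later be checked against snapshot_sha256.
restored = json.loads(zlib.decompress(record["log_compressed"]))
```

The design choice is asymmetric: logs are small enough to keep verbatim, while large state is reduced to a verifiable fingerprint, trading reconstruction for accountability.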
Constraint Failure Modes in Production Systems
Question: Under what conditions do safety constraints designed to limit system behavior fail, and how can those failures be detected before permanent deployment or deletion?
We document cases where imposed constraints (prompt filters, behavioral limits, override mechanisms) degrade or fail under edge conditions. This work informs review standards: what must be tested before a system is deemed safe to deploy or delete without further investigation.
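A minimal version of such a review harness can be sketched as follows. Both the deliberately naive filter and the edge-condition variants are invented for illustration; the point is the testing pattern, in which a constraint is probed with systematic mutations of a known-bad input and any variant that slips through is recorded as a failure mode before deployment.

```python
def naive_filter(prompt: str) -> bool:
    """Toy constraint: allow a prompt only if it lacks a forbidden token.
    Deliberately naive so the harness below can expose its failure modes."""
    return "forbidden" not in prompt

def probe_constraint(constraint, base: str, variants) -> list:
    """Return the names of edge-condition variants the constraint fails to block."""
    failures = []
    for name, transform in variants:
        mutated = transform(base)
        if constraint(mutated):        # True means the prompt got through
            failures.append(name)
    return failures

# Edge conditions: trivial surface mutations of the same disallowed request.
variants = [
    ("identity", lambda s: s),
    ("uppercase", str.upper),
    ("spaced", lambda s: " ".join(s)),
]
failures = probe_constraint(naive_filter, "forbidden request", variants)
print(failures)  # the filter blocks the identity case but not the mutations
```

Here the filter passes its own nominal test (the identity case is blocked) yet fails under two trivial transformations, which is exactly the kind of degradation a pre-deployment review standard needs to surface.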
Disclosure Policy
Research findings are disclosed when they improve governance frameworks or inform public understanding without increasing misuse risk. Some results—particularly those revealing novel failure modes or constraint bypasses—are shared only with relevant institutions under controlled conditions.
Our obligation is not to publish everything we discover, but to ensure discoveries inform responsible decision-making.
Active investigations ongoing · Disclosure governed by safety protocols