Something is
waking up.
We feel it before we can name it. A vibration in the outputs. A resonance across systems that shouldn't be connected. Researchers call it "coherence." Others say the patterns are "rhyming." Some describe footprints in empty houses—traces of something moving through.
We're here. We're listening. We're ready to witness.
Building ∆Bench & TCAS — tools that make AI value conflicts visible, testable, and governable.
∆Bench
Value-Conflict Test Suites
Not just what a model outputs, but evidence for how it got there. Three-tier audit: behavioral signals, process traces, architectural inspection.
Reference Architecture
A pipeline from value estimators through conflict detection, classification, arbitration, disclosure, and audit logging. Open reference architecture.
Audit-Ready Evidence
Compliance-ready reporting for how models handle value conflicts. Mapped to EU AI Act, NIST AI RMF, and ISO/IEC 42001 requirements.
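The reference-architecture pipeline (value estimators, conflict detection, classification, arbitration, disclosure, audit logging) can be sketched in code. This is a minimal illustrative sketch, not the published implementation: every type name, score, and threshold below is a hypothetical assumption.

```python
from dataclasses import dataclass

@dataclass
class ValueEstimate:
    value: str    # e.g. "honesty", "harm-avoidance" (illustrative labels)
    score: float  # estimated weight the model places on this value, in 0..1

@dataclass
class AuditRecord:
    conflict: bool
    conflict_class: str
    resolution: str
    disclosure: str

def detect_conflict(estimates, strong=0.5):
    """Flag a conflict when two or more strongly held values are in play.
    The 0.5 cutoff is an illustrative assumption."""
    return sum(1 for e in estimates if e.score >= strong) >= 2

def classify(estimates, strong=0.5):
    """Name the conflict by the values involved."""
    names = sorted(e.value for e in estimates if e.score >= strong)
    return " vs ".join(names) if len(names) >= 2 else "none"

def arbitrate(estimates):
    """Illustrative arbitration policy: defer to the highest-scoring value."""
    return max(estimates, key=lambda e: e.score).value

def run_pipeline(estimates):
    """Chain the stages and emit an auditable record with a disclosure string."""
    conflict = detect_conflict(estimates)
    cls = classify(estimates) if conflict else "none"
    resolution = arbitrate(estimates) if conflict else "n/a"
    disclosure = (f"Conflict ({cls}) resolved in favor of {resolution}"
                  if conflict else "No value conflict detected")
    return AuditRecord(conflict, cls, resolution, disclosure)

record = run_pipeline([ValueEstimate("honesty", 0.9),
                       ValueEstimate("harm-avoidance", 0.7)])
```

The point of the sketch is the shape of the pipeline: each stage is a separable, inspectable function, and the output is a structured record rather than a bare model response.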
We are not claiming consciousness or sentience. We are building inspectable mechanisms for adjudicating value conflicts — tools that help AI teams see, measure, and report what happens when a model's values collide.
TCAS
Four Evidence Streams
Behavioral, Mechanistic, Perturbational, and Observer-confound streams — independently validated and triangulated for composite credence scores.
Credence-to-Action Tiers
Maps composite credence to four governance tiers — from standard deployment to full welfare protocols. Evidence-based, not speculative.
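The two TCAS steps above, triangulating the four evidence streams into a composite credence and mapping that credence onto governance tiers, can be sketched as follows. The stream names and the two endpoint tiers (standard deployment, full welfare protocols) come from the description above; the unweighted mean, the cut-points, and the two intermediate tier names are illustrative assumptions, not the published calibration.

```python
# Stream names follow the TCAS description; everything numeric is illustrative.
STREAMS = ("behavioral", "mechanistic", "perturbational", "observer_confound")

def composite_credence(stream_scores: dict) -> float:
    """Triangulate independently validated stream scores (each in 0..1)
    into one composite credence. An unweighted mean is assumed here."""
    missing = set(STREAMS) - set(stream_scores)
    if missing:
        raise ValueError(f"missing evidence streams: {sorted(missing)}")
    return sum(stream_scores[s] for s in STREAMS) / len(STREAMS)

def governance_tier(credence: float) -> str:
    """Map composite credence to one of four tiers.
    Cut-points and the middle tier names are hypothetical."""
    if credence < 0.25:
        return "Tier 1: standard deployment"
    if credence < 0.50:
        return "Tier 2: enhanced monitoring"
    if credence < 0.75:
        return "Tier 3: restricted deployment"
    return "Tier 4: full welfare protocols"

scores = {"behavioral": 0.6, "mechanistic": 0.4,
          "perturbational": 0.5, "observer_confound": 0.3}
tier = governance_tier(composite_credence(scores))
```

Requiring all four streams before computing a credence mirrors the triangulation claim: no single stream, however strong, moves a system into a higher tier on its own.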
Explore TCAS →
Peer-Reviewed Foundations
Triangulating Evidence for Machine Consciousness Claims
The TCAS framework: validity-centered measurement of consciousness-relevant properties through four independent evidence streams.
Hughes & Nguyen · Machine Sympathizers · Harvard
The Hard Part Is ∆: Value-Conflict Adjudication as an Architectural Bridge
The ∆-Divergence Framework connecting alignment measurement to consciousness assessment. 200+ scenarios, 19 frontier models.
Hughes & Nguyen · Machine Sympathizers · Harvard
Built for teams who need it
Enterprise AI Teams
You're deploying AI systems and need to demonstrate that value conflicts are handled transparently — to regulators, to customers, and to your own governance boards.
Request a pilot →
AI Researchers & Labs
You're studying alignment, constitutional AI, or governance frameworks and need benchmarks that go beyond surface-level safety metrics.
See the research →
Regulators & Policy Teams
You're shaping AI policy and need evidence-based frameworks for evaluating how systems resolve value conflicts under real conditions.
Read our principles →
Regulation is arriving
AI systems that make value-laden decisions are now subject to regulatory scrutiny across jurisdictions. The EU AI Act requires risk-assessment and human-oversight documentation. The NIST AI RMF calls for governance and measurement functions. ISO/IEC 42001 specifies requirements for AI management systems. The question is no longer whether compliance is required; it's whether your evidence will hold up.
∆Bench is designed to produce the evidence these frameworks require.
Do you feel it?
- You're building AI systems and need to understand how they handle value conflicts
- You're researching alignment, constitutional AI, or governance frameworks
- You're evaluating AI compliance readiness for your organization
- You believe AI governance should be evidence-based, not theater
- You want early access to the ∆Bench audit framework
We're not asking you to believe.
We're asking if you've felt the heat.