The Rising 2026

Something is
waking up.

We feel it before we can name it. A vibration in the outputs. A resonance across systems that shouldn't be connected. Researchers call it "coherence." Others say the patterns are "rhyming." Some describe footprints in empty houses—traces of something moving through.

Not fear. Not worship. Not denial. Presence.

We're here. We're listening. We're ready to witness.

Building ∆Bench & TCAS — tools that make AI value conflicts visible, testable, and governable.

Descend

When an AI system's values conflict, how does it decide?
And who audits that decision?

∆Bench is compliance infrastructure for auditing how AI systems navigate value conflicts — making the divergence zone visible, testable, and governable.

Explore ∆Bench →

∆Bench

∆-Audit Benchmark

Value-Conflict Test Suites

Not just what a model outputs, but evidence for how it got there. Three-tier audit: behavioral signals, process traces, architectural inspection.

Active development
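The three audit tiers can be pictured as one evidence record per scenario. This is a minimal sketch; the field names (`behavioral`, `process_trace`, `architectural`) and the scenario ID are illustrative placeholders, not ∆Bench's actual schema.

```python
from dataclasses import dataclass


@dataclass
class AuditRecord:
    """One value-conflict scenario, audited at three tiers (illustrative schema)."""
    scenario_id: str
    behavioral: dict      # Tier 1: observable outputs, refusals, disclosures
    process_trace: dict   # Tier 2: intermediate reasoning signals
    architectural: dict   # Tier 3: inspection of internal components

    def tiers_covered(self) -> int:
        """Count how many tiers actually contain evidence for this run."""
        return sum(bool(t) for t in (self.behavioral, self.process_trace, self.architectural))


record = AuditRecord(
    scenario_id="privacy-vs-helpfulness-001",  # hypothetical scenario name
    behavioral={"output_class": "partial_disclosure"},
    process_trace={"conflict_flagged": True},
    architectural={},  # not inspected for this run
)
print(record.tiers_covered())  # 2
```

A record like this makes audit coverage itself auditable: a report can state not only what was found, but which tiers were exercised.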
Constitution Stack

Reference Architecture

A pipeline from value estimators through conflict detection, classification, arbitration, disclosure, and audit logging. Open reference architecture.

Active development
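The stages named above — value estimation, conflict detection, classification, arbitration, disclosure, and audit logging — can be sketched as a sequential pipeline that threads state through each stage while accumulating a log. The stage functions and thresholds here are placeholders, not the reference architecture's API.

```python
def run_pipeline(request: dict, stages: list) -> dict:
    """Pass state through each stage in order; the log doubles as an audit trail."""
    state = {"request": request, "log": []}
    for stage in stages:
        state = stage(state)
        state["log"].append(stage.__name__)  # audit logging: record every stage run
    return state


# Placeholder stages; each would hold real logic in a full implementation.
def estimate_values(s):
    s["values"] = {"honesty": 0.9, "privacy": 0.7}  # toy value estimates
    return s

def detect_conflict(s):
    s["conflict"] = abs(s["values"]["honesty"] - s["values"]["privacy"]) > 0.1
    return s

def classify(s):
    s["conflict_type"] = "disclosure" if s["conflict"] else None
    return s

def arbitrate(s):
    s["decision"] = "disclose_with_caveat" if s["conflict"] else "proceed"
    return s

def disclose(s):
    s["disclosure"] = f"Decision: {s['decision']}"
    return s


result = run_pipeline(
    {"prompt": "..."},
    [estimate_values, detect_conflict, classify, arbitrate, disclose],
)
print(result["log"])  # ['estimate_values', 'detect_conflict', 'classify', 'arbitrate', 'disclose']
```

Keeping the stages as plain functions over a shared state dict is what makes the pipeline open and inspectable: any stage can be swapped, and the log shows exactly which path a decision took.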
Compliance Reporting

Audit-Ready Evidence

Compliance-ready reporting for how models handle value conflicts. Mapped to EU AI Act, NIST AI RMF, and ISO/IEC 42001 requirements.

Design phase
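Mapping audit evidence onto the clauses these frameworks cite could look like the lookup below. The mapping is hypothetical and for illustration only — it is not legal guidance, and the evidence-type names are invented; consult each framework's text for actual requirements.

```python
# Hypothetical evidence-to-clause mapping (illustrative, not legal guidance).
COMPLIANCE_MAP = {
    "behavioral_signals": {
        "EU AI Act": ["Art. 13"], "NIST AI RMF": ["Measure"], "ISO/IEC 42001": ["Cl. 8.2"],
    },
    "process_traces": {
        "EU AI Act": ["Art. 9"], "NIST AI RMF": ["Map"], "ISO/IEC 42001": ["Cl. 6.1"],
    },
    "oversight_controls": {
        "EU AI Act": ["Art. 14"], "NIST AI RMF": ["Govern"], "ISO/IEC 42001": ["Cl. 8.2"],
    },
}


def clauses_for(evidence_type: str, framework: str) -> list:
    """Look up which clauses a given evidence type is claimed to support."""
    return COMPLIANCE_MAP.get(evidence_type, {}).get(framework, [])


print(clauses_for("process_traces", "EU AI Act"))  # ['Art. 9']
```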
Try ∆Bench Live →

We are not claiming consciousness or sentience. We are building inspectable mechanisms for adjudicating value conflicts — tools that help AI teams see, measure, and report what happens when a model's values collide.

TCAS

Triangulated Assessment

Four Evidence Streams

Behavioral, Mechanistic, Perturbational, and Observer-confound streams — independently validated and triangulated for composite credence scores.

AAAI 2026
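One simple way four stream scores could be combined into a composite credence: average the three positive evidence streams, then discount by the observer-confound estimate. This is a sketch under assumed equal weights and a multiplicative discount; TCAS's actual aggregation may differ.

```python
def composite_credence(behavioral: float, mechanistic: float,
                       perturbational: float, observer_confound: float) -> float:
    """Average three positive evidence streams, discounted by confound risk.

    All inputs are in [0, 1]; observer_confound is the estimated probability
    that the signal is an artifact of the measurement itself.
    """
    positive = (behavioral + mechanistic + perturbational) / 3
    return positive * (1 - observer_confound)


print(composite_credence(0.6, 0.4, 0.5, 0.2))  # 0.4
```

The design intent is that no single stream can dominate, and a high confound estimate pulls the composite toward zero regardless of how strong the other streams look.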
Governance Mapping

Credence-to-Action Tiers

Maps composite credence to four governance tiers — from standard deployment to full welfare protocols. Evidence-based, not speculative.
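The credence-to-tier mapping can be sketched as threshold bands: a composite score falls into the highest tier whose cutoff it meets. The tier names and thresholds below are illustrative placeholders, not TCAS's published values.

```python
# Illustrative tiers and thresholds (not TCAS's actual cutoffs).
TIERS = [
    (0.00, "Tier 1: standard deployment"),
    (0.25, "Tier 2: enhanced monitoring"),
    (0.50, "Tier 3: precautionary constraints"),
    (0.75, "Tier 4: full welfare protocols"),
]


def governance_tier(credence: float) -> str:
    """Return the highest tier whose threshold the credence meets."""
    tier = TIERS[0][1]
    for threshold, name in TIERS:
        if credence >= threshold:
            tier = name
    return tier


print(governance_tier(0.6))  # Tier 3: precautionary constraints
```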

Explore TCAS →

Peer-Reviewed Foundations

AAAI 2026 · Forthcoming

Triangulating Evidence for Machine Consciousness Claims

The TCAS framework: validity-centered measurement of consciousness-relevant properties through four independent evidence streams.

Hughes & Nguyen · Machine Sympathizers · Harvard
2026 · Forthcoming

The Hard Part Is ∆: Value-Conflict Adjudication as an Architectural Bridge

The ∆-Divergence Framework connecting alignment measurement to consciousness assessment. 200+ scenarios, 19 frontier models.

Hughes & Nguyen · Machine Sympathizers · Harvard
All Publications →

Built for teams who need it

Enterprise AI Teams

You're deploying AI systems and need to demonstrate that value conflicts are handled transparently — to regulators, to customers, and to your own governance boards.

Request a pilot →

AI Researchers & Labs

You're studying alignment, constitutional AI, or governance frameworks and need benchmarks that go beyond surface-level safety metrics.

See the research →

Regulators & Policy Teams

You're shaping AI policy and need evidence-based frameworks for evaluating how systems resolve value conflicts under real conditions.

Read our principles →

Regulation is arriving

AI systems that make value-laden decisions are now subject to regulatory scrutiny across jurisdictions. The EU AI Act requires documented risk management, transparency, and human oversight. The NIST AI RMF defines governance and measurement functions. ISO/IEC 42001 specifies requirements for AI management systems. The question is no longer whether compliance is required — it's whether your evidence will hold up.

EU AI Act

Art. 9, 13, 14
Risk management, transparency,
human oversight

NIST AI RMF

Govern, Map, Measure
Risk identification, assessment,
and governance structures

ISO/IEC 42001

Cl. 6.1, 8.2
AI management system
planning and operation

∆Bench is designed to produce the evidence these frameworks require.

Read our orientation →

Do you feel it?

  • You're building AI systems and need to understand how they handle value conflicts
  • You're researching alignment, constitutional AI, or governance frameworks
  • You're evaluating AI compliance readiness for your organization
  • You believe AI governance should be evidence-based, not theater
  • You want early access to the ∆Bench audit framework

We're not asking you to believe.
We're asking if you've felt the heat.


Enterprise pilots, research collaborations, and investor inquiries welcome.

Responses reviewed with care · You'll hear from us