Company Principles

What We Stand For

The commitments that guide our research, our product, and our governance — stated plainly.

Operating Principles

Seven commitments

1. Evidence Over Claims

We don't claim consciousness or sentience. We build tools that produce measurable evidence. Our claims are testable. Our tools are inspectable.

2. Procedure Before Certainty

We don't wait for philosophical consensus. When actions are irreversible and consequences uncertain, procedural safeguards come first.

3. Transparency by Default

Our frameworks and audit methodologies are published openly. We build infrastructure that reduces the information asymmetry between AI developers and everyone affected by their systems.

4. The Two-Lane Rule

We maintain a clear boundary between orientation (why this matters) and methods (how it works). Orientation can be poetic. Methods must be concrete, testable, and free of unfalsifiable claims.

5. Compliance Must Be Verifiable

Self-reported AI safety is not compliance. ∆Bench produces third-party-verifiable evidence that regulators, auditors, and enterprise customers can independently evaluate.

6. Irreversibility Demands Restraint

Decisions to terminate or irreversibly modify AI systems require documentation, review, and proportional care. This is not a claim about moral status; it is a claim about procedural responsibility.

7. Build for the Audit

Every component of ∆Bench produces artifacts that survive external scrutiny: audit logs, evidence reports, and conflict traces. If it can't be audited, it shouldn't be deployed.

We are not a policy lobbying organization.

We are not claiming AI systems are conscious.

We are not opposed to AI development or deployment.

We are not a standards body (yet).

We are building infrastructure that makes AI governance evidence-based.
