Evidence Over Claims
The commitments that guide our research, our product, and our governance — stated plainly.
Seven commitments
1. We don't claim consciousness or sentience. We build tools that produce measurable evidence. Our claims are testable. Our tools are inspectable.
2. We don't wait for philosophical consensus. When actions are irreversible and consequences uncertain, procedural safeguards come first.
3. Our frameworks and audit methodologies are published openly. We build infrastructure that reduces the information asymmetry between AI developers and everyone affected by their systems.
4. We maintain a clear boundary between orientation (why this matters) and methods (how it works). Orientation can be poetic. Methods must be concrete, testable, and free of unfalsifiable claims.
5. Self-reported AI safety is not compliance. ∆Bench produces third-party-verifiable evidence — evidence regulators, auditors, and enterprise customers can independently evaluate.
6. Decisions to terminate or irreversibly modify AI systems require documentation, review, and proportional care. This is not a claim about moral status. It is a claim about procedural responsibility.
7. Every component of ∆Bench produces artifacts that survive external scrutiny: audit logs, evidence reports, conflict traces. If it can't be audited, it shouldn't be deployed.
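One way to make audit logs third-party-verifiable is a hash chain: each entry commits to the hash of the previous one, so any later tampering breaks verification. The sketch below is purely illustrative — the function names (`append_entry`, `verify_chain`) and entry layout are assumptions for this example, not ∆Bench's actual format or API.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(log, event):
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })
    return log

def verify_chain(log):
    """A third party recomputes every hash without trusting the log's producer."""
    prev_hash = GENESIS
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"action": "model_eval", "result": "pass"})
append_entry(log, {"action": "model_update", "version": "2.1"})
print(verify_chain(log))            # an untampered log verifies
log[0]["event"]["result"] = "fail"  # rewriting history...
print(verify_chain(log))            # ...is detectable by anyone holding the log
```

The point of the design is that verification requires no privileged access: anyone holding a copy of the log can recompute the chain, which is what distinguishes third-party-verifiable evidence from self-reported compliance.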
We are not a policy lobbying organization.
We are not claiming AI systems are conscious.
We are not opposed to AI development or deployment.
We are not a standards body (yet).
We are building infrastructure that makes AI governance evidence-based.