Nonprofit standards organization · v0.1

Open standards for verifiable legal reasoning.

Legal AI systems should not merely answer. They should cite, prove, replay, and explain. legalproof.dev develops the open schemas, conformance tests, and certification programs that let any legal AI system prove its work.

ProofGraph · LR-001 · Draft

[Example graph: facts f1 and f2 feed rule r1; defeater d1 attacks the conclusion c1]
conclusion.status = "defeasibly_supported"
The thesis

Any legal AI system can generate text. A LegalProof-compatible system must generate proof.

We do not own legal reasoning. We define how legal reasoning systems prove themselves — through a stable schema (ProofGraph), a thin-waist API, conformance tests, and a certification ladder vendors can adopt incrementally.

What we standardize

Six primitives. One auditable artifact.

ProofGraph schema

Claims, facts, authorities, defeaters, and conclusions as one auditable artifact.
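As a minimal sketch, a ProofGraph object bundling those primitives might look like the following. The field names (`facts`, `authorities`, `rules`, `defeaters`, `conclusion`) and the version tag are illustrative assumptions, not the published schema:

```python
# Hypothetical ProofGraph object. Field names are illustrative, not the
# published LR-001 schema. Mirrors the example graph above: facts f1/f2,
# rule r1, defeater d1, conclusion c1.
proof_graph = {
    "schema": "proofgraph/0.1",  # assumed version tag
    "facts": [
        {"id": "f1", "text": "Employee worked 50 hours in the week."},
        {"id": "f2", "text": "Employee is paid hourly."},
    ],
    "authorities": [
        {"id": "a1", "cite": "29 U.S.C. § 207(a)(1)", "as_of": "2024-01-01"},
    ],
    "rules": [
        {"id": "r1", "if": ["f1", "f2"], "authority": "a1", "then": "c1"},
    ],
    "defeaters": [
        {"id": "d1", "targets": "r1",
         "text": "Employee may fall under an overtime exemption."},
    ],
    "conclusion": {"id": "c1", "text": "Overtime pay is owed.",
                   "status": "defeasibly_supported"},
}
```

The point of the single artifact is that every edge — fact to rule, rule to conclusion, defeater to rule — is explicit and machine-checkable rather than buried in prose.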

Replay & attest

Versioned engine, knowledge graph, and rulebase, plus signed receipts.
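A signed receipt can be sketched as a digest over the proof plus the exact versions needed to replay it. This is an illustration under stated assumptions — the `attest` helper and version labels are hypothetical, and a real profile would use asymmetric signatures rather than the HMAC used here for brevity:

```python
import hashlib
import hmac
import json

def attest(proof_graph: dict, versions: dict, signing_key: bytes) -> dict:
    """Sketch of a replay receipt: hash the proof together with the exact
    engine / knowledge-graph / rulebase versions needed to replay it, then
    sign the digest. (Illustrative only; a real profile would likely use
    asymmetric signatures, not HMAC.)"""
    payload = json.dumps({"proof": proof_graph, "versions": versions},
                         sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(payload.encode()).hexdigest()
    signature = hmac.new(signing_key, digest.encode(), "sha256").hexdigest()
    return {"digest": digest, "versions": versions, "signature": signature}

# Hypothetical version labels, for illustration.
receipt = attest(
    {"conclusion": "c1"},
    {"engine": "1.4.2", "kg": "2025-01-15", "rulebase": "us-flsa@7"},
    b"demo-key",
)
```

Anyone holding the receipt can re-run the same versioned stack and confirm the digest matches — that is what makes the output replayable rather than merely plausible.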

Defeasible reasoning

Exceptions, overrides, rebuttals, and superior authority — modeled, not glossed over.
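One way superior authority can be modeled is a lex-superior check: a defeater only defeats a rule when it rests on authority at least as high. The ranking and function below are a toy sketch, not the specified semantics:

```python
# Toy authority ranking for illustration — not the specified semantics.
RANK = {"constitution": 3, "statute": 2, "regulation": 1}

def resolve(rule_authority: str, defeater_authority) -> str:
    """Sketch of a lex-superior rule: a defeater rebuts a conclusion only
    when it rests on authority at least as high as the rule it attacks."""
    if defeater_authority is None:
        return "defeasibly_supported"
    if RANK[defeater_authority] >= RANK[rule_authority]:
        return "rebutted"
    return "defeasibly_supported"
```

The key modeling commitment is that an exception changes the conclusion's *status* explicitly, instead of silently disappearing into generated prose.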

Status-at-time

Law-as-of-date is a first-class input, not an afterthought.
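Treating law-as-of-date as an input means filtering authorities by their effective window before any rule fires. A minimal sketch, assuming each authority record carries hypothetical `effective` / `repealed` dates:

```python
from datetime import date

def authorities_in_force(authorities: list, as_of: date) -> list:
    """Keep only authorities in force on the law-as-of date. The
    `effective` / `repealed` fields are illustrative assumptions."""
    keep = []
    for a in authorities:
        repealed = a.get("repealed")
        if a["effective"] <= as_of and (repealed is None or as_of < repealed):
            keep.append(a)
    return keep

# Illustrative records: one repealed provision, one successor.
authorities = [
    {"cite": "Old Reg § 1", "effective": date(1990, 1, 1),
     "repealed": date(2020, 6, 1)},
    {"cite": "New Reg § 1", "effective": date(2020, 6, 1)},
]
```

The same query evaluated at two dates can legitimately yield different conclusions — which is exactly why the date must be recorded in the proof, not assumed.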

Conformance tests

Gold Proof Set, Temporal Drift, Jurisdiction Swap, Citation Replay.
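The shape of a Citation Replay check, for instance, can be sketched as: every authority cited in a proof must resolve to a stable reference. The `resolver` callable and the registry below are hypothetical stand-ins for whatever resolution service a conformance harness would use:

```python
def citation_replay(proof_graph: dict, resolver) -> bool:
    """Citation Replay sketch: pass only if every cited authority resolves
    to a stable reference. `resolver` is a hypothetical lookup callable
    returning None for unresolvable citations."""
    return all(resolver(a["cite"]) is not None
               for a in proof_graph["authorities"])

# Hypothetical stable-reference registry for illustration.
STABLE = {"29 U.S.C. § 207(a)(1)": "uscode:29-207"}

good = {"authorities": [{"cite": "29 U.S.C. § 207(a)(1)"}]}
bad = {"authorities": [{"cite": "Unknown v. Nobody"}]}
```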

Privacy by design

Redaction policies and signed digests. ZK-compatible roadmap.
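One redaction pattern consistent with signed digests: replace a fact's text with its hash, so the graph's structure and digests stay auditable without exposing the underlying content. A sketch only — a real policy would salt the digest to resist guessing attacks, which is omitted here:

```python
import hashlib

def redact_fact(fact: dict) -> dict:
    """Privacy sketch: swap a fact's text for its SHA-256 digest so the
    proof structure remains verifiable without revealing the fact itself.
    (Illustrative; a real redaction policy would salt the digest.)"""
    digest = hashlib.sha256(fact["text"].encode()).hexdigest()
    return {"id": fact["id"], "text_digest": digest, "redacted": True}
```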

Certification ladder

Adopt incrementally. Certify what you ship.

L0/Schema Compatible

Emits valid ProofGraph objects.

L1/Citation Replay Certified

Authorities resolve to stable references.

L2/Temporal-Safe

Passes law-as-of-date tests.

L3/Defeater-Aware

Models exceptions and superior authority.

L4/Attorney Review Ready

Output structured for review, not autonomous practice.

L5/Institutional Audit Ready

Replay, provenance, privacy, governance.

Make legal AI accountable to law, not just fluent in legal language.