Evidentiary Standard
Reasoning is not an afterthought. It is the primary forensic artifact. This methodology defines the minimum content required for AI decision records to satisfy evidentiary scrutiny in high-stakes domains.
Capture at Decision Time
Reasoning must be recorded at the exact instant of inference, not reconstructed afterward. Post-hoc explanations are opinions. Contemporaneous traces are evidence.
{
  "reasoning_record": {
    "timestamp": "2026-03-15T09:23:47.620Z",
    "model_version": "v2.3.1",
    "input_hash": "blake3:...",
    "reasoning_method": "chain-of-thought",
    "severity_score": 4
  }
}
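A record of this shape can be assembled at inference time, before the decision is returned to the caller. The sketch below is illustrative, not normative: the field names mirror the example above, but the builder function, its parameters, and the use of stdlib `blake2b` as a stand-in for the blake3 digest are all assumptions.

```python
import hashlib
from datetime import datetime, timezone

def build_reasoning_record(input_payload: bytes, reasoning_steps: list[str],
                           model_version: str, severity_score: int) -> dict:
    """Assemble the record contemporaneously, before the decision leaves the system.

    blake2b (Python stdlib) stands in here for the blake3 digest shown above.
    """
    return {
        "reasoning_record": {
            "timestamp": datetime.now(timezone.utc).isoformat(timespec="milliseconds"),
            "model_version": model_version,
            "input_hash": "blake2b:" + hashlib.blake2b(input_payload).hexdigest(),
            "reasoning_method": "chain-of-thought",
            "reasoning_steps": reasoning_steps,  # the contemporaneous trace itself
            "severity_score": severity_score,
        }
    }

record = build_reasoning_record(b"loan application #1042", ["step 1", "step 2"], "v2.3.1", 4)
```

The point of the pattern is ordering: the trace is serialized as part of producing the decision, so there is nothing left to reconstruct afterward.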
Approved Methods
The methodology approves three reasoning capture methods that satisfy specificity requirements under evidentiary standards:
- Chain-of-thought: Token-level reasoning path with intermediate steps
- SHAP: Feature attribution with directional contribution values
- LIME: Local interpretable model-agnostic explanations
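An ingestion layer can enforce the approved-methods list mechanically. A minimal sketch, assuming a lowercase method identifier in each record and a SHAP-style attribution payload shaped as a feature-to-contribution map (both assumptions, not part of the methodology text):

```python
# Assumed canonical identifiers for the three approved capture methods.
APPROVED_METHODS = {"chain-of-thought", "shap", "lime"}

def validate_method(record: dict) -> None:
    """Reject records whose capture method falls outside the approved set."""
    method = record.get("reasoning_method", "").lower()
    if method not in APPROVED_METHODS:
        raise ValueError(f"unapproved reasoning method: {method!r}")

# A SHAP-style entry carries directional feature attributions:
shap_record = {
    "reasoning_method": "shap",
    "attributions": {"income": +0.42, "debt_ratio": -0.31, "tenure": +0.07},
}
validate_method(shap_record)  # passes; an unlisted method would raise ValueError
```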
Automated Quality
Human review alone is insufficient at scale. The methodology mandates automated quality checks and severity scoring as structural requirements, not optional enhancements.
Quality invariant: VERIFY_REASONING → SCORE_SEVERITY → COMMIT_RECORD
Automated checks run before human review. The machine validates the machine.
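The invariant can be expressed as a three-stage pipeline in which no record reaches storage without passing verification and receiving a score. This is a toy sketch: the emptiness check and the step-count severity heuristic are placeholders, not the methodology's actual scoring rules.

```python
def verify_reasoning(record: dict) -> None:
    """VERIFY_REASONING: automated check that a non-empty trace exists."""
    steps = record.get("reasoning_steps", [])
    if not steps or any(not s.strip() for s in steps):
        raise ValueError("empty or missing reasoning steps")

def score_severity(record: dict) -> dict:
    """SCORE_SEVERITY: assumed toy heuristic (capped step count), for illustration only."""
    record["severity_score"] = min(5, len(record["reasoning_steps"]))
    return record

committed: list[dict] = []

def commit_record(record: dict) -> None:
    """COMMIT_RECORD: only verified, scored records are persisted."""
    committed.append(record)

def quality_pipeline(record: dict) -> None:
    """Enforce the invariant ordering: verify, then score, then commit."""
    verify_reasoning(record)
    commit_record(score_severity(record))
```

Because verification raises before scoring or commit can run, an unverifiable record can never acquire a severity score or land in storage, which is the structural guarantee the invariant states.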
Verifiable Integrity
Long-term evidentiary value requires more than tamper-evidence. It requires cryptographic binding to the decision record and public deposit for institutional redundancy.
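One way to achieve cryptographic binding is to seal each record with a MAC computed over its canonical serialization. The sketch below uses stdlib HMAC-SHA256 over sorted-key JSON as an illustrative stand-in; the `seal` field name and the inline demo key are assumptions (a real deployment would hold the key in an HSM and likely use asymmetric signatures for public verifiability).

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustration only: production keys live in an HSM

def _canonical(record: dict) -> bytes:
    """Deterministic serialization so the MAC binds to content, not formatting."""
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode()

def seal(record: dict) -> dict:
    """Bind a tamper-evident MAC to the record before it is deposited."""
    record["seal"] = hmac.new(SIGNING_KEY, _canonical(record), hashlib.sha256).hexdigest()
    return record

def verify_seal(record: dict) -> bool:
    """Recompute the MAC over the record minus its seal; any edit breaks the match."""
    claimed = record.pop("seal")
    computed = hmac.new(SIGNING_KEY, _canonical(record), hashlib.sha256).hexdigest()
    record["seal"] = claimed
    return hmac.compare_digest(claimed, computed)
```

Public deposit then adds redundancy on top of this binding: even if one archive is lost or altered, independently held copies with matching seals preserve the evidentiary chain.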
⚠ Companion Standard Deprecated
ADR Specification v0.1, previously referenced as the companion container standard to this methodology, was deprecated and archived in March 2026. Current implementations should reference AgDR-Phoenix v1.8 for the active container specification. This methodology remains authoritative for evidentiary content requirements.
Public Deposit
Deposited under CC BY 4.0. The methodology is a permanent public record, not a proprietary specification. Archived March 2026.
{
  "deposit": {
    "archive_org": "reasoning-methodology-v1.0",
    "date": "2026-03",
    "license": "CC-BY-4.0",
    "steward": "Genesis Glass Foundation / Fondation Genèse Cristal"
  }
}
Legal Alignment
- Canada Evidence Act s.31.1: Contemporaneous capture satisfies the "reliability of the electronic records system" test by construction
- CBCA s.122: Specific, verifiable reasoning encodes Director duty of care into the decision record itself
- EU AI Act (High-Risk): Article 14 human oversight requirements met through immutable reasoning preservation
- Reg BI (US): Best interest obligation documentation through explainable, severity-scored records
Distinction from Logging
Traditional ML logging captures what happened. Reasoning Capture Methodology captures why it happened: with specificity, at decision time, and with automated quality verification.
Structural monitoring verifies system health. Reasoning Capture verifies decision defensibility. Both are necessary. Neither substitutes for the other.
Contemporaneous
Captured at inference time, not reconstructed. The reasoning trace is sealed before the decision exits the kernel.
Specific
Chain-of-thought, SHAP, or LIME. No black-box summaries. Every weight and branch is accountable.
Verifiable
Automated quality checks and severity scoring. The machine validates its own reasoning before human review.