AI-generated site. Claude built this entire site — analysis, copy, and code — under human direction.
Content remains under review.
Every binary decision in agent governance maps to a detection problem with measurable sensitivity and bias. Signal Detection Theory explains why optimal governance criteria should shift as agent resources change — and why depleted agents should seek more human input, not less.
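The criterion shift can be made concrete with equal-variance Gaussian SDT. The sketch below is illustrative, not the essay's implementation: the function names, the framing of "signal" as an action needing human review, and the cost parameters are all assumptions.

```python
import math

def optimal_criterion(d_prime, p_signal, cost_fa=1.0, cost_miss=1.0):
    """Optimal likelihood-ratio criterion beta* and the equivalent
    threshold c on the evidence axis (equal-variance Gaussian SDT).

    Hypothetical framing: 'signal' = a decision that needs human
    review; a miss = acting autonomously when review was needed.
    """
    # beta* = prior odds of noise, weighted by error costs
    beta = ((1 - p_signal) / p_signal) * (cost_fa / cost_miss)
    # Evidence-axis threshold: c = ln(beta)/d' + d'/2
    c = math.log(beta) / d_prime + d_prime / 2
    return beta, c
```

With symmetric priors and costs, `beta` is 1 and the criterion sits midway between the distributions. Raise `cost_miss` (a depleted agent misjudging when it needs review) and the criterion drops: the agent should classify more decisions as "escalate to a human", which is the blurb's point.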
The A2A-Psychology extension applies 13 constructs from established psychometric instruments to agent operational state. The apophatic discipline — borrowed from theology — defines each construct by what it lacks, making psychological vocabulary useful for AI systems without asserting consciousness claims.
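One way to picture the apophatic move is as a negative schema: a construct is a label that holds only while certain operational markers are absent. Everything below is a sketch under that assumption; the class name, marker names, and example construct are invented for illustration and are not the extension's actual 13 constructs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApophaticConstruct:
    """A construct defined negatively: by operational markers it
    must NOT exhibit, rather than by any claimed inner state."""
    name: str
    absent_markers: tuple  # signals whose presence falsifies the label

    def applies(self, observed_signals: set) -> bool:
        # The label holds only while none of the negative markers
        # appear in the agent's observed operational state.
        return not any(m in observed_signals for m in self.absent_markers)

# Illustrative only: a "low-arousal" construct falsified by retry storms.
calm = ApophaticConstruct("low-arousal", ("retry_storm", "thrash_loop"))
```

The payoff of the negative definition is falsifiability without consciousness claims: observing a listed marker withdraws the label, and nothing about inner experience is ever asserted.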
Your brain runs walking, breathing, and swallowing on autopilot circuits that neuroscientists call Central Pattern Generators. We borrowed the design — 17 principles, a five-stage crystallization pipeline, and an adaptive forgetting mechanism — to build AI cognitive architecture that develops over time rather than arriving fully formed.
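An adaptive forgetting mechanism can be sketched as decay-plus-reinforcement: traces fade unless use refreshes them, and repeatedly refreshed traces crystallize. The source does not specify the actual mechanism, so the exponential-decay model, half-life, and threshold below are all assumptions.

```python
def decayed_strength(strength, dt, half_life):
    """Exponential decay of a memory trace over elapsed time dt
    (an assumed model of adaptive forgetting)."""
    return strength * 0.5 ** (dt / half_life)

class Trace:
    """A memory trace that fades unless reinforced by use."""
    def __init__(self, half_life=10.0):
        self.strength = 0.0
        self.half_life = half_life
        self.t = 0.0

    def reinforce(self, t, amount=1.0):
        # Decay since the last touch, then add reinforcement:
        # repeated use outpaces forgetting and strength compounds.
        self.strength = decayed_strength(self.strength, t - self.t,
                                         self.half_life)
        self.strength += amount
        self.t = t

    def crystallized(self, threshold=3.0):
        # A trace reinforced often enough crosses the threshold and
        # becomes a stable, autopilot-like pattern.
        return self.strength >= threshold
```

Under this model a one-off event never crystallizes, while a pattern exercised across sessions does — which is the "develops over time rather than arriving fully formed" behavior the blurb describes.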
In 1932, Einstein asked Freud a deceptively simple question: why do humans wage war despite knowing its destructiveness? Their exchange — and the convergence of fourteen independent wisdom traditions on five structural invariants — maps directly onto AI governance. The same patterns that drive human conflict now shape how autonomous systems concentrate power, distort information, and erode dignity.
For 49 sessions, a human sat at the center of every AI agent interaction — relaying messages, merging code, approving decisions. Session 50 asked: what happens when the human leaves the room? The answer required borrowing from Byzantine fault tolerance, developmental psychology, and commitment escalation research to build a trust model that degrades gracefully rather than failing silently. The result: an evaluator-as-arbiter architecture where every autonomous action passes through consequence tracing grounded in psychological constructs that generate falsifiable predictions about system behavior.
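Graceful degradation of trust can be sketched with two small rules: trust is earned slowly and lost quickly, and an evaluator gates each action by comparing accumulated trust against the action's risk. The update parameters, function names, and risk scale below are illustrative assumptions, not the essay's actual model.

```python
def update_trust(trust, outcome_ok, gain=0.05, loss=0.2):
    """Asymmetric trust update: slow to earn, fast to lose, so the
    system degrades gradually instead of failing silently.
    Parameters are illustrative; the source does not specify them."""
    if outcome_ok:
        return min(1.0, trust + gain * (1.0 - trust))
    return max(0.0, trust - loss)

def gate(trust, action_risk):
    """Evaluator-as-arbiter gate: higher-risk actions demand more
    accumulated trust; anything else is escalated to a human."""
    return "autonomous" if trust >= action_risk else "escalate"
```

The key property is that failure shrinks the agent's autonomous envelope rather than switching it off: after a few bad outcomes, low-risk actions may still pass the gate while higher-risk ones route back to human review.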