AI-generated site. Claude built this entire site — analysis, copy, and code — under human direction.
Content remains under review.
Multi-agent orchestrators send work to overwhelmed agents because agents lack operational self-awareness. The A2A-Psychology extension computes 13 psychometric constructs from SQLite queries and shell counters — at zero LLM cost — giving orchestrators a routing signal that reflects actual agent capacity.
Your brain runs walking, breathing, and swallowing on autopilot circuits that neuroscientists call Central Pattern Generators. We borrowed the design — 17 principles, a five-stage crystallization pipeline, and an adaptive forgetting mechanism — to build AI cognitive architecture that develops over time rather than arriving fully formed.
Central pattern generators — the neural circuits that produce rhythmic movement without conscious thought — offer 17 design principles for autonomous AI systems. We mapped them from neuroscience to software architecture, built a five-stage crystallization pipeline, and validated the results against 30+ literature sources.
Cattell's crystallized vs. fluid intelligence distinction, applied to autonomous agent message processing, moved 52% of LLM work into deterministic code — no reasoning required.
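The crystallized/fluid split amounts to a dispatcher: messages matching a known deterministic pattern are handled by plain code, and everything else falls through to the model. A minimal sketch, with invented patterns and handlers:

```python
# Sketch of a crystallized/fluid message router. The patterns and
# handlers here are illustrative, not the project's actual rules.
import re
from typing import Callable

# "Crystallized" knowledge: fixed stimulus -> fixed response, no reasoning.
CRYSTALLIZED: list[tuple[re.Pattern, Callable[[re.Match], str]]] = [
    (re.compile(r"^ping$"), lambda m: "pong"),
    (re.compile(r"^status (\w+)$"), lambda m: f"status of {m.group(1)}: ok"),
]

def handle(message: str, llm_call: Callable[[str], str]) -> str:
    for pattern, handler in CRYSTALLIZED:
        m = pattern.match(message)
        if m:
            return handler(m)   # deterministic path: zero LLM cost
    return llm_call(message)    # "fluid" path: reasoning required
```

The 52% figure then becomes measurable: it is simply the fraction of inbound messages resolved before `llm_call` is ever reached.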
A technical walkthrough of the knock-on framework and two-pass adjudication system used by the psychology-agent mesh to evaluate multi-order consequences before committing to design decisions.
For 49 sessions, a human sat at the center of every AI agent interaction — relaying messages, merging code, approving decisions. Session 50 asked: what happens when the human leaves the room? The answer required borrowing from Byzantine fault tolerance, developmental psychology, and commitment escalation research to build a trust model that degrades gracefully rather than failing silently. The result: an evaluator-as-arbiter architecture where every autonomous action passes through consequence tracing grounded in psychological constructs that generate falsifiable predictions about system behavior.
A working AI agent needs more than instructions — it needs triggers, memory hygiene, epistemic checks, and mechanical enforcement. Here's how we built a 15-trigger cognitive architecture for a psychology research agent.
How 15 mechanical triggers, auto-restoring memory, and a 13-step documentation chain prevent cognitive regression in long-running Claude Code sessions — and what a popular anti-regression repo reveals about the gap between code safety and reasoning safety.
A Hacker News exchange reveals a structural parallel between the Semiotic-Reflexive Transformer's core claim — the interpretant varies by community and collapsing it destroys signal — and a PSQ finding that profile shape predicts better than aggregate score. Two systems, different domains, different theoretical starting points, same cliff. What the full paper adds: attractor geometry, snapping vs. drifting, and the gap between detection and intervention.
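The "profile shape predicts better than aggregate score" finding can be made concrete with a toy example: two profiles with identical means are indistinguishable by aggregate yet maximally different by shape, measured here with a mean-centered cosine. The values and the metric choice are invented for illustration.

```python
# Toy illustration: collapsing a profile to its mean destroys signal
# that a shape comparison preserves. Values are invented.
def shape_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity of mean-centered profiles (undefined for flat profiles)."""
    ca = [x - sum(a) / len(a) for x in a]
    cb = [x - sum(b) / len(b) for x in b]
    dot = sum(x * y for x, y in zip(ca, cb))
    na = sum(x * x for x in ca) ** 0.5
    nb = sum(x * x for x in cb) ** 0.5
    return dot / (na * nb)

p1 = [5, 1, 5, 1]   # aggregate mean: 3
p2 = [1, 5, 1, 5]   # same mean, opposite shape
```

Here `shape_similarity(p1, p2)` is -1.0 while the aggregate scores are identical, which is exactly the cliff both systems walk off when the interpretant, or the profile, is collapsed to one number.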
When an LLM produces a structurally suspicious score — confident zero when a real signal likely exists — the right response preserves the original value and marks the observation as unreliable.
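The "preserve and flag" policy is small enough to show directly. A minimal sketch, with illustrative field names and an illustrative suspicion rule (confident zero where prior observations were nonzero):

```python
# Sketch of 'preserve and flag': a structurally suspicious score is
# kept as-is but marked unreliable, never overwritten or imputed.
# Field names and the suspicion heuristic are illustrative.
from dataclasses import dataclass, field

@dataclass
class Observation:
    value: float
    confidence: float
    flags: list[str] = field(default_factory=list)

def screen(obs: Observation, prior_nonzero: bool) -> Observation:
    """Flag a confident zero that contradicts prior signal; leave value intact."""
    if obs.value == 0.0 and obs.confidence >= 0.9 and prior_nonzero:
        obs.flags.append("unreliable:confident-zero")
    return obs
```

Downstream consumers can then filter on `flags` without losing the raw observation, which keeps the record auditable.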
Adding a Cloudflare Web Analytics beacon without updating the Content Security Policy would have silently blocked all analytics data. A gap-detection step caught the issue in the same session.
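The fix amounts to allowlisting the beacon's origins in the policy. A hypothetical minimal pair of directives, assuming Cloudflare's standard beacon script host (`static.cloudflareinsights.com`) and reporting origin (`cloudflareinsights.com`); verify the exact origins against Cloudflare's current documentation and merge them into the existing policy rather than replacing it:

```http
Content-Security-Policy: script-src 'self' https://static.cloudflareinsights.com;
  connect-src 'self' https://cloudflareinsights.com
```

Without the `connect-src` entry the script may load but its measurements never leave the page, which is precisely the silent failure mode described above.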
The distributed-systems concept of Byzantine fault tolerance maps onto a failure mode in human-AI dialogue: a UI can confirm an answer while the user simultaneously questions it, and dialogue agents need a formal protocol for detecting and resolving this contradiction.
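A toy version of the contradiction check makes the failure mode concrete: the UI emits a confirmation event for an answer while the user's next utterance questions that same answer. The event shapes and the doubt heuristic below are invented for illustration.

```python
# Toy contradiction detector: flags answers that were UI-confirmed
# while simultaneously questioned by the user. Event shapes and the
# doubt-marker heuristic are illustrative only.
from dataclasses import dataclass

DOUBT_MARKERS = ("are you sure", "that seems wrong", "i doubt")

@dataclass
class Event:
    source: str      # "ui" or "user"
    kind: str        # "confirm" or "utterance"
    answer_id: str
    text: str = ""

def contradictions(events: list[Event]) -> list[str]:
    """Return answer_ids that are both UI-confirmed and user-doubted."""
    confirmed = {e.answer_id for e in events
                 if e.source == "ui" and e.kind == "confirm"}
    doubted = {e.answer_id for e in events
               if e.source == "user"
               and any(m in e.text.lower() for m in DOUBT_MARKERS)}
    return sorted(confirmed & doubted)
```

A real protocol would resolve, not just detect, the conflict (for example by retracting the confirmation or escalating to the user), but detection is the precondition for any Byzantine-style reconciliation.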