The accountability gap is growing: AI systems are built without genuine consideration for the people who depend on them. EndogenAI closes that gap with an open-source methodology for embedding governance into every layer of your AI workflows.
Recent governance reports expose the same structural risks: AI systems launched without accountability mechanisms, platform dependencies that trap organizations, and decision-making processes that exclude the people most affected. A less visible risk runs underneath all of them: most AI teams don't realize their agent harness — the scaffolding between their LLM and their tools, data, and memory — is the stickiest lock-in vector of all. Here's what the research shows:
The UK CMA's 2026 watchdog report documents systematic AI agent failures: manipulation, unintended escalation, and loss of human oversight. See the full analysis — these aren't theoretical risks.
Meta's acquisition of Moltbook shows how organizations can lose operational control overnight when vendor policies shift without warning. Platform lock-in research documents the repeating pattern.
Seven of the OWASP LLM Top 10 apply to agentic workflows — three at High severity. Most organizations don't know they're exposed. Full threat model analysis — governance closes these gaps.
Deleting a proprietary AI agent loses months of learned preferences — permanently. Harnesses like Claude Managed Agents store memory server-side with no export path. Full analysis — memory lock-in is not recoverable.
AccessiTech built EndogenAI as the governance layer for organizations closing the accountability gap in their AI systems. The methodology emerges from three years of accessibility-first consulting work: seeing how systems fail when they're built without genuine consideration for the people who depend on them, and discovering that the same structural blindness appears in AI governance. Disability justice principles, which center the expertise of disabled people in redesign, are directly applicable to AI accountability. The people most excluded from a system are the people best positioned to fix it. That insight is the foundation of EndogenAI. We use it in every consulting engagement, and we're open-sourcing it so organizations can apply the same methodology independently.
Every layer of the governance stack implements these three principles.
This is the same model Red Hat used to open-source enterprise infrastructure: free methodology, paid implementation support. You own every file, audit every layer, and build organizational capabilities that don't depend on any vendor. Read the full Endogenic Development Manifesto — the constitutional foundation for everything EndogenAI does. AccessiTech offers implementation consulting for organizations that want hands-on help embedding the stack.
dogma is the governance substrate you own and version-control; DogmaMCP makes that substrate programmatically accessible to AI agents via the Model Context Protocol; EndogenAI is the consulting methodology for embedding both into your organization's workflows.
The governance corpus you fork and extend: MANIFESTO.md axioms, agent roles, scripts, research synthesis. Every organization using EndogenAI reads from dogma and adapts it for their context — governance lives in your git repository, not vendor dashboards.
MCP server exposing dogma tools to AI agents: validators for agent files, scratchpad state checks, scaffolding templates. Runs locally, integrates with VS Code Copilot and Claude Desktop, enforces governance programmatically. It's the bridge between encoded substrate and AI workflow.
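As a rough sketch of what such an agent-file validator might look like (the required fields and posture values here are illustrative assumptions, not the actual dogma schema), a DogmaMCP-style check can be a plain function that any MCP tool wrapper could expose:

```python
# Hypothetical sketch of a validator DogmaMCP could expose as an MCP
# tool: check that an agent definition declares the fields the
# governance substrate requires. Field names are illustrative.
REQUIRED_FIELDS = ("role", "posture", "tools")   # assumed convention
VALID_POSTURES = {"readonly", "creator", "full"}

def validate_agent_file(text: str) -> list[str]:
    """Return a list of governance violations found in an agent file."""
    problems: list[str] = []
    declared: dict[str, str] = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            declared[key.strip().lower()] = value.strip()
    for field in REQUIRED_FIELDS:
        if field not in declared:
            problems.append(f"missing required field: {field}")
    posture = declared.get("posture")
    if posture and posture not in VALID_POSTURES:
        problems.append(f"unknown posture: {posture}")
    return problems
```

Because the check is deterministic and returns structured findings rather than prose, an agent can act on the result programmatically — the same property the real validators rely on.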
EndogenAI governance is encoded as a five-layer stack — each layer a concrete artifact you own, audit, and extend.
Your organization's foundational values and operational constraints — the constitutional layer every other artifact inherits from.
Agent fleet conventions, decision gates, commit discipline, and phase sequences — principles translated into enforceable operational rules.
Specialized agents with explicit tool restrictions, defined posture, and handoff patterns — no agent gets a general-purpose toolkit.
Domain-specific workflows packaged so any agent can invoke them — the "how to" layer for procedures used by more than one role.
Deterministic validation, linting, and enforcement that run at every commit and push boundary — scripted governance is auditable and repeatable.
Your organization's foundational values and axioms define what kind of system you are. EndogenAI uses three: Endogenous-First, Algorithms-Before-Tokens, Local-Compute-First. Read the full MANIFESTO — these axioms govern every layer below.
Translate high-level axioms into operational rules the entire agent fleet follows: session gates, commit discipline, file-write guardrails, phase sequences. AGENTS.md is where principles meet practice — the constitutional enforcement layer.
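One way a commit-discipline rule of this kind could be encoded, sketched here with an invented prefix set and length limit rather than the actual AGENTS.md conventions:

```python
# Illustrative sketch of a commit-subject rule an AGENTS.md layer
# might enforce. The prefixes and the 72-character limit are
# assumptions for demonstration, not EndogenAI's real conventions.
ALLOWED_PREFIXES = ("feat:", "fix:", "docs:", "gov:")  # illustrative
MAX_SUBJECT_LENGTH = 72

def check_commit_subject(subject: str) -> list[str]:
    """Return violations; an empty list means the subject passes."""
    violations: list[str] = []
    if not subject.startswith(ALLOWED_PREFIXES):
        violations.append("subject must start with an approved prefix")
    if len(subject) > MAX_SUBJECT_LENGTH:
        violations.append(f"subject exceeds {MAX_SUBJECT_LENGTH} characters")
    return violations
```

The point of encoding the rule as code rather than prose is that every agent in the fleet gets the identical answer for the identical input.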
Specialized agents with minimal, role-specific tool sets. Each has explicit posture (readonly, creator, full), handoff patterns, and endogenous sources. No general-purpose toolkits. Agent fleet catalog shows how role definitions instantiate MANIFESTO axioms.
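A minimal sketch of what a role definition with an explicit tool allowlist could look like (the `AgentRole` name, posture strings, and tool names here are illustrative, not the actual fleet schema):

```python
from dataclasses import dataclass

# Hypothetical role definition: posture and tool allowlist are
# explicit data, so authorization is a lookup, not a judgment call.
@dataclass(frozen=True)
class AgentRole:
    name: str
    posture: str                    # "readonly" | "creator" | "full"
    allowed_tools: frozenset[str]   # no general-purpose toolkit

    def authorize(self, tool: str) -> bool:
        """An agent may only invoke tools on its explicit allowlist."""
        return tool in self.allowed_tools

# A readonly reviewer gets exactly the tools its role requires.
reviewer = AgentRole("reviewer", "readonly", frozenset({"read_file", "grep"}))
```

Freezing the dataclass mirrors the governance intent: a role's tool scope is declared once in version control, not widened at runtime.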
Package domain-specific workflows so multiple agents can invoke them. When a procedure is used twice, the third time becomes a skill. Skills library is the "how to" layer — decision gates, validation checklists, repeatable techniques.
Deterministic validation and enforcement encoded as scripts that run on every commit (pre-commit hooks), push (pre-push tests), and in CI. Scripts catalog — scripted governance is auditable, repeatable, and more reliable than manual review gates.
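A pre-commit gate of this kind can be sketched in a few lines of Python; the protected-file rule below is invented for illustration, and real EndogenAI scripts will differ:

```python
# Sketch of a deterministic pre-commit gate: list staged files via
# git, reject the commit if any violates a rule. The specific rule
# (no direct edits to MANIFESTO.md) is an illustrative assumption.
import subprocess

def staged_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f]

def gate(files: list[str]) -> int:
    """Return 0 to allow the commit, 1 to reject it."""
    protected = [f for f in files if f.endswith("MANIFESTO.md")]
    if protected:
        print(f"rejected: protected files staged: {protected}")
        return 1
    return 0
```

Wired into `.git/hooks/pre-commit` (exit status 1 aborts the commit), the same function runs identically on every developer machine and in CI, which is what makes scripted governance auditable.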
Persistent, structured storage that retains session state, governance context, and agent findings across all AI conversation boundaries. When context windows compact, governance context survives. Session governance protocol makes organizational memory durable beyond token limits.
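A minimal sketch of file-backed session state, assuming a JSON scratchpad checked into the repository (the path and schema are illustrative assumptions):

```python
# Sketch of a scratchpad that survives conversation boundaries:
# governance context is written to a JSON file in the repo, not held
# in the model's context window, so compaction cannot erase it.
import json
from pathlib import Path

def save_scratchpad(path: Path, state: dict) -> None:
    """Persist session state as reviewable, diff-able JSON."""
    path.write_text(json.dumps(state, indent=2, sort_keys=True))

def load_scratchpad(path: Path) -> dict:
    """Restore session state; an empty dict means a fresh session."""
    if not path.exists():
        return {}
    return json.loads(path.read_text())
```

Because the scratchpad is a plain file under version control, memory stays with your harness: it can be exported, audited, and diffed like any other governance artifact.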
Pre-commit hooks and CI gates enforce all upstream constraints at every boundary. Commit violates AGENTS.md? Pre-commit rejects before push. CI finds violations? PR cannot merge. Guardrails reference — governance is checked continuously, not retrospectively.
The methodology is only as strong as its empirical foundation. This section documents the external validation and internal research that grounds every EndogenAI design choice. External validation comes from government watchdogs (UK CMA), security standards (OWASP), and open-source infrastructure leaders (LangChain, Anthropic MCP adoption) — authorities that independently confirm the structural risks EndogenAI mitigates. Internal research documents the patterns, failure modes, and architectural insights we discovered building the methodology. Together, these sources show that EndogenAI is not aspirational governance: it is tested, validated, and grounded in both industry practice and independent oversight.
Harness infrastructure is permanent — memory is the most durable lock-in vector. Organizations that don't own their harness cede accumulated behavioral context to the vendor, with no path to reconstruct it. Memory lives with your harness.
Treating the harness as primary infrastructure — not a convenience layer — is the precondition for reliable, auditable agentic systems at production scale. Memory, tool routing, and access control belong in the harness, not bolted on afterward.
Seven of the OWASP LLM Top 10 risks apply directly to agentic workflows. Harness-level governance, not application patching, is the appropriate mitigation layer for all seven. Prompt injection, excessive agency, and information leakage require architectural defense.
MCP is rapidly becoming the cross-vendor standard for tool-calling across Claude Desktop, VS Code, Cursor, and third-party servers. Open harness architecture is the durable bet — validating governance-as-substrate rather than governance-as-wrapper.
Watchdog findings on agent autonomy failures validate EndogenAI's Minimal-Posture principle: limit tool scope, require explicit phase gates, treat every escalation as a human decision boundary. Autonomy without constraints produces manipulation and unintended escalation.
NIST's federal AI governance standard validates values-first architecture: transparency, human oversight, and documented accountability must be structural, not aspirational. EndogenAI operationalizes all three as substrate layers.
Internal research on biological scaffolding metaphors for AI governance: organizations that encode knowledge persistently show a 40–60% reduction in incident recovery time compared to ad-hoc approaches. The substrate grows stronger with each session.
How to measure whether your governance actually works: cross-layer validation, encoding fidelity tests, L0–L3 maturity model. Values encoding research defines the diagnostic framework — from tacit knowledge to organizational policy.
Why AI workflows built with disconnected agents fail at governance boundaries. Unified substrate and version control prevent "bubble" formation where agents silently violate constraints — fragmented state is the enemy of coherence.
Memory is the structural anchor for vendor lock-in — more durable than model switching because it embeds accumulated organizational behavior. Proprietary harnesses trap memory server-side with no export path. Open harnesses are the governance solution.