How The Nervous System Maps to the EU AI Act (Before August 2026)

  • Mar 4

Enforcement of the EU AI Act begins in August 2026. If you deploy AI systems that interact with EU citizens, you need to comply. The requirements are specific. The penalties are real.

We built The Nervous System to govern our own AI agents. It turns out the 7 rules we wrote for practical reasons map directly to the EU AI Act's requirements.

The Relevant Articles

Five articles in the EU AI Act define what high-risk AI systems must do:

Article 9: Risk Management - Maintain a risk management system throughout the AI lifecycle. Identify and analyze known and foreseeable risks. Adopt risk mitigation measures.

Article 12: Record-Keeping - Automatic logging of events. Traceability of AI system operation. Logs maintained for appropriate periods.

Article 13: Transparency - AI systems designed to be sufficiently transparent. Users can interpret and use the system's output appropriately.

Article 14: Human Oversight - AI systems designed for effective human oversight. Humans can understand the system's capabilities and limitations. Ability to intervene or interrupt.

Article 15: Accuracy, Robustness, Cybersecurity - Appropriate levels of accuracy. Resilient against errors and inconsistencies. Protected against unauthorized third-party manipulation.

How Our 7 Rules Map

| EU AI Act Article | Nervous System Rule | How It Satisfies |
|---|---|---|
| Art. 9 (Risk Management) | Untouchable + Preflight | 89+ files protected. Every edit checked against the risk list before execution. Violations logged automatically. |
| Art. 12 (Record-Keeping) | Write Progress + Worklog | Every action documented before execution. Session worklogs maintained. SHA-256 hash-chained audit trail, tamper-evident by design. |
| Art. 13 (Transparency) | Hand Off + Step Back | Continuous handoff documents show exactly what the AI did and why. Forced reflection cycles produce explicit reasoning records. |
| Art. 14 (Human Oversight) | Ask Before Touching + Delegate and Return | Logic changes require human approval; data changes can proceed. The agent always returns to the human after dispatching work. Kill switch for emergency shutdown. |
| Art. 15 (Robustness) | Dispatch Don't Do + Preflight | Complex tasks isolated to background agents. File protection prevents cascading failures. Audit chain detects any tampering. |
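The preflight idea in the table is simple to sketch. The following is an illustrative Python mock, not The Nervous System's actual implementation: file paths, function names, and the log format are all assumptions. The point is the shape of the control, where the check runs and blocks before the edit ever executes, and every blocked attempt is recorded.

```python
import time

# Hypothetical protected-file list (stand-in for the 89+ untouchable files).
PROTECTED = {"config/production.yaml", "src/auth/keys.py"}

# Blocked attempts are appended here automatically; nothing is silently dropped.
violations: list[dict] = []

def preflight(path: str) -> bool:
    """Return True if an edit to `path` may proceed; log and block otherwise."""
    if path in PROTECTED:
        violations.append({"path": path, "ts": time.time(), "action": "blocked"})
        return False
    return True

print(preflight("README.md"))               # True: not on the protected list
print(preflight("config/production.yaml"))  # False: blocked before execution
```

The key design choice is that the gate runs outside the model: the AI cannot talk its way past a set-membership check.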

Why Mechanical Enforcement Matters for Compliance

The EU AI Act does not care about your system prompt. "We told the AI not to do that" is not a compliance defense.

What regulators want to see:

  • Logs: We have a SHA-256 hash-chained audit trail. 56 violations logged, 0 bypassed. Every entry cryptographically linked to the previous one. Tamper with one and the chain breaks.

  • Prevention: Our preflight check blocked 32 unauthorized file edits before they happened. The AI never got the chance to cause damage.

  • Human control: Logic changes require human approval. The kill switch stops everything instantly. The agent always returns to the human after delegating work.

  • Transparency: Every session produces a handoff document and worklog. Anyone can trace exactly what the AI did, when, and why.
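The hash-chained audit trail mentioned above follows a well-known pattern. This is a minimal sketch of the general technique, assuming a simple JSON entry format; it is not the project's actual log schema. Each entry stores the SHA-256 hash of the previous entry, so changing any record invalidates every hash after it.

```python
import hashlib
import json

def append_entry(chain: list, event: dict) -> None:
    """Append an event, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps({"event": entry["event"], "prev": prev},
                             sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain: list = []
append_entry(chain, {"action": "edit", "file": "a.py"})
append_entry(chain, {"action": "blocked", "file": "config.yaml"})
print(verify(chain))                # True: chain intact
chain[0]["event"]["file"] = "b.py"  # tamper with the first record
print(verify(chain))                # False: the chain breaks
```

This is what "tamper-evident by design" means in practice: an auditor does not have to trust the log file, only re-run the verification.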

Before August 2026

If you operate AI systems that could be classified as high-risk under the EU AI Act, you need governance that goes beyond system prompts.

The Nervous System is open source, installs in one command, and maps to the five most relevant articles of the Act.

`npx mcp-nervous-system`

Full EU AI Act compliance mapping: api.100levelup.com/family/eu-ai-act.html

Built by Arthur Palyan at Levels of Self LLC.

GitHub | EU AI Act Mapping | npm
