
The 80% Rule: Why Your AI Should Ask Before It Acts

  • Mar 4
  • 2 min read

Here is the problem with every AI tool right now: models understand about 60% of what you ask, then run hard on that 60%.

The missing 40% is where people lose patience. Where they close the tab. Where they decide "AI is not ready yet." Not because the model is stupid, but because it never stopped to ask what you actually meant.

We built two tools to fix this. They ship free with the Nervous System MCP server, v1.3.0.

parse_user_intent: Stop guessing, start listening

This tool breaks your request into individual deliverables and scores its confidence from 0 to 100.

Below 80% confidence, it stops and asks you to clarify. Above 80%, it executes. Like a good coach who listens before giving advice.
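The gate itself is simple. Here is a minimal sketch of the 80% rule; the `ParsedIntent` shape is an assumption for illustration, not the server's documented schema:

```typescript
// Minimal sketch of the 80% rule. The ParsedIntent shape below is an
// illustrative assumption, not the server's actual output schema.
interface ParsedIntent {
  deliverables: string[];
  confidence: number; // 0-100
}

const CONFIDENCE_THRESHOLD = 80;

// Below the threshold, stop and ask; at or above it, execute.
function decide(intent: ParsedIntent): "clarify" | "execute" {
  return intent.confidence < CONFIDENCE_THRESHOLD ? "clarify" : "execute";
}
```

The point is that the threshold check happens before any tool call, so a vague request costs one question instead of a chain of wrong actions.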

Here is a real example. A user sent this:

"Test this theory, verify it, audit it, then apply it here and in arthur.html"

Without the parser, the model caught 1 of 5 deliverables. It grabbed "test this theory" and ran with it. The other four got lost.

With parse_user_intent, it caught all 5:

  1. Test the theory

  2. Verify the theory

  3. Audit the theory

  4. Apply it to the current context

  5. Apply it to arthur.html

The confidence score came back at 62%. Below threshold. So instead of guessing, it asked: "Which theory? What does 'apply' mean here - replace, append, or merge?"

That single clarification saved 3 wasted tool calls and about 40 seconds of the user waiting for the wrong output.
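For that request, a parse result could look something like this. The field names and structure are my assumption; the deliverables, score, and questions come straight from the example above:

```json
{
  "deliverables": [
    "Test the theory",
    "Verify the theory",
    "Audit the theory",
    "Apply it to the current context",
    "Apply it to arthur.html"
  ],
  "confidence": 62,
  "clarifications": [
    "Which theory?",
    "Does 'apply' mean replace, append, or merge?"
  ]
}
```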

classify_task_complexity: Right model for the right job

Not every task needs the biggest model. A pm2 restart does not need Opus. A file rename does not need Sonnet.

This tool scores tasks across 6 dimensions and assigns one of 3 tiers:

  • Tier 1 (Haiku) - Simple ops: restarts, status checks, file reads. 90% cost savings vs Opus.

  • Tier 2 (Sonnet) - Moderate work: file edits, config changes, targeted fixes. 60% cost savings vs Opus.

  • Tier 3 (Opus) - Complex reasoning: system audits, multi-file refactors, architecture decisions. Full power, justified spend.
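The routing logic can be sketched like this. The six dimension names and the cutoff values are illustrative assumptions; the post does not publish the actual rubric:

```typescript
// Hypothetical sketch of tier routing. The six dimension names and the
// cutoffs below are illustrative assumptions, not the published rubric.
type Tier = 1 | 2 | 3;

interface TaskScores { // each dimension scored 0-10
  scope: number;
  reasoning: number;
  risk: number;
  filesTouched: number;
  ambiguity: number;
  novelty: number;
}

function classifyTaskComplexity(s: TaskScores): Tier {
  const total =
    s.scope + s.reasoning + s.risk + s.filesTouched + s.ambiguity + s.novelty;
  if (total <= 12) return 1; // Haiku: restarts, status checks, file reads
  if (total <= 30) return 2; // Sonnet: edits, config changes, targeted fixes
  return 3;                  // Opus: audits, refactors, architecture
}
```

Whatever the real weights are, the design choice is the same: score cheaply first, then spend on the expensive model only when the score justifies it.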

We run 22 AI agents on a $12/month VPS. Without task routing, our API bill would be 3x what it is. Every request hitting Opus "just in case" is money you are burning for no reason.

How this fits with Auto Mode

Claude's new Auto mode (March 12) decides what Claude CAN do - which tools to call, how to chain them. The Nervous System governs HOW it behaves while doing it. They are complementary layers, not competitors.

Auto mode picks the actions. The Nervous System makes sure the model understands the request before acting, and routes the work to the right model tier afterward.

Try it now

Both tools ship in the free tier of the Nervous System MCP server.

  • GitHub: github.com/levelsofself/mcp-nervous-system

  • npm: npx mcp-nervous-system

  • Live endpoint: api.100levelup.com/mcp-ns/mcp
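If you use an MCP client such as Claude Desktop, a config entry along these lines should work, assuming the npm package runs as a stdio server (check the repo's README for the canonical setup):

```json
{
  "mcpServers": {
    "nervous-system": {
      "command": "npx",
      "args": ["mcp-nervous-system"]
    }
  }
}
```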

Install it. Send a vague request. Watch it ask you what you actually meant instead of guessing.

That is the 80% rule. If the model is not 80% sure what you want, it should ask - not assume.

Arthur Palyan builds AI infrastructure at levelsofself.com. The Nervous System governs 22 agents on a single VPS for $12/month.

 
 
 
