The Brain + Agents Model: How I Run My Entire Business From One Conversation
Most AI agent frameworks start with code. Ours starts with a conversation.
I sit in one terminal. I talk to the LLM brain. When something needs to happen - a file edited, a service deployed, a batch of emails sent - we write a task file and dispatch an agent. The agent runs in the background. I keep talking to the brain about strategy, priorities, and what comes next.
This is the Brain + Agents model. And it changes everything about how you work with AI.
The Workflow
Here is how it actually works, every day:
1. I open a conversation with the LLM brain.
2. We discuss what needs to happen: priorities, problems, opportunities.
3. When we identify a task, the brain writes it to a task file.
4. The brain dispatches a background agent: `claude -p "Read /root/tasks/task.md and execute." --max-turns 25`
5. The agent runs independently, following the same guardrails.
6. The brain comes back to me immediately: "Task dispatched. What's next?"
7. When the agent finishes, we check its work together.
I never wait. The brain never disappears into a 30-minute coding session. We stay in conversation while work happens in the background.
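Step 4 of the workflow can be sketched as a non-blocking spawn. This is a minimal sketch, not the project's actual dispatcher: the task-file-to-log naming scheme and the `cmd` override (included so the sketch can be exercised without the real CLI installed) are assumptions.

```python
import subprocess
from pathlib import Path

def dispatch_agent(task_path: str, max_turns: int = 25,
                   cmd: str = "claude") -> subprocess.Popen:
    """Spawn a background agent for a task file and return immediately.

    `cmd` defaults to the `claude` CLI shown in the workflow above; the
    log-file naming next to the task file is an illustrative assumption.
    """
    log_path = Path(task_path).with_suffix(".log")
    log = open(log_path, "w")
    proc = subprocess.Popen(
        [cmd, "-p", f"Read {task_path} and execute.",
         "--max-turns", str(max_turns)],
        stdout=log, stderr=subprocess.STDOUT,
    )
    # Control returns here immediately; the agent keeps running in the
    # background while the conversation continues.
    return proc
```

Because `Popen` does not wait for the child process, the brain can answer "Task dispatched. What's next?" as soon as this returns.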
Why This Is Different
Every other agent framework I have seen treats the AI as an executor. You give it a task, it runs, you wait for results.
The Brain + Agents model treats the AI as a partner. The brain is strategic. It sees the whole system. It knows what happened yesterday, what is running right now, and what matters most. The agents are workers. They handle specific tasks and report back.
This separation matters because:
- Strategy does not stop for execution. While an agent is building a feature, the brain and I are planning the next three moves.
- Context stays in one place. The brain maintains continuity across sessions through handoff files. Agents are disposable: they do one job and exit.
- Guardrails apply everywhere. Every agent runs `preflight.sh` before editing files. The same UNTOUCHABLE list protects the system whether it is the brain or a background worker making changes.
- Parallel work is natural. Three agents can run simultaneously while the brain monitors all of them.
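The guardrail check can be illustrated with a tiny path filter. The entries below are invented placeholders; the post does not show the real UNTOUCHABLE list or what `preflight.sh` actually does, so treat this as a sketch of the idea only.

```python
from pathlib import Path

# Invented placeholder entries; the real UNTOUCHABLE list is not shown
# in the post.
UNTOUCHABLE = ("/etc/", "/root/.ssh/", "/root/guardrails/")

def preflight(path: str) -> bool:
    """Return True if an agent may edit `path`, False if it is protected."""
    resolved = str(Path(path).resolve())
    return not any(resolved.startswith(prefix) for prefix in UNTOUCHABLE)
```

The point of running the same check for the brain and every background worker is that no code path can edit a protected file by accident, regardless of who dispatched the work.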
dispatch_to_llm: The Product Feature
We turned this workflow into an MCP tool. Any MCP-compatible client can call `dispatch_to_llm` to spawn a background agent for heavy tasks.
```json
{
  "task": "Audit all API endpoints and document response schemas",
  "max_turns": 15
}
```

The tool spawns the agent, returns the process ID and log file path, and the client keeps working. Dispatches are capped at 2 concurrent agents, and each requires at least 500 MB of free RAM, to keep the VPS from running out of memory.
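Those two limits (a concurrency cap of 2 and a 500 MB free-RAM floor) amount to a simple gate before spawning. How the real tool measures free RAM is not shown in the post, so this sketch takes it as a parameter.

```python
MAX_CONCURRENT = 2      # from the post: max 2 concurrent dispatches
MIN_FREE_RAM_MB = 500   # from the post: 500 MB of free RAM required

def can_dispatch(running_agents: int, free_ram_mb: int) -> bool:
    """Refuse a new dispatch if either limit would be violated."""
    return running_agents < MAX_CONCURRENT and free_ram_mb >= MIN_FREE_RAM_MB
```

Refusing the dispatch (rather than queueing it) keeps the failure mode visible in the conversation: the brain reports the limit and the human decides what to drop.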
I have not seen this in any other AI governance framework. Competitors focus on rules and enforcement; we added delegation as a first-class feature because that is how real work gets done.
The Vision
Right now, this runs on one VPS for one person. But the pattern scales.
Imagine every knowledge worker operating this way: one conversation with a strategic AI partner, background agents handling execution, guardrails preventing mistakes, audit trails proving compliance.
The infrastructure costs $12/month. The LLM subscription is $300/month. The value is in the model - brain for strategy, agents for execution, nervous system for governance.
We are building the management layer for AI agents. Not another chatbot. Not another framework. A way for humans and AI to work together where both sides do what they are best at.
The brain thinks. The agents work. The nervous system keeps everyone honest.
Built by Arthur Palyan at Levels of Self LLC.
GitHub | Live Demo | npm