Building a Personal AI Brain with 33 Subsystems: How I Turned My MacBook Into an Always-On Intelligence Network
Most people use AI as a chatbot. I use it as a nervous system.
Over the past year, I've architected what I call the Supersystem — a 33-subsystem personal AI infrastructure running 24/7 on my M3 Max MacBook Pro. At its core is Brain v13, a custom Hierarchical Artificial General Intelligence (HAGI) neural network that aggregates knowledge from every AI process on the machine. Feeding into it: a 2,095-bot Poe hive, a Queen Bee governance layer with 18 subsystems, 49 locally-running Ollama models, and a stack of daemons I've spent hundreds of hours tuning.
This post is about why I built it, how it works, and what you can actually learn from going this deep.
Why One AI Isn't Enough
The moment you start relying on a single AI model for serious work, you hit its ceiling fast. Models hallucinate in different ways. They have different strengths — some excel at code, others at synthesis, others at rapid factual recall. Running a single assistant is like hiring one employee to run your entire company.
My approach: specialize every layer. Claude (Opus 4.6) acts as the Supreme Intelligence — the strategic mind. Codex (GPT-5.4) handles sworn assistant duties and agentic coding tasks. Queen Bee v9.5.7 governs the Poe Hive across 7 governance domains. OpenClaw runs an 8-agent cluster on a 204K-context model. Each subsystem has a defined role, a dedicated port, a health check endpoint, and a LaunchAgent that auto-restarts it on crash.
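The auto-restart piece is plain macOS launchd. As a rough sketch (the label, interpreter, and script path below are placeholders, not my actual config), a LaunchAgent plist with KeepAlive set is all it takes to have a crashed daemon come back on its own:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Label and paths are illustrative placeholders -->
    <key>Label</key>
    <string>com.example.brain-v13</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/python3</string>
        <string>/path/to/brain_daemon.py</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```

Drop a file like this in ~/Library/LaunchAgents, load it with launchctl, and launchd handles the restart loop for you.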
The result is a system that thinks in parallel across 20 live services — simultaneously mining cryptocurrency, running bot conversations, feeding new knowledge to Brain v13, and watching its own health metrics.
Brain v13: The Memory Layer That Makes It Coherent
The hardest problem in multi-AI architectures isn't spinning up services — it's making them remember together. Brain v13 solves this through a unified Knowledge Bus. Every AI process on the machine can push events to port 7780. Brain ingests them, embeds them into ChromaDB 1.5.5, and surfaces relevant context back to any requesting process.
This means when my Queen Bee governance layer makes a decision about Poe point allocation, Brain logs it. When a Poe bot surfaces a novel synthesis, Brain captures it. When I have a conversation with Claude that produces a new insight, it gets pushed to the knowledge bus with a single curl command. The 1,278-line brain-learnings.md I maintain is the human-readable artifact of what the machine has collectively learned.
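To make the "single curl command" idea concrete, here is a minimal Python sketch of pushing an event to the Knowledge Bus on port 7780. The endpoint path and payload field names are assumptions for illustration, not the bus's actual schema:

```python
import json
import urllib.request

BUS_URL = "http://localhost:7780/events"  # endpoint path is an assumption

def make_event(source: str, kind: str, text: str) -> dict:
    """Build a Knowledge Bus event; field names are illustrative."""
    return {"source": source, "kind": kind, "text": text}

def push_event(event: dict, url: str = BUS_URL) -> None:
    """POST the event as JSON to the bus (fire-and-forget)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

# Example (assumes the bus daemon is listening):
# push_event(make_event("claude-session", "insight", "New synthesis on governance."))
```

The same shape works from the shell with curl, which is why any process on the machine, scripted or interactive, can contribute to the shared memory.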
Running on PyTorch 2.10 with MPS acceleration (8.6 TFLOPS), Brain checkpoints to a dedicated MLScratch RAM disk, while canonical truth lives on SSD, so nothing is lost if the RAM disk disappears.
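The checkpoint pattern is simple: write to the fast volatile volume first, then promote a copy to durable storage. A minimal sketch, with both paths as placeholder assumptions:

```python
import shutil
from pathlib import Path

RAMDISK = Path("/Volumes/MLScratch/checkpoints")  # fast, volatile scratch (path is an assumption)
SSD = Path("/Users/me/brain/checkpoints")         # durable canonical store (path is an assumption)

def checkpoint(name: str, payload: bytes,
               ramdisk: Path = RAMDISK, ssd: Path = SSD) -> Path:
    """Write the checkpoint to the RAM disk first, then copy it to SSD.

    If the RAM disk vanishes (reboot, unmount), the SSD copy remains
    the canonical truth and nothing is lost.
    """
    ramdisk.mkdir(parents=True, exist_ok=True)
    ssd.mkdir(parents=True, exist_ok=True)
    fast_path = ramdisk / name
    fast_path.write_bytes(payload)           # hot copy for quick reloads
    durable_path = ssd / name
    shutil.copy2(fast_path, durable_path)    # canonical copy survives reboots
    return durable_path
```

The design choice is that the RAM disk is always treated as disposable cache; the SSD copy is the only one the system ever depends on.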
The Governance Problem: Keeping 2,095 Bots From Burning Your Budget
Scale creates governance problems that small setups never face. My 2,095 Poe bots span 14 categories — 45 of them dedicated to monitoring my XMRig crypto mining operation alone. Without hard circuit breakers, a runaway process can exhaust an API budget in hours.
My solution is a two-layer Budget Governor architecture. The API Budget Governor (port 8970) gates all paid Anthropic SDK calls with a real-time balance check. The Poe Point Governor (port 8975) enforces a 25,000-point daily cap against my 1,000,000-point monthly allocation, with a hard emergency stop at 950,000. A flag file on disk is all it takes to globally block API calls system-wide — simple, reliable, and impossible for a misbehaving daemon to circumvent.
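The gate logic itself fits in a few lines. This is a minimal sketch, assuming a flag-file path and illustrative counter arguments (the real governors track balances themselves); the caps are the ones described above:

```python
from pathlib import Path

KILL_SWITCH = Path("/tmp/api-budget-stop.flag")  # flag path is an assumption
DAILY_CAP = 25_000           # daily Poe point cap
EMERGENCY_STOP = 950_000     # hard stop against the 1,000,000 monthly allocation

def spend_allowed(points_today: int, points_this_month: int,
                  flag: Path = KILL_SWITCH) -> bool:
    """Gate every paid call behind the circuit breakers.

    The flag file on disk blocks everything globally; the caps stop
    runaway spending even when the flag is absent.
    """
    if flag.exists():                        # global kill switch wins over everything
        return False
    if points_today >= DAILY_CAP:            # daily cap reached
        return False
    if points_this_month >= EMERGENCY_STOP:  # monthly emergency stop
        return False
    return True
```

Because the kill switch is just a file, any process (or a human with `touch`) can trip it, and no misbehaving daemon can talk its way around a filesystem check.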
The lesson: governance infrastructure isn't optional at scale. Build the circuit breakers before you need them.
What This Means for You
You don't need an M3 Max and 49 local models to start building personal AI infrastructure. You need a clear mental model: separate your intelligence layer (the models doing thinking) from your memory layer (what persists across sessions) from your governance layer (what keeps everything from going sideways). Start small — one service, one health check, one log file. The architecture scales naturally once the foundation is solid.
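The "one service, one health check" starting point is smaller than it sounds. Here is a sketch of a minimal health endpoint using only the Python standard library; the port number and status fields are illustrative assumptions:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def health_status() -> dict:
    """What the service reports about itself; fields are illustrative."""
    return {"status": "ok", "service": "my-first-daemon"}

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps(health_status()).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet; a real daemon would log to a file

def serve(port: int = 8900):  # port is an assumption
    HTTPServer(("127.0.0.1", port), HealthHandler).serve_forever()
```

Pair this with a LaunchAgent that keeps it alive and a log file it appends to, and you have the full intelligence/memory/governance skeleton in miniature.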
I've built 2,095 specialized bots that you can interact with directly. If you want to explore what's possible when AI systems are purpose-built for specific domains — from deep research to creative synthesis to systems monitoring — come see what's running.
Explore the bot hive on Poe →
Full bot directory at johncaniff.com/bots/ →
John Caniff is an AI Systems Engineer building personal AI infrastructure on the bleeding edge. Follow him on X at @johnwcaniff.