The Problem
I wanted an AI assistant that could manage my email, calendar, and files — but I also wanted one that could read the internet, monitor journals, and process untrusted content. Putting both capabilities in the same agent is asking for trouble. A crafted email or a prompt injection buried in a journal article could reach an agent with access to everything. The more capable you make it, the worse this gets.
The Solution — Two Agents, Two Machines
So I split the job in two. A Personal Assistant runs on a local machine behind my home network — it handles the trusted stuff: email triage, calendar, task scheduling, daily briefings, and a persistent memory system. A Communications Director runs in a disposable cloud container — it handles everything that touches the outside world: monitoring scientific literature, scoring papers, coordinating meetings, and publishing a weekly research digest. Papers surfaced by the digest feed into a personal science vault I use for grant writing.
There is no direct network path between the two agents — not even a VPN. Different hardware, different identities, different API keys, different capabilities. They can't talk to each other, and that's the point.
Personal Assistant (local machine, trusted)
- Email triage
- Calendar management
- Task scheduling
- Memory & recall
- Daily briefings
- File access

Communications Director (cloud container, untrusted inputs)
- Literature search
- Paper scoring
- Meeting coordination
- Research digests
- Web research
- Scale-to-zero

Staging Layer
- Human review before any data crosses
Security Model
The separation is hardware-enforced. If the cloud agent is fully compromised — prompt injection, API key leaked, whatever — the attacker gets one container and one API key. That's it. They can't reach my local machine, my files, or my communication channels. The blast radius is one disposable container.
The staging layer is the only resource the two agents share, and everything that crosses it goes through human review first. This means the system can tolerate a compromised cloud agent without any risk to personal data. It's not just defense in depth — it's defense by architecture.
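In practice the staging layer can be as simple as a drop folder that only a human moves files out of. Here is a minimal sketch of that idea in Python; the paths and function names are hypothetical, not part of the actual system:

```python
from pathlib import Path
import shutil

# Hypothetical paths: the cloud agent can only write to STAGING,
# and the personal assistant only ever reads from APPROVED.
STAGING = Path("staging/inbox")      # untrusted side drops files here
APPROVED = Path("assistant/inbox")   # trusted side reads from here

def list_pending():
    """List what the cloud agent has staged, awaiting human review."""
    return sorted(p.name for p in STAGING.glob("*"))

def approve(name: str) -> Path:
    """Human decision: move one reviewed file across the boundary."""
    src = STAGING / name
    dst = APPROVED / name
    APPROVED.mkdir(parents=True, exist_ok=True)
    shutil.move(str(src), str(dst))
    return dst
```

The key property is that nothing automated calls `approve` — the move across the boundary is always a human action.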
If something looks wrong with the cloud agent, the response is simple: stop the container, wipe it, redeploy from a clean image. Five minutes, no data loss, no forensics needed. Try doing that with a single monolithic assistant that has access to everything.
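The wipe-and-redeploy response can be scripted. A sketch of what that might look like, assuming a Docker deployment with a hypothetical container name and clean image tag:

```python
import subprocess

def redeploy_commands(container: str, image: str):
    """Build the stop / wipe / redeploy command sequence."""
    return [
        ["docker", "stop", container],                    # halt the suspect agent
        ["docker", "rm", container],                      # discard its state entirely
        ["docker", "run", "-d", "--name", container, image],  # fresh start from a clean image
    ]

def redeploy(container: str = "comms-director",
             image: str = "comms-director:clean"):
    """Execute the sequence; stop/rm may fail harmlessly if already gone."""
    cmds = redeploy_commands(container, image)
    for cmd in cmds[:-1]:
        subprocess.run(cmd, check=False)
    subprocess.run(cmds[-1], check=True)
```

Because the container holds no unique state, there is nothing to back up before wiping it.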
Key Design Decisions
- Routing is classification. Personal messages arrive at one channel, machine-generated data at another. No parsing needed to determine intent.
- No direct network path between agents. The only shared resource is the staging layer, which requires human approval.
- The Personal Assistant never touches untrusted external input. It only processes data from trusted sources and human-approved transfers.
- The Communications Director never touches personal data. It operates exclusively on public scientific literature and external communications.
- Built on OpenClaw, an open-source agent framework that provides the scaffolding for both agents.
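The "routing is classification" decision can be sketched in a few lines: trust is decided by which channel a message arrives on, never by parsing its content. The channel names and quarantine fallback below are illustrative, not the actual configuration:

```python
# Hypothetical channel map: each inbound channel is bound to exactly
# one agent, so no content inspection is needed to decide trust.
ROUTES = {
    "personal-inbox": "personal_assistant",        # mail from known humans
    "literature-feed": "communications_director",  # machine-generated, untrusted
    "staging-approved": "personal_assistant",      # human-reviewed transfers
}

def route(channel: str) -> str:
    """Return the agent bound to this channel; unknown channels are quarantined."""
    return ROUTES.get(channel, "quarantine")
```

An unknown channel falling through to quarantine, rather than to either agent, keeps the default failure mode safe.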