Atlas uses the best LLMs from any provider — Claude, GPT-4, Gemini, or local Ollama. But your memory, your personality, your credentials, and your conversations never leave your machine. The model is rented. Everything else is owned.
The assistant can't read your credentials. The gateway can't access your conversations. The credential service can't see your messages. This isn't a "best practice" — it's a process boundary enforced by 14 guard tests in CI on every commit. You can't bolt this onto a monolith. We know. We looked.
LLM orchestration, tool execution, memory graph, personality engine. 862K lines of TypeScript. Sub-agent spawning. Context overflow recovery. The entire intelligence layer — and it never sees a raw credential.
Bun + TypeScript. TLS termination, JWT auth, webhook verification, channel routing. The only process that touches the public internet. Can't read conversations. Can't invoke tools. Just a door with a very good lock.
Gateway-only ingress. Encrypted at rest (AES-256-GCM). Injected at the host boundary. The AI gets the HTTP response. Never the key. Not because we asked nicely — because it's a different process with a different address space.
CES process boundary. In Atlas, a compromised tool can't steal your secrets. Tools run in a WASM sandbox with zero capabilities by default — no filesystem, no network, no credential access. The CredentialInjector intercepts requests at the host boundary and injects secrets transparently. The tool gets the response. Never the key. Not by policy. By architecture.
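The shape of that boundary can be sketched in a few lines. CredentialInjector is named in the text above; everything else here — the type names, the `Authorization` header, the vault being a plain `Map` — is an illustrative assumption, not Atlas's actual implementation:

```typescript
// Hypothetical sketch: the sandboxed tool builds a request with no secret;
// the host side resolves the credential and attaches it just before the
// request leaves the machine. The tool-side object is never mutated.
type ToolRequest = { url: string; headers: Record<string, string> };

class CredentialInjector {
  // The vault lives only in the host process; sandboxed code never holds it.
  constructor(private vault: Map<string, string>) {}

  inject(req: ToolRequest, service: string): ToolRequest {
    const secret = this.vault.get(service);
    if (!secret) throw new Error(`no credential for ${service}`);
    // Return a copy with the secret attached — the original stays clean.
    return {
      ...req,
      headers: { ...req.headers, Authorization: `Bearer ${secret}` },
    };
  }
}

const injector = new CredentialInjector(new Map([["github", "tok123"]]));
const toolSide: ToolRequest = { url: "https://api.github.com/user", headers: {} };
const hostSide = injector.inject(toolSide, "github");
```

The point of the copy-on-inject pattern is that even a hostile tool inspecting its own request object after the call finds nothing: the secret only ever exists in the host's copy.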
Before every turn, Atlas constructs a predictive context from your identity, personality, conversation history, relevant memories, and current working state — then only processes what's new. Dense ONNX embeddings + sparse BM25 search, ranked by Reciprocal Rank Fusion — all running locally. The result: an assistant that anticipates, not just reacts.
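Reciprocal Rank Fusion, named above, is a standard way to merge a dense (embedding) ranking with a sparse (BM25) ranking without tuning score scales. A minimal sketch — the function name and `k = 60` default are conventions, not Atlas's API:

```typescript
// Reciprocal Rank Fusion: each list contributes 1 / (k + rank) per document,
// so items ranked highly in *both* lists float to the top. k dampens the
// dominance of the very first positions; 60 is the commonly cited default.
function reciprocalRankFusion(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((doc, rank) => {
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc);
}

// "m2" is ranked 2nd by the dense list and 1st by the sparse list,
// so it wins over "m1", which one list loves and the other ranks last.
const fused = reciprocalRankFusion([
  ["m1", "m2", "m3"], // dense embedding ranking
  ["m2", "m3", "m1"], // sparse BM25 ranking
]);
// → ["m2", "m1", "m3"]
```

Because RRF only uses rank positions, the incomparable raw scores of ONNX cosine similarities and BM25 term weights never need to be normalized against each other.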
Atlas doesn't wait for updates. It writes TypeScript, sandboxes it, and persists new skills at runtime — with your explicit consent. But it goes further: it reflects on what went wrong, discovers patterns in its own mistakes, and evolves its personality and behavioral rules. The more you use it, the better it gets. The moat builds itself.
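The consent gate described above — a new skill exists but does nothing until you approve it — can be sketched as a two-stage registry. The class and method names here are illustrative assumptions, not Atlas's actual interfaces:

```typescript
// Hypothetical sketch: skills the assistant writes for itself are staged,
// and only promoted to the persistent registry on explicit user approval.
type Skill = { name: string; source: string };

class SkillRegistry {
  private skills = new Map<string, Skill>(); // approved, persisted
  private pending: Skill[] = [];             // written, awaiting consent

  // The assistant stages a skill it just generated; nothing runs yet.
  stage(skill: Skill): void {
    this.pending.push(skill);
  }

  // Explicit user consent moves a staged skill into the active registry.
  approve(name: string): boolean {
    const i = this.pending.findIndex((s) => s.name === name);
    if (i < 0) return false;
    const [skill] = this.pending.splice(i, 1);
    this.skills.set(skill.name, skill);
    return true;
  }

  has(name: string): boolean {
    return this.skills.has(name);
  }
}

const registry = new SkillRegistry();
registry.stage({ name: "summarize-inbox", source: "/* generated TS */" });
const activeBeforeConsent = registry.has("summarize-inbox"); // false
registry.approve("summarize-inbox");
const activeAfterConsent = registry.has("summarize-inbox");  // true
```

Keeping staged and approved skills in separate collections means the "can it run?" check is structural, not a flag the generated code could flip on itself.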
Full native macOS control — Atlas reads your screen through the accessibility tree and parallel screenshot capture, types with CGEvent injection, and waits for the UI to settle before acting. It plans multi-step workflows, explains what it's about to do, and asks permission before every action. Power with guardrails.
Same memory, same skills, same personality — whether you're on your desktop, your phone, a Telegram group, or a Twilio voice call. You don't install 8 apps. You install one brain. It follows you.
You can't bolt local execution onto a cloud service. You can't add process isolation as an afterthought. You can't fake persistent identity with a system prompt. Atlas was designed this way from commit one. These differences aren't closing — they're widening.
| Capability | ChatGPT / Claude | Siri / Alexa | Atlas |
|---|---|---|---|
| Runs on your hardware | ✗ | ✗ | ✓ Your Mac |
| Persistent identity & personality | ✗ | ✗ | ✓ SOUL.md |
| Creates its own tools at runtime | ✗ | ✗ | ✓ Dynamic skills |
| Process-isolated credential vault | ✗ | ~ | ✓ CES isolation |
| Spawns autonomous sub-agents | ~ Limited | ✗ | ✓ Parallel roles |
| Proactive behavior (heartbeat) | ✗ | ~ Scripted | ✓ Genuine initiative |
| Native computer use | ~ Beta | ✗ | ✓ AX + CGEvent |
| Multi-channel (8 surfaces) | ~ Web | ~ Voice | ✓ All channels |
| Open source (MIT) | ✗ | ✗ | ✓ Fork it |
Atlas doesn't just store data. It compounds. Every interaction tightens the feedback loop: memory recall gets sharper, personality evolves, proactive behavior calibrates to your rhythm. This isn't a feature list — it's an arc.
Cloud AI asks: "trust us with your data."
Atlas inverts this: your data never enters our systems. There is nothing to trust.
MIT licensed. Your hardware. Your rules. No API limits. No usage caps.