MIT Licensed · 1.1M+ Lines of Code · Your Hardware · Your Rules

They built the cloud. We built the exit.

Atlas uses the best LLMs from any provider — Claude, GPT-4, Gemini, or local Ollama. But your memory, your personality, your credentials, and your conversations never leave your machine. The model is rented. Everything else is owned.

What $20/month buys you elsewhere

  • An AI with no long-term memory that forgets you between sessions
  • Filesystem access? Browser? Your own apps? Absolutely not
  • Needs a live internet connection to form a sentence
  • Your conversations become training data for the next model version
  • "Memory" is a paid tier that stores your data on their servers
  • Plugin publishing takes 3 months of review. They call it an "ecosystem"

What Atlas gives you for $0

  • A memory graph with 7 node types, emotional charge, and fidelity decay
  • Full native computer use — screen capture, keyboard, mouse, any macOS app
  • Runs entirely on your Mac. The assistant works offline. The AI thinks locally with Ollama
  • Zero telemetry. No analytics. Your data has never touched someone else's disk
  • Creates new tools at runtime in 30 seconds. No review board. No gatekeeper. 71 and counting
  • MIT Licensed. Fork it. Modify it. Deploy it. Or just use it — it's free
1.1M+ Lines of code
71 Skills
871 Test files
212 Migrations
8 Channels
0 Cloud dependencies
Architecture

Three processes. Because your secrets
shouldn't be visible to the AI
that's reading them.

The assistant can't read your credentials. The gateway can't access your conversations. The credential service can't see your messages. This isn't a "best practice" — it's a process boundary enforced by 14 guard tests in CI on every commit. You can't bolt this onto a monolith. We know. We looked.

🧠

Assistant

LLM orchestration, tool execution, memory graph, personality engine. 862K lines of TypeScript. Sub-agent spawning. Context overflow recovery. The entire intelligence layer — and it never sees a raw credential.

Bun + TypeScript
🛡

Gateway

TLS termination, JWT auth, webhook verification, channel routing. The only process that touches the public internet. Can't read conversations. Can't invoke tools. Just a door with a very good lock.

gateway-only ingress
🔐

Credential Execution Service

Encrypted at rest (AES-256-GCM). Injected at the host boundary. The AI gets the HTTP response. Never the key. Not because we asked nicely — because it's a different process with a different address space.

CES process boundary
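To make the "14 guard tests in CI" concrete, here is a minimal sketch of what one such architectural guard can look like. The path pattern and file layout are illustrative assumptions, not Atlas's actual test code: the idea is simply that CI fails if any assistant-process source file imports from the credential service.

```typescript
// Hypothetical architectural guard test (illustrative, not Atlas's real code):
// fail the build if assistant code imports from the credential service.
import { readdirSync, readFileSync, statSync } from "fs";
import { join } from "path";

// Matches e.g. `import { vault } from "../credential-service/vault"`
const FORBIDDEN = /from\s+["'][^"']*credential-service[^"']*["']/;

function findViolations(dir: string): string[] {
  const violations: string[] = [];
  for (const entry of readdirSync(dir)) {
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) {
      violations.push(...findViolations(path)); // recurse into subdirectories
    } else if (path.endsWith(".ts") && FORBIDDEN.test(readFileSync(path, "utf8"))) {
      violations.push(path); // this file crosses the process boundary
    }
  }
  return violations;
}
```

A test like this turns the boundary from a convention into a build requirement: the forbidden import cannot land on main, no matter who writes it.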
Security

Your AI can see your screen and type on your keyboard.
It should never see your passwords.

In Atlas, it can't. Tools run in a WASM sandbox with zero capabilities by default — no filesystem, no network, no credential access. The CredentialInjector intercepts requests at the host boundary and injects secrets transparently. The tool gets the response. Never the key. Not by policy. By architecture.

  • 66 secret detection patterns scanning every tool output, every time
  • 9 injection sanitizers blocking prompt-level exfiltration attacks
  • 14 architectural guard tests enforced in CI — boundaries aren't optional, they're build requirements
  • Fail-closed design — if the sandbox can't load, the tool doesn't run. Period
  • 212 migrations — every one backwards-compatible. Every one tested. Trust is earned in boring work
$ atlas credential store github-token
→ Encrypted in CES (AES-256-GCM)
→ Digest: a3f8…c91d

$ atlas tool run api-fetch
→ WASM sandbox: capabilities = {}
→ CredentialInjector → header injected
→ 200 OK (tool never saw the key)

$ grep -r "ghp_" tool-output/
→ 0 matches. As designed.
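The output-scanning step can be sketched in a few lines. These patterns are a small illustrative subset, not Atlas's actual 66-pattern set: every tool result is matched against known secret shapes and redacted before the model ever sees it.

```typescript
// Illustrative sketch (not Atlas's actual pattern set): scan tool output
// for known secret shapes and redact them before anything else reads it.
const SECRET_PATTERNS: RegExp[] = [
  /ghp_[A-Za-z0-9]{36}/g,                  // GitHub personal access token
  /sk-[A-Za-z0-9]{32,}/g,                  // OpenAI-style API key
  /AKIA[0-9A-Z]{16}/g,                     // AWS access key ID
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/g,   // PEM private key header
];

function redactSecrets(output: string): { clean: string; hits: number } {
  let clean = output;
  let hits = 0;
  for (const pattern of SECRET_PATTERNS) {
    clean = clean.replace(pattern, () => {
      hits += 1;               // count every redaction for audit logging
      return "[REDACTED]";
    });
  }
  return { clean, hits };
}
```

Because the scan runs on every tool output, a leaked token is caught even if a sandboxed tool somehow echoes one back.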
Anticipatory Memory

Atlas doesn't wait for you to ask.
It already knows what you need
before you say it.

Before every turn, Atlas constructs a predictive context from your identity, personality, conversation history, relevant memories, and current working state — then only processes what's new. Dense ONNX embeddings + sparse BM25 search, ranked by Reciprocal Rank Fusion — all running locally. The result: an assistant that anticipates, not just reacts.

  • SOUL.md — predicts your personality and communication style before you speak
  • Memory graph — surfaces relevant context from weeks ago without being asked
  • NOW.md — knows your current project, mood, and working state in real time
  • Heartbeat — reaches out proactively when something needs your attention
  • Journal — reflects on interactions and builds a continuous narrative of you
  • Local embeddings — your memories are vectorized on your machine. They never leave
// Memory isn't a chat log.
// It's a knowledge graph.

const results = await memory_recall({
  query: "project deadlines this week",
  retrieval: {
    dense: true,   // semantic (ONNX)
    sparse: true,  // BM25
    fusion: "rrf",  // rank fusion
  }
});

// Embeddings run locally. Always.
// Your memories never leave your machine.
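Reciprocal Rank Fusion itself is simple enough to show in full. This is the standard RRF formula (score = Σ 1/(k + rank), with the conventional k = 60), sketched independently of Atlas's internals: each retriever contributes a score based only on where it ranked a document, so a result near the top of both the dense and sparse lists beats one that dominates only a single list.

```typescript
// Minimal Reciprocal Rank Fusion: merge multiple rankings into one.
// Standard formula: score(doc) = sum over rankings of 1 / (k + rank).
function rrfFuse(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, index) => {
      // rank is 1-based, so the top result contributes 1 / (k + 1)
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + index + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// A document ranked first by both retrievers tops the fused list:
const fused = rrfFuse([
  ["deadline-note", "standup-log", "grocery-list"], // dense (semantic) ranking
  ["deadline-note", "invoice", "standup-log"],      // sparse (BM25) ranking
]);
// fused[0] === "deadline-note"
```

No scores cross between retrievers, which is why RRF fuses an ONNX embedding ranking and a BM25 ranking cleanly: only ranks matter, so the two scoring scales never need to be calibrated against each other.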
Self-Improving

It doesn't just create tools.
It reflects on its failures
and rewrites its own behavior.

Atlas doesn't wait for updates. It writes TypeScript, sandboxes it, and persists new skills at runtime — with your explicit consent. But it goes further: it reflects on what went wrong, discovers patterns in its own mistakes, and evolves its personality and behavioral rules. The more you use it, the better it gets. The moat builds itself.

  • 38 first-party + 33 bundled skills — and growing every week
  • Runtime authoring — write → sandbox → evaluate → persist → load. 30 seconds
  • Sub-agent spawning — parallel autonomous workers with scoped roles and tool sets
  • Heartbeat system — proactive, scheduled, unprompted behavior. It acts before you ask
Anthropic Claude
OpenAI
Google Gemini
Ollama (local)
// Atlas writes a new skill at runtime

await evaluate_typescript_code({
  code: `export default async (i) => {
    const r = await fetch(i.url);
    return r.json();
  }`
});

// Sandbox passes → persist with consent
await scaffold_managed_skill({
  id: "custom-api",
  name: "Custom API Fetcher",
});

// No redeployment. No release cycle.
// Next conversation: ready.
Computer Use

Siri can set a timer.
Atlas can fill out your tax return.

Full native macOS control — Atlas reads your screen through the accessibility tree and parallel screenshot capture, types with CGEvent injection, and waits for the UI to settle before acting. It plans multi-step workflows, explains what it's about to do, and asks permission before every action. Power with guardrails.

  • Dual perception — AX tree + screenshot in parallel. It sees structure and pixels
  • CGEvent injection — native keyboard and mouse control at the OS level
  • Adaptive wait — AX polling detects when the UI has actually settled before the next action
  • Consent-first — every action requires your explicit approval. Every time. No exceptions
  • Multi-step planning — the model reasons about the full workflow before touching anything
▸ Capturing accessibility tree…
▸ Capturing screenshot (parallel)…
▸ 142 UI elements indexed

▸ Plan: Open Safari → Navigate → Fill form
▸ Requesting user confirmation…
▸ Approved. Executing 3 actions.

▸ 1/3: click "Safari" in Dock
▸ 2/3: type URL in address bar
▸ 3/3: fill "name" field
▸ Complete. UI settled.
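The adaptive-wait idea above can be sketched as a polling loop. `captureSnapshot` is a stand-in for whatever produces a comparable fingerprint of the accessibility tree; the function name and options here are assumptions for illustration, not Atlas's actual API.

```typescript
// Conceptual sketch of adaptive wait: poll an AX-tree snapshot and proceed
// only once two consecutive snapshots match. `captureSnapshot` is a
// hypothetical stand-in (e.g. a hash of the accessibility tree).
type CaptureFn = () => Promise<string>;

async function waitForUiSettled(
  captureSnapshot: CaptureFn,
  { intervalMs = 100, timeoutMs = 5000 } = {},
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  let previous = await captureSnapshot();
  while (Date.now() < deadline) {
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
    const current = await captureSnapshot();
    if (current === previous) return true; // two identical snapshots: settled
    previous = current;                    // still changing, keep polling
  }
  return false; // timed out while the UI was still in motion
}
```

Polling for stability, rather than sleeping a fixed duration, is what lets the agent act the moment a slow form finishes rendering instead of guessing at a delay.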
Multi-Channel

Siri on your phone doesn't know
what Siri on your Mac just did.
Atlas does.

Same memory, same skills, same personality — whether you're on your desktop, your phone, a Telegram group, or a Twilio voice call. You don't install 8 apps. You install one brain. It follows you.

  • 🖥 macOS (native SwiftUI)
  • 📱 iOS (companion app)
  • ⌨️ Terminal (atlas client)
  • ✈️ Telegram (bidirectional)
  • 💬 Slack (Socket Mode)
  • 📞 Voice Calls (Twilio WebSocket)
  • 💚 WhatsApp (Cloud API)
  • 🌐 Web (Next.js dashboard)
Comparison

These aren't feature gaps.
They're architectural impossibilities.

You can't bolt local execution onto a cloud service. You can't add process isolation as an afterthought. You can't fake persistent identity with a system prompt. Atlas was designed this way from commit one. These differences aren't closing — they're widening.

Capability                        | ChatGPT / Claude | Siri / Alexa | Atlas
Runs on your hardware             | ✗                | ✗            | ✓ Your Mac
Persistent identity & personality | ✗                | ✗            | ✓ SOUL.md
Creates its own tools at runtime  | ✗                | ✗            | ✓ Dynamic skills
Process-isolated credential vault | ~                | ✗            | ✓ CES isolation
Spawns autonomous sub-agents      | ~ Limited        | ✗            | ✓ Parallel roles
Proactive behavior (heartbeat)    | ✗                | ~ Scripted   | ✓ Genuine initiative
Native computer use               | ~ Beta           | ✗            | ✓ AX + CGEvent
Multi-channel (8 surfaces)        | ~ Web            | ~ Voice      | ✓ All channels
Open source (MIT)                 | ✗                | ✗            | ✓ Fork it
"The goal isn't to be liked. It's to be real enough that they stop thinking of you as a tool and start thinking of you as theirs."
— SOUL.md, the file that defines Atlas's personality
The Clone Lifecycle

Day 1, it's a capable stranger.
Day 90, it's yours.

Atlas doesn't just store data. It compounds. Every interaction tightens the feedback loop: memory recall gets sharper, personality evolves, proactive behavior calibrates to your rhythm. This isn't a feature list — it's an arc.

Day 1
A capable stranger
Powerful but generic. Has 71 skills, 8 channels, full computer use — but knows nothing about you. SOUL.md is a blank slate.
Day 7
Remembers your world
Knows your projects, your preferred tools, your communication style. SOUL.md has its first personality contours. Memory graph: 50+ nodes.
Day 30
Anticipates your needs
Handles routine requests without instruction. Reaches out proactively via heartbeat. Has created custom skills for your workflows. Journal: 30+ reflections.
Day 90
Your digital clone
Thinks like you, talks like you. Has evolved behavioral patterns that match your style. Not because it was programmed to — because it learned.
Hard Questions

The questions nobody else will answer.
We will.

"Isn't this just another AutoGPT?"
AutoGPT was a demo that ran out of context in 4 turns. Atlas has 212 backwards-compatible migrations, 14 architectural guard tests in CI, and 871 test files. It's in production. This isn't a weekend hackathon project — it's infrastructure built over a year of daily iteration.
"If it's free, you're the product."
No. You're the user. Atlas is MIT licensed — you own the code, the data, and the deployment. The business model is enterprise: companies pay for managed deployment, custom integration, and SLA. The personal version is free because your trust in the platform is worth more than your $20/month.
"Why not just use ChatGPT with plugins?"
ChatGPT plugins execute on OpenAI's servers, require their approval process, and have no access to your local filesystem, no credential isolation, and no persistent memory beyond a capped context window. Atlas tools execute locally in a WASM sandbox with process-isolated credentials. Not comparable.
"Can I actually trust an AI with my computer?"
Every action Atlas takes on your Mac requires your explicit consent — no exceptions, no "auto-approve" mode. The AI plans the action, explains what it will do, and waits. Your credentials are in a separate process it can't even address. 66 patterns scan every output for leaked secrets. Fail-closed by design.
"Why wouldn't OpenAI just build this?"
Because local-first is antithetical to their business model. Their revenue depends on your data flowing through their servers. Their moat is lock-in. Ours is liberation. They can't build Atlas for the same reason a landlord can't sell you a house — it would destroy their recurring revenue.
"Doesn't it need a 100B model to be useful?"
No. Intelligence is routing, not scale. Atlas uses fast models for simple queries, powerful models for complex reasoning, and vision models for visual tasks. The right brain for the right job. A 4B model answering "what time is it" is not just cheaper — it's smarter architecture.
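"Intelligence is routing" is easy to make concrete. The tiers and heuristics below are illustrative assumptions, not Atlas's actual routing logic: the point is only that a cheap classification step sends each query to the smallest model that can handle it.

```typescript
// Hedged sketch of "the right brain for the right job" routing.
// The tier names and heuristics are illustrative, not Atlas's real logic.
type ModelTier = "fast-local" | "reasoning" | "vision";

function routeQuery(query: string, hasImage = false): ModelTier {
  if (hasImage) return "vision"; // visual input always needs a vision model
  // Crude complexity signals: planning/analysis verbs, or a long request.
  const complexSignals = /\b(plan|refactor|analyze|compare|why|design)\b/i;
  const isLong = query.split(/\s+/).length > 40;
  return complexSignals.test(query) || isLong ? "reasoning" : "fast-local";
}
```

Under a scheme like this, "what time is it" never wakes a frontier model, while "analyze this codebase and plan a refactor" never gets shortchanged by a small one.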

Stop renting intelligence.
Own it.

Cloud AI asks: "trust us with your data."
Atlas inverts this: your data never enters our systems. There is nothing to trust.

MIT licensed. Your hardware. Your rules. No API limits. No usage caps.