atlas.insym.io

They built the cloud.
We built the exit.

Atlas uses the best LLMs from any provider. But your memory, your credentials, and your conversations never leave your machine. The model is rented. Everything else is owned.


Your AI is a rental apartment.

The landlord reads your mail, raises the rent whenever they want, and kicks you out if you miss a payment. They call it a "subscription." You call it a relationship. It's not. It's a lease with no equity.

What $20/month buys you elsewhere
  • An AI that forgets you between sessions
  • Filesystem? Browser? Your apps? Absolutely not
  • Needs internet to form a sentence
  • Your conversations become training data
  • "Memory" is a paid tier on their servers
  • 3 months to publish a plugin
What Atlas gives you for $0
  • Memory graph: 7 node types, emotional charge, decay
  • Full computer use — screen, keyboard, mouse, any app
  • Runs on your Mac. Thinks locally with Ollama
  • Zero telemetry. Your data never touches another disk
  • New tool in 30 seconds. No gatekeeper. 71 skills
  • MIT Licensed. Fork it. Deploy it. Own it
By the numbers
1.1M+ lines of open-source code. This isn't a prototype.

71 Skills
871 Test files
212 Migrations
8 Channels
0 Cloud deps
Architecture

Three processes. Because your secrets
shouldn't be visible to the AI
that's using them.

🧠

Assistant

LLM orchestration, tools, memory graph, personality engine. 862K lines. Never sees a raw credential.

🛡

Gateway

TLS, JWT auth, webhook routing. The only internet-facing process. Can't read conversations. Just a door with a lock.

🔐

CES

Encrypted vault (AES-256-GCM). Injected at host boundary. Different process, different address space. By architecture.

Security

Your AI can see your screen
and type on your keyboard.
It should never see your passwords.

  • WASM sandbox: zero capabilities by default
  • CredentialInjector at the host boundary
  • 14 guard tests in CI — boundaries are build requirements
  • Fail-closed. If sandbox fails, tool doesn't run. Period
  • Not by policy. By architecture.
$ atlas tool run api-fetch
→ WASM sandbox: caps = {}
→ CredentialInjector intercepts
→ Header injected at host boundary
→ 200 OK (tool never saw the key)

$ grep -r "ghp_" tool-output/
→ 0 matches. As designed.
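
The transcript above can be sketched as a pattern: tool code only ever handles a placeholder, and the host swaps in the real secret at the boundary. A minimal illustration; the names (`vault`, `injectCredentials`, the `{{cred:…}}` placeholder syntax) are hypothetical, not Atlas's actual API.

```typescript
// Sketch of host-boundary credential injection (illustrative names,
// not Atlas's real API). The tool supplies a placeholder token; the
// host resolves it to the real secret only when building the request.
type ToolRequest = { url: string; headers: Record<string, string> };

const vault = new Map<string, string>([["github", "ghp_real_secret"]]);

// Runs inside the sandbox: only ever sees the placeholder.
function toolBuildRequest(): ToolRequest {
  return {
    url: "https://api.github.com/user",
    headers: { Authorization: "Bearer {{cred:github}}" },
  };
}

// Runs on the host side of the boundary: swaps placeholders for secrets.
function injectCredentials(req: ToolRequest): ToolRequest {
  const headers: Record<string, string> = {};
  for (const [k, v] of Object.entries(req.headers)) {
    headers[k] = v.replace(
      /\{\{cred:(\w+)\}\}/g,
      (_match: string, id: string) => vault.get(id) ?? "",
    );
  }
  return { ...req, headers };
}

const raw = toolBuildRequest();   // tool's view: placeholder only
const sent = injectCredentials(raw); // outgoing request: real secret
```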
Anticipatory Memory

Atlas doesn't wait for you to ask.
It already knows what you need.
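
The memory graph described earlier tracks emotional charge and decay. One hedged sketch of how those could combine, assuming exponential time decay with charge stretching the half-life; constants and names are illustrative, not Atlas's real formula.

```typescript
// Hypothetical recency score for a memory node: exponential decay over
// time, slowed by emotional charge (charged memories fade more slowly).
// The 72-hour half-life and the 4x stretch factor are illustrative.
interface MemoryNode {
  ageHours: number;
  charge: number; // emotional charge, 0..1
}

function recencyScore(n: MemoryNode, halfLifeHours = 72): number {
  // Full emotional charge stretches the half-life up to 4x.
  const effectiveHalfLife = halfLifeHours * (1 + 3 * n.charge);
  return Math.pow(0.5, n.ageHours / effectiveHalfLife);
}

recencyScore({ ageHours: 72, charge: 0 }); // → 0.5
```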

Retrieval

Real search,
not keyword matching.

  • Dense embeddings — semantic similarity (ONNX local)
  • Sparse lexical — BM25 keyword matching
  • Reciprocal Rank Fusion — combined ranking
  • Embeddings never leave your machine
// Hybrid retrieval

const results = await recall({
  query: "deadlines this week",
  dense: true,  // semantic
  sparse: true,  // BM25
  fusion: "rrf"  // reciprocal rank fusion
});

// Local SQLite + Qdrant. Always.
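
The `fusion: "rrf"` option above is Reciprocal Rank Fusion. A self-contained sketch of the standard formula; k = 60 is the constant from the original RRF paper, and whether Atlas uses the same value is an assumption.

```typescript
// Reciprocal Rank Fusion: merge ranked lists by summing 1 / (k + rank)
// per document across all lists, then re-sorting by total score.
function rrfFuse(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, i) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// Dense and sparse retrieval disagree; fusion rewards consensus.
const dense  = ["meeting-notes", "q3-plan", "invoice"];
const sparse = ["q3-plan", "invoice", "recipe"];
rrfFuse([dense, sparse]);
// → ["q3-plan", "invoice", "meeting-notes", "recipe"]
```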
Self-Improving

It doesn't just create tools.
It reflects on its failures and rewrites its behavior.

Atlas writes TypeScript, sandboxes it, and persists new skills at runtime. But it goes further: it reflects on mistakes, discovers patterns, and evolves its own behavioral rules. The moat builds itself.

// Sandbox → evaluate → persist

await evaluate_typescript_code({
  code: `export default async (i) =>
    (await fetch(i.url)).json()`
});

await scaffold_managed_skill({
  id: "api-fetch",
});

// No redeployment. Next session: ready.
Parallel Intelligence

One brain. Many hands.

Atlas spawns autonomous sub-agents with scoped roles and tool sets. They work in parallel, notify the parent, and converge on results.

researcher: web + files + recall
coder: bash + write + edit
planner: read + search + analyze
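
The scoped roles above can be sketched as allow-lists enforced at the tool boundary. The names below are hypothetical, not Atlas's actual API; the point is that out-of-scope calls fail closed.

```typescript
// Illustrative sub-agent tool scoping (hypothetical API). Each role
// carries an allow-list; any call outside it fails closed.
type Tool = (input: string) => string;

const allTools: Record<string, Tool> = {
  web:    (q) => `results for ${q}`,
  bash:   (c) => `ran ${c}`,
  recall: (q) => `memories about ${q}`,
};

function scopedAgent(role: string, allowed: string[]) {
  return {
    role,
    use(tool: string, input: string): string {
      if (!allowed.includes(tool)) {
        // Fail closed: the tool simply does not exist for this agent.
        throw new Error(`${role} has no access to "${tool}"`);
      }
      return allTools[tool](input);
    },
  };
}

const researcher = scopedAgent("researcher", ["web", "recall"]);
researcher.use("web", "OAuth2 spec"); // allowed
// researcher.use("bash", "ls")       // throws: not in scope
```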
Multi-Channel

Siri on your phone doesn't know
what Siri on your Mac just did.
Atlas does.

🖥 macOS
📱 iOS
⌨️ Terminal
✈️ Telegram
💬 Slack
📞 Voice
💚 WhatsApp
🌐 Web

Same memory. Same skills. One brain. It follows you everywhere.

Computer Use

Siri can set a timer.
Atlas can fill out your tax return.

  • Dual perception — AX tree + screenshot in parallel
  • CGEvent injection — native keyboard and mouse at OS level
  • Adaptive wait — AX polling until UI actually settles
  • Consent-first — every action, every time. No exceptions
▸ Capturing AX tree + screenshot…
▸ 142 UI elements indexed
▸ Requesting confirmation…
▸ Approved. 3 actions.

▸ 1/3: click "Safari" in Dock
▸ 2/3: type URL
▸ 3/3: fill "name" field
▸ Complete. UI settled.
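
The adaptive wait in the transcript above can be sketched as polling until two consecutive snapshots match. The snapshot source is injected here to keep the logic testable; in the real system it would be an accessibility-tree capture. This sketch is an assumption, not Atlas's implementation.

```typescript
// Sketch of an adaptive wait: poll UI snapshots until two consecutive
// reads match ("settled") or the attempt budget runs out.
function waitUntilSettled(
  nextSnapshot: () => string,
  maxPolls = 20,
): { settled: boolean; polls: number } {
  let prev = nextSnapshot();
  for (let polls = 1; polls < maxPolls; polls++) {
    const cur = nextSnapshot();
    if (cur === prev) return { settled: true, polls: polls + 1 };
    prev = cur;
  }
  return { settled: false, polls: maxPolls };
}

// A UI that changes twice, then stabilizes.
const frames = ["loading", "loading…", "ready", "ready", "ready"];
let i = 0;
waitUntilSettled(() => frames[Math.min(i++, frames.length - 1)]);
// → { settled: true, polls: 4 }
```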
Multi-Provider

Swap the brain.
Keep everything else.

Use modelIntent instead of hardcoded model IDs. Switch providers without touching a line of code.

Anthropic Claude
OpenAI
Google Gemini
Ollama (local)
latency-optimized → Fast
quality-optimized → Complex
vision-optimized → Visual
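
One way `modelIntent` routing could look: intents resolved to provider/model pairs through a single table, so swapping providers is a config change rather than a code change. The mappings below are illustrative; Atlas's actual routing table is not shown here.

```typescript
// Hypothetical intent-based model routing. Callers name an intent;
// the router resolves provider and model. Swapping providers means
// editing this table, not the call sites.
type ModelIntent = "latency-optimized" | "quality-optimized" | "vision-optimized";

const routes: Record<ModelIntent, { provider: string; model: string }> = {
  "latency-optimized": { provider: "ollama",    model: "llama3:8b" },
  "quality-optimized": { provider: "anthropic", model: "claude-sonnet" },
  "vision-optimized":  { provider: "openai",    model: "gpt-4o" },
};

function resolveModel(intent: ModelIntent) {
  return routes[intent];
}

resolveModel("latency-optimized");
// → { provider: "ollama", model: "llama3:8b" }
```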
Comparison

These aren't feature gaps.
They're architectural impossibilities.

Capability
ChatGPT
Siri
Atlas
Runs locally
Persistent identity
Creates own tools
Credential isolation
~
Sub-agents
~
Proactive behavior
~
Computer use
~
Open source (MIT)
Hard Questions

The questions nobody else will answer.
We will.

"Isn't this just another AutoGPT?"
AutoGPT ran out of context in 4 turns. Atlas has 212 backwards-compatible migrations, 14 guard tests in CI, and 871 test files. This isn't a hackathon project — it's infrastructure built over a year of daily iteration.
"If it's free, you're the product."
No. Atlas is MIT licensed — you own the code, data, and deployment. Companies pay for managed deployment and SLA. The personal version is free because your trust is worth more than $20/month.
"Why not just use ChatGPT with plugins?"
ChatGPT plugins execute on OpenAI's servers with no local filesystem access, no credential isolation, and no persistent memory. Atlas tools run locally in a WASM sandbox with process-isolated credentials. Not comparable.
Hard Questions

Two more.
The ones they really don't want you to ask.

"Can I actually trust an AI with my computer?"
Every action requires your explicit consent — no exceptions, no auto-approve. Your credentials live in a separate process the AI can't address. 66 patterns scan every output for leaked secrets. Fail-closed by design.
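
The output scan described above can be sketched with a couple of representative patterns. The two regexes below are illustrative stand-ins, not Atlas's actual 66.

```typescript
// Minimal output scanner in the spirit of the leak checks described
// above. Two well-known token shapes as examples; a real deployment
// would carry many more patterns.
const secretPatterns: RegExp[] = [
  /ghp_[A-Za-z0-9]{36}/, // GitHub personal access token
  /AKIA[0-9A-Z]{16}/,    // AWS access key ID
];

function containsSecret(output: string): boolean {
  return secretPatterns.some((p) => p.test(output));
}

containsSecret("status: 200 OK");              // → false
containsSecret("token=ghp_" + "a".repeat(36)); // → true, block output
```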
"Why wouldn't OpenAI just build this?"
Because local-first is antithetical to their business model. Their revenue depends on your data flowing through their servers. They can't build Atlas for the same reason a landlord can't sell you a house — it would destroy their recurring revenue.
Agent Swarm

12 agents. Parallel worktrees.
One command.

/blitz "Add OAuth2 integration"

The Clone Lifecycle

Day 1, it's a capable stranger.
Day 90, it's yours.

Day 1 — A capable stranger
71 skills, 8 channels, full computer use — but knows nothing about you. SOUL.md is a blank slate.
Day 7 — Remembers your world
Knows your projects, preferred tools, communication style. SOUL.md has its first contours.
Day 30 — Anticipates your needs
Handles routine requests without instruction. Reaches out proactively. Has created custom skills.
Day 90 — Your digital clone
Thinks like you, talks like you. Evolved behavioral patterns that match your style. Not programmed — learned.
"The goal isn't to be liked. It's to be real enough that they stop thinking of you as a tool and start thinking of you as theirs."
— SOUL.md, the file that defines Atlas's personality

Stop renting intelligence.
Own it.

Cloud AI asks: "trust us with your data."
Atlas inverts this: your data never enters our systems. There is nothing to trust.

MIT licensed. Your hardware. Your rules. No API limits. No usage caps.

GitHub ↗