Multi-agent Claude. One Windows desktop.
Cortex orchestrates multiple Claude agents on a shared workspace. They claim files, share state, and ship work — coordinated by you, billed to your Anthropic plan, never to us. One Windows installer. No Node, no Docker, no CLI setup.
agent-01
14:02:11 claim workspace.lock → held
14:02:13 edit src/ipc/spawn.ts +47 −12
14:02:18 staged · awaiting review

agent-02
14:02:19 read diff from agent-01
14:02:24 flag race on claim/release L114
14:02:25 posted comment

agent-03
14:02:09 lock held by agent-01
14:02:09 queued · ready
Three Claude agents, three roles, one workspace. Coordination by file-system lock and shared state.
The shell around your Claude agents.
Cortex isn't another model. It's the workspace where multiple Claude agents run side by side, coordinate through shared state, and stay out of each other's way.
Bring your own plan.
Sign in to Claude with your Anthropic account on first launch. Cortex never proxies through our servers, never charges per call, never sees your tokens. You pay Anthropic directly — we charge nothing for the runtime.
Local-first.
The app runs on your machine. Your code, prompts, conversation history, and session state all live in a folder you control. Zero telemetry by default. No login required to use Cortex itself. Pull the plug and Cortex still works tomorrow.
Multi-agent, coordinated.
Agents share one workspace and coordinate through .cortex/state.json — claim a file, edit it, release the lock. The orchestrator picks the right model for each task: Haiku for cheap review and research, Sonnet for orchestration, Opus for the hard work.
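The claim/edit/release cycle is simple enough to sketch. This is an illustrative model of the idea only — the lock filename, the JSON shape, and the function names below are assumptions for the example, not Cortex's actual schema:

```typescript
import { mkdirSync, writeFileSync, rmSync, readFileSync } from "node:fs";
import { join } from "node:path";

// Hypothetical shape of a claim record — not Cortex's real format.
interface Claim {
  agent: string;  // which agent holds the lock
  file: string;   // workspace-relative path being edited
  since: string;  // ISO timestamp of the claim
}

const lockPath = (workspace: string) =>
  join(workspace, ".cortex", "workspace.lock");

// Claim: exclusive create ("wx") fails if the lock file already exists,
// so when two agents race, exactly one wins.
function claim(workspace: string, agent: string, file: string): boolean {
  mkdirSync(join(workspace, ".cortex"), { recursive: true });
  const entry: Claim = { agent, file, since: new Date().toISOString() };
  try {
    writeFileSync(lockPath(workspace), JSON.stringify(entry), { flag: "wx" });
    return true;
  } catch {
    return false; // another agent holds the lock
  }
}

// Release: only the holder may delete the lock file.
function release(workspace: string, agent: string): void {
  const held: Claim = JSON.parse(readFileSync(lockPath(workspace), "utf8"));
  if (held.agent === agent) rmSync(lockPath(workspace));
}
```

The exclusive-create flag is what makes this safe: the file system, not the agents, decides who got there first.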
I'm 17. I've been building with AI agents for about a year. The first time I tried running three agents on the same project, they overwrote each other's work in under five minutes.
The problem wasn't the model. The problem was the shell. Every agent was running blind, in its own terminal, with no idea another agent had touched the same file. So I built one.
Cortex is a workspace where Claude agents can see each other, claim files, and coordinate without me babysitting. It uses your Anthropic plan, your machine, your data. The point is to make multi-agent work feel like one tool, not three.
v0.1 is the first build I'd hand to someone other than me. It's unsigned, Windows-only, and Anthropic-only. Mac, Linux, and other providers are on the roadmap. But it works, and it's free. I'd rather ship and learn than polish in private.
Three steps from download to coordination.
Install.
Download the Windows installer. Double-click. SmartScreen will warn that the build is unsigned — click More info → Run anyway. Per-user install, no admin prompt. Done in under a minute.
Sign in.
First launch asks you to sign in to Claude with your Anthropic plan. That's the only credential you need. Pick a workspace folder. Setup done.
Coordinate.
Type a goal. The orchestrator decides whether to handle it solo or spawn workers. Agents claim files through a shared state lock, run in parallel, and surface their work back to you in one view.
Honest about what this is, and isn't.
Cortex is brand new. We're not going to compare on user counts or stars — there aren't any yet. Compare on what the tool does.
| | Cortex | Three terminals, manually | Cursor | BridgeMind |
|---|---|---|---|---|
| Multi-agent on one workspace | Yes — multiple Claude roles in one window | Yes, you wire it | One agent in your editor | Multi-step workflow |
| Coordination layer | Shared state, file locks, role policy | None — you are it | Single agent in your editor | Workflow engine |
| Where your credentials live | Local config on your disk | Local config on your disk | Hosted by Cursor | Hosted by BridgeMind |
| Token billing | Direct to Anthropic, no markup | Direct to your provider | Subscription + usage | Subscription tiers |
| Setup steps | Installer + sign-in to Claude | Install Node, install each CLI, write glue scripts | Install editor, sign in | Web sign-up |
| Telemetry | Off by default | None | On by default | Hosted — everything |
| Source visibility | Source-available (license TBD) | N/A | Closed | Closed |
One installer. Sixty seconds.
Requirements
Windows, plus the @anthropic-ai/claude-code runtime — bundled with the installer, nothing to set up separately.
- Download Cortex-Setup-0.0.1.exe.
- Double-click. Windows SmartScreen will say "Windows protected your PC" because the build isn't code-signed yet.
- Click More info, then Run anyway.
- Per-user install — no admin prompt, no system PATH changes.
- Sign in to Claude on first launch. Pick a workspace folder. Done.
Why the warning? The build isn't code-signed yet (cert in the queue). The installer is per-user and writes to %LOCALAPPDATA%\Programs\Cortex\ — nothing global. You can uninstall from Apps & features at any time.
The honest answers.
Does Cortex work with GPT or Gemini today?
Not yet. v0.1 is Anthropic-only — every agent is a Claude subprocess (orchestrator on Sonnet, workers on Opus, critics and researchers on Haiku). Multi-provider support (OpenAI's Codex, Google's Gemini) is on the roadmap; the architecture treats each agent as a generic CLI subprocess, so adding them is config work, not a rewrite. But today, only Claude.
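To make "generic CLI subprocess" concrete, here's a minimal sketch of that idea. The AgentSpec shape, function name, and role strings are illustrative assumptions, not Cortex's internals — the point is that swapping providers means swapping the command, nothing else:

```typescript
import { spawnSync } from "node:child_process";

// Hypothetical agent spec: each role maps to a CLI command plus arguments.
// In v0.1 the command is always the Claude runtime; a different provider
// would be a different command, same plumbing.
interface AgentSpec {
  role: string;     // e.g. "orchestrator", "worker", "critic"
  command: string;  // CLI binary to run
  args: string[];
}

// Run one agent turn as a plain subprocess and capture its output.
function runAgent(spec: AgentSpec): string {
  const result = spawnSync(spec.command, spec.args, { encoding: "utf8" });
  if (result.status !== 0) {
    throw new Error(`${spec.role} exited with status ${result.status}`);
  }
  return result.stdout;
}
```

Because the orchestrator only sees stdin/stdout of a child process, nothing above it needs to know which model — or which vendor — is on the other end.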
Why does Windows SmartScreen warn me about the installer?
The build isn't code-signed yet. A code-signing certificate is on the way, but until it's installed and the binary has built up SmartScreen reputation, Windows treats unsigned executables from new publishers as suspicious. This is normal for indie releases.
Click More info on the warning dialog, then Run anyway. The installer is per-user and writes only to %LOCALAPPDATA%\Programs\Cortex\.
Is Cortex open source?
Source-available, license TBD. The source goes public on GitHub at launch — until then you can follow the build live.
How much does it cost?
Today: nothing. Cortex is free to install and run. You bring your own Anthropic plan and pay Anthropic directly — we don't proxy or mark up tokens.
Down the road: a paid tier will gate certain hosted features (cloud sync, team workspaces, packaged eval suites). The local-first single-user app stays free.
When will Mac and Linux ship?
Next on the roadmap. No firm ETA — the Electron build paths exist, but I want to get Windows users actually using Cortex before I split focus across three platforms. Email me if you want a heads-up the day they're ready.
Who built this?
Carter Williamson, founder of Williamson Automation LLC. I'm 17, in Greene County, NY, and I'm building this as the first product of a one-person software studio shipping new work weekly. The whole thing — product, site, posts — is built in the open. You can follow along on GitHub, YouTube, or Twitch.
Does Cortex send any data back to you?
Not by default. Telemetry is off. Your code, prompts, and workspace state stay on your machine. The only outbound traffic Cortex generates is the API calls to Anthropic that you triggered, using your credentials.
The marketing site (this page) uses Cloudflare Web Analytics, which is cookieless and aggregates only anonymous request data — no PII, no fingerprinting.
What happens to my Anthropic credentials?
When you sign in to Claude on first launch, the official @anthropic-ai/claude-code runtime stores its credentials in its own per-user config on your machine. Cortex hands those credentials directly to its bundled Anthropic CLI subprocesses and never transmits them anywhere else. There's no Cortex cloud account, so there's nowhere for them to go.
What about local models like Ollama?
On the roadmap, not in v0.1. The architecture treats each agent as a generic CLI subprocess, so an Ollama / LM Studio adapter is config work rather than a rewrite. Today, every agent is a Claude subprocess.
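As a sketch of what "config work rather than a rewrite" could look like: a provider table where each entry just names a CLI and how to pass it a prompt. Every key, command, and flag below is a hypothetical example of the shape, not Cortex's real configuration:

```typescript
// Hypothetical provider table — none of these entries reflect Cortex's
// actual config format. The point: a new provider is a new entry of the
// same shape, not new orchestration code.
interface ProviderEntry {
  command: string;                       // CLI binary to spawn
  argsFor: (prompt: string) => string[]; // how that CLI takes a prompt
}

const providers: Record<string, ProviderEntry> = {
  // Today's only real provider:
  claude: { command: "claude", argsFor: (p) => ["-p", p] },
  // A local-model adapter would be one more entry of the same shape:
  ollama: { command: "ollama", argsFor: (p) => ["run", "llama3", p] },
};
```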