The open-source coding agent you control
Many agents ship with hardcoded prompts and tools. Aether puts everything in config files you control, so every token in context is yours to mold.
Written in Rust, MIT licensed, batteries-included.
```
$ brew install contextbridge/tap/aether
$ curl -fsSL aether-agent.io/install | sh
$ cargo install aether-agent-cli
```

Run in a TUI, Editor/IDE, or Headless
You’ll be up and running in two commands
Run in a TUI
First run auto-scaffolds config files, tools, and a default agent prompt.
```
$ aether
✓ Created .aether/settings.json
✓ Created .aether/mcp.json
✓ Created .aether/DEFAULT.md
✓ Created AGENTS.md
Launching TUI...
```

Run in your editor
Expose over ACP for VS Code, JetBrains, or any editor with agent support.
```
$ aether acp
```

Run headlessly
Run headless for scripts, CI, and automation.
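In CI, for instance, headless mode drops into an ordinary pipeline step. The workflow below is a hypothetical GitHub Actions sketch: it assumes aether is already installed on the runner, and the ANTHROPIC_API_KEY secret name is an assumption that should match whichever provider your config uses.

```yaml
# Hypothetical CI job: run Aether headlessly on every push.
# Assumes aether is on the runner's PATH and a provider key is set via secrets.
name: aether-refactor
on: push
jobs:
  refactor:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Aether headlessly
        run: aether headless "refactor auth"
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```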
```
$ aether headless "refactor auth"
```

Use any LLM you want, cloud or local
You can bring your own provider, too.
Cloud
Anthropic, OpenAI, OpenRouter, DeepSeek, Moonshot, Zai, LlamaCpp and more are supported out of the box.
```json
{
  "agents": [
    {
      "name": "Plan",
      "description": "Plans implementation strategy",
      "model": "codex:gpt-5.4",
      "userInvocable": true,
      "reasoningEffort": "high",
      "prompts": ["PLAN.md"]
    },
    {
      "name": "Build",
      "description": "Builds features and fixes bugs",
      "model": "zai:glm-5.1",
      "userInvocable": true,
      "prompts": ["BUILD.md"]
    },
    {
      "name": "Fast",
      "description": "Quick tasks and lookups",
      "model": "openrouter:minimax/minimax-m2.7",
      "userInvocable": true,
      "prompts": ["FAST.md"]
    }
  ]
}
```

Local
Run local models with llama.cpp or Ollama.
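An Ollama entry would presumably be analogous to the llama.cpp one below. This sketch assumes an ollama: model prefix mirroring the llamacpp: prefix, and the model identifier is a placeholder; check the provider docs for the exact scheme.

```json
{
  "agents": [
    {
      "name": "LocalOllama",
      "description": "Runs locally via Ollama (ollama: prefix is an assumption)",
      "model": "ollama:qwen3.5:9b",
      "userInvocable": true,
      "prompts": ["LOCAL.md"]
    }
  ]
}
```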
```json
{
  "agents": [
    ...
    {
      "name": "Local",
      "description": "Runs locally via llama.cpp",
      "model": "llamacpp:unsloth/Qwen3.5-9B-GGUF",
      "userInvocable": true,
      "prompts": ["LOCAL.md"]
    }
  ]
}
```

Alloy multiple models together
Combine the strengths of multiple models by round-robining between them each conversation turn. You can even mix cloud and local models together.
```json
{
  "agents": [
    ...
    {
      "name": "Alloy",
      "description": "Mixes cloud and local models",
      "model": "zai:glm-5.1,llamacpp:unsloth/Qwen3.5-9B-GGUF",
      "userInvocable": true,
      "prompts": ["ALLOY.md"]
    }
  ]
}
```

Control every token in your system prompt
Start with a blank slate and add only what you need.
Start with a blank slate
Agents begin with no system prompt or tools.
```
$ aether show-prompt -a Build

---
Prompt chars: 0
Tool schema chars: 0
Est. tokens: ~0
MCP tools: 0
```

Use any markdown files you want
Your system prompt is just an array of markdown files. Load AGENTS.md, CLAUDE.md, GEMINI.md, or whatever else you have, and compose them all together. @-file inclusion and shell command interpolation are supported out of the box.
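As an illustration, a prompt file might pull in other files and command output at prompt-build time. The @-path and $(...) forms shown here are assumptions about the inclusion and interpolation syntax, so check the docs for the exact spelling.

```markdown
# Build Agent
You are a staff+ level engineer.

<!-- @-file inclusion: inlines the referenced file's contents (syntax assumed) -->
@docs/style-guide.md

<!-- shell interpolation: embeds command output when the prompt is built (syntax assumed) -->
Current branch: $(git branch --show-current)
```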
```json
{
  "agents": [
    {
      "name": "Build",
      "description": "Builds features and fixes bugs",
      "model": "zai:glm-5.1",
      "userInvocable": true,
      "reasoningEffort": "high",
      "prompts": ["BUILD.md", "AGENTS.md"]
    }
  ]
}
```

```markdown
# Build Agent
You are a staff+ level engineer.
...
```

```markdown
# Agents
Use the Explorer agent to search code.
...
```

View your agent's system prompt with a single command
Including an estimated token count.
```
$ aether show-prompt -a Build

# System Prompt
You are an expert senior Rust engineer.
...

# Agents
Use the Explorer agent to search code.
...

---
Prompt chars: 120
Tool schema chars: 0
Est. tokens: ~30
MCP tools: 0
```

Connect tools via MCP servers (local or remote)
Be minimal…or maximal. Give your agent a single tool (Bash) and point it at some CLIs + skill files. Or, go full batteries-included by adding a few lines to your .aether/mcp.json file.
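The minimal end of that spectrum might be an .aether/mcp.json that enables only the built-in coding server (which provides bash among its tools); this sketch reuses the in-memory server syntax shown below and assumes the coding server can be enabled on its own.

```json
{
  "servers": {
    "coding": { "type": "in-memory" }
  }
}
```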
Batteries included MCPs
Aether includes several built-in MCP servers you can optionally enable. They run in-process, over a custom in-memory transport.
coding: file I/O, grep, bash, and web fetch
lsp: language-aware symbol lookup, diagnostics, and rename
skills: reusable domain knowledge as slash commands
subagents: spawn parallel child agents with their own models and tools
tasks: persistent task tracking across turns
survey: allow the agent to ask you questions with structured output
plan: submit markdown plans for approval and feedback
```json
{
  "agents": [
    {
      "name": "Build",
      "description": "Builds features and fixes bugs",
      "model": "zai:glm-5.1",
      "userInvocable": true,
      "prompts": ["BUILD.md", "AGENTS.md"],
      "mcps": [".aether/mcp.json"]
    }
  ]
}
```

```json
{
  "servers": {
    "coding": { "type": "in-memory" },
    "lsp": { "type": "in-memory" },
    "skills": { "type": "in-memory" },
    "subagents": { "type": "in-memory" },
    "tasks": { "type": "in-memory" },
    "survey": { "type": "in-memory" },
    "plan": { "type": "in-memory" }
  }
}
```

Any MCP server
Plug in any external MCP server via stdio, SSE, or streamable HTTP. Remote server OAuth credentials are stored securely in your keychain.
```json
{
  "servers": {
    ...
    "linear": {
      "type": "http",
      "url": "https://mcp.linear.app/mcp"
    }
  }
}
```

MCP tool proxy stops tools from eating your context
Aether can wrap external MCP servers in a built-in tool proxy, so agents discover nested tools on demand instead of loading every schema up front.
```json
{
  "servers": {
    "proxy": {
      "type": "in-memory",
      "input": {
        "servers": {
          "chrome-devtools": {
            "type": "stdio",
            "command": "npx",
            "args": ["-y", "chrome-devtools-mcp@latest"]
          },
          "linear": {
            "type": "http",
            "url": "https://mcp.linear.app/mcp"
          }
        }
      }
    }
  }
}
```

Ready to build?
Use the full CLI or pick individual crates — aether-core, llm, mcp-servers, wisp, crucible — to compose your own agent.
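As a sketch, pulling individual crates into your own agent might look like the Cargo.toml fragment below. The crate names come from the list above; the version numbers are placeholders, so pin to the releases you actually use.

```toml
[dependencies]
# Hypothetical versions; replace with real releases.
aether-core = "0.1"
llm = "0.1"
mcp-servers = "0.1"
```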