Alduin is an open-source AI agent orchestrator. A planner model decides what to do. Specialized executors do it. A cheap classifier routes the easy stuff away from the expensive path — with real token budgets, recursive sub-orchestration, and an MCP-compatible plugin system.
Most agents let the executing model decide whether to delegate, which means delegation quietly gets skipped — 50 to 80% of the time. Alduin bakes the split into the runtime.
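A minimal sketch of what "baking the split into the runtime" means (all names here are illustrative, not Alduin's actual API): the runtime routes unconditionally, so the executing model is never asked whether to delegate.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    executor: str
    payload: str

def run(message: str, classify: Callable[[str], str],
        plan: Callable[[str], list[Step]],
        execute: Callable[[Step], str]) -> list[str]:
    # The split lives in the runtime: the classifier picks the path,
    # the planner plans, executors execute. No model opts out of delegation.
    if classify(message) == "simple":
        return [execute(Step(executor="direct", payload=message))]
    return [execute(step) for step in plan(message)]

# Toy stand-ins for the three roles.
results = run(
    "refactor the billing module",
    classify=lambda m: "complex",
    plan=lambda m: [Step("code", m), Step("test", m)],
    execute=lambda s: f"{s.executor}:done",
)
```

Because delegation is a routing decision rather than a model choice, it cannot be silently skipped.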
The orchestrator can spawn child orchestrators on cheaper models for subtasks, with a configurable depth guard and per-session toggle.
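The depth guard can be sketched like this (function and exception names are hypothetical; the real decomposition logic lives in the planner):

```python
class DepthLimitError(Exception):
    pass

def split(task: str) -> list[str]:
    # Toy decomposition: "a+b" becomes ["a", "b"]; leaves stay atomic.
    return task.split("+") if "+" in task else []

def orchestrate(task: str, depth: int = 0, max_depth: int = 2) -> str:
    # The guard fires before a child orchestrator is spawned.
    if depth >= max_depth:
        raise DepthLimitError(f"refusing to recurse past depth {max_depth}")
    subtasks = split(task)
    if not subtasks:
        return f"done:{task}"
    # In Alduin, children would run on a cheaper model; here they just recurse.
    return ",".join(orchestrate(t, depth + 1, max_depth) for t in subtasks)
```

The per-session toggle would simply set `max_depth` to zero for sessions that opt out of sub-orchestration.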
A cheap model classifies every incoming message. Simple code, research, or calendar tasks skip planning entirely — complex work gets full orchestration.
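The routing decision reduces to a tag lookup; a sketch with a hypothetical label set (the cheap model would emit the tag):

```python
# Hypothetical fast-path labels; the classifier model emits one tag per message.
FAST_PATH = {"simple_code", "research", "calendar"}

def route(tag: str) -> str:
    # Fast-path tags bypass planning entirely; everything else
    # goes through full orchestration.
    return "direct" if tag in FAST_PATH else "orchestrate"
```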
Per-session, per-user, and per-model daily limits with real tokenizers — Anthropic, OpenAI, tiktoken. No character-count heuristics. Hard stops before the bill lands.
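A budget ledger with a hard stop might look like this (class and field names are illustrative; the token counter is stubbed here, where Alduin would plug in a real tokenizer such as `tiktoken.get_encoding("cl100k_base").encode`):

```python
from collections import defaultdict
from typing import Callable

class BudgetExceeded(Exception):
    pass

class DailyBudget:
    # Hypothetical ledger: refuses a request *before* it would push a
    # session's daily token count past the limit.
    def __init__(self, limit: int, count_tokens: Callable[[str], int]):
        self.limit = limit
        self.count = count_tokens
        self.spent: dict[str, int] = defaultdict(int)

    def charge(self, session: str, text: str) -> int:
        cost = self.count(text)
        if self.spent[session] + cost > self.limit:
            raise BudgetExceeded(f"{session}: {cost} tokens over budget")
        self.spent[session] += cost
        return cost

# Stub counter for the sketch only -- a real deployment uses a tokenizer.
budget = DailyBudget(limit=5, count_tokens=lambda t: len(t.split()))
```

The hard stop happens at `charge` time, so the over-budget request is never sent and never billed.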
Mix Claude, GPT, DeepSeek, and local Ollama in one run. Pin versions explicitly. Fallback chains fire automatically on rate limits or timeouts.
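The fallback behavior can be sketched as a chain walk (model names and the error types are illustrative, not Alduin's configuration format):

```python
class RateLimited(Exception):
    pass

def call_with_fallback(prompt, chain, retryable=(RateLimited, TimeoutError)):
    # Try each pinned model in order; move to the next on a rate limit
    # or timeout, and raise only when the whole chain is exhausted.
    errors = []
    for name, call in chain:
        try:
            return name, call(prompt)
        except retryable as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all models failed: {errors}")

def flaky(prompt):
    raise RateLimited("429")

# Hypothetical chain: a pinned hosted model falling back to local Ollama.
name, out = call_with_fallback("hi", [
    ("claude-pinned", flaky),
    ("ollama/llama3", lambda p: p.upper()),
])
```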
Per-skill and per-tool permissions, structured audit trails for every plan, and an encrypted vault for tokens — never written to disk unencrypted.
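A sketch of the permission-plus-audit pattern (the table, capability strings, and record fields are hypothetical — Alduin's real schema may differ):

```python
import json

# Deny-by-default table: a skill gets only the capabilities listed here.
PERMISSIONS = {
    "calendar": {"calendar.read", "calendar.write"},
    "research": {"web.fetch"},
}

AUDIT: list[str] = []

def authorize(skill: str, capability: str) -> bool:
    allowed = capability in PERMISSIONS.get(skill, set())
    # Every decision, allowed or not, lands in the structured audit trail.
    AUDIT.append(json.dumps({"skill": skill, "capability": capability,
                             "allowed": allowed}))
    return allowed
```

Secrets handling is separate: tokens would be encrypted at rest in the vault, never written to disk in the clear.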
Tools, skills, and connectors load through an MCP-compatible host. Ship a new skill as a YAML file and a function — no engine changes.
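The shape of "a YAML file and a function" can be sketched with an inline manifest (decorator, manifest keys, and registry are all illustrative; in Alduin the manifest would live in a YAML file next to the function):

```python
SKILLS: dict[str, dict] = {}

def register(manifest: dict):
    # Registering a skill is just adding it to the host's table --
    # the engine itself never changes.
    def wrap(fn):
        SKILLS[manifest["name"]] = {"manifest": manifest, "fn": fn}
        return fn
    return wrap

@register({"name": "word_count",
           "description": "Count words in a piece of text",
           "permissions": []})
def word_count(text: str) -> int:
    return len(text.split())

def invoke(name: str, *args):
    # The host looks skills up by manifest name at call time.
    return SKILLS[name]["fn"](*args)
```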
Classifier: a cheap model tags the message; simple tasks skip the orchestrator entirely.
Planner: plans, never executes; emits structured JSON plans, and can recurse.
Executor: zero conversation history; runs one step, returns a normalized result.
Output: channel-neutral payloads render to native messages, edits, streaming, and files.
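Concretely, a planner emission and an executor's normalized result might look like this (every field name here is illustrative, not Alduin's actual schema):

```python
import json

# Hypothetical plan shape: the planner only ever emits JSON like this;
# it never calls tools itself.
plan = json.loads("""
{
  "steps": [
    {"id": 1, "executor": "code", "input": "write the parser"},
    {"id": 2, "executor": "test", "input": "cover edge cases", "after": [1]}
  ]
}
""")

def normalize(step_id: int, ok: bool, output: str) -> dict:
    # Executors are stateless: one step in, one normalized result out.
    return {"step": step_id, "ok": ok, "output": output}

result = normalize(1, True, "parser written")
```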
Clone the repo, point it at your models, and run your first orchestrated task in under five minutes.