Open source · Go library · YAML DSL · MIT License
Vega is a fault-tolerant orchestration framework for AI agents. Erlang-style supervision, 850+ integrations via MCP, and a Slack-like dashboard where you watch your agents collaborate.
$ brew install everydev1618/tap/vega
$ vega serve
▶ Dashboard ready at http://localhost:3001
850+
App integrations via Composio
Go + YAML
Library or no-code DSL
SQLite
Zero-config persistence
MCP
Model Context Protocol native
Features
From a single agent to a full company of AI workers. Vega handles fault tolerance, team coordination, integrations, and observability.
Tell Iris what you need. She delegates to Hera, who builds agents, teams, and channels on the fly. No code required.
Connect Composio, Slack, GitHub, Gmail, and more via MCP. Add new services at runtime through chat.
Erlang-style supervision trees. Processes restart automatically. Errors are classified and retried with backoff.
Agents organize into teams with leads and members. Team channels provide transparent collaboration the user can watch.
Define multi-agent workflows in YAML with if/else, for-each, parallel, and try/catch. No programming required.
Token-by-token SSE streaming with live tool call indicators. Built-in REST API and web dashboard.
How it works
No YAML required. Just talk to Iris and she'll build what you need.
Iris is your chief of staff. Tell her what you need done. She understands your goals and figures out who should do the work.
Need a new capability? Hera creates agents on the fly with the right tools, personality, and team structure. No restart needed.
Team leads delegate to members. Agents post updates to channels. You watch it happen in real time through the dashboard.
Agents, conversations, memory, and MCP connections survive restarts. Your AI workforce is always ready.
Quick start
# Just talk
You: I need a content team
You: that can write blog posts
You: and post to our Slack
Iris: On it. I'll have Hera build that for you.
Hera: Done, love. Meet your team: Sofia (writer) and Marcus (editor). Check #content channel.
No code. No config files. Just describe what you need.
name: ContentTeam
settings:
  mcp:
    servers:
      - name: composio
agents:
  writer:
    model: claude-sonnet-4-20250514
    system: |
      You write engaging blog posts.
    tools:
      - composio__slack_send
Declarative. Version-controlled. Reproducible.
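The DSL's control flow (if/else, for-each, parallel, try/catch) composes in the same file. The sketch below is illustrative only — the step and field names are assumptions for this example, not Vega's documented workflow schema:

```yaml
# Illustrative sketch — field names are assumptions, not the documented schema.
workflow:
  steps:
    - try:
        - for-each: "{{ topics }}"
          do:
            - agent: writer
              input: "Draft a post about {{ item }}"
        - parallel:
            - agent: editor
            - agent: fact_checker
        - if: "{{ approved }}"
          then:
            - tool: composio__slack_send
          else:
            - agent: writer
      catch:
        - agent: lead
          input: "Workflow failed: {{ error }}"
```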
import (
	vega "github.com/everydev1618/govega"
	"github.com/everydev1618/govega/llm"
)

orch := vega.NewOrchestrator(
	vega.WithLLM(llm.NewAnthropic()),
)

proc, _ := orch.Spawn(vega.Agent{
	Name:  "writer",
	Model: "claude-sonnet-4-20250514",
})

// ctx is a context.Context; msg is the message to send.
resp, _ := proc.Send(ctx, msg)
Full programmatic control. Embed in any Go app.
Architecture
Vega applies Erlang's 40-year-old supervision model to AI agents. The result: agents that recover from failures automatically.
Immutable blueprint. Model, system prompt, tools, budget, retry policy. One agent can spawn many processes.
Running instance with state, messages, and metrics. Supervised by the orchestrator. Restarts on failure.
Process registry and lifecycle manager. Handles supervision strategies: OneForOne, OneForAll, RestForOne.
Install the CLI, set your API key, and have your first AI agent running in under 5 minutes.
Built by Etienne de Bruin
Vega applies Erlang's supervision model to AI agents. It started as an experiment to see if 40-year-old distributed systems patterns could make AI agents more reliable. The answer so far: yes, they can.