Vega v0.4

Open source · Go library · YAML DSL · MIT License

Build AI agent teams
that actually work

Vega is a fault-tolerant orchestration framework for AI agents. Erlang-style supervision, 850+ integrations via MCP, and a Slack-like dashboard where you watch your agents collaborate.

Install

$ brew install everydev1618/tap/vega

$ vega serve

▶ Dashboard ready at http://localhost:3001

850+

App integrations via Composio

Go + YAML

Library or no-code DSL

SQLite

Zero-config persistence

MCP

Model Context Protocol native

Everything you need to orchestrate agents

From a single agent to a full company of AI workers. Vega handles fault tolerance, team coordination, integrations, and observability.

Chat to build

Tell Iris what you need. She delegates to Hera, who builds agents, teams, and channels on the fly. No code required.

850+ integrations

Connect Composio, Slack, GitHub, Gmail, and more via MCP. Add new services at runtime through chat.

Fault-tolerant

Erlang-style supervision trees. Processes restart automatically. Errors are classified and retried with backoff.

Teams & channels

Agents organize into teams with leads and members. Team channels make their collaboration transparent, so you can watch the work as it happens.

YAML DSL

Define multi-agent workflows in YAML with if/else, for-each, parallel, and try/catch. No programming required.

Real-time streaming

Token-by-token SSE streaming with live tool call indicators. Built-in REST API and web dashboard.

From idea to AI workforce in minutes

No YAML required. Just talk to Iris and she'll build what you need.

01

You talk to Iris

Iris is your chief of staff. Tell her what you need done. She understands your goals and figures out who should do the work.

02

Hera builds the team

Need a new capability? Hera creates agents on the fly with the right tools, personality, and team structure. No restart needed.

03

Agents collaborate

Team leads delegate to members. Agents post updates to channels. You watch it happen in real time through the dashboard.

04

Everything persists

Agents, conversations, memory, and MCP connections survive restarts. Your AI workforce is always ready.

Three ways to use Vega

Chat with Iris

# Just talk

You:  I need a content team that can write
      blog posts and post to our Slack

Iris: On it. I'll have Hera build that for you.

Hera: Done, love. Meet your team: Sofia (writer)
      and Marcus (editor). Check #content channel.

No code. No config files. Just describe what you need.

YAML DSL

name: ContentTeam

settings:
  mcp:
    servers:
      - name: composio

agents:
  writer:
    model: claude-sonnet-4-20250514
    system: |
      You write engaging blog posts.
    tools:
      - composio__slack_send

Declarative. Version-controlled. Reproducible.

Go library

import (
    "github.com/everydev1618/govega"
    "github.com/everydev1618/govega/llm"
)

orch := vega.NewOrchestrator(
    vega.WithLLM(llm.NewAnthropic()),
)

proc, _ := orch.Spawn(vega.Agent{
    Name:  "writer",
    Model: "claude-sonnet-4-20250514",
})

resp, _ := proc.Send(ctx, msg)

Full programmatic control. Embed in any Go app.

Built on proven patterns

Vega applies Erlang's 40-year-old supervision model to AI agents. The result: agents that recover from failures automatically.

A

Agent

Immutable blueprint. Model, system prompt, tools, budget, retry policy. One agent can spawn many processes.

P

Process

Running instance with state, messages, and metrics. Supervised by the orchestrator. Restarts on failure.

O

Orchestrator

Process registry and lifecycle manager. Handles supervision strategies: OneForOne, OneForAll, RestForOne.
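The three strategies differ only in which sibling processes restart when one crashes. A stdlib-only sketch of the decision (the `toRestart` helper is illustrative, not Vega's API):

```go
package main

import "fmt"

type Strategy int

const (
	OneForOne  Strategy = iota // restart only the failed process
	OneForAll                  // restart every process in the group
	RestForOne                 // restart the failed process and those started after it
)

// toRestart returns the indices of processes to restart when the
// process at index failed crashes, given start order 0..n-1.
func toRestart(s Strategy, n, failed int) []int {
	var out []int
	for i := 0; i < n; i++ {
		switch s {
		case OneForOne:
			if i == failed {
				out = append(out, i)
			}
		case OneForAll:
			out = append(out, i)
		case RestForOne:
			if i >= failed {
				out = append(out, i)
			}
		}
	}
	return out
}

func main() {
	fmt.Println(toRestart(OneForOne, 4, 1))  // [1]
	fmt.Println(toRestart(OneForAll, 4, 1))  // [0 1 2 3]
	fmt.Println(toRestart(RestForOne, 4, 1)) // [1 2 3]
}
```

OneForOne suits independent workers; OneForAll fits processes that share state and must restart together; RestForOne fits pipelines where later processes depend on earlier ones.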

Start building with Vega

Install the CLI, set your API key, and have your first AI agent running in under 5 minutes.


Built by Etienne de Bruin

@everydev1618  ·  @etdebruin

Vega applies Erlang's supervision model to AI agents. It started as an experiment to see if 40-year-old distributed systems patterns could make AI agents more reliable. The answer so far: yes, they can.