Documentation Index

Fetch the complete documentation index at: https://superwire.dev/llms.txt

Use this file to discover all available pages before exploring further.

Superwire is a declarative DSL for controlled server-side AI workflows. A .wire file describes:
  • the inputs a workflow accepts,
  • the secrets it needs,
  • the providers and model profiles it may use,
  • the MCP capabilities it can access,
  • the agents that should run,
  • the dependencies between those agents,
  • and the final structured output.
Most applications do not embed the runtime directly. They send a .wire workflow to the Superwire executor server and receive either one JSON response or a stream of Server-Sent Events. Superwire is most useful when an AI feature needs to behave like product infrastructure: scoped tools, explicit data flow, structured outputs, validation, streaming, and testable execution. For the product-level rationale, start with the separate Why Superwire section before the syntax reference.
.wire source + input + secrets -> Superwire executor -> structured output

Run the executor

The executor is published as a Docker image:
docker run --rm -p 8080:8080 rmilewski/superwire
It exposes two endpoints:
Endpoint               Response style       Use when
POST /execute          JSON                 You want the final workflow result.
POST /execute/stream   Server-Sent Events   You want progress events while the workflow runs.
Both endpoints use the same request body.

A minimal workflow

Create hello.wire:
input {
    message: string
}

secrets {
    api_key: string
}

provider llm from openai {
    endpoint: "https://api.openai.com/v1"
    api_key: secrets.api_key
}

model fast from llm {
    id: "gpt-4.1-mini"
}

agent reply {
    model: model.fast
    instruction: "Reply to this message: {{ input.message }}"

    output {
        message: string
    }
}

output {
    result: agent.reply
}
This workflow has one public input, one secret, one provider instance, one model profile, one agent, and one final output.

Execute it

Base64-encode the workflow source:
WORKFLOW_SOURCE_BASE64=$(base64 -w0 hello.wire)
On macOS:
WORKFLOW_SOURCE_BASE64=$(base64 < hello.wire | tr -d '\n')
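If you are assembling the request from application code instead of the shell, the same encoding step is a one-liner. A minimal Python sketch (the inline workflow source here is a stand-in for reading hello.wire from disk):

```python
import base64

# Workflow source; in practice: workflow_source = open("hello.wire").read()
workflow_source = """\
input {
    message: string
}
"""

# Equivalent of `base64 -w0 hello.wire`: one unwrapped base64 line.
workflow_source_base64 = base64.b64encode(workflow_source.encode()).decode()
print(workflow_source_base64)
```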
Send it to the executor:
curl -s http://localhost:8080/execute \
  -H 'Content-Type: application/json' \
  -d @- <<JSON
{
  "input": {
    "message": "Write a short welcome message for Superwire."
  },
  "secrets": {
    "api_key": "sk-..."
  },
  "workflow_source_base64": "$WORKFLOW_SOURCE_BASE64"
}
JSON
The response is the workflow output block as JSON:
{
  "result": {
    "message": "Welcome to Superwire — a concise way to describe and run AI workflows."
  }
}
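The curl call above can also be issued from application code. The sketch below builds the request body both endpoints accept and shows one way to POST it with Python's standard library; the executor URL assumes the Docker container from earlier is running locally, and the example at the bottom only builds the body, it does not contact a server:

```python
import base64
import json
import urllib.request

EXECUTOR_URL = "http://localhost:8080/execute"  # local Docker executor

def build_request_body(workflow_source: str, inputs: dict, secrets: dict) -> bytes:
    """Assemble the JSON body shared by /execute and /execute/stream."""
    return json.dumps({
        "input": inputs,
        "secrets": secrets,
        "workflow_source_base64": base64.b64encode(
            workflow_source.encode()
        ).decode(),
    }).encode()

def execute(workflow_source: str, inputs: dict, secrets: dict) -> dict:
    """POST the workflow to the executor and return the parsed JSON result."""
    req = urllib.request.Request(
        EXECUTOR_URL,
        data=build_request_body(workflow_source, inputs, secrets),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build (but do not send) a request body to inspect its shape:
body = json.loads(build_request_body(
    "input { message: string }",
    {"message": "Write a short welcome message for Superwire."},
    {"api_key": "sk-..."},
))
print(sorted(body))
```

Calling `execute(...)` with a running executor would return the workflow output block shown above.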

Stream the same workflow

Use /execute/stream when your UI needs progress events:
curl -N http://localhost:8080/execute/stream \
  -H 'Content-Type: application/json' \
  -d @- <<JSON
{
  "input": {
    "message": "Write a short welcome message for Superwire."
  },
  "secrets": {
    "api_key": "sk-..."
  },
  "workflow_source_base64": "$WORKFLOW_SOURCE_BASE64"
}
JSON
A stream contains lifecycle events such as workflow start, planning, agent start, agent completion, tool/MCP events, and final workflow completion.
Where to go next:
  • Why Superwire: conceptual background on the product-backend problem, use cases, and decision criteria.
  • Quickstart: walks through the same flow step by step.
  • Core Concepts: explains workflows, providers, models, agents, schemas, and dependencies.
  • Executor API: documents the HTTP request and response contract.