Documentation Index
Fetch the complete documentation index from https://superwire.dev/llms.txt to discover all available pages before exploring further.
This guide creates one .wire file, sends it to the executor, and reads the JSON result.
1. Start the executor
```shell
docker run --rm -p 8080:8080 rmilewski/superwire
```
2. Create hello.wire
```
input {
  name: string
}

secrets {
  api_key: string
}

provider llm from openai {
  endpoint: "https://api.openai.com/v1"
  api_key: secrets.api_key
}

model fast from llm {
  id: "gpt-4.1-mini"
}

agent greeting {
  model: model.fast
  instruction: "Greet {{ input.name }} in one friendly sentence."
  output {
    message: string
  }
}

output {
  message: agent.greeting.message
}
```
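If you prefer a fully scripted setup, the same file can be written from the shell with a quoted heredoc (contents identical to the listing above):

```shell
# Write hello.wire non-interactively. The quoted delimiter ('WIRE') stops the
# shell from expanding {{ input.name }} or anything else inside the body.
cat > hello.wire <<'WIRE'
input {
  name: string
}

secrets {
  api_key: string
}

provider llm from openai {
  endpoint: "https://api.openai.com/v1"
  api_key: secrets.api_key
}

model fast from llm {
  id: "gpt-4.1-mini"
}

agent greeting {
  model: model.fast
  instruction: "Greet {{ input.name }} in one friendly sentence."
  output {
    message: string
  }
}

output {
  message: agent.greeting.message
}
WIRE
```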
3. Encode the workflow
Linux:

```shell
WORKFLOW_SOURCE_BASE64=$(base64 -w0 hello.wire)
```

macOS:

```shell
WORKFLOW_SOURCE_BASE64=$(base64 < hello.wire | tr -d '\n')
```
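If you want one command that behaves the same on both platforms, `openssl base64 -A` emits a single unwrapped line on Linux and macOS alike. A sketch of the same encoding step (demonstrated here on a tiny sample file; substitute hello.wire for the real run):

```shell
# -A writes base64 with no line wrapping, matching the -w0 / tr -d '\n' variants.
printf 'input { name: string }' > sample.wire
WORKFLOW_SOURCE_BASE64=$(openssl base64 -A -in sample.wire)
echo "$WORKFLOW_SOURCE_BASE64"
```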
4. Execute it
```shell
curl -s http://localhost:8080/execute \
  -H 'Content-Type: application/json' \
  -d @- <<JSON
{
  "input": {
    "name": "Rafael"
  },
  "secrets": {
    "api_key": "sk-..."
  },
  "workflow_source_base64": "$WORKFLOW_SOURCE_BASE64"
}
JSON
```
Expected response shape:
```json
{
  "message": "Hello Rafael — welcome to Superwire."
}
```
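To consume the result in a script, the message field can be pulled out with jq (assuming jq is installed; the sample response below is hardcoded for illustration):

```shell
# Extract .message from the executor's JSON response.
# RESPONSE is hardcoded here; in practice it would capture the curl output.
RESPONSE='{"message":"Hello Rafael"}'
echo "$RESPONSE" | jq -r '.message'
# prints: Hello Rafael
```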
5. Use streaming when needed
/execute/stream accepts the same JSON body and returns Server-Sent Events:
```shell
curl -N http://localhost:8080/execute/stream \
  -H 'Content-Type: application/json' \
  -d @- <<JSON
{
  "input": {
    "name": "Rafael"
  },
  "secrets": {
    "api_key": "sk-..."
  },
  "workflow_source_base64": "$WORKFLOW_SOURCE_BASE64"
}
JSON
```
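Server-Sent Events arrive as `event:`/`data:` lines separated by blank lines. The exact event types and payloads Superwire emits are not documented here, so this is only a sketch of stripping the `data:` prefix from a simulated stream; a real run would pipe the curl output instead:

```shell
# Simulated SSE stream with hypothetical payloads; sed keeps only the data
# payload of each frame and drops the blank separator lines.
printf 'data: {"delta":"Hello"}\n\ndata: {"delta":" Rafael"}\n\n' |
  sed -n 's/^data: //p'
# prints:
# {"delta":"Hello"}
# {"delta":" Rafael"}
```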