Documentation Index
Fetch the complete documentation index at: https://superwire.dev/llms.txt
Use this file to discover all available pages before exploring further.
Keep the first version small
Start with one provider, one model, one agent, and one output. Add schemas, tools, prompts, resources, and parallelism only when the workflow needs them.
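As a sketch of that minimal starting point, reusing only the constructs from the fan-out example later on this page (provider, model, and agent names are illustrative):

```
input {
  question: string
}

secrets {
  api_key: string
}

provider llm from openai {
  endpoint: "https://api.openai.com/v1"
  api_key: secrets.api_key
}

model fast from llm {
  id: "gpt-4.1-mini"
}

agent answer {
  model: model.fast
  instruction: "Answer the question: {{ input.question }}"
  output {
    answer: string
  }
}

output {
  answer: agent.answer
}
```

Everything else on this page is an addition to this shape, made only when a workflow needs it.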
Keep secrets out of instructions
Do not interpolate secrets into model instructions unless the external provider explicitly needs that value. Provider keys and MCP tokens belong in provider or MCP configuration.
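As a sketch, the secret appears only in configuration blocks (the syntax follows the provider example later on this page):

```
secrets {
  api_key: string
}

provider llm from openai {
  endpoint: "https://api.openai.com/v1"
  api_key: secrets.api_key
}
```

No agent instruction should then interpolate {{ secrets.api_key }}; only the provider block references the key.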
Prefer structured outputs
```
agent classify {
  model: model.fast
  instruction: "Classify {{ input.message }}."
  output {
    category: enum { bug, feature, question }
    confidence: number
  }
}
```
Structured outputs are easier to validate, reference, and aggregate.
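A downstream agent can then reference the classified fields directly instead of parsing free text. A sketch, assuming agent outputs are addressable by field (the examples on this page reference whole outputs such as agent.analyze_task; per-field access like agent.classify.category is an assumption here):

```
agent respond {
  model: model.fast
  instruction: """
    The message was classified as {{ agent.classify.category }}
    (confidence: {{ agent.classify.confidence }}).
    Draft an appropriate reply.
  """
  output {
    reply: string
  }
}
```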
Split fan-out and aggregation
Use separate steps when a workflow needs to process many items and then produce one final result.
```
input {
  project_id: number
}

secrets {
  api_key: string
}

provider llm from openai {
  endpoint: "https://api.openai.com/v1"
  api_key: secrets.api_key
}

model fast from llm {
  id: "gpt-4.1-mini"
}

mcp tasks {
  endpoint: "http://localhost:8000/mcp/tasks"
}

tool fetch_tasks from mcp.tasks.tool.fetch_tasks {
  input {
    project_id: number
  }
  output {
    tasks: [{
      id: number
      title: string
      description: string
      status: string
    }]
  }
}

dynamic {
  task_list: call tool.fetch_tasks {
    bindings {
      project_id: input.project_id
    }
  }
}

agent analyze_task for task in task_list.tasks {
  model: model.fast
  context: context()
  instruction: """
    Analyze this task and identify the next useful action.
    Task title: {{ task.title }}
    Task description: {{ task.description }}
    Task status: {{ task.status }}
  """
  output {
    task_id: number
    summary: string
    recommended_action: string
  }
}

agent summarize_project {
  model: model.fast
  instruction: """
    Create a project-level summary from these task analyses:
    {{ agent.analyze_task }}
  """
  output {
    summary: string
    next_actions: [string]
  }
}

output {
  task_analyses: agent.analyze_task
  project_summary: agent.summarize_project
}
```
analyze_task runs once per task, and those iterations may run in parallel. The final summarize_project agent aggregates the array produced by agent.analyze_task.
When a looped agent provides context: context(), each parallel execution receives its own context object. That gives the runtime a stable per-iteration context for cache matching when caching is enabled.
Continue context only when needed
Use context(agent.previous) when the next agent should continue the previous message history. Use structured references when you only need data.
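As a sketch of the contrast, with draft and revise as illustrative agent names (context(agent.draft) instantiates the context(agent.previous) pattern above):

```
agent draft {
  model: model.fast
  instruction: "Draft a reply to {{ input.message }}."
  output {
    reply: string
  }
}

agent revise {
  model: model.fast
  context: context(agent.draft)
  instruction: "Shorten the reply to two sentences."
  output {
    reply: string
  }
}
```

Here revise continues draft's message history. If it only needed the text, referencing {{ agent.draft.reply }} in the instruction would pass the data without carrying the history along.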