Documentation Index
Fetch the complete documentation index at: https://superwire.dev/llms.txt
Use this file to discover all available pages before exploring further.
Superwire separates provider drivers, provider instances, model profiles, and the agents that consume them:
driver -> provider instance -> model profile -> agent
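Taken together, the four layers can be sketched as one minimal file (each declaration is explained in the sections below):

    secrets {
        api_key: string
    }

    provider llm from openai {
        endpoint: "https://api.openai.com/v1"
        api_key: secrets.api_key
    }

    model fast from llm {
        id: "gpt-4.1-mini"
    }

    agent reply {
        model: model.fast
        instruction: "Reply to {{ input.message }}"
        output {
            message: string
        }
    }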
Provider declarations
A provider declaration creates a named provider instance from a provider driver.
    secrets {
        api_key: string
    }

    provider llm from openai {
        endpoint: "https://api.openai.com/v1"
        api_key: secrets.api_key
    }
openai is the driver. llm is the provider instance name. Provider drivers are implemented by the Superwire engine and must be supported by the executor.
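As a sketch, assuming a driver can back more than one provider instance, two instances of the openai driver could point at different endpoints (the proxy endpoint and the proxy_api_key secret here are hypothetical):

    provider llm from openai {
        endpoint: "https://api.openai.com/v1"
        api_key: secrets.api_key
    }

    provider llm_proxy from openai {
        endpoint: "https://proxy.example.com/v1"
        api_key: secrets.proxy_api_key
    }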
Model declarations
A model declaration creates a reusable model profile from a provider instance.
    model fast from llm {
        id: "gpt-4.1-mini"
    }
The id is the provider-specific model identifier sent to the provider driver.
Agents reference model profiles through the model namespace:
    agent reply {
        model: model.fast
        instruction: "Reply to {{ input.message }}"
        output {
            message: string
        }
    }
Inference defaults
Inference configuration belongs on the model profile when it should apply to every agent using that profile.
    model fast from llm {
        id: "gpt-4.1-mini"
        inference {
            temperature: 0.2
            max_tokens: 4_000
        }
    }
An agent can override inference for that specific model usage by opening a block after the model reference:
    agent creative_reply {
        model: model.fast {
            inference {
                temperature: 0.8
            }
        }
        instruction: "Write a creative reply to {{ input.message }}"
        output {
            message: string
        }
    }
The usage block specializes the model for that agent. It does not create a new named model profile.
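For contrast, two agents can share the same profile while only one specializes it; both still resolve to the single model.fast declaration:

    agent reply {
        model: model.fast
        instruction: "Reply to {{ input.message }}"
        output {
            message: string
        }
    }

    agent creative_reply {
        model: model.fast {
            inference {
                temperature: 0.8
            }
        }
        instruction: "Write a creative reply to {{ input.message }}"
        output {
            message: string
        }
    }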
Multiple model profiles
Define multiple profiles when different agents should use different model IDs or inference defaults.
    model fast from llm {
        id: "gpt-4.1-mini"
    }

    model smart from llm {
        id: "gpt-4.1"
    }

    model cheap from llm {
        id: "gpt-4.1-nano"
    }
Then each agent chooses the profile that matches its job:
    agent quick_reply {
        model: model.fast
        instruction: "Reply briefly to {{ input.message }}"
        output {
            message: string
        }
    }

    agent careful_review {
        model: model.smart
        instruction: "Review this carefully: {{ input.message }}"
        output {
            review: string
        }
    }
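The cheap profile defined above follows the same pattern. As a sketch, a low-stakes agent (the agent name and fields here are illustrative) might use it:

    agent tag_message {
        model: model.cheap
        instruction: "Tag {{ input.message }} with a one-word topic"
        output {
            topic: string
        }
    }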
Provider-level defaults
Provider-level defaults are optional and should be reserved for settings that truly apply to all model calls through that provider, such as timeout or retry configuration.
    provider llm from openai {
        endpoint: "https://api.openai.com/v1"
        api_key: secrets.api_key
        inference {
            timeout_seconds: 60
            retry_attempts: 2
        }
    }
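A model profile declared from that provider keeps its own inference block. A sketch, assuming provider-level and model-level settings apply together (the exact merge semantics are not stated here):

    provider llm from openai {
        endpoint: "https://api.openai.com/v1"
        api_key: secrets.api_key
        inference {
            timeout_seconds: 60
            retry_attempts: 2
        }
    }

    model fast from llm {
        id: "gpt-4.1-mini"
        inference {
            temperature: 0.2
        }
    }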