
A Superwire workflow is a dependency graph built from declarations. For example:

input {
    message: string
}

secrets {
    api_key: string
}

provider llm from openai {
    endpoint: "https://api.openai.com/v1"
    api_key: secrets.api_key
}

model fast from llm {
    id: "gpt-4.1-mini"
}

agent reply {
    model: model.fast
    instruction: "Reply to {{ input.message }}"
    output {
        message: string
    }
}

output {
    result: agent.reply
}

Main declarations

Declaration              Purpose
input                    Public values supplied by the executor request.
secrets                  Sensitive values supplied by the executor request.
provider                 Configures access to a backend driver such as openai or ollama.
model                    Defines a reusable model profile from a provider instance.
schema                   Defines reusable structured types.
mcp                      Defines an MCP server endpoint.
tool, prompt, resource   Import external MCP capabilities.
dynamic                  Builds runtime values from expressions.
agent                    Runs one model step.
output                   Defines the final JSON response.
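
Some of these declarations do not appear in the workflow above. As a hedged sketch only, assuming schema and dynamic follow the same block syntax as the other declarations (the field names here are illustrative, not taken from the reference):

schema summary {
    title: string
    body: string
}

dynamic greeting {
    value: "Hello, {{ input.message }}"
}

A schema names a reusable structured type, and a dynamic builds a runtime value from an expression, so later declarations can reference both by name.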

Runtime topology

driver -> provider instance -> model profile -> agent
A provider declaration does not select a model by itself. A model profile selects the provider-specific model ID. Agents reference named model profiles.
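
To make the separation concrete, one provider instance can back several model profiles. This sketch extends the example above; the second model ID is illustrative:

model fast from llm {
    id: "gpt-4.1-mini"
}

model deep from llm {
    id: "gpt-4.1"
}

The provider declaration is unchanged; only the model profiles pick provider-specific model IDs, and each agent selects a profile by name (model: model.fast or model: model.deep).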

Design goals

  • Keep secrets outside .wire files.
  • Keep provider access separate from model selection.
  • Make dependencies explicit through references.
  • Require structured outputs so downstream steps can rely on stable shapes.
  • Let the executor schedule independent work concurrently.
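
For instance, two agents that each read only from input share no edge in the dependency graph, so the executor can run them concurrently. A sketch reusing the declarations above (the agent names are illustrative):

agent summarize {
    model: model.fast
    instruction: "Summarize {{ input.message }}"
    output {
        summary: string
    }
}

agent classify {
    model: model.fast
    instruction: "Classify the tone of {{ input.message }}"
    output {
        tone: string
    }
}

output {
    summary: agent.summarize
    tone: agent.classify
}

Neither agent references the other's output, so nothing in the graph forces them to run sequentially.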