A backend usually cannot build reliable product behavior on arbitrary prose. The application may need a list of IDs, a severity enum, a title, a normalized category, a set of generated questions, or a final report object. If a model returns freeform text and the application has to parse it later, the integration becomes fragile. Superwire makes structured output part of the workflow. Agents declare object-shaped outputs. The final workflow output block maps selected values into the JSON returned to the application. Later steps can reference specific fields from earlier steps through dot paths, which makes data flow visible and composable.

Output as a contract

An agent output describes what the step must produce. It is not only documentation for the developer; it is also the shape that later steps and the application are allowed to depend on.
agent classify_request {
    model: model.fast
    instruction: "Classify this support request: {{ input.message }}."

    output {
        category: enum { billing, technical, account, other }
        priority: enum { low, medium, high }
        needs_human: boolean
    }
}
The application does not need to parse a paragraph to discover the category or priority. A later step can reference agent.classify_request.priority. The final output can expose only the fields that matter to the caller.
output {
    category: agent.classify_request.category
    priority: agent.classify_request.priority
    needs_human: agent.classify_request.needs_human
}
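Because the workflow returns a typed object rather than prose, application code can consume it directly. A minimal Python sketch, assuming the result arrives as the JSON produced by an output block like the one above (the transport and the routing rules are illustrative, not part of Superwire):

```python
import json

# Hypothetical JSON returned by the workflow's final output block.
# Field names match the output mapping above; how it arrives
# (HTTP response, SDK call, etc.) is an assumption for illustration.
raw = '{"category": "billing", "priority": "high", "needs_human": false}'

result = json.loads(raw)

# The application branches on typed fields instead of parsing a paragraph.
if result["needs_human"] or result["priority"] == "high":
    queue = "human_review"
else:
    queue = f"auto_{result['category']}"

print(queue)  # → human_review
```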
This is a small pattern, but it scales to larger workflows. A planning agent can produce sections. A fan-out agent can process each section. A synthesis agent can aggregate the results. The final output can return the exact object the UI expects.

Explicit references instead of hidden context

Many agent systems accumulate context in a conversation. That can be convenient, but it also makes dependencies less obvious. A later model call may depend on something said several turns earlier, and a maintainer has to inspect the prompt history to understand what data was available. Superwire uses explicit references. If an agent needs a previous value, the workflow references that value by path. If a dynamic value depends on a tool call, the dependency is visible. If two agents do not reference each other, the executor can treat them as independent work.
agent extract_facts {
    model: model.fast
    instruction: "Extract factual claims from {{ input.document }}."

    output {
        facts: [string]
    }
}

agent extract_risks {
    model: model.fast
    instruction: "Extract operational risks from {{ input.document }}."

    output {
        risks: [string]
    }
}

agent write_summary {
    model: model.smart
    instruction: "Write a concise summary from facts {{ agent.extract_facts.facts }} and risks {{ agent.extract_risks.risks }}."

    output {
        summary: string
    }
}
The dependency graph is clear. The final summary depends on facts and risks. The facts and risks steps can run independently because neither references the other.
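To make the execution consequence concrete, here is a small Python sketch of the scheduling idea: any step whose references are all satisfied can run in the same batch. The dependency sets mirror the three agents above; the scheduling code is illustrative, not Superwire's actual executor:

```python
# Each step lists the steps it references. extract_facts and
# extract_risks reference nothing, so neither depends on the other.
deps = {
    "extract_facts": set(),
    "extract_risks": set(),
    "write_summary": {"extract_facts", "extract_risks"},
}

batches = []
done = set()
while len(done) < len(deps):
    # Every step whose referenced values are already available can run now.
    ready = {s for s, d in deps.items() if s not in done and d <= done}
    batches.append(sorted(ready))
    done |= ready

print(batches)
# → [['extract_facts', 'extract_risks'], ['write_summary']]
```

The first batch holds the two independent extraction steps; the summary runs only after both are done. Nothing in the workflow had to say "run these in parallel" — it falls out of the references.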

Better token discipline

Explicit data flow also improves token usage. A workflow does not need to pass every previous message, every database record, or every tool result into every later step. It can pass only the fields that matter. This is especially useful for product data with expensive details. A survey workflow can start with lightweight answer metadata, select relevant answer IDs, fetch full details only for those selected answers, and then summarize curated evidence. A research workflow can search approved sources, filter the result set, fetch selected pages, and then synthesize a report. The model gets less irrelevant context, the workflow uses fewer tokens, and the author can see exactly where additional context enters the process.
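The survey pattern above can be sketched in plain Python. The data and the detail lookup are hypothetical; the point is that only the selected answers' full text would ever reach the expensive summarization step:

```python
# Step 0: lightweight metadata for every answer (cheap to pass around).
answers_meta = [
    {"id": "a1", "topic": "pricing", "length": 420},
    {"id": "a2", "topic": "onboarding", "length": 1800},
    {"id": "a3", "topic": "pricing", "length": 950},
]

# Stand-in for an expensive per-answer detail lookup.
full_answers = {
    "a1": "The pricing page was confusing...",
    "a3": "I could not find the enterprise tier...",
}

# Step 1: a cheap selection step works on metadata only.
selected = [m["id"] for m in answers_meta if m["topic"] == "pricing"]

# Step 2: fetch full details only for the selected IDs.
evidence = [full_answers[i] for i in selected]

# Step 3: only this curated evidence is passed to the summarizing agent.
print(len(evidence))  # → 2 (of 3 answers)
```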

Reusable schemas

When multiple agents or outputs share a shape, schemas let the workflow define that structure once and reuse it. This keeps domain objects consistent across the workflow and makes larger outputs easier to maintain.
schema evidence_item {
    source_id: string
    quote: string
    relevance: enum { low, medium, high }
}

agent collect_evidence {
    model: model.smart
    instruction: "Collect evidence for {{ input.question }}."

    output {
        items: [schema.evidence_item]
    }
}
Schemas are useful because they turn model output into a domain contract. The workflow can describe a generated survey question, a support classification, an evidence item, or a report section as structured data rather than hoping that every prompt produces compatible prose.
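On the application side, the same shape can be mirrored as a domain type so both ends of the contract agree. A minimal Python sketch, assuming the workflow emits items with exactly the fields declared in evidence_item (the validation helper itself is illustrative, not a Superwire API):

```python
from dataclasses import dataclass

ALLOWED_RELEVANCE = {"low", "medium", "high"}  # mirrors the enum in the schema

@dataclass
class EvidenceItem:
    source_id: str
    quote: str
    relevance: str

def parse_item(obj: dict) -> EvidenceItem:
    # Reject values outside the contract instead of guessing at prose.
    if obj["relevance"] not in ALLOWED_RELEVANCE:
        raise ValueError(f"bad relevance: {obj['relevance']}")
    return EvidenceItem(obj["source_id"], obj["quote"], obj["relevance"])

item = parse_item(
    {"source_id": "doc-7", "quote": "Latency doubled in Q3.", "relevance": "high"}
)
print(item.relevance)  # → high
```

Because the schema is defined once in the workflow, the application-side type has a single authoritative shape to mirror, rather than one per prompt.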

Why this matters for maintainability

Structured outputs and explicit references make the workflow easier to change. If the UI needs a new field, the workflow author can add it deliberately. If a later step should stop seeing some data, the reference can be removed. If two steps can run in parallel, that falls out of the dependency graph rather than being hidden in custom async code. This is the difference between an AI feature that behaves like a prompt and an AI feature that behaves like part of a backend system.