Sub-Workflows

The workflow block type lets you execute an entire child workflow as a single step in a parent workflow. The child runs in an isolated state, with explicit input/output mapping as the only channel between parent and child. This implements a Hierarchical State Machine (HSM) pattern.

Use sub-workflows when you have a reusable pipeline that multiple parent workflows call, or when you want to encapsulate a complex sequence behind a clean interface. Common patterns:

  • A “summarize” sub-workflow called by different analysis pipelines
  • A “review and revise” loop packaged as a reusable unit
  • Breaking a large workflow into composable, testable pieces

The child workflow is a standard YAML workflow file with an interface section that declares its public inputs and outputs. The interface is required — the engine rejects workflow blocks that reference children without one.

custom/workflows/summarizer.yaml

```yaml
version: "1.0"
interface:
  inputs:
    - name: topic
      target: shared_memory.topic
      type: string
      required: true
    - name: max_words
      target: shared_memory.max_words
      type: integer
      required: false
      default: 500
  outputs:
    - name: summary
      source: results.summarize
      type: string
blocks:
  research:
    type: linear
    soul_ref: researcher
  summarize:
    type: linear
    soul_ref: writer
    depends: research
workflow:
  name: Summarizer
  entry: research
```
Interface input fields:

| Field | Type | Default | Description |
|---|---|---|---|
| `name` | str | required | Input parameter name (must be unique) |
| `target` | str | required | Dot-notation path into the child’s state (e.g. `shared_memory.topic`) |
| `type` | str | none | Type hint for documentation |
| `required` | bool | true | Whether the parent must provide this input |
| `default` | Any | none | Default value when the parent omits this input |
| `description` | str | none | Human-readable description |

Interface output fields:

| Field | Type | Default | Description |
|---|---|---|---|
| `name` | str | required | Output parameter name (must be unique) |
| `source` | str | required | Dot-notation path in the child’s final state (e.g. `results.summarize`) |
| `type` | str | none | Type hint for documentation |
| `description` | str | none | Human-readable description |

In the parent workflow, add a workflow block with workflow_ref pointing to the child, and map inputs and outputs using interface names.

custom/workflows/analysis-pipeline.yaml

```yaml
version: "1.0"
blocks:
  gather:
    type: linear
    soul_ref: collector
  run_summary:
    type: workflow
    workflow_ref: summarizer
    inputs:
      topic: results.gather            # interface name → parent state path
    outputs:
      results.final_summary: summary   # parent state path → interface name
    on_error: catch
    depends: gather
  present:
    type: linear
    soul_ref: presenter
    depends: run_summary
workflow:
  name: Analysis Pipeline
  entry: gather
```
Workflow block fields:

| Field | Type | Default | Description |
|---|---|---|---|
| `workflow_ref` | str | required | File stem or path of the child workflow |
| `inputs` | Dict[str, str] | none | Interface name mapped to parent state path |
| `outputs` | Dict[str, str] | none | Parent state path mapped to interface output name |
| `max_depth` | int | none | Maximum nesting depth (falls back to `config.max_workflow_depth`, default 10) |
| `on_error` | str | `"raise"` | `"raise"` or `"catch"` |

Input keys are interface names (plain strings like topic), not child state paths. The engine resolves the interface name to the child’s target path. The values are parent state paths using dot-notation: results.gather, shared_memory.topic, current_task.

Output keys are parent state paths where values get written. Output values are interface names from the child’s interface. The engine resolves the interface name to the child’s source path and copies the value into the parent state.
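The two-step resolution (interface name to declared path, then a dot-notation read or write) can be pictured with a small sketch. The helper names and state shapes here are hypothetical, not the engine's actual internals:

```python
def get_path(state: dict, path: str):
    """Read a dot-notation path like 'results.gather' from nested dicts."""
    node = state
    for part in path.split("."):
        node = node[part]
    return node

def set_path(state: dict, path: str, value) -> None:
    """Write a dot-notation path, creating intermediate dicts as needed."""
    *parents, leaf = path.split(".")
    node = state
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value

# From the parent block: interface name -> parent state path
inputs = {"topic": "results.gather"}
# From the child's interface: interface name -> child state target
interface_targets = {"topic": "shared_memory.topic"}

parent_state = {"results": {"gather": "LLM safety research"}}
child_state = {}
for name, parent_path in inputs.items():
    value = get_path(parent_state, parent_path)        # read from parent
    set_path(child_state, interface_targets[name], value)  # write to child target

# child_state is now {"shared_memory": {"topic": "LLM safety research"}}
```

Output mapping is the mirror image: read the interface's `source` path from the child's final state, write the parent state path given as the mapping key.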

The on_error field controls what happens when the child workflow fails.

With on_error: raise (the default), the child’s exception propagates to the parent. The parent workflow fails at the workflow block. If the parent block has an error_route, the engine routes there.

With on_error: catch, the parent workflow continues even if the child fails. The workflow block produces a BlockResult with:

  • exit_handle: "error"
  • output: an error description string
  • metadata: includes child_status: "failed", child_error, child_cost_usd, child_tokens, child_duration_s

This also catches soft errors — if any child block completed with exit_handle: "error", the parent treats the entire child run as failed.

catch pattern with error routing

```yaml
blocks:
  risky_sub:
    type: workflow
    workflow_ref: experimental-pipeline
    on_error: catch
    error_route: fallback
  fallback:
    type: linear
    soul_ref: fallback_handler
```
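The failure check in catch mode can be sketched as follows. The function and the dict-based result shape are hypothetical; the point is that both hard errors and soft errors (a child block exiting on its error handle) mark the child run as failed:

```python
def child_run_failed(child_error, block_results: dict) -> bool:
    """A child run counts as failed on a hard error (an exception was
    caught) or a soft error: any block whose exit_handle is 'error'."""
    if child_error is not None:
        return True
    return any(r.get("exit_handle") == "error" for r in block_results.values())

# Hard failure: the child raised
assert child_run_failed("timeout", {})
# Soft failure: one child block finished on its error handle
assert child_run_failed(None, {"summarize": {"exit_handle": "error"}})
# Clean run
assert not child_run_failed(None, {"summarize": {"exit_handle": "done"}})
```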

Each sub-workflow execution creates a separate run record linked to the parent run. The child gets its own observer for independent monitoring. The BlockResult.metadata on the parent includes child_run_id for drill-down.

Cost and token usage from the child are propagated back to the parent — total_cost_usd and total_tokens accumulate across the hierarchy.

The child workflow receives a clean WorkflowState. It does not inherit the parent’s results, shared memory, or execution log. The only data the child sees is what the parent explicitly passes through inputs.

Similarly, the parent only receives data from the child through the outputs mapping. No results leak from child to parent outside of the declared interface.

The engine tracks a call stack of workflow names during execution. Two safety mechanisms prevent runaway recursion:

  • Cycle detection: if a workflow name already appears in the call stack, the engine raises a RecursionError. A workflow cannot call itself, directly or indirectly.
  • Depth limit: the call stack depth is checked against max_depth before each child execution. The default limit is 10, configurable per-block or via config.max_workflow_depth in the workflow file.
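Both checks can be sketched together with a small call-stack guard (a hypothetical class, not the engine's actual implementation):

```python
class WorkflowRecursionGuard:
    """Tracks the workflow call stack; rejects cycles and excess depth."""

    def __init__(self, max_depth: int = 10):
        self.max_depth = max_depth
        self.call_stack: list[str] = []

    def enter(self, workflow_name: str) -> None:
        # Cycle detection: a name may appear on the stack only once.
        if workflow_name in self.call_stack:
            chain = " -> ".join(self.call_stack + [workflow_name])
            raise RecursionError(f"workflow cycle: {chain}")
        # Depth limit: checked before each child execution.
        if len(self.call_stack) >= self.max_depth:
            raise RecursionError(f"max workflow depth {self.max_depth} exceeded")
        self.call_stack.append(workflow_name)

    def exit(self) -> None:
        self.call_stack.pop()

guard = WorkflowRecursionGuard(max_depth=10)
guard.enter("analysis-pipeline")
guard.enter("summarizer")              # fine: a nested child workflow
try:
    guard.enter("analysis-pipeline")   # cycle: already on the stack
except RecursionError:
    print("cycle detected")
```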

The workflow_ref value is resolved in this order:

  1. Named workflow in the validation index (matched by file path, stem, or workflow name)
  2. Absolute file path
  3. Relative to project root
  4. Relative to custom/workflows/
  5. With .yaml or .yml extension appended

The simplest form is the file stem: workflow_ref: summarizer resolves to custom/workflows/summarizer.yaml.
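The lookup order above can be sketched as a resolver function. The signature and the index shape are assumptions for illustration; the real engine's API may differ:

```python
from pathlib import Path

def resolve_workflow_ref(ref: str, index: dict, project_root: Path):
    """Resolve a workflow_ref following the documented order; None if not found."""
    # 1. Named workflow in the validation index (path, stem, or workflow name)
    if ref in index:
        return index[ref]
    # 2. Absolute file path
    p = Path(ref)
    if p.is_absolute() and p.exists():
        return p
    # 3./4. Relative to project root, then to custom/workflows/ ...
    bases = (project_root, project_root / "custom" / "workflows")
    # 5. ... retrying with .yaml / .yml appended
    for suffix in ("", ".yaml", ".yml"):
        for base in bases:
            candidate = base / f"{ref}{suffix}"
            if candidate.exists():
                return candidate
    return None

# The stem form hits the index (or falls through to the file lookup):
index = {"summarizer": Path("custom/workflows/summarizer.yaml")}
resolved = resolve_workflow_ref("summarizer", index, Path("."))
```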