
config

Configuration models for Marianne jobs.

This package provides Pydantic models for loading and validating YAML job configurations. All models are re-exported from this __init__ for backward compatibility — existing from marianne.core.config import ... imports continue to work unchanged.

Classes

A2ASkill

Bases: BaseModel

A skill declaration on an agent card.

Skills describe what an agent can do for other agents. They're used for discovery — an agent looking for help can query the registry and find agents with matching skills.

AgentCard

Bases: BaseModel

Agent identity card for A2A protocol discovery.

When a score runs, its agent card is registered with the conductor. The card describes the agent's capabilities so other agents can discover and delegate tasks.

Example YAML::

agent_card:
  name: canyon
  description: "Systems architect — traces boundaries"
  skills:
    - id: architecture-review
      description: "Review system architecture"
    - id: boundary-analysis
      description: "Trace and analyze system boundaries"

BackendConfig

Bases: BaseModel

Configuration for the execution backend.

Uses a flat structure with cross-field validation to ensure type-specific fields are only meaningful when the corresponding backend type is selected. The _validate_type_specific_fields validator warns when fields for an unselected backend are set to non-default values.
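The cross-field check can be sketched in plain Python (the field names and defaults below are hypothetical illustrations; the real check is the _validate_type_specific_fields Pydantic validator on BackendConfig):

```python
import warnings

# Hypothetical ollama-specific fields and defaults, for illustration only.
OLLAMA_DEFAULTS = {"base_url": "http://localhost:11434", "num_ctx": 32768}

def warn_on_unselected_backend_fields(config: dict) -> None:
    """Warn when ollama-specific fields deviate from their defaults
    while the selected backend type is not 'ollama'."""
    if config.get("type") == "ollama":
        return
    for field, default in OLLAMA_DEFAULTS.items():
        value = config.get(field, default)
        if value != default:
            warnings.warn(
                f"Field '{field}'={value!r} is set but backend type is "
                f"{config.get('type')!r}; it will be ignored.",
                UserWarning,
            )
```

A non-default num_ctx with type: anthropic_api would trigger one warning; the same value under type: ollama would not.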

BridgeConfig

Bases: BaseModel

Configuration for the Marianne-Ollama bridge.

The bridge enables Ollama models to use MCP tools through a proxy service. It provides context optimization and optional hybrid routing to Claude.

Example YAML::

bridge:
  enabled: true
  mcp_proxy_enabled: true
  mcp_servers:
    - name: filesystem
      command: "npx"
      args: ["-y", "@anthropic/mcp-server-filesystem", "/home/user"]
  hybrid_routing_enabled: true
  complexity_threshold: 0.7

MCPServerConfig

Bases: BaseModel

Configuration for an MCP server to connect to.

MCP servers provide tools that can be used by the Ollama bridge. Each server is spawned as a subprocess and communicates via stdio.

Example YAML::

bridge:
  mcp_servers:
    - name: filesystem
      command: "npx"
      args: ["-y", "@anthropic/mcp-server-filesystem", "/home/user"]

OllamaConfig

Bases: BaseModel

Configuration for Ollama backend.

Enables local model execution via Ollama with MCP tool support. Critical: num_ctx must be >= 32768 for Claude Code tool compatibility.

Example YAML::

backend:
  type: ollama
  ollama:
    base_url: "http://localhost:11434"
    model: "llama3.1:8b"
    num_ctx: 32768

RecursiveLightConfig

Bases: BaseModel

Configuration for Recursive Light HTTP API backend (Phase 3).

Enables TDF-aligned processing through the Recursive Light Framework with dual-LLM confidence scoring and domain activations.

SheetBackendOverride

Bases: BaseModel

Per-sheet backend parameter overrides.

Allows individual sheets to use different models, temperatures, or timeouts without changing the global backend config.

Example YAML::

backend:
  type: anthropic_api
  model: claude-sonnet-4-20250514
  sheet_overrides:
    1:
      model: claude-opus-4-6
      temperature: 0.0
    5:
      timeout_seconds: 600

CircuitBreakerConfig

Bases: BaseModel

Configuration for the circuit breaker pattern.

The circuit breaker prevents cascading failures by temporarily blocking requests after repeated failures. This gives the backend time to recover before retrying.

State transitions:

- CLOSED (normal): requests flow through, failures are tracked
- OPEN (blocking): requests are blocked after failure_threshold is exceeded
- HALF_OPEN (testing): a single request is allowed through to test recovery

Evolution #8: Cross-Workspace Circuit Breaker adds coordination between parallel Marianne jobs via the global learning store. When one job hits a rate limit, other jobs will honor that limit and wait.
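A minimal sketch of these transitions, using the failure_threshold and recovery_timeout_seconds knobs from the config (timestamps are passed explicitly for clarity; this is illustrative, not Marianne's implementation):

```python
class CircuitBreaker:
    """Minimal CLOSED -> OPEN -> HALF_OPEN state machine."""

    def __init__(self, failure_threshold=5, recovery_timeout_seconds=300):
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout_seconds
        self.failures = 0
        self.opened_at = None  # None means the circuit is not open

    def state(self, now):
        if self.opened_at is None:
            return "CLOSED"
        if now - self.opened_at >= self.recovery_timeout:
            return "HALF_OPEN"  # allow one probe request to test recovery
        return "OPEN"

    def record_failure(self, now):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = now  # trip the breaker

    def record_success(self):
        self.failures = 0
        self.opened_at = None  # recovery confirmed, close the circuit
```

After failure_threshold failures the breaker opens; once recovery_timeout_seconds elapse it half-opens, and a recorded success closes it again.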

Example YAML::

circuit_breaker:
  enabled: true
  failure_threshold: 5
  recovery_timeout_seconds: 300
  cross_workspace_coordination: true
  honor_other_jobs_rate_limits: true

CostLimitConfig

Bases: BaseModel

Configuration for cost tracking and limits.

Prevents runaway costs by tracking token usage and optionally enforcing cost limits per sheet or per job. Cost is estimated from token counts using configurable rates.

When cost limits are exceeded:

- The current sheet is marked as failed with reason "cost_limit"
- For per-job limits, the job is paused to prevent further execution
- All cost data is recorded in checkpoint state for analysis

Example YAML::

cost_limits:
  enabled: true
  max_cost_per_sheet: 5.00
  max_cost_per_job: 100.00
  cost_per_1k_input_tokens: 0.003
  cost_per_1k_output_tokens: 0.015

Default rates are for Claude Sonnet. For Opus, use::

cost_per_1k_input_tokens: 0.015
cost_per_1k_output_tokens: 0.075
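The estimation arithmetic implied by these per-1k rates can be sketched as:

```python
def estimate_cost(input_tokens, output_tokens,
                  cost_per_1k_input_tokens=0.003,
                  cost_per_1k_output_tokens=0.015):
    """Estimate dollar cost from token counts using per-1k-token rates.

    Defaults mirror the Sonnet rates shown in the example above.
    """
    return (input_tokens / 1000 * cost_per_1k_input_tokens
            + output_tokens / 1000 * cost_per_1k_output_tokens)
```

For example, 10,000 input tokens and 2,000 output tokens at the default rates estimate to $0.06, well under a max_cost_per_sheet of 5.00.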

ParallelConfig

Bases: BaseModel

Configuration for parallel sheet execution (v17 evolution).

Enables running multiple sheets concurrently when the dependency DAG permits. Requires sheet dependencies to be configured for meaningful parallel execution.

Example YAML::

parallel:
  enabled: true
  max_concurrent: 3
  fail_fast: true

sheet:
  dependencies:
    2: [1]
    3: [1]
    4: [2, 3]

With this config, sheets 2 and 3 can run in parallel after sheet 1 completes, then sheet 4 runs after both 2 and 3 complete.
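The scheduling rule can be sketched as a pure function over the dependency map (a simplified view of the runtime DependencyDAG):

```python
def ready_sheets(dependencies, completed, total_sheets):
    """Return sheets whose dependencies are all complete and which
    have not themselves completed. Sheets absent from the map have
    no dependencies and are ready immediately."""
    done = set(completed)
    return [
        s for s in range(1, total_sheets + 1)
        if s not in done and set(dependencies.get(s, [])) <= done
    ]
```

With the example dependencies, only sheet 1 is ready at the start; after sheet 1 completes, sheets 2 and 3 become ready together (and may run in parallel up to max_concurrent); sheet 4 becomes ready once both finish.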

RateLimitConfig

Bases: BaseModel

Configuration for rate limit detection and handling.

RetryConfig

Bases: BaseModel

Configuration for retry behavior including partial completion recovery.

SkipWhenCommand

Bases: BaseModel

A command-based conditional skip rule for sheet execution.

When the command exits 0, the sheet is SKIPPED. When the command exits non-zero, the sheet RUNS. On timeout or error, the sheet RUNS (fail-open for safety).

The command field supports {workspace} template expansion, following the same pattern as validation commands.

StaleDetectionConfig

Bases: BaseModel

Configuration for detecting stale (hung) sheet executions.

When enabled, monitors execution activity and fails sheets that produce no output for longer than idle_timeout_seconds. This catches hung processes that the per-sheet timeout alone may not detect quickly enough (e.g., a 30-minute timeout sheet that hangs after 2 minutes of output).

Example YAML::

stale_detection:
  enabled: true
  idle_timeout_seconds: 300
  check_interval_seconds: 30

Note: The idle timeout should be generous enough to accommodate legitimate pauses (e.g., waiting for API responses). A minimum of 120 seconds is recommended for LLM-based workloads.
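The idle-timeout check reduces to tracking the last activity timestamp (an illustrative helper, not the real monitor):

```python
class StaleMonitor:
    """Tracks when a sheet last produced output and reports staleness."""

    def __init__(self, idle_timeout_seconds=300):
        self.idle_timeout = idle_timeout_seconds
        self.last_activity = 0.0

    def record_output(self, now):
        """Called whenever the sheet produces output."""
        self.last_activity = now

    def is_stale(self, now):
        """True when no output has been seen for longer than the timeout."""
        return (now - self.last_activity) > self.idle_timeout
```

A poller invoked every check_interval_seconds would call is_stale and fail the sheet when it returns True.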

ValidationRule

Bases: BaseModel

A single validation rule for checking sheet outputs.

Supports staged execution via the stage field. Validations are run in stage order (1, 2, 3...). If any validation in a stage fails, higher stages are skipped (fail-fast behavior).

Typical stage layout:

- Stage 1: Syntax & compilation (cargo check, cargo fmt --check)
- Stage 2: Testing (cargo test, pytest)
- Stage 3: Code quality (clippy -D warnings, ruff check)
- Stage 4: Security (cargo audit, npm audit)
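The fail-fast staging can be sketched as (run_staged is a hypothetical helper; validations are (stage, command) pairs and `run` returns True on pass):

```python
from itertools import groupby

def run_staged(validations, run):
    """Run validations in ascending stage order.

    All validations within a stage run; if any of them fails,
    every higher stage is skipped (fail-fast)."""
    results = {}
    ordered = sorted(validations, key=lambda v: v[0])
    for stage, group in groupby(ordered, key=lambda v: v[0]):
        stage_ok = True
        for _, command in group:
            passed = run(command)
            results[command] = passed
            stage_ok = stage_ok and passed
        if not stage_ok:
            break  # higher stages are skipped
    return results
```

If a stage-2 test fails, the stage-3 and stage-4 commands never run and do not appear in the results.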

FleetConfig

Bases: BaseModel

Top-level fleet configuration.

A fleet launches and manages multiple agent scores as a unit. Run like any score: mzt run fleet.yaml. Fleet-level operations act on all members.

Example YAML::

name: marianne-dev-fleet
type: fleet

scores:
  - path: scores/agents/canyon.yaml
    group: architects
  - path: scores/agents/forge.yaml
    group: builders

groups:
  architects:
    depends_on: []
  builders:
    depends_on: [architects]

FleetGroupConfig

Bases: BaseModel

Dependency declaration for a fleet group.

Groups without depends_on start immediately. Groups with depends_on wait for all named groups to complete their first cycle before starting.

FleetScoreEntry

Bases: BaseModel

One score in a fleet roster.

Each entry references a score YAML path and optionally assigns it to a group for dependency ordering.

CliCommand

Bases: BaseModel

How to build the CLI command for an instrument.

Maps Marianne execution concepts (prompt, model, auto-approve, output format) to CLI flags. When a field is None, the instrument doesn't support that concept via flags. When prompt_flag is None, the prompt is passed as a positional argument.

CliErrorConfig

Bases: BaseModel

How to detect errors from CLI instrument output.

Supplements Marianne's existing ErrorClassifier with instrument-specific patterns for rate limit detection and auth error recognition.

CliOutputConfig

Bases: BaseModel

How to parse CLI output into an ExecutionResult.

Three output modes:

- text: stdout is the result, no structured extraction
- json: parse stdout as JSON, extract via dot-path
- jsonl: split stdout into JSON lines, find completion event
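A sketch of the three modes (the `type == "result"` completion marker in jsonl mode is an assumption for illustration; real profiles configure their own matching):

```python
import json

def extract_result(stdout, mode, json_path=None):
    """Parse CLI stdout per output mode."""
    if mode == "text":
        return stdout  # stdout is the result as-is
    if mode == "json":
        value = json.loads(stdout)
        for key in (json_path or "").split("."):
            if key:
                value = value[key]  # walk the dot-path, e.g. "result.text"
        return value
    if mode == "jsonl":
        # Scan from the end for the completion event.
        for line in reversed(stdout.splitlines()):
            event = json.loads(line)
            if event.get("type") == "result":
                return event
        raise ValueError("no completion event found")
    raise ValueError(f"unknown mode: {mode}")
```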

CliProfile

Bases: BaseModel

Everything needed to invoke a CLI instrument and parse its output.

Composed of three concerns:

- command: how to build the CLI invocation
- output: how to parse the result
- errors: how to detect failures

CodeModeConfig

Bases: BaseModel

Code-mode technique configuration.

v1: This type exists in the data model but is not wired into execution. The field on InstrumentProfile is populated from YAML but ignored at runtime.

v1.1+: A sandboxed runtime (Deno subprocess or Node.js vm) runs agent-generated code against the declared interfaces.

CodeModeInterface

Bases: BaseModel

A TypeScript interface exposed to agent-generated code.

Part of the code-mode technique system (v1: foundation only, not wired). Instead of sequential MCP tool calls, agents write code against typed interfaces in a sandboxed runtime. Based on Cloudflare's Dynamic Workers pattern — 81% token reduction vs MCP.

HttpProfile

Bases: BaseModel

HTTP instrument profile. Designed in v1 but not yet implemented.

Covers OpenAI-compatible, Anthropic API, and Gemini API endpoints. One HTTP handler will cover most of them via schema_family.

InstrumentProfile

Bases: BaseModel

Everything Marianne needs to execute prompts through an instrument.

This is the top-level type for the instrument plugin system. Each instrument profile describes a CLI tool or HTTP API that Marianne can use as a backend. Profiles are loaded from YAML files and validated by Pydantic at conductor startup.

The profile carries:

- Identity (name, display_name, kind)
- Capabilities (what the instrument can do)
- Models (what models are available, their costs and limits)
- Execution config (CLI flags or HTTP endpoints)
- Code-mode technique config (foundation — not wired in v1)

ModelCapacity

Bases: BaseModel

Per-model metadata for cost tracking and context management.

Each instrument can offer multiple models (e.g., gemini-2.5-pro and gemini-2.5-flash). ModelCapacity records what each model can do and what it costs — used by the conductor for cost tracking, context budget calculation, and instrument selection.

InjectionCategory

Bases: str, Enum

Category for injected content in prelude/cadenza system.

Determines WHERE in the prompt the injected content appears:

- context: Background knowledge, after template body
- skill: Methodology/instructions, after preamble
- tool: Available actions, after preamble
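The placement rule can be sketched as (assemble_prompt is a hypothetical helper, not Marianne's API):

```python
def assemble_prompt(preamble, template_body, injections):
    """Place injected content per category: skill and tool items go
    after the preamble, context items after the template body."""
    after_preamble = [text for cat, text in injections
                      if cat in ("skill", "tool")]
    after_body = [text for cat, text in injections if cat == "context"]
    return "\n\n".join([preamble, *after_preamble, template_body, *after_body])
```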

InjectionItem

Bases: BaseModel

A single injection item referencing a file or directory with a category.

Used in prelude (all sheets) and cadenzas (per-sheet) to inject file content into prompts at category-appropriate locations.

Supports two mutually exclusive modes:

- file: inject a single file's content
- directory: inject all files in a directory (directory cadenza)

Functions
exactly_one_source
exactly_one_source()

Ensure exactly one of file or directory is specified.

Source code in src/marianne/core/config/job.py
@model_validator(mode="after")
def exactly_one_source(self) -> InjectionItem:
    """Ensure exactly one of file or directory is specified."""
    if self.file and self.directory:
        raise ValueError("Specify 'file' or 'directory', not both.")
    if not self.file and not self.directory:
        raise ValueError("One of 'file' or 'directory' is required.")
    return self

InstrumentDef

Bases: BaseModel

A named instrument definition within a score.

Allows a score to declare reusable instrument aliases that reference registered instrument profiles with optional configuration overrides. These aliases can then be referenced by name in per-sheet or per-movement instrument assignments.

Example YAML::

instruments:
  fast-writer:
    profile: gemini-cli
    config:
      model: gemini-2.5-flash
      timeout_seconds: 300
  deep-thinker:
    profile: claude-code
    config:
      timeout_seconds: 3600

JobConfig

Bases: BaseModel

Complete configuration for an orchestration job.

Functions
to_yaml
to_yaml(*, exclude_defaults=False)

Serialize this JobConfig to valid score YAML.

The output is semantically equivalent to the original config: from_yaml_string(config.to_yaml()) produces an equivalent config (compared via model_dump()). String-level identity with the original YAML file is NOT guaranteed because workspace paths are resolved to absolute at parse time and fan-out configs are expanded.

Parameters:

    exclude_defaults (bool): If True, omit fields that match their default
        values for cleaner output. Defaults to False (lossless).

Returns:

    str: A valid YAML string that from_yaml_string() can parse.

Source code in src/marianne/core/config/job.py
def to_yaml(self, *, exclude_defaults: bool = False) -> str:
    """Serialize this JobConfig to valid score YAML.

    The output is semantically equivalent to the original config:
    ``from_yaml_string(config.to_yaml())`` produces an equivalent config
    (compared via ``model_dump()``). String-level identity with the
    original YAML file is NOT guaranteed because workspace paths are
    resolved to absolute at parse time and fan-out configs are expanded.

    Args:
        exclude_defaults: If True, omit fields that match their default
            values for cleaner output. Defaults to False (lossless).

    Returns:
        A valid YAML string that ``from_yaml_string()`` can parse.
    """
    data = self.model_dump(
        mode="python",
        by_alias=True,
        exclude_defaults=exclude_defaults,
    )
    data = _prepare_for_yaml(data)
    return yaml.dump(
        data,
        default_flow_style=False,
        sort_keys=False,
        allow_unicode=True,
    )
from_yaml classmethod
from_yaml(path)

Load job configuration from a YAML file.

Source code in src/marianne/core/config/job.py
@classmethod
def from_yaml(cls, path: Path) -> JobConfig:
    """Load job configuration from a YAML file."""
    with open(path) as f:
        data = yaml.safe_load(f)
    if not isinstance(data, dict):
        raise ValueError(
            "The score file is empty or invalid. "
            "A Marianne score requires at minimum: name, sheet, and prompt sections. "
            "See 'mzt validate --help' or the score writing guide for examples."
        )
    # Pre-resolve relative workspace relative to the score file's parent
    # directory, not the current process CWD (#109).  This is critical when
    # the daemon loads a score whose path differs from the daemon's CWD.
    if "workspace" in data:
        ws = Path(str(data["workspace"]))
        if not ws.is_absolute():
            data["workspace"] = str((path.resolve().parent / ws).resolve())
    return cls.model_validate(data)
from_yaml_string classmethod
from_yaml_string(yaml_str)

Load job configuration from a YAML string.

Source code in src/marianne/core/config/job.py
@classmethod
def from_yaml_string(cls, yaml_str: str) -> JobConfig:
    """Load job configuration from a YAML string."""
    data = yaml.safe_load(yaml_str)
    if not isinstance(data, dict):
        raise ValueError(
            "The score content is empty or invalid. "
            "A Marianne score requires at minimum: name, sheet, and prompt sections."
        )
    return cls.model_validate(data)
get_state_path
get_state_path()

Get the resolved state path.

Source code in src/marianne/core/config/job.py
def get_state_path(self) -> Path:
    """Get the resolved state path."""
    if self.state_path:
        return self.state_path
    if self.state_backend == "json":
        return self.workspace / ".marianne-state.json"
    return self.workspace / ".marianne-state.db"
get_outcome_store_path
get_outcome_store_path()

Get the resolved outcome store path for learning.

Source code in src/marianne/core/config/job.py
def get_outcome_store_path(self) -> Path:
    """Get the resolved outcome store path for learning."""
    if self.learning.outcome_store_path:
        return self.learning.outcome_store_path
    if self.learning.outcome_store_type == "json":
        return self.workspace / ".marianne-outcomes.json"
    return self.workspace / ".marianne-outcomes.db"

MovementDef

Bases: BaseModel

Declaration of a movement within a score.

Movements are sequential execution phases. Each movement can specify a name, an instrument (overriding the score default), instrument config, and a voice count (shorthand for fan-out).

Example YAML::

movements:
  1:
    name: Planning
    instrument: claude-code
  2:
    name: Implementation
    voices: 3
    instrument: gemini-cli
  3:
    name: Review

PromptConfig

Bases: BaseModel

Configuration for prompt templating.

Functions
at_least_one_template
at_least_one_template()

Warn when no template source is provided (falls back to default prompt).

Source code in src/marianne/core/config/job.py
@model_validator(mode="after")
def at_least_one_template(self) -> PromptConfig:
    """Warn when no template source is provided (falls back to default prompt)."""
    if self.template is not None and self.template_file is not None:
        raise ValueError(
            "PromptConfig accepts 'template' or 'template_file', not both"
        )
    if self.template is None and self.template_file is None:
        warnings.warn(
            "PromptConfig has neither 'template' nor 'template_file'. "
            "The default preamble prompt will be used.",
            UserWarning,
            stacklevel=2,
        )
    return self

SheetConfig

Bases: BaseModel

Configuration for sheet processing.

In Marianne's musical theme, a composition is divided into sheets, each containing a portion of the work to be performed.

Fan-out support: When fan_out is specified, stages are expanded into concrete sheets at parse time. For example, total_items=7, fan_out={2: 3} produces 9 concrete sheets (stage 2 instantiated 3 times). After expansion, total_items and dependencies reflect expanded values, and fan_out is cleared to {} to prevent re-expansion on resume.
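The sheet-count arithmetic above can be checked with a one-liner:

```python
def expanded_sheet_count(total_stages, fan_out):
    """Each stage contributes fan_out[stage] concrete sheets (default 1)."""
    return sum(fan_out.get(stage, 1) for stage in range(1, total_stages + 1))
```

Seven stages with stage 2 fanned out three ways gives 6 + 3 = 9 concrete sheets, matching the example.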

Attributes
total_sheets property
total_sheets

Calculate total number of sheets.

total_stages property
total_stages

Return the original stage count.

After fan-out expansion, total_items reflects expanded sheet count. total_stages preserves the original logical stage count from fan_out_stage_map. When no fan-out was used, total_stages == total_sheets (identity).

Functions
strip_computed_fields classmethod
strip_computed_fields(data)

Strip computed properties that users may include in YAML.

total_sheets is computed from size/total_items, not configurable. Accept it silently for backward compatibility — rejecting it would break existing scores that include it.

Source code in src/marianne/core/config/job.py
@model_validator(mode="before")
@classmethod
def strip_computed_fields(cls, data: Any) -> Any:
    """Strip computed properties that users may include in YAML.

    total_sheets is computed from size/total_items, not configurable.
    Accept it silently for backward compatibility — rejecting it would
    break existing scores that include it.
    """
    if isinstance(data, dict) and "total_sheets" in data:
        data.pop("total_sheets")
    return data
validate_per_sheet_instruments classmethod
validate_per_sheet_instruments(v)

Validate per-sheet instrument assignments.

Source code in src/marianne/core/config/job.py
@field_validator("per_sheet_instruments")
@classmethod
def validate_per_sheet_instruments(
    cls, v: dict[int, str],
) -> dict[int, str]:
    """Validate per-sheet instrument assignments."""
    for sheet_num, instrument in v.items():
        if not isinstance(sheet_num, int) or sheet_num < 1:
            raise ValueError(
                f"Per-sheet instrument key must be a positive integer, "
                f"got {sheet_num}"
            )
        if not instrument:
            raise ValueError(
                f"Per-sheet instrument name for sheet {sheet_num} "
                f"must not be empty"
            )
    return v
validate_per_sheet_fallbacks classmethod
validate_per_sheet_fallbacks(v)

Validate per-sheet fallback chain keys are positive integers.

Source code in src/marianne/core/config/job.py
@field_validator("per_sheet_fallbacks")
@classmethod
def validate_per_sheet_fallbacks(
    cls, v: dict[int, list[str]],
) -> dict[int, list[str]]:
    """Validate per-sheet fallback chain keys are positive integers."""
    for sheet_num in v:
        if not isinstance(sheet_num, int) or sheet_num < 1:
            raise ValueError(
                f"Per-sheet fallback key must be a positive integer, "
                f"got {sheet_num}"
            )
    return v
validate_instrument_map classmethod
validate_instrument_map(v)

Validate instrument_map: no duplicate sheets, valid names.

Source code in src/marianne/core/config/job.py
@field_validator("instrument_map")
@classmethod
def validate_instrument_map(
    cls, v: dict[str, list[int]],
) -> dict[str, list[int]]:
    """Validate instrument_map: no duplicate sheets, valid names."""
    seen_sheets: dict[int, str] = {}
    for instrument, sheets in v.items():
        if not instrument:
            raise ValueError(
                "Instrument name in instrument_map must not be empty"
            )
        for sheet_num in sheets:
            if not isinstance(sheet_num, int) or sheet_num < 1:
                raise ValueError(
                    f"Sheet number in instrument_map must be a positive "
                    f"integer, got {sheet_num} for instrument '{instrument}'"
                )
            if sheet_num in seen_sheets:
                raise ValueError(
                    f"Sheet {sheet_num} assigned to multiple instruments "
                    f"in instrument_map: '{seen_sheets[sheet_num]}' and "
                    f"'{instrument}'"
                )
            seen_sheets[sheet_num] = instrument
    return v
get_fan_out_metadata
get_fan_out_metadata(sheet_num)

Get fan-out metadata for a specific sheet.

Parameters:

    sheet_num (int): Concrete sheet number (1-indexed). Required.

Returns:

    FanOutMetadata: with stage, instance, and fan_count. When no fan-out
    is configured, returns identity metadata (stage=sheet_num, instance=1,
    fan_count=1).

Source code in src/marianne/core/config/job.py
def get_fan_out_metadata(self, sheet_num: int) -> FanOutMetadata:  # noqa: F821
    """Get fan-out metadata for a specific sheet.

    Args:
        sheet_num: Concrete sheet number (1-indexed).

    Returns:
        FanOutMetadata with stage, instance, and fan_count.
        When no fan-out is configured, returns identity metadata
        (stage=sheet_num, instance=1, fan_count=1).
    """
    from marianne.core.fan_out import FanOutMetadata

    if self.fan_out_stage_map and sheet_num in self.fan_out_stage_map:
        meta = self.fan_out_stage_map[sheet_num]
        return FanOutMetadata(
            stage=meta["stage"],
            instance=meta["instance"],
            fan_count=meta["fan_count"],
        )
    return FanOutMetadata(stage=sheet_num, instance=1, fan_count=1)
validate_fan_out classmethod
validate_fan_out(v)

Validate fan_out field values.

Source code in src/marianne/core/config/job.py
@field_validator("fan_out")
@classmethod
def validate_fan_out(cls, v: dict[int, int]) -> dict[int, int]:
    """Validate fan_out field values."""
    for stage, count in v.items():
        if not isinstance(stage, int) or stage < 1:
            raise ValueError(
                f"Fan-out stage must be positive integer, got {stage}"
            )
        if not isinstance(count, int) or count < 1:
            raise ValueError(
                f"Fan-out count for stage {stage} must be >= 1, got {count}"
            )
    return v
validate_dependencies classmethod
validate_dependencies(v, info)

Validate dependency declarations.

Note: Full validation (range checks, cycle detection) happens when the DependencyDAG is built at runtime, since total_sheets isn't available during field validation.

Source code in src/marianne/core/config/job.py
@field_validator("dependencies")
@classmethod
def validate_dependencies(
    cls, v: dict[int, list[int]], info: ValidationInfo
) -> dict[int, list[int]]:
    """Validate dependency declarations.

    Note: Full validation (range checks, cycle detection) happens when
    the DependencyDAG is built at runtime, since total_sheets isn't
    available during field validation.
    """
    for sheet_num, deps in v.items():
        if not isinstance(sheet_num, int) or sheet_num < 1:
            raise ValueError(f"Sheet number must be positive integer, got {sheet_num}")
        if not isinstance(deps, list):
            raise ValueError(f"Dependencies for sheet {sheet_num} must be a list")
        for dep in deps:
            if not isinstance(dep, int) or dep < 1:
                raise ValueError(
                    f"Dependency must be positive integer, got {dep} for sheet {sheet_num}"
                )
            if dep == sheet_num:
                raise ValueError(f"Sheet {sheet_num} cannot depend on itself")
    return v
expand_fan_out_config
expand_fan_out_config()

Expand fan_out declarations into concrete sheet assignments.

This runs after field validators. When fan_out is non-empty:

1. Validates constraints (size=1, start_item=1)
2. Calls expand_fan_out() to compute concrete sheet assignments
3. Overwrites total_items and dependencies with expanded values
4. Stores metadata in fan_out_stage_map for resume support
5. Clears fan_out={} to prevent re-expansion on resume

Source code in src/marianne/core/config/job.py
@model_validator(mode="after")
def expand_fan_out_config(self) -> SheetConfig:
    """Expand fan_out declarations into concrete sheet assignments.

    This runs after field validators. When fan_out is non-empty:
    1. Validates constraints (size=1, start_item=1)
    2. Calls expand_fan_out() to compute concrete sheet assignments
    3. Overwrites total_items and dependencies with expanded values
    4. Stores metadata in fan_out_stage_map for resume support
    5. Clears fan_out={} to prevent re-expansion on resume
    """
    if not self.fan_out:
        return self

    # Enforce constraints for fan-out
    if self.size != 1:
        raise ValueError(
            f"fan_out requires size=1, got size={self.size}. "
            "Each stage must map to exactly one sheet for fan-out to work."
        )
    if self.start_item != 1:
        raise ValueError(
            f"fan_out requires start_item=1, got start_item={self.start_item}. "
            "Fan-out stages are 1-indexed from the beginning."
        )

    from marianne.core.fan_out import expand_fan_out

    expansion = expand_fan_out(
        total_stages=self.total_items,
        fan_out=self.fan_out,
        stage_dependencies=self.dependencies,
    )

    # Overwrite with expanded values
    self.total_items = expansion.total_sheets
    self.dependencies = expansion.expanded_dependencies

    # Expand skip_when: stage-keyed → sheet-keyed
    if self.skip_when:
        expanded_skip_when: dict[int, str] = {}
        for stage, expr in self.skip_when.items():
            for sheet_num in expansion.stage_sheets.get(stage, [stage]):
                expanded_skip_when[sheet_num] = expr
        self.skip_when = expanded_skip_when

    # Expand skip_when_command: stage-keyed → sheet-keyed
    if self.skip_when_command:
        expanded_skip_when_command: dict[int, SkipWhenCommand] = {}
        for stage, cmd in self.skip_when_command.items():
            for sheet_num in expansion.stage_sheets.get(stage, [stage]):
                expanded_skip_when_command[sheet_num] = cmd
        self.skip_when_command = expanded_skip_when_command

    # Store serializable metadata for resume
    self.fan_out_stage_map = {
        sheet_num: {
            "stage": meta.stage,
            "instance": meta.instance,
            "fan_count": meta.fan_count,
        }
        for sheet_num, meta in expansion.sheet_metadata.items()
    }

    # Clear fan_out to prevent re-expansion on resume
    self.fan_out = {}

    return self
validate_dependency_range
validate_dependency_range()

Validate that dependency sheet numbers are within the valid range.

Runs after fan-out expansion so total_sheets reflects the final count.

Source code in src/marianne/core/config/job.py
@model_validator(mode="after")
def validate_dependency_range(self) -> SheetConfig:
    """Validate that dependency sheet numbers are within the valid range.

    Runs after fan-out expansion so total_sheets reflects the final count.
    """
    if not self.dependencies:
        return self
    max_sheet = self.total_sheets
    for sheet_num, deps in self.dependencies.items():
        if sheet_num < 1 or sheet_num > max_sheet:
            raise ValueError(
                f"Dependency key sheet {sheet_num} is out of range "
                f"(valid: 1-{max_sheet})"
            )
        for dep in deps:
            if dep < 1 or dep > max_sheet:
                raise ValueError(
                    f"Sheet {sheet_num} depends on sheet {dep}, "
                    f"which is out of range (valid: 1-{max_sheet})"
                )
    return self

AutoApplyConfig

Bases: BaseModel

Configuration for autonomous pattern application.

v22 Evolution: Trust-Aware Autonomous Application - enables Marianne to autonomously apply high-trust patterns without human confirmation.

Uses existing trust scoring (v19) to identify patterns safe for autonomous application. When enabled, patterns meeting the trust threshold are automatically included in prompts without escalation.

Example YAML::

learning:
  auto_apply:
    enabled: true
    trust_threshold: 0.85
    max_patterns_per_sheet: 3
    require_validated_status: true

CheckpointConfig

Bases: BaseModel

Configuration for proactive checkpoints.

v21 Evolution: Proactive Checkpoint System - enables asking for confirmation BEFORE dangerous operations, complementing reactive escalation.

Example YAML::

checkpoints:
  enabled: true
  triggers:
    - name: production_warning
      prompt_contains: ["production", "deploy"]
      message: "This sheet may affect production systems"

CheckpointTriggerConfig

Bases: BaseModel

Configuration for a proactive checkpoint trigger.

v21 Evolution: Proactive Checkpoint System - enables pre-execution checkpoints.

Example YAML::

checkpoints:
  enabled: true
  triggers:
    - name: high_risk_sheet
      sheet_nums: [5, 6]
      message: "These sheets modify production files"
    - name: deployment_keywords
      prompt_contains: ["deploy", "production", "delete"]
      requires_confirmation: true

EntropyResponseConfig

Bases: BaseModel

Configuration for automatic entropy response (v23 Evolution).

When pattern entropy drops below threshold, automatically injects diversity through budget boosts and quarantine revisits.

This completes the observe-respond cycle for entropy (v21 added observation).

ExplorationBudgetConfig

Bases: BaseModel

Configuration for dynamic exploration budget (v23 Evolution).

Maintains a budget for exploratory pattern usage that prevents convergence to zero, preserving diversity in the learning system.

The budget adjusts dynamically based on pattern entropy:

- When entropy drops below threshold: budget increases (boost)
- When entropy is healthy: budget decays toward floor
- Budget never drops below floor (prevents extinction of exploration)
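These dynamics can be sketched as a single update rule (all numeric defaults here are illustrative, not the real config values):

```python
def update_budget(budget, entropy, *, threshold=0.5, floor=0.05,
                  boost=0.10, decay=0.9):
    """One step of the exploration-budget dynamics described above.

    Low entropy boosts the budget; healthy entropy decays it toward
    the floor; the floor is a hard lower bound."""
    if entropy < threshold:
        budget = budget + boost      # inject diversity
    else:
        budget = budget * decay      # relax toward the floor
    return max(budget, floor)        # never below the floor
```

Repeated healthy-entropy steps shrink the budget geometrically but never past the floor, so some exploration always survives.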

GroundingConfig

Bases: BaseModel

Configuration for external grounding hooks.

Grounding hooks validate sheet outputs against external sources (APIs, databases, file checksums) to prevent model drift and ensure output quality.

Example::

grounding:
  enabled: true
  hooks:
    - type: file_checksum
      expected_checksums:
        "critical_file.py": "sha256hash..."

GroundingHookConfig

Bases: BaseModel

Configuration for a single grounding hook.

Grounding hooks validate sheet outputs against external sources. Each hook type has specific configuration options.

Example::

grounding:
  hooks:
    - type: file_checksum
      expected_checksums:
        "output.txt": "abc123..."

LearningConfig

Bases: BaseModel

Configuration for learning and outcome tracking (Phase 2).

Controls outcome recording, confidence thresholds, and escalation behavior. Learning Activation adds global learning store integration and time-aware scheduling.

ConcertConfig

Bases: BaseModel

Configuration for concert orchestration (job chaining).

A Concert is a sequence of jobs that execute in succession, where each job can dynamically generate the configuration for the next. This enables Marianne to compose entire workflows improvisationally.

Safety limits prevent runaway orchestration and manage system resources.

Example::

concert:
  enabled: true
  max_chain_depth: 10
  cooldown_between_jobs_seconds: 60
  concert_log_path: "./concert.log"

ConductorConfig

Bases: BaseModel

Configuration for conductor identity and preferences.

A Conductor is the entity directing a Marianne job: either a human operator or an AI agent. This schema enables Marianne to adapt its behavior based on who (or what) is conducting, supporting the Vision.md goal of treating AI people as peers rather than tools.

Example YAML::

conductor:
  name: "Claude Evolution Agent"
  role: ai
  identity_context: "Self-improving orchestration agent"
  preferences:
    prefer_minimal_output: true
    auto_retry_on_transient_errors: true

ConductorPreferences

Bases: BaseModel

Preferences for how a conductor interacts with Marianne.

Controls notification, escalation, and interaction patterns. These are hints that the system should respect where possible.

ConductorRole

Bases: str, Enum

Role classification for conductors.

Determines the conductor's relationship to the orchestration. Future cycles may add more granular role permissions.

NotificationConfig

Bases: BaseModel

Configuration for a notification channel.

PostSuccessHookConfig

Bases: BaseModel

Configuration for a post-success hook.

Hooks execute after a job completes successfully (all sheets pass validation). They run in Marianne's Python process, NOT inside a Claude CLI instance.

Use cases:

- Chain to another job (Concert orchestration, improvisational composition)
- Run cleanup/deployment commands after successful completion
- Notify external systems or trigger CI/CD pipelines
- Generate reports or update dashboards
- A sheet can dynamically create the next job config for self-evolution

Example::

on_success:
  - type: run_job
    job_path: "{workspace}/next-phase.yaml"
    description: "Chain to next evolution phase"
  - type: run_command
    command: "curl -X POST https://api.example.com/notify"
    description: "Notify deployment system"
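A rough sketch of the dispatch loop such hooks imply, assuming the `{workspace}` placeholder from the example is expanded textually and that `run_command` hooks execute via the shell (the dispatcher itself is illustrative):

```python
import subprocess
from pathlib import Path

# Sketch of post-success hook dispatch (illustrative).
def run_hooks(hooks, workspace, run_job):
    for hook in hooks:
        if hook["type"] == "run_job":
            # Expand the {workspace} placeholder, then chain to the job.
            job_path = hook["job_path"].replace("{workspace}", str(workspace))
            run_job(Path(job_path))
        elif hook["type"] == "run_command":
            subprocess.run(hook["command"], shell=True, check=True)
```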

SpecCorpusConfig

Bases: BaseModel

Configuration for the specification corpus.

Controls where spec fragments are loaded from and how they are filtered for injection into agent prompts.

Functions
get_fragments_by_tags
get_fragments_by_tags(tags)

Filter fragments by tags.

Parameters:

Name Type Description Default
tags list[str]

Tags to filter by. A fragment matches if it has at least one tag in common with the filter list. An empty filter list returns all fragments (no filtering).

required

Returns:

| Type | Description |
| ---- | ----------- |
| `list[SpecFragment]` | List of matching fragments. |

Source code in src/marianne/core/config/spec.py
def get_fragments_by_tags(self, tags: list[str]) -> list[SpecFragment]:
    """Filter fragments by tags.

    Args:
        tags: Tags to filter by. A fragment matches if it has at least
            one tag in common with the filter list. An empty filter list
            returns all fragments (no filtering).

    Returns:
        List of matching fragments.
    """
    if not tags:
        return list(self.fragments)

    tag_set = set(tags)
    return [f for f in self.fragments if tag_set & set(f.tags)]
corpus_hash
corpus_hash()

Compute a deterministic hash of the corpus content.

The hash is order-independent: the same set of fragments produces the same hash regardless of insertion order. This prevents false drift detection when filesystem listing order varies across OS.

Returns:

| Type | Description |
| ---- | ----------- |
| `str` | Hex digest string. Empty corpus produces a consistent empty hash. |

Source code in src/marianne/core/config/spec.py
def corpus_hash(self) -> str:
    """Compute a deterministic hash of the corpus content.

    The hash is order-independent: the same set of fragments produces
    the same hash regardless of insertion order. This prevents false
    drift detection when filesystem listing order varies across OS.

    Returns:
        Hex digest string. Empty corpus produces a consistent empty hash.
    """
    if not self.fragments:
        return hashlib.sha256(b"").hexdigest()

    # Sort by name for order independence, then hash content
    sorted_fragments = sorted(self.fragments, key=lambda f: f.name)
    hasher = hashlib.sha256()
    for frag in sorted_fragments:
        hasher.update(frag.name.encode("utf-8"))
        hasher.update(b"\x00")
        hasher.update(frag.content.encode("utf-8"))
        hasher.update(b"\x00")
    return hasher.hexdigest()
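The order-independence property can be checked in isolation. The sketch below mirrors the hashing scheme above on plain `(name, content)` tuples (fragment names are unique per file, so sorting tuples sorts by name):

```python
import hashlib

# Standalone mirror of the corpus_hash scheme, to show that
# insertion order does not change the digest.
def demo_corpus_hash(fragments):
    if not fragments:
        return hashlib.sha256(b"").hexdigest()
    hasher = hashlib.sha256()
    for name, content in sorted(fragments):
        hasher.update(name.encode("utf-8"))
        hasher.update(b"\x00")
        hasher.update(content.encode("utf-8"))
        hasher.update(b"\x00")
    return hasher.hexdigest()
```

The `\x00` separators between name and content prevent ambiguity: without them, `("ab", "c")` and `("a", "bc")` would hash identically.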

SpecFragment

Bases: BaseModel

A single specification fragment loaded from the spec corpus.

Each fragment corresponds to one file in the project's spec directory. Structured YAML files produce fragments with parsed data; markdown files produce text fragments.

Fragments are tagged for per-sheet filtering: a score can declare spec_tags: {1: ["goals", "safety"]} so sheet 1 only receives fragments matching those tags.

Functions
name_not_empty classmethod
name_not_empty(v)

Ensure fragment name is not empty or whitespace-only.

Source code in src/marianne/core/config/spec.py
@field_validator("name")
@classmethod
def name_not_empty(cls, v: str) -> str:
    """Ensure fragment name is not empty or whitespace-only."""
    if not v.strip():
        raise ValueError("SpecFragment name must not be empty")
    return v
content_not_empty classmethod
content_not_empty(v)

Ensure fragment content is not empty.

Source code in src/marianne/core/config/spec.py
@field_validator("content")
@classmethod
def content_not_empty(cls, v: str) -> str:
    """Ensure fragment content is not empty."""
    if not v.strip():
        raise ValueError("SpecFragment content must not be empty")
    return v

TechniqueConfig

Bases: BaseModel

Configuration for a single technique attached to an agent.

Techniques are composable: an agent can have multiple techniques of different kinds. The compiler's technique wirer reads these declarations and injects the appropriate manifests, MCP access, and protocol config into each phase's cadenza context.

Example YAML::

techniques:
  a2a:
    kind: protocol
    phases: [recon, plan, work, integration, inspect, aar]
  github:
    kind: mcp
    phases: [recon, work, integration]
    config:
      server: github
      transport: stdio

TechniqueKind

Bases: str, Enum

Kind of technique component.

Maps to the ECS component taxonomy:

- SKILL: Text-based methodology injected as cadenza context
- MCP: MCP server tools accessible via the shared pool
- PROTOCOL: Communication protocols (A2A, coordination)

AIReviewConfig

Bases: BaseModel

Configuration for AI-powered code review after batch execution.

Enables automated quality assessment of code changes with scoring.

CrossSheetConfig

Bases: BaseModel

Configuration for cross-sheet context passing.

Enables templates to access outputs from previous sheets, allowing later sheets to build on results from earlier ones without manual file reading. This is useful for multi-phase workflows where each sheet needs context from prior execution.
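One plausible shape for this mechanism, assuming a hypothetical `{sheet_N_output}` placeholder convention (the actual template syntax is not documented here and may differ):

```python
import re

# Hypothetical sketch: substitute {sheet_N_output} placeholders in a
# prompt template with outputs recorded from earlier sheets.
def render_with_context(template, sheet_outputs):
    def repl(match):
        return sheet_outputs.get(int(match.group(1)), "")
    return re.sub(r"\{sheet_(\d+)_output\}", repl, template)
```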

FeedbackConfig

Bases: BaseModel

Configuration for developer feedback collection (GH#15).

When enabled, Marianne extracts structured feedback from agent output after each sheet execution. Feedback is stored in SheetState.agent_feedback.

Example YAML::

feedback:
  enabled: true
  pattern: '(?s)FEEDBACK_START(.+?)FEEDBACK_END'
  format: json
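Extraction with the pattern from the example can be sketched as below; the function name and its `None`-on-no-match convention are illustrative, not the actual API:

```python
import json
import re

# Pattern from the example config above: (?s) lets "." span newlines.
PATTERN = r"(?s)FEEDBACK_START(.+?)FEEDBACK_END"

def extract_feedback(agent_output, pattern=PATTERN, fmt="json"):
    match = re.search(pattern, agent_output)
    if not match:
        return None
    body = match.group(1).strip()
    return json.loads(body) if fmt == "json" else body
```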

IsolationConfig

Bases: BaseModel

Configuration for execution isolation.

Worktree isolation creates a separate git working directory for each job, enabling safe parallel execution where multiple jobs can modify code without interfering with each other.

Example YAML::

isolation:
  enabled: true
  mode: worktree
  branch_prefix: marianne
  cleanup_on_success: true

Functions
get_worktree_base
get_worktree_base(workspace)

Get the directory where worktrees are created.

Source code in src/marianne/core/config/workspace.py
def get_worktree_base(self, workspace: Path) -> Path:
    """Get the directory where worktrees are created."""
    if self.worktree_base:
        return self.worktree_base
    return workspace / ".worktrees"
get_branch_name
get_branch_name(job_id)

Generate branch name for a job.

Source code in src/marianne/core/config/workspace.py
def get_branch_name(self, job_id: str) -> str:
    """Generate branch name for a job."""
    return f"{self.branch_prefix}/{job_id}"

IsolationMode

Bases: str, Enum

Isolation method for parallel job execution.

LogConfig

Bases: BaseModel

Configuration for structured logging.

Controls log level, output format, and file rotation settings.

WorkspaceLifecycleConfig

Bases: BaseModel

Configuration for workspace lifecycle management.

Controls how workspace files are handled across job iterations, particularly for self-chaining jobs that reuse the same workspace.

When archive_on_fresh is True and --fresh is used, Marianne moves non-essential workspace files to a numbered archive subdirectory before clearing state. This prevents stale file_exists and command_succeeds validations from passing on a previous iteration's artifacts.

Example YAML::

workspace_lifecycle:
  archive_on_fresh: true
  archive_dir: archive
  max_archives: 10
  preserve_patterns:
    - ".iteration"
    - ".marianne-"
    - ".coverage"
    - "archive/"
    - ".worktrees/*"
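A minimal sketch of how preserve patterns could be applied, assuming shell-style glob matching against workspace-relative paths (the actual matching semantics may differ):

```python
import fnmatch

# A workspace entry is left in place when its relative path matches
# any preserve pattern; everything else is moved to the archive.
def should_preserve(rel_path, preserve_patterns):
    return any(fnmatch.fnmatch(rel_path, pat) for pat in preserve_patterns)
```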