
patterns

Pattern detection and application for learning from outcomes.

This module implements the "Close the Learning Loop" evolution:

- PatternDetector: Analyzes outcomes to detect recurring patterns
- PatternMatcher: Matches patterns to current execution context
- PatternApplicator: Generates prompt modifications from patterns

The pattern system enables Marianne to learn from past executions and apply that learning to improve future sheet executions.
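The detect → match → apply loop described above can be sketched end to end. This is a minimal, self-contained illustration, not the real implementation: the simplified detection rule, tag-overlap matching, and the `Pattern` stand-in dataclass are all assumptions for demonstration purposes (the real classes live in `src/marianne/learning/patterns.py`).

```python
from dataclasses import dataclass, field

@dataclass
class Pattern:
    """Toy stand-in for DetectedPattern."""
    description: str
    confidence: float = 0.5
    context_tags: list = field(default_factory=list)

def detect(outcomes):
    """Toy detector: flag any failure reason observed more than once."""
    counts = {}
    for o in outcomes:
        if not o["passed"]:
            counts[o["reason"]] = counts.get(o["reason"], 0) + 1
    return [
        Pattern(f"Recurring failure: {reason}", 0.6, ["validation"])
        for reason, n in counts.items() if n >= 2
    ]

def match(patterns, context, limit=5):
    """Toy matcher: keep patterns sharing a tag with the context."""
    relevant = [p for p in patterns if set(p.context_tags) & set(context["tags"])]
    relevant.sort(key=lambda p: p.confidence, reverse=True)
    return relevant[:limit]

def apply(patterns):
    """Toy applicator: format matched patterns as a prompt section."""
    if not patterns:
        return ""
    lines = ["## Learned Patterns", ""]
    lines += [f"{i}. {p.description}" for i, p in enumerate(patterns, 1)]
    return "\n".join(lines)

outcomes = [
    {"passed": False, "reason": "file not created"},
    {"passed": False, "reason": "file not created"},
    {"passed": True, "reason": ""},
]
section = apply(match(detect(outcomes), {"tags": ["validation"]}))
```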

Classes

PatternType

Bases: Enum

Types of patterns that can be detected from outcomes.

Attributes
VALIDATION_FAILURE class-attribute instance-attribute
VALIDATION_FAILURE = 'validation_failure'

Recurring validation failure pattern (e.g., file not created).

RETRY_SUCCESS class-attribute instance-attribute
RETRY_SUCCESS = 'retry_success'

Pattern where retry succeeds after specific failure.

COMPLETION_MODE class-attribute instance-attribute
COMPLETION_MODE = 'completion_mode'

Pattern where completion mode is effective.

SUCCESS_WITHOUT_RETRY class-attribute instance-attribute
SUCCESS_WITHOUT_RETRY = 'first_attempt_success'

Pattern of success without retry (positive pattern).

HIGH_CONFIDENCE class-attribute instance-attribute
HIGH_CONFIDENCE = 'high_confidence'

Pattern with high validation confidence.

LOW_CONFIDENCE class-attribute instance-attribute
LOW_CONFIDENCE = 'low_confidence'

Pattern with low validation confidence (needs attention).

SEMANTIC_FAILURE class-attribute instance-attribute
SEMANTIC_FAILURE = 'semantic_failure'

Pattern detected from semantic failure_reason/failure_category analysis.

These patterns are extracted from the failure_reason and failure_category fields in ValidationResult, providing deeper insight into WHY failures occur. Examples:

- 'stale' category appearing frequently (files not modified)
- 'file not created' reason appearing across multiple sheets

OUTPUT_PATTERN class-attribute instance-attribute
OUTPUT_PATTERN = 'output_pattern'

Pattern extracted from stdout/stderr output during execution.

These patterns are detected by analyzing the raw output text for common error signatures, stack traces, and failure indicators. Useful for learning from execution-level failures that may not be captured by validation.

SEMANTIC_INSIGHT class-attribute instance-attribute
SEMANTIC_INSIGHT = 'semantic_insight'

Pattern generated by LLM-based semantic analysis of sheet completions.

These patterns are produced by the conductor's SemanticAnalyzer, which examines sheet output, validation results, and error history to generate deeper insights about why executions succeed or fail. Unlike statistically detected patterns, semantic insights capture nuanced reasoning about prompt effectiveness, agent behavior, and anti-patterns.

RESOURCE_ANOMALY class-attribute instance-attribute
RESOURCE_ANOMALY = 'resource_anomaly'

Resource anomaly detected during execution (memory spike, zombie, OOM).

These patterns are produced by the profiler's AnomalyDetector, which runs heuristic checks on each system snapshot. No LLM calls — pure threshold-based detection of memory spikes, runaway processes, zombies, FD exhaustion, and memory pressure.

RESOURCE_CORRELATION class-attribute instance-attribute
RESOURCE_CORRELATION = 'resource_correlation'

Learned correlation between resource usage and outcomes.

These patterns are produced by the profiler's CorrelationAnalyzer, which periodically cross-references resource profiles of completed jobs with their outcomes (success/failure, validation results) to identify statistical patterns like high-RSS-predicts-failure.

ExtractedPattern dataclass

ExtractedPattern(pattern_name, matched_text, line_number, context_before=list(), context_after=list(), confidence=0.8, source='stdout')

A pattern extracted from stdout/stderr output analysis.

Represents a specific error or failure pattern found in execution output, with context about where it appeared and confidence in the detection.

Attributes
pattern_name instance-attribute
pattern_name

Canonical name for this pattern (e.g., 'rate_limit', 'import_error').

matched_text instance-attribute
matched_text

The actual text that matched the pattern.

line_number instance-attribute
line_number

Line number in the output where pattern was found.

context_before class-attribute instance-attribute
context_before = field(default_factory=list)

Lines of context before the match (up to 2 lines).

context_after class-attribute instance-attribute
context_after = field(default_factory=list)

Lines of context after the match (up to 2 lines).

confidence class-attribute instance-attribute
confidence = 0.8

Confidence in this pattern detection (0.0-1.0).

source class-attribute instance-attribute
source = 'stdout'

Source of the pattern: 'stdout' or 'stderr'.

DetectedPattern dataclass

DetectedPattern(pattern_type, description, frequency=1, success_rate=0.0, last_seen=(lambda: now(tz=UTC))(), context_tags=list(), evidence=list(), confidence=0.5, applications=0, successes_after_application=0, quarantine_status=None, trust_score=None)

A pattern detected from historical outcomes.

Patterns are learned behaviors that can inform future executions. They include both positive patterns (what works) and negative patterns (what to avoid).

v19 Evolution: Extended with optional quarantine_status and trust_score fields for integration with Pattern Quarantine & Trust Scoring features.

Attributes
description instance-attribute
description

Human-readable description of what this pattern represents.

frequency class-attribute instance-attribute
frequency = 1

How often this pattern has been observed.

success_rate class-attribute instance-attribute
success_rate = 0.0

Rate at which this pattern leads to success (0.0-1.0).

last_seen class-attribute instance-attribute
last_seen = field(default_factory=lambda: now(tz=UTC))

When this pattern was last observed (UTC).

context_tags class-attribute instance-attribute
context_tags = field(default_factory=list)

Tags for matching: job types, validation types, error categories.

evidence class-attribute instance-attribute
evidence = field(default_factory=list)

Sheet IDs that contributed to detecting this pattern.

confidence class-attribute instance-attribute
confidence = 0.5

Confidence in this pattern (0.0-1.0). Higher = more reliable.

applications class-attribute instance-attribute
applications = 0

Number of times this pattern was applied (included in prompts).

successes_after_application class-attribute instance-attribute
successes_after_application = 0

Number of success_without_retry outcomes when this pattern was applied.

quarantine_status class-attribute instance-attribute
quarantine_status = None

Quarantine status from global store.

trust_score class-attribute instance-attribute
trust_score = None

Trust score (0.0-1.0) from global store. None if not from global store.

effectiveness_rate property
effectiveness_rate

Compute effectiveness rate from applications and successes.

Returns:

Type Description
float

Effectiveness rate (0.0-1.0). Returns 0.4 (slightly below neutral) when applications < 3 to prefer proven patterns over unproven ones. This prevents unproven patterns from being treated equally with patterns that have demonstrated moderate (50%) success.
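Based on that description, the computation is presumably equivalent to the following sketch. The 0.4 floor and the applications-below-3 guard are taken from the docstring; the exact guard ordering is an assumption.

```python
MIN_APPLICATIONS = 3  # sample-size threshold from the docstring

def effectiveness_rate(applications: int, successes: int) -> float:
    """Return successes/applications, or 0.4 (slightly below neutral)
    until the pattern has at least MIN_APPLICATIONS samples."""
    if applications < MIN_APPLICATIONS:
        return 0.4
    return successes / applications
```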

effectiveness_weight property
effectiveness_weight

Compute weight for blending effectiveness into relevance scoring.

Uses gradual ramp-up: full weight only after 5 applications. This prevents new patterns from being over-weighted.

Returns:

Type Description
float

Weight (0.0-1.0) based on sample size.
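The "gradual ramp-up" might look like the sketch below. The linear shape of the ramp is an assumption; the docstring only guarantees that full weight is reached after 5 applications.

```python
FULL_WEIGHT_APPLICATIONS = 5  # full weight only after 5 applications

def effectiveness_weight(applications: int) -> float:
    """Assumed linear ramp from 0.0 to 1.0 as sample size grows."""
    return min(applications / FULL_WEIGHT_APPLICATIONS, 1.0)
```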

is_quarantined property
is_quarantined

Check if pattern is in quarantine status.

v19 Evolution: Used for quarantine-aware scoring.

is_validated property
is_validated

Check if pattern is in validated status.

v19 Evolution: Used for trust-aware scoring.

Functions
to_prompt_guidance
to_prompt_guidance()

Format this pattern as guidance for prompts.

v19 Evolution: Now includes quarantine/trust context when available.

Returns:

Type Description
str

A concise string suitable for injection into prompts.

Source code in src/marianne/learning/patterns.py
def to_prompt_guidance(self) -> str:
    """Format this pattern as guidance for prompts.

    v19 Evolution: Now includes quarantine/trust context when available.

    Returns:
        A concise string suitable for injection into prompts.
    """
    # v19: Add trust indicator if available
    trust_indicator = ""
    if self.trust_score is not None:
        if self.trust_score >= HIGH_TRUST_THRESHOLD:
            trust_indicator = " [High trust]"
        elif self.trust_score <= LOW_TRUST_THRESHOLD:
            trust_indicator = " [Low trust]"
        else:
            trust_indicator = f" [Trust: {self.trust_score:.0%}]"

    # v19: Add quarantine warning if applicable
    if self.is_quarantined:
        return f"⚠️ [QUARANTINED] {self.description}{trust_indicator}"

    template = _GUIDANCE_TEMPLATES.get(self.pattern_type)
    if template is None:
        return f"{self.description}{trust_indicator}"

    return template.format(
        desc=self.description,
        freq=self.frequency,
        rate=f"{self.success_rate:.0%}",
        trust=trust_indicator,
    )

PatternDetector

PatternDetector(outcomes)

Detects patterns from historical sheet outcomes.

Analyzes a collection of SheetOutcome objects to identify recurring patterns that can inform future executions.

Initialize the pattern detector.

Parameters:

Name Type Description Default
outcomes list[SheetOutcome]

List of historical sheet outcomes to analyze.

required
Source code in src/marianne/learning/patterns.py
def __init__(self, outcomes: list["SheetOutcome"]) -> None:
    """Initialize the pattern detector.

    Args:
        outcomes: List of historical sheet outcomes to analyze.
    """
    self.outcomes = outcomes
Functions
detect_all
detect_all()

Detect all pattern types from outcomes.

Returns:

Type Description
list[DetectedPattern]

List of detected patterns sorted by confidence.

Source code in src/marianne/learning/patterns.py
def detect_all(self) -> list[DetectedPattern]:
    """Detect all pattern types from outcomes.

    Returns:
        List of detected patterns sorted by confidence.
    """
    patterns: list[DetectedPattern] = []

    # Detect various pattern types
    patterns.extend(self._detect_validation_patterns())
    patterns.extend(self._detect_retry_patterns())
    patterns.extend(self._detect_completion_patterns())
    patterns.extend(self._detect_success_patterns())
    patterns.extend(self._detect_confidence_patterns())
    patterns.extend(self._detect_semantic_patterns())
    patterns.extend(self._detect_error_code_patterns())

    # Calculate effectiveness for each pattern from outcomes
    self._calculate_effectiveness(patterns)

    # Sort by confidence (highest first)
    patterns.sort(key=lambda p: p.confidence, reverse=True)

    return patterns
calculate_success_rate staticmethod
calculate_success_rate(outcomes)

Calculate overall success rate from outcomes.

Success is defined as validation_pass_rate == 1.0 (all validations passed). Partial passes (e.g., 0.5) are counted as failures.

Parameters:

Name Type Description Default
outcomes list[SheetOutcome]

List of sheet outcomes.

required

Returns:

Type Description
float

Success rate as a float (0.0-1.0).

Source code in src/marianne/learning/patterns.py
@staticmethod
def calculate_success_rate(outcomes: list["SheetOutcome"]) -> float:
    """Calculate overall success rate from outcomes.

    Success is defined as validation_pass_rate == 1.0 (all validations
    passed). Partial passes (e.g., 0.5) are counted as failures.

    Args:
        outcomes: List of sheet outcomes.

    Returns:
        Success rate as a float (0.0-1.0).
    """
    if not outcomes:
        return 0.0

    successful = sum(1 for outcome in outcomes if outcome.validation_pass_rate == 1.0)
    return successful / len(outcomes)
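To see the strict definition of success in action (partial passes counted as failures), here is the same logic run over stand-in outcome objects; `SimpleNamespace` is used only so the snippet is self-contained.

```python
from types import SimpleNamespace

def calculate_success_rate(outcomes):
    """Mirror of the staticmethod above: only a full pass counts."""
    if not outcomes:
        return 0.0
    successful = sum(1 for o in outcomes if o.validation_pass_rate == 1.0)
    return successful / len(outcomes)

outcomes = [
    SimpleNamespace(validation_pass_rate=1.0),  # full pass -> success
    SimpleNamespace(validation_pass_rate=0.5),  # partial pass -> failure
    SimpleNamespace(validation_pass_rate=0.0),  # failure
]
rate = calculate_success_rate(outcomes)  # 1 of 3 fully passed
```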

PatternMatcher

PatternMatcher(patterns)

Matches detected patterns to execution context.

Given a set of detected patterns and a current execution context, finds patterns that are relevant to the current situation.

Initialize the pattern matcher.

Parameters:

Name Type Description Default
patterns list[DetectedPattern]

List of detected patterns to match against.

required
Source code in src/marianne/learning/patterns.py
def __init__(self, patterns: list[DetectedPattern]) -> None:
    """Initialize the pattern matcher.

    Args:
        patterns: List of detected patterns to match against.
    """
    self.patterns = patterns
Functions
match
match(context, limit=5)

Find patterns relevant to the given context.

Parameters:

Name Type Description Default
context dict[str, Any]

Context dict with job_id, sheet_num, validation_types, etc.

required
limit int

Maximum number of patterns to return.

5

Returns:

Type Description
list[DetectedPattern]

List of matching patterns sorted by relevance.

Source code in src/marianne/learning/patterns.py
def match(
    self,
    context: dict[str, Any],
    limit: int = 5,
) -> list[DetectedPattern]:
    """Find patterns relevant to the given context.

    Args:
        context: Context dict with job_id, sheet_num, validation_types, etc.
        limit: Maximum number of patterns to return.

    Returns:
        List of matching patterns sorted by relevance.
    """
    scored_patterns: list[tuple[float, DetectedPattern]] = []

    for pattern in self.patterns:
        score = self._score_relevance(pattern, context)
        if score > 0:
            scored_patterns.append((score, pattern))

    # Sort by score (highest first) and return top N
    scored_patterns.sort(key=lambda x: x[0], reverse=True)
    return [p for _, p in scored_patterns[:limit]]
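The private `_score_relevance` helper is not shown on this page. A plausible stand-in, consistent with the context_tags field described above, scores by tag overlap between pattern and context; the exact scoring formula here is hypothetical.

```python
def score_relevance(pattern_tags, context_tags):
    """Hypothetical scorer: fraction of pattern tags present in the context."""
    if not pattern_tags:
        return 0.0
    overlap = set(pattern_tags) & set(context_tags)
    return len(overlap) / len(pattern_tags)

patterns = [
    ("retry helps after timeout", ["retry", "timeout"]),
    ("completion mode effective", ["completion"]),
    ("file not created", ["validation", "file"]),
]
context_tags = ["timeout", "retry", "validation"]

# Same shape as match(): score, filter zero scores, sort, take top N.
scored = [(score_relevance(tags, context_tags), desc) for desc, tags in patterns]
matched = [desc for score, desc in sorted(scored, reverse=True) if score > 0][:5]
```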

PatternApplicator

PatternApplicator(patterns)

Applies patterns to modify prompts for better execution.

Takes matched patterns and generates prompt modifications that incorporate learned insights.

Initialize the pattern applicator.

Parameters:

Name Type Description Default
patterns list[DetectedPattern]

List of patterns to apply.

required
Source code in src/marianne/learning/patterns.py
def __init__(self, patterns: list[DetectedPattern]) -> None:
    """Initialize the pattern applicator.

    Args:
        patterns: List of patterns to apply.
    """
    self.patterns = patterns
Functions
generate_prompt_section
generate_prompt_section()

Generate a prompt section from patterns.

Returns:

Type Description
str

Formatted markdown section for prompt injection.

Source code in src/marianne/learning/patterns.py
def generate_prompt_section(self) -> str:
    """Generate a prompt section from patterns.

    Returns:
        Formatted markdown section for prompt injection.
    """
    if not self.patterns:
        return ""

    lines = ["## Learned Patterns", ""]
    lines.append(
        "Based on previous executions, here are relevant insights:"
    )
    lines.append("")

    for i, pattern in enumerate(self.patterns[:MAX_PROMPT_PATTERNS], 1):
        guidance = pattern.to_prompt_guidance()
        lines.append(f"{i}. {guidance}")

    lines.append("")
    lines.append("Consider these patterns when executing this sheet.")
    lines.append("")

    return "\n".join(lines)
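Run against a couple of hypothetical guidance strings, the generated section looks like this. The snippet is a standalone mirror of the method above, with `MAX_PROMPT_PATTERNS` replaced by a stand-in constant and the guidance strings invented for illustration.

```python
MAX_PROMPT_PATTERNS = 5  # stand-in for the module-level constant

def generate_prompt_section(guidance: list[str]) -> str:
    """Standalone mirror of PatternApplicator.generate_prompt_section."""
    if not guidance:
        return ""
    lines = ["## Learned Patterns", ""]
    lines.append("Based on previous executions, here are relevant insights:")
    lines.append("")
    for i, g in enumerate(guidance[:MAX_PROMPT_PATTERNS], 1):
        lines.append(f"{i}. {g}")
    lines.append("")
    lines.append("Consider these patterns when executing this sheet.")
    lines.append("")
    return "\n".join(lines)

section = generate_prompt_section([
    "Validation 'file_exists' failed 3 times (success rate 25%)",
    "Retry succeeded after timeout in 4 of 5 cases",
])
```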
get_pattern_descriptions
get_pattern_descriptions()

Get pattern descriptions as a list of strings.

Returns:

Type Description
list[str]

List of pattern guidance strings.

Source code in src/marianne/learning/patterns.py
def get_pattern_descriptions(self) -> list[str]:
    """Get pattern descriptions as a list of strings.

    Returns:
        List of pattern guidance strings.
    """
    return [p.to_prompt_guidance() for p in self.patterns[:MAX_PROMPT_PATTERNS]]

OutputPatternExtractor

OutputPatternExtractor()

Extracts patterns from stdout/stderr output for learning.

Analyzes execution output to detect common failure patterns, error signatures, and other indicators that can inform future executions. This enables Marianne to learn from the raw output of failed executions, not just validation results.

The extractor uses a dictionary of regex patterns to identify common error types like rate limits, import errors, permission denied, etc.

Initialize the output pattern extractor.

Compiles all regex patterns for efficient matching.

Source code in src/marianne/learning/patterns.py
def __init__(self) -> None:
    """Initialize the output pattern extractor.

    Compiles all regex patterns for efficient matching.
    """
    self._compiled_patterns: dict[str, tuple[re.Pattern[str], float]] = {}
    for name, (pattern, confidence) in self.FAILURE_PATTERNS.items():
        self._compiled_patterns[name] = (re.compile(pattern), confidence)
Functions
extract_from_output
extract_from_output(output, source='stdout')

Extract patterns from execution output.

Scans the output text for known failure patterns and returns a list of ExtractedPattern objects with context.

Parameters:

Name Type Description Default
output str

The stdout or stderr text to analyze.

required
source str

Source identifier ('stdout' or 'stderr').

'stdout'

Returns:

Type Description
list[ExtractedPattern]

List of extracted patterns found in the output.

Source code in src/marianne/learning/patterns.py
def extract_from_output(
    self,
    output: str,
    source: str = "stdout",
) -> list[ExtractedPattern]:
    """Extract patterns from execution output.

    Scans the output text for known failure patterns and returns
    a list of ExtractedPattern objects with context.

    Args:
        output: The stdout or stderr text to analyze.
        source: Source identifier ('stdout' or 'stderr').

    Returns:
        List of extracted patterns found in the output.
    """
    if not output or not output.strip():
        return []

    patterns: list[ExtractedPattern] = []
    lines = output.splitlines()

    for name, (compiled_pattern, confidence) in self._compiled_patterns.items():
        for match in compiled_pattern.finditer(output):
            # Calculate line number
            line_num = output[:match.start()].count('\n') + 1

            # Get context lines
            context_before, context_after = self._get_line_context(
                lines, line_num - 1  # Convert to 0-indexed
            )

            patterns.append(
                ExtractedPattern(
                    pattern_name=name,
                    matched_text=match.group(0),
                    line_number=line_num,
                    context_before=context_before,
                    context_after=context_after,
                    confidence=confidence,
                    source=source,
                )
            )

    # Sort by line number to maintain order of occurrence
    patterns.sort(key=lambda p: p.line_number)

    # Deduplicate patterns with same name at same line
    seen: set[tuple[str, int]] = set()
    deduped: list[ExtractedPattern] = []
    for p in patterns:
        key = (p.pattern_name, p.line_number)
        if key not in seen:
            seen.add(key)
            deduped.append(p)

    return deduped
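The `FAILURE_PATTERNS` dictionary is not shown on this page, so the entries below are hypothetical examples of the name-to-(regex, confidence) shape the extractor compiles. The line-number arithmetic is the same as in the method above.

```python
import re

# Hypothetical subset of FAILURE_PATTERNS: name -> (regex, confidence)
FAILURE_PATTERNS = {
    "rate_limit": (r"rate limit(ed)?|429 Too Many Requests", 0.9),
    "import_error": (r"ModuleNotFoundError|ImportError", 0.95),
}

compiled = {n: (re.compile(p), c) for n, (p, c) in FAILURE_PATTERNS.items()}

output = (
    "Starting job\n"
    "ModuleNotFoundError: No module named 'foo'\n"
    "retrying after rate limit\n"
)

found = []
for name, (pattern, confidence) in compiled.items():
    for m in pattern.finditer(output):
        # Same line math as extract_from_output: count newlines before match.
        line_num = output[:m.start()].count("\n") + 1
        found.append((name, line_num, confidence))

found.sort(key=lambda t: t[1])  # order of occurrence, as in extract_from_output
```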
get_pattern_summary
get_pattern_summary(patterns)

Get a summary count of pattern types found.

Parameters:

Name Type Description Default
patterns list[ExtractedPattern]

List of extracted patterns.

required

Returns:

Type Description
dict[str, int]

Dict mapping pattern_name to occurrence count.

Source code in src/marianne/learning/patterns.py
def get_pattern_summary(
    self,
    patterns: list[ExtractedPattern],
) -> dict[str, int]:
    """Get a summary count of pattern types found.

    Args:
        patterns: List of extracted patterns.

    Returns:
        Dict mapping pattern_name to occurrence count.
    """
    summary: dict[str, int] = {}
    for p in patterns:
        summary[p.pattern_name] = summary.get(p.pattern_name, 0) + 1
    return summary