patterns

Pattern-related mixin for GlobalLearningStore.

This module provides the PatternMixin class that handles all pattern-related operations, including:

  • Recording and updating patterns
  • Pattern application tracking
  • Effectiveness and priority calculations
  • Quarantine lifecycle management
  • Trust scoring
  • Success factor analysis (metacognitive reflection)
  • Pattern discovery broadcasting

Extracted from global_store.py as part of the modularization effort.

Architecture

PatternMixin is now composed from focused sub-mixins, each handling a specific domain of pattern functionality:

  • PatternQueryMixin: Core query operations (get_patterns, get_pattern_by_id, etc.)
  • PatternCrudMixin: Create/update operations and effectiveness calculations
  • PatternQuarantineMixin: Quarantine lifecycle (quarantine, validate, retire)
  • PatternTrustMixin: Trust scoring and auto-apply eligibility
  • PatternSuccessFactorsMixin: Metacognitive reflection (WHY analysis)
  • PatternBroadcastMixin: Real-time pattern discovery broadcasting
  • PatternLifecycleMixin: Pattern lifecycle and promotion automation

This decomposition improves maintainability while preserving the original public API through composition.
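The composition described above can be sketched in a few lines. This is a minimal illustration, not the real implementation: the mixin names match this page, but the method bodies and the store's storage attribute are hypothetical stand-ins.

```python
# Hypothetical sketch: sub-mixins each own a slice of the API, and the
# composed facade inherits all of it, so callers see one public surface.
class PatternQueryMixin:
    def get_patterns(self) -> list[str]:
        # Stand-in for the real query logic.
        return list(self._patterns.values())


class PatternCrudMixin:
    def record_pattern(self, pattern_id: str, name: str) -> str:
        # Stand-in for the real upsert logic.
        self._patterns[pattern_id] = name
        return pattern_id


class PatternMixin(PatternQueryMixin, PatternCrudMixin):
    """Composed facade: the original public API is preserved."""


class GlobalLearningStore(PatternMixin):
    def __init__(self) -> None:
        self._patterns: dict[str, str] = {}


store = GlobalLearningStore()
store.record_pattern("p1", "file not created")
```

Callers keep using `store.record_pattern(...)` and `store.get_patterns()` exactly as before the decomposition; only the module layout changed.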

Classes

PatternBroadcastMixin

Bases: _BroadcastBase

Mixin providing pattern discovery broadcasting methods.

This mixin requires that the composed class provides:

  • _get_connection(): Context manager yielding sqlite3.Connection
  • _row_to_discovery_event(): For row conversion (from PatternQueryMixin)

Functions
record_pattern_discovery
record_pattern_discovery(pattern_id, pattern_name, pattern_type, job_id, effectiveness_score=1.0, context_tags=None, ttl_seconds=300.0)

Record a pattern discovery for cross-job broadcasting.

When a job discovers a new pattern, it broadcasts the discovery so other concurrent jobs can benefit immediately.

Parameters:

  • pattern_id (str, required): ID of the discovered pattern.
  • pattern_name (str, required): Human-readable name of the pattern.
  • pattern_type (str, required): Type of pattern (validation_failure, etc.).
  • job_id (str, required): ID of the job that discovered the pattern.
  • effectiveness_score (float, default 1.0): Initial effectiveness score (0.0-1.0).
  • context_tags (list[str] | None, default None): Optional context tags for pattern matching.
  • ttl_seconds (float, default 300.0): Time-to-live in seconds (default 5 minutes).

Returns:

  str: The discovery event record ID.

Source code in src/marianne/learning/store/patterns_broadcast.py
def record_pattern_discovery(
    self,
    pattern_id: str,
    pattern_name: str,
    pattern_type: str,
    job_id: str,
    effectiveness_score: float = 1.0,
    context_tags: list[str] | None = None,
    ttl_seconds: float = 300.0,
) -> str:
    """Record a pattern discovery for cross-job broadcasting.

    When a job discovers a new pattern, it broadcasts the discovery so
    other concurrent jobs can benefit immediately.

    Args:
        pattern_id: ID of the discovered pattern.
        pattern_name: Human-readable name of the pattern.
        pattern_type: Type of pattern (validation_failure, etc.).
        job_id: ID of the job that discovered the pattern.
        effectiveness_score: Initial effectiveness score (0.0-1.0).
        context_tags: Optional context tags for pattern matching.
        ttl_seconds: Time-to-live in seconds (default 5 minutes).

    Returns:
        The discovery event record ID.
    """
    record_id = str(uuid.uuid4())
    now = datetime.now()
    job_hash = GlobalLearningStoreBase.hash_job(job_id)
    expires_at = now + timedelta(seconds=ttl_seconds)

    with self._get_connection() as conn:
        conn.execute(
            """
            INSERT INTO pattern_discovery_events (
                id, pattern_id, pattern_name, pattern_type,
                source_job_hash, recorded_at, expires_at,
                effectiveness_score, context_tags
            ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
            """,
            (
                record_id,
                pattern_id,
                pattern_name,
                pattern_type,
                job_hash,
                now.isoformat(),
                expires_at.isoformat(),
                effectiveness_score,
                json.dumps(context_tags or []),
            ),
        )

    _logger.info(
        f"Broadcast pattern discovery '{pattern_name}' (type: {pattern_type}), "
        f"expires in {ttl_seconds:.0f}s"
    )
    return record_id
check_recent_pattern_discoveries
check_recent_pattern_discoveries(exclude_job_id=None, pattern_type=None, min_effectiveness=0.0, limit=20)

Check for recent pattern discoveries from other jobs.

Parameters:

  • exclude_job_id (str | None, default None): Optional job ID to exclude (typically self).
  • pattern_type (str | None, default None): Optional filter by pattern type.
  • min_effectiveness (float, default 0.0): Minimum effectiveness score to include.
  • limit (int, default 20): Maximum number of discoveries to return.

Returns:

  list[PatternDiscoveryEvent]: List of PatternDiscoveryEvent objects from other jobs.

Source code in src/marianne/learning/store/patterns_broadcast.py
def check_recent_pattern_discoveries(
    self,
    exclude_job_id: str | None = None,
    pattern_type: str | None = None,
    min_effectiveness: float = 0.0,
    limit: int = 20,
) -> list[PatternDiscoveryEvent]:
    """Check for recent pattern discoveries from other jobs.

    Args:
        exclude_job_id: Optional job ID to exclude (typically self).
        pattern_type: Optional filter by pattern type.
        min_effectiveness: Minimum effectiveness score to include.
        limit: Maximum number of discoveries to return.

    Returns:
        List of PatternDiscoveryEvent objects from other jobs.
    """
    now = datetime.now()
    exclude_hash = GlobalLearningStoreBase.hash_job(exclude_job_id) if exclude_job_id else None

    with self._get_connection() as conn:
        query = """
            SELECT * FROM pattern_discovery_events
            WHERE expires_at > ?
            AND effectiveness_score >= ?
        """
        params: list[str | float] = [now.isoformat(), min_effectiveness]

        if exclude_hash is not None:
            query += " AND source_job_hash != ?"
            params.append(exclude_hash)

        if pattern_type is not None:
            query += " AND pattern_type = ?"
            params.append(pattern_type)

        query += " ORDER BY recorded_at DESC LIMIT ?"
        params.append(limit)

        cursor = conn.execute(query, params)
        return [self._row_to_discovery_event(row) for row in cursor.fetchall()]
cleanup_expired_pattern_discoveries
cleanup_expired_pattern_discoveries()

Remove expired pattern discovery events.

Returns:

  int: Number of expired events removed.

Source code in src/marianne/learning/store/patterns_broadcast.py
def cleanup_expired_pattern_discoveries(self) -> int:
    """Remove expired pattern discovery events.

    Returns:
        Number of expired events removed.
    """
    now = datetime.now()

    with self._get_connection() as conn:
        cursor = conn.execute(
            "DELETE FROM pattern_discovery_events WHERE expires_at <= ?",
            (now.isoformat(),),
        )
        deleted = cursor.rowcount

    if deleted > 0:
        _logger.debug("pattern_discovery_events_cleaned", count=deleted)

    return deleted
get_active_pattern_discoveries
get_active_pattern_discoveries(pattern_type=None)

Get all active (unexpired) pattern discovery events.

Parameters:

  • pattern_type (str | None, default None): Optional filter by pattern type.

Returns:

  list[PatternDiscoveryEvent]: List of PatternDiscoveryEvent objects that haven't expired yet.

Source code in src/marianne/learning/store/patterns_broadcast.py
def get_active_pattern_discoveries(
    self,
    pattern_type: str | None = None,
) -> list[PatternDiscoveryEvent]:
    """Get all active (unexpired) pattern discovery events.

    Args:
        pattern_type: Optional filter by pattern type.

    Returns:
        List of PatternDiscoveryEvent objects that haven't expired yet.
    """
    now = datetime.now()

    with self._get_connection() as conn:
        if pattern_type:
            cursor = conn.execute(
                """
                SELECT * FROM pattern_discovery_events
                WHERE expires_at > ? AND pattern_type = ?
                ORDER BY recorded_at DESC
                """,
                (now.isoformat(), pattern_type),
            )
        else:
            cursor = conn.execute(
                """
                SELECT * FROM pattern_discovery_events
                WHERE expires_at > ?
                ORDER BY recorded_at DESC
                """,
                (now.isoformat(),),
            )

        return [self._row_to_discovery_event(row) for row in cursor.fetchall()]
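All of the broadcast queries above gate on `expires_at > ?` with the current time. Because `isoformat()` timestamps compare in chronological order as plain strings, the TTL filter works directly on TEXT columns. A self-contained sketch, using a hypothetical two-column stand-in for the pattern_discovery_events table:

```python
import sqlite3
from datetime import datetime, timedelta

# Minimal stand-in schema: one future-dated row, one already-expired row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id TEXT, expires_at TEXT)")
now = datetime.now()
conn.execute(
    "INSERT INTO events VALUES (?, ?)",
    ("live", (now + timedelta(seconds=300)).isoformat()),
)
conn.execute(
    "INSERT INTO events VALUES (?, ?)",
    ("expired", (now - timedelta(seconds=1)).isoformat()),
)

# The same TTL gate used above: lexicographic TEXT comparison on ISO-8601
# timestamps matches chronological order, so only unexpired rows survive.
active = [
    row[0]
    for row in conn.execute(
        "SELECT id FROM events WHERE expires_at > ?", (now.isoformat(),)
    )
]
```

Expired rows remain in the table until cleanup_expired_pattern_discoveries() deletes them; the read path simply never sees them.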

PatternCrudMixin

Mixin providing pattern CRUD and effectiveness methods.

This mixin requires that the composed class provides:

  • _get_connection(): Context manager yielding sqlite3.Connection
  • update_pattern_effectiveness(): For batch recalculation (self-referential)

Functions
record_pattern
record_pattern(pattern_type, pattern_name, description=None, context_tags=None, suggested_action=None, provenance=None, provenance_job_hash=None, provenance_sheet_num=None, instrument_name=None)

Record or update a pattern in the global store.

Resolution order:

  1. If a pattern with the same type+name ID exists, upsert it (existing behavior).
  2. If no type+name match but a content_hash match exists, merge into the highest-priority existing pattern (incrementing its count, updating last_seen). Soft-deleted matches are reactivated.
  3. Otherwise, insert a new pattern.

Parameters:

  • pattern_type (str, required): The type of pattern (e.g., 'validation_failure').
  • pattern_name (str, required): A unique name for this pattern.
  • description (str | None, default None): Human-readable description.
  • context_tags (list[str] | None, default None): Tags for matching context.
  • suggested_action (str | None, default None): Recommended action for this pattern.
  • provenance (PatternProvenance | None, default None): Grouped provenance info (job_hash + sheet_num).
  • provenance_job_hash (str | None, default None): Deprecated — use provenance instead.
  • provenance_sheet_num (int | None, default None): Deprecated — use provenance instead.
  • instrument_name (str | None, default None): Backend instrument that produced this pattern.

Returns:

  str: The pattern ID (may be a merged-to existing ID).

Source code in src/marianne/learning/store/patterns_crud.py
def record_pattern(
    self,
    pattern_type: str,
    pattern_name: str,
    description: str | None = None,
    context_tags: list[str] | None = None,
    suggested_action: str | None = None,
    provenance: PatternProvenance | None = None,
    # Deprecated individual params — use `provenance` instead
    provenance_job_hash: str | None = None,
    provenance_sheet_num: int | None = None,
    instrument_name: str | None = None,
) -> str:
    """Record or update a pattern in the global store.

    Resolution order:
    1. If a pattern with the same type+name ID exists, upsert it (existing behavior).
    2. If no type+name match but a content_hash match exists, merge into the
       highest-priority existing pattern (incrementing its count, updating last_seen).
       Soft-deleted matches are reactivated.
    3. Otherwise, insert a new pattern.

    Args:
        pattern_type: The type of pattern (e.g., 'validation_failure').
        pattern_name: A unique name for this pattern.
        description: Human-readable description.
        context_tags: Tags for matching context.
        suggested_action: Recommended action for this pattern.
        provenance: Grouped provenance info (job_hash + sheet_num).
        provenance_job_hash: Deprecated — use provenance instead.
        provenance_sheet_num: Deprecated — use provenance instead.
        instrument_name: Backend instrument that produced this pattern.

    Returns:
        The pattern ID (may be a merged-to existing ID).
    """
    # Resolve provenance: prefer grouped param, fall back to individual params
    if provenance is not None:
        job_hash = provenance.job_hash
        sheet_num = provenance.sheet_num
    else:
        job_hash = provenance_job_hash
        sheet_num = provenance_sheet_num

    now = datetime.now().isoformat()
    # Normalize for consistent dedup: "File Not Created" -> "file not created"
    normalized_name = " ".join(pattern_name.lower().split())
    pattern_id = hashlib.sha256(
        f"{pattern_type}:{normalized_name}".encode()
    ).hexdigest()[:16]

    content_hash = self._compute_content_hash(
        pattern_type, normalized_name, description,
    )

    with self._get_connection() as conn:
        # Step 1: Try type+name upsert first (existing behavior, highest priority).
        cursor = conn.execute(
            "SELECT id FROM patterns WHERE id = ?", (pattern_id,),
        )
        existing_by_id = cursor.fetchone()

        if existing_by_id:
            # Type+name match — upsert as before, also update content_hash
            # and reactivate if soft-deleted.
            conn.execute(
                """
                UPDATE patterns SET
                    occurrence_count = occurrence_count + 1,
                    last_seen = ?,
                    description = COALESCE(?, description),
                    suggested_action = COALESCE(?, suggested_action),
                    context_tags = ?,
                    content_hash = ?,
                    active = 1
                WHERE id = ?
                """,
                (
                    now,
                    description,
                    suggested_action,
                    json.dumps(context_tags or []),
                    content_hash,
                    pattern_id,
                ),
            )
            return pattern_id

        # Step 2: No type+name match. Check for content_hash merge.
        hash_match = conn.execute(
            """
            SELECT id, COALESCE(active, 1) as active
            FROM patterns
            WHERE content_hash = ?
            ORDER BY priority_score DESC
            LIMIT 1
            """,
            (content_hash,),
        ).fetchone()

        if hash_match:
            merged_id = hash_match["id"]
            is_active = hash_match["active"]

            # Merge into existing pattern: increment count, update last_seen,
            # reactivate if soft-deleted. Preserve original instrument_name.
            conn.execute(
                """
                UPDATE patterns SET
                    occurrence_count = occurrence_count + 1,
                    last_seen = ?,
                    active = 1
                WHERE id = ?
                """,
                (now, merged_id),
            )

            if not is_active:
                _logger.info(
                    "pattern_reactivated_via_hash_merge",
                    merged_id=merged_id,
                    new_type=pattern_type,
                    new_name=pattern_name,
                )
            else:
                _logger.debug(
                    "pattern_merged_via_content_hash",
                    merged_id=merged_id,
                    new_type=pattern_type,
                    new_name=pattern_name,
                    content_hash=content_hash,
                )

            return cast(str, merged_id)

        # Step 3: No match at all — insert new pattern.
        conn.execute(
            """
            INSERT INTO patterns (
                id, pattern_type, pattern_name, description,
                occurrence_count, first_seen, last_seen, last_confirmed,
                led_to_success_count, led_to_failure_count,
                effectiveness_score, variance, suggested_action,
                context_tags, priority_score,
                quarantine_status, provenance_job_hash, provenance_sheet_num,
                trust_score, trust_calculation_date,
                content_hash, instrument_name, active
            ) VALUES (?, ?, ?, ?, 1, ?, ?, ?, 0, 0, 0.5, 0.0, ?, ?, 0.5,
                      ?, ?, ?, 0.5, ?,
                      ?, ?, 1)
            """,
            (
                pattern_id,
                pattern_type,
                pattern_name,
                description,
                now,
                now,
                now,
                suggested_action,
                json.dumps(context_tags or []),
                QuarantineStatus.PENDING.value,
                job_hash,
                sheet_num,
                now,
                content_hash,
                instrument_name,
            ),
        )

    return pattern_id
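The deterministic ID derivation in the source means that re-recording the same type and name, regardless of case or extra whitespace, always hits the Step 1 upsert rather than creating a duplicate. Isolated for illustration (the function name here is mine; the hashing and normalization mirror the source above):

```python
import hashlib


def derive_pattern_id(pattern_type: str, pattern_name: str) -> str:
    # Normalization mirrors the source: lowercase, collapse whitespace,
    # then hash "type:name" and keep the first 16 hex characters.
    normalized = " ".join(pattern_name.lower().split())
    return hashlib.sha256(f"{pattern_type}:{normalized}".encode()).hexdigest()[:16]


a = derive_pattern_id("validation_failure", "File  Not Created")
b = derive_pattern_id("validation_failure", "file not created")
```

Both calls produce the same 16-character ID, so the two spellings dedupe to one pattern row.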
record_pattern_application
record_pattern_application(pattern_id, execution_id, pattern_led_to_success, retry_count_before=0, retry_count_after=0, application_mode='exploitation', validation_passed=None, grounding_confidence=None)

Record that a pattern was applied to an execution.

Creates the feedback loop for effectiveness tracking. After recording the application, automatically updates effectiveness_score and priority_score.

Parameters:

  • pattern_id (str, required): The pattern that was applied.
  • execution_id (str, required): The execution it was applied to.
  • pattern_led_to_success (bool, required): Whether applying this pattern led to execution success (validation passed on first attempt).
  • retry_count_before (int, default 0): Retry count before pattern applied.
  • retry_count_after (int, default 0): Retry count after pattern applied.
  • application_mode (str, default 'exploitation'): 'exploration' or 'exploitation'.
  • validation_passed (bool | None, default None): Whether validation passed on first attempt.
  • grounding_confidence (float | None, default None): Grounding confidence (0.0-1.0).

Returns:

  str: The application record ID.

Source code in src/marianne/learning/store/patterns_crud.py
def record_pattern_application(
    self,
    pattern_id: str,
    execution_id: str,
    pattern_led_to_success: bool,
    retry_count_before: int = 0,
    retry_count_after: int = 0,
    application_mode: str = "exploitation",
    validation_passed: bool | None = None,
    grounding_confidence: float | None = None,
) -> str:
    """Record that a pattern was applied to an execution.

    Creates the feedback loop for effectiveness tracking. After recording
    the application, automatically updates effectiveness_score and priority_score.

    Args:
        pattern_id: The pattern that was applied.
        execution_id: The execution it was applied to.
        pattern_led_to_success: Whether applying this pattern led to
            execution success (validation passed on first attempt).
        retry_count_before: Retry count before pattern applied.
        retry_count_after: Retry count after pattern applied.
        application_mode: 'exploration' or 'exploitation'.
        validation_passed: Whether validation passed on first attempt.
        grounding_confidence: Grounding confidence (0.0-1.0).

    Returns:
        The application record ID.
    """
    _ = validation_passed  # Accepted for API compatibility, not yet stored
    app_id = str(uuid.uuid4())
    now = datetime.now()
    now_iso = now.isoformat()

    with self._get_connection() as conn:
        # Guard: verify pattern exists before recording application
        exists = conn.execute(
            "SELECT 1 FROM patterns WHERE id = ?", (pattern_id,)
        ).fetchone()
        if not exists:
            _logger.warning(
                "pattern_application_skipped",
                pattern_id=pattern_id,
                reason="pattern_not_found",
            )
            return app_id

        try:
            conn.execute(
                """
                INSERT INTO pattern_applications (
                    id, pattern_id, execution_id, applied_at,
                    pattern_led_to_success, retry_count_before,
                    retry_count_after, grounding_confidence
                ) VALUES (?, ?, ?, ?, ?, ?, ?, ?)
                """,
                (
                    app_id,
                    pattern_id,
                    execution_id,
                    now_iso,
                    pattern_led_to_success,
                    retry_count_before,
                    retry_count_after,
                    grounding_confidence,
                ),
            )
        except sqlite3.IntegrityError as e:
            # Legacy databases may have FK constraints on
            # pattern_applications that reference executions(id).
            # The v15 migration removes these, but if it hasn't
            # run yet, we catch and log instead of propagating.
            _logger.warning(
                "pattern_application_insert_fk_error",
                pattern_id=pattern_id,
                execution_id=execution_id,
                error=str(e),
            )
            return app_id

        if pattern_led_to_success:
            conn.execute(
                """
                UPDATE patterns SET
                    led_to_success_count = led_to_success_count + 1,
                    last_confirmed = ?
                WHERE id = ?
                """,
                (now_iso, pattern_id),
            )
        else:
            conn.execute(
                """
                UPDATE patterns SET
                    led_to_failure_count = led_to_failure_count + 1
                WHERE id = ?
                """,
                (pattern_id,),
            )

        cursor = conn.execute(
            """
            SELECT led_to_success_count, led_to_failure_count, last_confirmed,
                   occurrence_count, variance
            FROM patterns WHERE id = ?
            """,
            (pattern_id,),
        )
        row = cursor.fetchone()

        if row:
            last_confirmed_raw = row["last_confirmed"]
            last_confirmed = (
                datetime.fromisoformat(last_confirmed_raw)
                if last_confirmed_raw
                else now
            )
            new_effectiveness = self._calculate_effectiveness(
                pattern_id=pattern_id,
                led_to_success_count=row["led_to_success_count"],
                led_to_failure_count=row["led_to_failure_count"],
                last_confirmed=last_confirmed,
                now=now,
                conn=conn,
            )

            new_priority = self._calculate_priority_score(
                effectiveness=new_effectiveness,
                occurrence_count=row["occurrence_count"],
                variance=row["variance"],
            )

            conn.execute(
                """
                UPDATE patterns SET
                    effectiveness_score = ?,
                    priority_score = ?
                WHERE id = ?
                """,
                (new_effectiveness, new_priority, pattern_id),
            )

            _logger.debug(
                "pattern_effectiveness_updated",
                pattern_id=pattern_id,
                effectiveness=round(new_effectiveness, 3),
                priority=round(new_priority, 3),
                mode=application_mode,
            )

    return app_id
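The actual formula lives in `_calculate_effectiveness`, which is not shown on this page. As a purely illustrative stand-in (my assumption, not the real computation), a Laplace-smoothed success ratio has the right shape: it pulls small samples toward a neutral prior, and with no data it yields 0.5, which matches the initial effectiveness_score that record_pattern inserts for new patterns.

```python
def effectiveness_stub(led_to_success: int, led_to_failure: int) -> float:
    # Hypothetical stand-in for _calculate_effectiveness: Laplace smoothing
    # (add-one on each side) regularizes patterns with few applications.
    return (led_to_success + 1) / (led_to_success + led_to_failure + 2)
```

Note the real implementation also takes `last_confirmed` and a connection, suggesting it incorporates recency decay and possibly other signals beyond raw counts.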
soft_delete_pattern
soft_delete_pattern(pattern_id)

Soft-delete a pattern by setting active=0.

Preserves FK integrity — the row remains in the database so pattern_applications referencing it don't violate constraints. Re-recording a soft-deleted pattern reactivates it.

Parameters:

  • pattern_id (str, required): The pattern to soft-delete.

Returns:

  bool: True if the pattern was found and deactivated, False if not found.

Source code in src/marianne/learning/store/patterns_crud.py
def soft_delete_pattern(self, pattern_id: str) -> bool:
    """Soft-delete a pattern by setting active=0.

    Preserves FK integrity — the row remains in the database so
    pattern_applications referencing it don't violate constraints.
    Re-recording a soft-deleted pattern reactivates it.

    Args:
        pattern_id: The pattern to soft-delete.

    Returns:
        True if the pattern was found and deactivated, False if not found.
    """
    with self._get_connection() as conn:
        cursor = conn.execute(
            "UPDATE patterns SET active = 0 WHERE id = ? AND COALESCE(active, 1) = 1",
            (pattern_id,),
        )
        if cursor.rowcount > 0:
            _logger.info(
                "pattern_soft_deleted",
                pattern_id=pattern_id,
            )
            return True
        return False
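The `COALESCE(active, 1)` guard in the UPDATE matters for legacy rows created before the column existed: a NULL `active` is treated as active, so those rows can still be soft-deleted. A self-contained sketch, using a hypothetical two-column stand-in for the patterns table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patterns (id TEXT, active INTEGER)")
conn.executemany(
    "INSERT INTO patterns VALUES (?, ?)",
    [("legacy", None), ("current", 1), ("already_off", 0)],
)

# COALESCE(active, 1) maps NULL -> 1, so "legacy" matches the predicate
# alongside "current", while "already_off" is left untouched.
cur = conn.execute(
    "UPDATE patterns SET active = 0 WHERE COALESCE(active, 1) = 1"
)
```

`cur.rowcount` is 2 here, which is also why soft_delete_pattern can use the row count to report whether anything was actually deactivated.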
update_pattern_effectiveness
update_pattern_effectiveness(pattern_id)

Manually recalculate and update a pattern's effectiveness.

Parameters:

  • pattern_id (str, required): The pattern to update.

Returns:

  float | None: New effectiveness score, or None if pattern not found.

Source code in src/marianne/learning/store/patterns_crud.py
def update_pattern_effectiveness(
    self,
    pattern_id: str,
) -> float | None:
    """Manually recalculate and update a pattern's effectiveness.

    Args:
        pattern_id: The pattern to update.

    Returns:
        New effectiveness score, or None if pattern not found.
    """
    with self._get_connection() as conn:
        cursor = conn.execute(
            """
            SELECT led_to_success_count, led_to_failure_count, last_confirmed,
                   occurrence_count, variance
            FROM patterns WHERE id = ?
            """,
            (pattern_id,),
        )
        row = cursor.fetchone()

        if not row:
            return None

        now = datetime.now()
        last_confirmed_raw = row["last_confirmed"]
        last_confirmed = (
            datetime.fromisoformat(last_confirmed_raw)
            if last_confirmed_raw
            else now
        )
        new_effectiveness = self._calculate_effectiveness(
            pattern_id=pattern_id,
            led_to_success_count=row["led_to_success_count"],
            led_to_failure_count=row["led_to_failure_count"],
            last_confirmed=last_confirmed,
            now=now,
            conn=conn,
        )

        new_priority = self._calculate_priority_score(
            effectiveness=new_effectiveness,
            occurrence_count=row["occurrence_count"],
            variance=row["variance"],
        )

        conn.execute(
            """
            UPDATE patterns SET
                effectiveness_score = ?,
                priority_score = ?
            WHERE id = ?
            """,
            (new_effectiveness, new_priority, pattern_id),
        )

        _logger.debug(
            "pattern_effectiveness_manual_update",
            pattern_id=pattern_id,
            effectiveness=round(new_effectiveness, 3),
            priority=round(new_priority, 3),
        )
        return new_effectiveness
recalculate_all_pattern_priorities
recalculate_all_pattern_priorities()

Recalculate priorities for all patterns.

Uses batch_connection() to reuse a single SQLite connection across all pattern updates, avoiding N+1 connection overhead.

Returns:

  int: Number of patterns updated.

Source code in src/marianne/learning/store/patterns_crud.py
def recalculate_all_pattern_priorities(self) -> int:
    """Recalculate priorities for all patterns.

    Uses batch_connection() to reuse a single SQLite connection across
    all pattern updates, avoiding N+1 connection overhead.

    Returns:
        Number of patterns updated.
    """
    with self.batch_connection():
        with self._get_connection() as conn:
            cursor = conn.execute(
                "SELECT id FROM patterns"
            )
            pattern_ids = [row["id"] for row in cursor.fetchall()]

        updated = 0
        for pattern_id in pattern_ids:
            result = self.update_pattern_effectiveness(pattern_id)
            if result is not None:
                updated += 1

    _logger.info("priorities_recalculated", count=updated)
    return updated

PatternLifecycleMixin

Bases: _LifecycleBase

Mixin providing pattern lifecycle and promotion automation.

This mixin requires that the composed class provides:

  • _get_connection(): Context manager yielding sqlite3.Connection
  • get_pattern_by_id(): For pattern lookup (from PatternQueryMixin)

Functions
promote_ready_patterns
promote_ready_patterns()

Auto-promote or quarantine patterns based on effectiveness thresholds.

Queries all PENDING patterns and transitions them based on these criteria:

  • PENDING → ACTIVE: occurrences >= 3 AND effectiveness > 0.60
  • PENDING → QUARANTINED: occurrences >= 3 AND effectiveness < 0.35

Also checks ACTIVE patterns for degradation:

  • ACTIVE → QUARANTINED: effectiveness < 0.30

Returns:

  dict[str, list[str]]: Dict with keys:
    • "promoted": List of pattern IDs promoted to ACTIVE
    • "quarantined": List of pattern IDs moved to QUARANTINED
    • "degraded": List of pattern IDs degraded from ACTIVE to QUARANTINED
Source code in src/marianne/learning/store/patterns_lifecycle.py
def promote_ready_patterns(self) -> dict[str, list[str]]:
    """Auto-promote or quarantine patterns based on effectiveness thresholds.

    Queries all PENDING patterns and transitions them based on criteria:
    - PENDING → ACTIVE: occurrences >= 3 AND effectiveness > 0.60
    - PENDING → QUARANTINED: occurrences >= 3 AND effectiveness < 0.35

    Also checks ACTIVE patterns for degradation:
    - ACTIVE → QUARANTINED: effectiveness < 0.30

    Returns:
        Dict with keys:
        - "promoted": List of pattern IDs promoted to ACTIVE
        - "quarantined": List of pattern IDs moved to QUARANTINED
        - "degraded": List of pattern IDs degraded from ACTIVE to QUARANTINED
    """
    promoted: list[str] = []
    quarantined: list[str] = []
    degraded: list[str] = []

    with self._get_connection() as conn:
        # Query PENDING patterns with enough data for promotion decision
        cursor = conn.execute(
            """
            SELECT id, effectiveness_score, occurrence_count,
                   led_to_success_count, led_to_failure_count
            FROM patterns
            WHERE quarantine_status = ?
            AND (led_to_success_count + led_to_failure_count) >= ?
            """,
            (QuarantineStatus.PENDING.value, MIN_OCCURRENCES_FOR_PROMOTION),
        )
        pending_patterns = cursor.fetchall()

        for row in pending_patterns:
            pattern_id = row["id"]
            effectiveness = row["effectiveness_score"]
            total_applications = row["led_to_success_count"] + row["led_to_failure_count"]

            if total_applications < MIN_OCCURRENCES_FOR_PROMOTION:
                # Safety check (should be filtered by query, but explicit is better)
                continue

            if effectiveness >= PROMOTION_EFFECTIVENESS_THRESHOLD:
                # Promote to ACTIVE
                conn.execute(
                    """
                    UPDATE patterns SET
                        quarantine_status = ?,
                        validated_at = ?
                    WHERE id = ?
                    """,
                    (QuarantineStatus.VALIDATED.value, datetime.now().isoformat(), pattern_id),
                )
                promoted.append(pattern_id)
                _logger.info(
                    "pattern_lifecycle.promoted",
                    pattern_id=pattern_id,
                    effectiveness=round(effectiveness, 3),
                    occurrences=total_applications,
                    threshold=PROMOTION_EFFECTIVENESS_THRESHOLD,
                )
            elif effectiveness < QUARANTINE_EFFECTIVENESS_THRESHOLD:
                # Quarantine (ineffective)
                conn.execute(
                    """
                    UPDATE patterns SET
                        quarantine_status = ?,
                        quarantined_at = ?
                    WHERE id = ?
                    """,
                    (
                        QuarantineStatus.QUARANTINED.value,
                        datetime.now().isoformat(),
                        pattern_id,
                    ),
                )
                quarantined.append(pattern_id)
                _logger.info(
                    "pattern_lifecycle.quarantined",
                    pattern_id=pattern_id,
                    effectiveness=round(effectiveness, 3),
                    occurrences=total_applications,
                    threshold=QUARANTINE_EFFECTIVENESS_THRESHOLD,
                )

        # Check ACTIVE/VALIDATED patterns for degradation
        cursor = conn.execute(
            """
            SELECT id, effectiveness_score,
                   led_to_success_count, led_to_failure_count
            FROM patterns
            WHERE quarantine_status IN (?, ?)
            AND effectiveness_score < ?
            AND (led_to_success_count + led_to_failure_count) >= ?
            """,
            (
                QuarantineStatus.VALIDATED.value,
                "active",  # Legacy value before v25
                DEGRADATION_THRESHOLD,
                MIN_OCCURRENCES_FOR_PROMOTION,
            ),
        )
        active_patterns = cursor.fetchall()

        for row in active_patterns:
            pattern_id = row["id"]
            effectiveness = row["effectiveness_score"]
            total_applications = row["led_to_success_count"] + row["led_to_failure_count"]

            conn.execute(
                """
                UPDATE patterns SET
                    quarantine_status = ?,
                    quarantined_at = ?
                WHERE id = ?
                """,
                (QuarantineStatus.QUARANTINED.value, datetime.now().isoformat(), pattern_id),
            )
            degraded.append(pattern_id)
            _logger.warning(
                "pattern_lifecycle.degraded",
                pattern_id=pattern_id,
                effectiveness=round(effectiveness, 3),
                occurrences=total_applications,
                threshold=DEGRADATION_THRESHOLD,
            )

    if promoted or quarantined or degraded:
        _logger.info(
            "pattern_lifecycle.promotion_cycle_complete",
            promoted_count=len(promoted),
            quarantined_count=len(quarantined),
            degraded_count=len(degraded),
        )

    return {
        "promoted": promoted,
        "quarantined": quarantined,
        "degraded": degraded,
    }
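The promotion cycle above makes a three-way decision for each PENDING pattern. A minimal standalone sketch of that decision, using illustrative threshold values (the real `MIN_OCCURRENCES_FOR_PROMOTION`, `PROMOTION_EFFECTIVENESS_THRESHOLD`, and `QUARANTINE_EFFECTIVENESS_THRESHOLD` constants are defined elsewhere in the module and may differ):

```python
# Illustrative values only -- the actual constants live in the store module.
MIN_OCCURRENCES_FOR_PROMOTION = 5
PROMOTION_EFFECTIVENESS_THRESHOLD = 0.7
QUARANTINE_EFFECTIVENESS_THRESHOLD = 0.3


def classify_pending(effectiveness: float, applications: int) -> str:
    """Mirror the PENDING-pattern branch of the promotion cycle."""
    if applications < MIN_OCCURRENCES_FOR_PROMOTION:
        return "pending"  # not enough data yet
    if effectiveness >= PROMOTION_EFFECTIVENESS_THRESHOLD:
        return "validated"  # promote to active use
    if effectiveness < QUARANTINE_EFFECTIVENESS_THRESHOLD:
        return "quarantined"  # ineffective
    return "pending"  # middle band: keep observing
```

Patterns in the middle effectiveness band stay PENDING until more applications tip them one way or the other.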
update_quarantine_status
update_quarantine_status(pattern_id, new_status)

Manually update a pattern's quarantine status.

This is for operator-driven lifecycle transitions that bypass automatic thresholds (e.g., promoting a validated pattern after manual review, or retiring an obsolete pattern).

Parameters:

    pattern_id (str, required): The pattern to update.
    new_status (QuarantineStatus, required): New quarantine status.

Returns:

    bool: True if updated, False if pattern not found.

Source code in src/marianne/learning/store/patterns_lifecycle.py
def update_quarantine_status(
    self,
    pattern_id: str,
    new_status: QuarantineStatus,
) -> bool:
    """Manually update a pattern's quarantine status.

    This is for operator-driven lifecycle transitions that bypass
    automatic thresholds (e.g., promoting a validated pattern after
    manual review, or retiring an obsolete pattern).

    Args:
        pattern_id: The pattern to update.
        new_status: New quarantine status.

    Returns:
        True if updated, False if pattern not found.
    """
    pattern = self.get_pattern_by_id(pattern_id)
    if not pattern:
        _logger.warning(
            "pattern_lifecycle.update_not_found",
            pattern_id=pattern_id,
        )
        return False

    now = datetime.now().isoformat()

    with self._get_connection() as conn:
        # Update status and appropriate timestamp field
        if new_status == QuarantineStatus.VALIDATED:
            conn.execute(
                """
                UPDATE patterns SET
                    quarantine_status = ?,
                    validated_at = ?
                WHERE id = ?
                """,
                (new_status.value, now, pattern_id),
            )
        elif new_status == QuarantineStatus.QUARANTINED:
            conn.execute(
                """
                UPDATE patterns SET
                    quarantine_status = ?,
                    quarantined_at = ?
                WHERE id = ?
                """,
                (new_status.value, now, pattern_id),
            )
        else:
            # PENDING or RETIRED — no specific timestamp
            conn.execute(
                """
                UPDATE patterns SET
                    quarantine_status = ?
                WHERE id = ?
                """,
                (new_status.value, pattern_id),
            )

    _logger.info(
        "pattern_lifecycle.status_updated",
        pattern_id=pattern_id,
        old_status=pattern.quarantine_status.value,
        new_status=new_status.value,
    )
    return True
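The method stamps a different timestamp column depending on the target status: VALIDATED sets `validated_at`, QUARANTINED sets `quarantined_at`, and PENDING/RETIRED update the status column only. A sketch of that dispatch (the status strings mirror the QuarantineStatus values named on this page; the column mapping is a summary of the branches above, not the store's own API):

```python
# Which timestamp column a manual status change stamps; statuses not
# listed (pending, retired) update quarantine_status only.
TIMESTAMP_COLUMN = {
    "validated": "validated_at",
    "quarantined": "quarantined_at",
}


def columns_updated(new_status: str) -> list[str]:
    """Columns written for a given manual status transition."""
    cols = ["quarantine_status"]
    if new_status in TIMESTAMP_COLUMN:
        cols.append(TIMESTAMP_COLUMN[new_status])
    return cols
```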

PatternQuarantineMixin

Bases: _QuarantineBase

Mixin providing pattern quarantine lifecycle methods.

This mixin requires that the composed class provides:

- _get_connection(): Context manager yielding sqlite3.Connection
- get_patterns(): For querying quarantined patterns (from PatternQueryMixin)

Functions
quarantine_pattern
quarantine_pattern(pattern_id, reason=None)

Move a pattern to QUARANTINED status.

Quarantined patterns are excluded from automatic application but retained for investigation and historical reference.

Parameters:

    pattern_id (str, required): The pattern ID to quarantine.
    reason (str | None, default None): Optional reason for quarantine.

Returns:

    bool: True if pattern was quarantined, False if pattern not found.

Source code in src/marianne/learning/store/patterns_quarantine.py
def quarantine_pattern(
    self,
    pattern_id: str,
    reason: str | None = None,
) -> bool:
    """Move a pattern to QUARANTINED status.

    Quarantined patterns are excluded from automatic application but
    retained for investigation and historical reference.

    Args:
        pattern_id: The pattern ID to quarantine.
        reason: Optional reason for quarantine.

    Returns:
        True if pattern was quarantined, False if pattern not found.
    """
    now = datetime.now().isoformat()

    with self._get_connection() as conn:
        cursor = conn.execute(
            "SELECT id FROM patterns WHERE id = ?",
            (pattern_id,),
        )
        if not cursor.fetchone():
            _logger.warning("pattern_not_found", pattern_id=pattern_id, operation="quarantine")
            return False

        conn.execute(
            """
            UPDATE patterns SET
                quarantine_status = ?,
                quarantined_at = ?,
                quarantine_reason = ?
            WHERE id = ?
            """,
            (
                QuarantineStatus.QUARANTINED.value,
                now,
                reason,
                pattern_id,
            ),
        )

    _logger.info(
        "pattern_quarantined",
        pattern_id=pattern_id,
        reason=reason or "no reason given",
    )
    return True
validate_pattern
validate_pattern(pattern_id)

Move a pattern to VALIDATED status.

Validated patterns are trusted for autonomous application and receive a trust bonus in relevance scoring.

Parameters:

    pattern_id (str, required): The pattern ID to validate.

Returns:

    bool: True if pattern was validated, False if pattern not found.

Source code in src/marianne/learning/store/patterns_quarantine.py
def validate_pattern(self, pattern_id: str) -> bool:
    """Move a pattern to VALIDATED status.

    Validated patterns are trusted for autonomous application and
    receive a trust bonus in relevance scoring.

    Args:
        pattern_id: The pattern ID to validate.

    Returns:
        True if pattern was validated, False if pattern not found.
    """
    now = datetime.now().isoformat()

    with self._get_connection() as conn:
        cursor = conn.execute(
            "SELECT id FROM patterns WHERE id = ?",
            (pattern_id,),
        )
        if not cursor.fetchone():
            _logger.warning("pattern_not_found", pattern_id=pattern_id, operation="validation")
            return False

        conn.execute(
            """
            UPDATE patterns SET
                quarantine_status = ?,
                validated_at = ?,
                quarantine_reason = NULL
            WHERE id = ?
            """,
            (
                QuarantineStatus.VALIDATED.value,
                now,
                pattern_id,
            ),
        )

    _logger.info("pattern_validated", pattern_id=pattern_id)
    return True
retire_pattern
retire_pattern(pattern_id)

Move a pattern to RETIRED status.

Retired patterns are no longer in active use but retained for historical reference and trend analysis.

Parameters:

    pattern_id (str, required): The pattern ID to retire.

Returns:

    bool: True if pattern was retired, False if pattern not found.

Source code in src/marianne/learning/store/patterns_quarantine.py
def retire_pattern(self, pattern_id: str) -> bool:
    """Move a pattern to RETIRED status.

    Retired patterns are no longer in active use but retained for
    historical reference and trend analysis.

    Args:
        pattern_id: The pattern ID to retire.

    Returns:
        True if pattern was retired, False if pattern not found.
    """
    with self._get_connection() as conn:
        cursor = conn.execute(
            "SELECT id FROM patterns WHERE id = ?",
            (pattern_id,),
        )
        if not cursor.fetchone():
            _logger.warning("pattern_not_found", pattern_id=pattern_id, operation="retirement")
            return False

        conn.execute(
            """
            UPDATE patterns SET
                quarantine_status = ?
            WHERE id = ?
            """,
            (
                QuarantineStatus.RETIRED.value,
                pattern_id,
            ),
        )

    _logger.info("pattern_retired", pattern_id=pattern_id)
    return True
get_quarantined_patterns
get_quarantined_patterns(limit=50)

Get all patterns currently in QUARANTINED status.

Parameters:

    limit (int, default 50): Maximum number of patterns to return.

Returns:

    list[PatternRecord]: List of quarantined PatternRecord objects.

Source code in src/marianne/learning/store/patterns_quarantine.py
def get_quarantined_patterns(self, limit: int = 50) -> list[PatternRecord]:
    """Get all patterns currently in QUARANTINED status.

    Args:
        limit: Maximum number of patterns to return.

    Returns:
        List of quarantined PatternRecord objects.
    """
    return self.get_patterns(
        quarantine_status=QuarantineStatus.QUARANTINED,
        min_priority=0.0,
        limit=limit,
    )

PatternQueryMixin

Mixin providing pattern query methods for GlobalLearningStore.

This mixin requires that the composed class provides:

- _get_connection(): Context manager yielding sqlite3.Connection

Functions
get_patterns
get_patterns(pattern_type=None, min_priority=0.01, limit=20, context_tags=None, quarantine_status=None, exclude_quarantined=False, min_trust=None, max_trust=None, include_inactive=False, instrument_name=None, include_universal=True)

Get patterns from the global store.

v19 Evolution: Extended with quarantine and trust filtering options.
v14 (cycle 2): Extended with soft-delete and instrument filtering.

Parameters:

    pattern_type (str | None, default None): Optional filter by pattern type.
    min_priority (float, default 0.01): Minimum priority score to include.
    limit (int, default 20): Maximum number of patterns to return.
    context_tags (list[str] | None, default None): Optional list of tags for context-based filtering. Patterns match if ANY of their tags match ANY query tag. If None or empty, no tag filtering is applied.
    quarantine_status (QuarantineStatus | None, default None): Filter by specific quarantine status.
    exclude_quarantined (bool, default False): If True, exclude QUARANTINED patterns.
    min_trust (float | None, default None): Filter patterns with trust_score >= this value.
    max_trust (float | None, default None): Filter patterns with trust_score <= this value.
    include_inactive (bool, default False): If True, include soft-deleted patterns (active=0).
    instrument_name (str | None, default None): Filter by instrument name. None means no filter.
    include_universal (bool, default True): If True (default) AND instrument_name is set, also include patterns where instrument_name is NULL (universal patterns applicable to all instruments).

Returns:

    list[PatternRecord]: List of PatternRecord objects sorted by priority.

Source code in src/marianne/learning/store/patterns_query.py
def get_patterns(
    self,
    pattern_type: str | None = None,
    min_priority: float = 0.01,
    limit: int = 20,
    context_tags: list[str] | None = None,
    quarantine_status: QuarantineStatus | None = None,
    exclude_quarantined: bool = False,
    min_trust: float | None = None,
    max_trust: float | None = None,
    include_inactive: bool = False,
    instrument_name: str | None = None,
    include_universal: bool = True,
) -> list[PatternRecord]:
    """Get patterns from the global store.

    v19 Evolution: Extended with quarantine and trust filtering options.
    v14 (cycle 2): Extended with soft-delete and instrument filtering.

    Args:
        pattern_type: Optional filter by pattern type.
        min_priority: Minimum priority score to include.
        limit: Maximum number of patterns to return.
        context_tags: Optional list of tags for context-based filtering.
                     Patterns match if ANY of their tags match ANY query tag.
                     If None or empty, no tag filtering is applied.
        quarantine_status: Filter by specific quarantine status.
        exclude_quarantined: If True, exclude QUARANTINED patterns.
        min_trust: Filter patterns with trust_score >= this value.
        max_trust: Filter patterns with trust_score <= this value.
        include_inactive: If True, include soft-deleted patterns (active=0).
        instrument_name: Filter by instrument name. None means no filter.
        include_universal: If True (default) AND instrument_name is set,
                          also include patterns where instrument_name is NULL
                          (universal patterns applicable to all instruments).

    Returns:
        List of PatternRecord objects sorted by priority.
    """
    with self._get_connection() as conn:
        wb = WhereBuilder()
        wb.add("priority_score >= ?", min_priority)

        # v14: Soft-delete filter — COALESCE handles NULL for pre-v14 rows
        if not include_inactive:
            wb.add("COALESCE(active, 1) = 1")

        if pattern_type:
            wb.add("pattern_type = ?", pattern_type)

        # v14: Instrument name filter — when specified, match exact instrument
        # or (if include_universal) also include NULL (universal) patterns
        if instrument_name is not None:
            if include_universal:
                wb.add("(instrument_name = ? OR instrument_name IS NULL)", instrument_name)
            else:
                wb.add("instrument_name = ?", instrument_name)

        # v19: Quarantine status filtering
        if quarantine_status is not None:
            wb.add("quarantine_status = ?", quarantine_status.value)
        elif exclude_quarantined:
            wb.add("quarantine_status != ?", QuarantineStatus.QUARANTINED.value)

        # v19: Trust score filtering
        if min_trust is not None:
            wb.add("trust_score >= ?", min_trust)
        if max_trust is not None:
            wb.add("trust_score <= ?", max_trust)

        # Context tag filtering: match if ANY pattern tag matches ANY query tag
        # Uses json_each() to iterate over the JSON array stored in context_tags
        if context_tags is not None and len(context_tags) > 0:
            tag_placeholders = ", ".join("?" for _ in context_tags)
            wb.add(
                f"""EXISTS (
                    SELECT 1 FROM json_each(context_tags)
                    WHERE json_each.value IN ({tag_placeholders})
                )""",
                *context_tags,
            )

        where_sql, params = wb.build()
        cursor = conn.execute(
            f"""
            SELECT * FROM patterns
            WHERE {where_sql}
            ORDER BY priority_score DESC
            LIMIT ?
            """,
            (*params, limit),
        )

        return [self._row_to_pattern_record(row) for row in cursor.fetchall()]
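The `EXISTS`/`json_each()` clause in get_patterns implements "ANY pattern tag matches ANY query tag" against a JSON array stored in a text column. A self-contained demonstration of the same SQL against an in-memory SQLite database (table shape reduced to the two relevant columns; requires SQLite built with the JSON1 functions, which stdlib sqlite3 normally includes):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patterns (id TEXT, context_tags TEXT)")
conn.executemany(
    "INSERT INTO patterns VALUES (?, ?)",
    [
        ("p1", json.dumps(["retry", "rate_limit"])),
        ("p2", json.dumps(["auth"])),
        ("p3", json.dumps([])),
    ],
)

query_tags = ["rate_limit", "timeout"]
placeholders = ", ".join("?" for _ in query_tags)
rows = conn.execute(
    f"""
    SELECT id FROM patterns
    WHERE EXISTS (
        SELECT 1 FROM json_each(context_tags)
        WHERE json_each.value IN ({placeholders})
    )
    """,
    query_tags,
).fetchall()
# Only p1 shares a tag ("rate_limit") with the query.
```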
get_pattern_by_id
get_pattern_by_id(pattern_id)

Get a single pattern by its ID.

Parameters:

    pattern_id (str, required): The pattern ID to retrieve.

Returns:

    PatternRecord | None: PatternRecord if found, None otherwise.

Source code in src/marianne/learning/store/patterns_query.py
def get_pattern_by_id(self, pattern_id: str) -> PatternRecord | None:
    """Get a single pattern by its ID.

    Args:
        pattern_id: The pattern ID to retrieve.

    Returns:
        PatternRecord if found, None otherwise.
    """
    with self._get_connection() as conn:
        cursor = conn.execute(
            "SELECT * FROM patterns WHERE id = ?",
            (pattern_id,),
        )
        row = cursor.fetchone()
        if row:
            return self._row_to_pattern_record(row)
        return None
get_pattern_provenance
get_pattern_provenance(pattern_id)

Get provenance information for a pattern.

Returns details about the pattern's origin and lifecycle.

Parameters:

    pattern_id (str, required): The pattern ID to query.

Returns:

    dict[str, Any] | None: Dict with provenance info, or None if pattern not found.

Source code in src/marianne/learning/store/patterns_query.py
def get_pattern_provenance(self, pattern_id: str) -> dict[str, Any] | None:
    """Get provenance information for a pattern.

    Returns details about the pattern's origin and lifecycle.

    Args:
        pattern_id: The pattern ID to query.

    Returns:
        Dict with provenance info, or None if pattern not found.
    """
    pattern = self.get_pattern_by_id(pattern_id)
    if not pattern:
        return None

    return {
        "pattern_id": pattern.id,
        "pattern_name": pattern.pattern_name,
        "quarantine_status": pattern.quarantine_status.value,
        "first_seen": pattern.first_seen.isoformat(),
        "last_seen": pattern.last_seen.isoformat(),
        "last_confirmed": pattern.last_confirmed.isoformat(),
        "provenance_job_hash": pattern.provenance_job_hash,
        "provenance_sheet_num": pattern.provenance_sheet_num,
        "quarantined_at": pattern.quarantined_at.isoformat()
        if pattern.quarantined_at
        else None,
        "validated_at": pattern.validated_at.isoformat()
        if pattern.validated_at
        else None,
        "quarantine_reason": pattern.quarantine_reason,
        "trust_score": pattern.trust_score,
        "trust_calculation_date": pattern.trust_calculation_date.isoformat()
        if pattern.trust_calculation_date
        else None,
    }

PatternSuccessFactorsMixin

Bases: _SuccessFactorsBase

Mixin providing pattern success factor analysis methods.

This mixin requires that the composed class provides:

- _get_connection(): Context manager yielding sqlite3.Connection
- get_pattern_by_id(): For pattern lookup (from PatternQueryMixin)
- _row_to_pattern_record(): For row conversion (from PatternQueryMixin)
- analyze_pattern_why(): Self-reference for get_patterns_with_why

Functions
update_success_factors
update_success_factors(pattern_id, validation_types=None, error_categories=None, prior_sheet_status=None, retry_iteration=0, escalation_was_pending=False, grounding_confidence=None)

Update success factors for a pattern based on a successful application.

Captures the WHY behind pattern success — the context conditions that were present when the pattern worked.

Parameters:

    pattern_id (str, required): The pattern that succeeded.
    validation_types (list[str] | None, default None): Validation types active (file, regex, artifact, etc.)
    error_categories (list[str] | None, default None): Error categories present (rate_limit, auth, etc.)
    prior_sheet_status (str | None, default None): Status of prior sheet (completed, failed, skipped)
    retry_iteration (int, default 0): Which retry this success occurred on (0 = first)
    escalation_was_pending (bool, default False): Whether escalation was pending
    grounding_confidence (float | None, default None): Grounding confidence if external validation present

Returns:

    SuccessFactors | None: Updated SuccessFactors, or None if pattern not found.

Source code in src/marianne/learning/store/patterns_success_factors.py
def update_success_factors(
    self,
    pattern_id: str,
    validation_types: list[str] | None = None,
    error_categories: list[str] | None = None,
    prior_sheet_status: str | None = None,
    retry_iteration: int = 0,
    escalation_was_pending: bool = False,
    grounding_confidence: float | None = None,
) -> SuccessFactors | None:
    """Update success factors for a pattern based on a successful application.

    Captures the WHY behind pattern success — the context conditions
    that were present when the pattern worked.

    Args:
        pattern_id: The pattern that succeeded.
        validation_types: Validation types active (file, regex, artifact, etc.)
        error_categories: Error categories present (rate_limit, auth, etc.)
        prior_sheet_status: Status of prior sheet (completed, failed, skipped)
        retry_iteration: Which retry this success occurred on (0 = first)
        escalation_was_pending: Whether escalation was pending
        grounding_confidence: Grounding confidence if external validation present

    Returns:
        Updated SuccessFactors, or None if pattern not found.
    """
    pattern = self.get_pattern_by_id(pattern_id)
    if not pattern:
        _logger.warning(
            "update_success_factors_skipped",
            pattern_id=pattern_id,
            reason="pattern_not_found",
        )
        return None

    now = datetime.now()
    time_bucket = SuccessFactors.get_time_bucket(now.hour)

    if pattern.success_factors:
        factors: SuccessFactors = pattern.success_factors
        factors.occurrence_count += 1

        if validation_types:
            existing = set(factors.validation_types)
            existing.update(validation_types)
            factors.validation_types = sorted(existing)

        if error_categories:
            existing_errors = set(factors.error_categories)
            existing_errors.update(error_categories)
            factors.error_categories = sorted(existing_errors)

        if prior_sheet_status:
            factors.prior_sheet_status = prior_sheet_status
        factors.time_of_day_bucket = time_bucket
        factors.retry_iteration = retry_iteration
        factors.escalation_was_pending = escalation_was_pending
        if grounding_confidence is not None:
            factors.grounding_confidence = grounding_confidence

        total = pattern.led_to_success_count + pattern.led_to_failure_count
        if total > 0:
            factors.success_rate = pattern.led_to_success_count / total
    else:
        factors = SuccessFactors(
            validation_types=validation_types or [],
            error_categories=error_categories or [],
            prior_sheet_status=prior_sheet_status,
            time_of_day_bucket=time_bucket,
            retry_iteration=retry_iteration,
            escalation_was_pending=escalation_was_pending,
            grounding_confidence=grounding_confidence,
            occurrence_count=1,
            success_rate=1.0,
        )

    with self._get_connection() as conn:
        conn.execute(
            """
            UPDATE patterns SET
                success_factors = ?,
                success_factors_updated_at = ?
            WHERE id = ?
            """,
            (json.dumps(factors.to_dict()), now.isoformat(), pattern_id),
        )

    _logger.debug(
        f"Updated success factors for {pattern_id}: "
        f"{factors.occurrence_count} observations, "
        f"success_rate={factors.success_rate:.2f}"
    )
    return factors
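On repeat successes the method unions list-valued factors rather than overwriting them, and recomputes success_rate from the pattern's success/failure counters. The merge semantics, reduced to standalone helpers:

```python
def merge_validation_types(existing: list[str], new: list[str]) -> list[str]:
    """Union and sort, matching update_success_factors' merge for list fields."""
    merged = set(existing)
    merged.update(new)
    return sorted(merged)


def success_rate(successes: int, failures: int) -> float:
    """Recompute success_rate from the pattern's outcome counters."""
    total = successes + failures
    return successes / total if total > 0 else 0.0
```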
get_success_factors
get_success_factors(pattern_id)

Get the success factors for a pattern.

Parameters:

    pattern_id (str, required): The pattern ID to get factors for.

Returns:

    SuccessFactors | None: SuccessFactors if the pattern has captured factors, None otherwise.

Source code in src/marianne/learning/store/patterns_success_factors.py
def get_success_factors(self, pattern_id: str) -> SuccessFactors | None:
    """Get the success factors for a pattern.

    Args:
        pattern_id: The pattern ID to get factors for.

    Returns:
        SuccessFactors if the pattern has captured factors, None otherwise.
    """
    pattern = self.get_pattern_by_id(pattern_id)
    if not pattern:
        return None
    return pattern.success_factors
analyze_pattern_why
analyze_pattern_why(pattern_id)

Analyze WHY a pattern succeeds with structured explanation.

Parameters:

    pattern_id (str, required): The pattern to analyze.

Returns:

    dict[str, Any]: Dictionary with analysis results including factors_summary, key_conditions, confidence, and recommendations.

Source code in src/marianne/learning/store/patterns_success_factors.py
def analyze_pattern_why(self, pattern_id: str) -> dict[str, Any]:
    """Analyze WHY a pattern succeeds with structured explanation.

    Args:
        pattern_id: The pattern to analyze.

    Returns:
        Dictionary with analysis results including factors_summary,
        key_conditions, confidence, and recommendations.
    """
    pattern = self.get_pattern_by_id(pattern_id)
    if not pattern:
        return {"error": f"Pattern {pattern_id} not found"}

    result: dict[str, Any] = {
        "pattern_name": pattern.pattern_name,
        "pattern_type": pattern.pattern_type,
        "has_factors": pattern.success_factors is not None,
        "trust_score": pattern.trust_score,
        "effectiveness_score": pattern.effectiveness_score,
    }

    if not pattern.success_factors:
        result["factors_summary"] = "No success factors captured yet"
        result["key_conditions"] = []
        result["confidence"] = 0.0
        result["recommendations"] = [
            "Apply this pattern more times to capture success factors"
        ]
        return result

    factors = pattern.success_factors

    summaries = []
    if factors.validation_types:
        summaries.append(f"validation types: {', '.join(factors.validation_types)}")
    if factors.error_categories:
        summaries.append(f"error categories: {', '.join(factors.error_categories)}")
    if factors.time_of_day_bucket:
        summaries.append(f"typically succeeds in: {factors.time_of_day_bucket}")
    if factors.prior_sheet_status:
        summaries.append(f"prior sheet was: {factors.prior_sheet_status}")

    result["factors_summary"] = "; ".join(summaries) if summaries else "Context captured"

    key_conditions = []
    if factors.grounding_confidence and factors.grounding_confidence > 0.7:
        key_conditions.append(
            f"High grounding confidence ({factors.grounding_confidence:.2f})"
        )
    if factors.retry_iteration == 0:
        key_conditions.append("Succeeds on first attempt")
    elif factors.retry_iteration > 0:
        key_conditions.append(f"Often succeeds after {factors.retry_iteration} retries")
    if factors.validation_types:
        key_conditions.append(f"Works with {len(factors.validation_types)} validation types")
    if not factors.escalation_was_pending:
        key_conditions.append("Succeeds without escalation")

    result["key_conditions"] = key_conditions

    observation_confidence = min(1.0, factors.occurrence_count / 10)
    result["confidence"] = observation_confidence * factors.success_rate

    recommendations = []
    if factors.occurrence_count < 5:
        recommendations.append("Need more observations for reliable analysis")
    if factors.success_rate > 0.8:
        recommendations.append("High confidence pattern - consider for auto-apply")
    if factors.success_rate < 0.5:
        recommendations.append("Low success rate - review pattern relevance")
    if factors.time_of_day_bucket:
        recommendations.append(f"Best applied during {factors.time_of_day_bucket}")

    result["recommendations"] = recommendations
    result["observation_count"] = factors.occurrence_count
    result["success_rate"] = factors.success_rate

    return result
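The confidence value combines observation volume with outcome quality: ten or more observations saturate the volume term, which is then scaled by the success rate. Reproduced as a standalone helper:

```python
def why_confidence(occurrence_count: int, success_rate: float) -> float:
    """Confidence as computed in analyze_pattern_why."""
    observation_confidence = min(1.0, occurrence_count / 10)
    return observation_confidence * success_rate
```

So five observations at an 0.8 success rate yield less confidence than twenty observations at 0.5: the score rewards evidence volume before rewarding the rate itself.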
get_patterns_with_why
get_patterns_with_why(min_observations=1, limit=20)

Get patterns with their WHY analysis.

Parameters:

    min_observations (int, default 1): Minimum success factor observations required.
    limit (int, default 20): Maximum number of patterns to return.

Returns:

    list[tuple[PatternRecord, dict[str, Any]]]: List of (PatternRecord, analysis_dict) tuples.

Source code in src/marianne/learning/store/patterns_success_factors.py
def get_patterns_with_why(
    self,
    min_observations: int = 1,
    limit: int = 20,
) -> list[tuple[PatternRecord, dict[str, Any]]]:
    """Get patterns with their WHY analysis.

    Args:
        min_observations: Minimum success factor observations required.
        limit: Maximum number of patterns to return.

    Returns:
        List of (PatternRecord, analysis_dict) tuples.
    """
    with self._get_connection() as conn:
        cursor = conn.execute(
            """
            SELECT * FROM patterns
            WHERE success_factors IS NOT NULL
            ORDER BY priority_score DESC, trust_score DESC
            LIMIT ?
            """,
            (limit,),
        )
        rows = cursor.fetchall()

    results = []
    for row in rows:
        pattern = self._row_to_pattern_record(row)
        if (
            pattern.success_factors
            and pattern.success_factors.occurrence_count >= min_observations
        ):
            analysis = self.analyze_pattern_why(pattern.id)
            results.append((pattern, analysis))

    return results

PatternTrustMixin

Bases: _TrustBase

Mixin providing pattern trust scoring methods.

This mixin requires that the composed class provides:

- _get_connection(): Context manager yielding sqlite3.Connection
- get_pattern_by_id(): For pattern lookup (from PatternQueryMixin)
- get_patterns(): For filtered queries (from PatternQueryMixin)
- _row_to_pattern_record(): For row conversion (from PatternQueryMixin)

Functions
calculate_trust_score
calculate_trust_score(pattern_id)

Calculate and update trust score for a pattern.

Trust score formula

trust: float = 0.5 + (success_rate * 0.3) - (failure_rate * 0.4) + (age_factor * 0.2)

Quarantined patterns get a -0.2 penalty. Validated patterns get a +0.1 bonus.

Parameters:

    pattern_id (str, required): The pattern ID to calculate trust for.

Returns:

    float | None: New trust score (0.0-1.0), or None if pattern not found.

Source code in src/marianne/learning/store/patterns_trust.py
def calculate_trust_score(self, pattern_id: str) -> float | None:
    """Calculate and update trust score for a pattern.

    Trust score formula:
        trust: float = 0.5 + (success_rate * 0.3) - (failure_rate * 0.4) + (age_factor * 0.2)

    Quarantined patterns get a -0.2 penalty.
    Validated patterns get a +0.1 bonus.

    Args:
        pattern_id: The pattern ID to calculate trust for.

    Returns:
        New trust score (0.0-1.0), or None if pattern not found.
    """
    pattern = self.get_pattern_by_id(pattern_id)
    if not pattern:
        return None

    total = pattern.occurrence_count
    if total == 0:
        total = 1

    success_rate = min(1.0, pattern.led_to_success_count / total)
    failure_rate = min(1.0, pattern.led_to_failure_count / total)

    now = datetime.now()
    days_since_confirmed = (now - pattern.last_confirmed).days
    age_factor = 0.9 ** (days_since_confirmed / 30.0)

    trust: float = 0.5 + (success_rate * 0.3) - (failure_rate * 0.4) + (age_factor * 0.2)

    if pattern.quarantine_status == QuarantineStatus.QUARANTINED:
        trust -= 0.2

    if pattern.quarantine_status == QuarantineStatus.VALIDATED:
        trust += 0.1

    trust = max(0.0, min(1.0, trust))

    with self._get_connection() as conn:
        conn.execute(
            """
            UPDATE patterns SET
                trust_score = ?,
                trust_calculation_date = ?
            WHERE id = ?
            """,
            (trust, now.isoformat(), pattern_id),
        )

    _logger.debug("trust_score_calculated", pattern_id=pattern_id, trust=round(trust, 3))
    return trust
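The scoring arithmetic can be exercised without a database. The helper below is a standalone sketch of the formula in `calculate_trust_score`; the function name and parameters are illustrative, not part of the store API.

```python
def trust_from_counts(
    successes: int,
    failures: int,
    occurrences: int,
    days_since_confirmed: float,
    quarantined: bool = False,
    validated: bool = False,
) -> float:
    total = occurrences or 1  # avoid division by zero, as in the source
    success_rate = min(1.0, successes / total)
    failure_rate = min(1.0, failures / total)
    # 0.9 ** (days/30) decays ~10% per month, halving after roughly 197 days
    age_factor = 0.9 ** (days_since_confirmed / 30.0)
    trust = 0.5 + success_rate * 0.3 - failure_rate * 0.4 + age_factor * 0.2
    if quarantined:
        trust -= 0.2
    if validated:
        trust += 0.1
    return max(0.0, min(1.0, trust))

# A validated pattern that always succeeded, confirmed today:
# 0.5 + 0.3 - 0.0 + 0.2 + 0.1 = 1.1, clamped to 1.0
print(trust_from_counts(10, 0, 10, 0, validated=True))  # 1.0
```

Note the asymmetry: failures are weighted more heavily (0.4) than successes (0.3), so a pattern with mixed outcomes trends below the 0.5 baseline.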
update_trust_score
update_trust_score(pattern_id, delta)

Update trust score by a delta amount.

Parameters:

Name Type Description Default
pattern_id str

The pattern ID to update.

required
delta float

Amount to add to trust score (can be negative).

required

Returns:

Type Description
float | None

New trust score after update, or None if pattern not found.

Source code in src/marianne/learning/store/patterns_trust.py
def update_trust_score(self, pattern_id: str, delta: float) -> float | None:
    """Update trust score by a delta amount.

    Args:
        pattern_id: The pattern ID to update.
        delta: Amount to add to trust score (can be negative).

    Returns:
        New trust score after update, or None if pattern not found.
    """
    pattern = self.get_pattern_by_id(pattern_id)
    if not pattern:
        return None

    new_trust: float = max(0.0, min(1.0, float(pattern.trust_score) + delta))

    with self._get_connection() as conn:
        conn.execute(
            """
            UPDATE patterns SET
                trust_score = ?,
                trust_calculation_date = ?
            WHERE id = ?
            """,
            (new_trust, datetime.now().isoformat(), pattern_id),
        )

    _logger.debug(
        "trust_score_updated",
        pattern_id=pattern_id,
        old_trust=round(float(pattern.trust_score), 3),
        new_trust=round(new_trust, 3),
    )
    return new_trust
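The clamping behavior of the delta update can be shown in two lines. A minimal sketch (illustrative helper name, not the store API):

```python
def apply_delta(trust: float, delta: float) -> float:
    # Trust always stays within [0.0, 1.0], whatever delta is applied.
    return max(0.0, min(1.0, trust + delta))

print(apply_delta(0.9, 0.3))   # 1.0 (clamped at the ceiling)
print(apply_delta(0.2, -0.5))  # 0.0 (clamped at the floor)
```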
get_high_trust_patterns
get_high_trust_patterns(threshold=0.7, limit=50)

Get patterns with high trust scores.

Parameters:

Name Type Description Default
threshold float

Minimum trust score to include.

0.7
limit int

Maximum number of patterns to return.

50

Returns:

Type Description
list[PatternRecord]

List of high-trust PatternRecord objects.

Source code in src/marianne/learning/store/patterns_trust.py
def get_high_trust_patterns(
    self,
    threshold: float = 0.7,
    limit: int = 50,
) -> list[PatternRecord]:
    """Get patterns with high trust scores.

    Args:
        threshold: Minimum trust score to include.
        limit: Maximum number of patterns to return.

    Returns:
        List of high-trust PatternRecord objects.
    """
    return self.get_patterns(
        min_priority=0.0,
        min_trust=threshold,
        limit=limit,
    )
get_low_trust_patterns
get_low_trust_patterns(threshold=0.3, limit=50)

Get patterns with low trust scores.

Parameters:

Name Type Description Default
threshold float

Maximum trust score to include.

0.3
limit int

Maximum number of patterns to return.

50

Returns:

Type Description
list[PatternRecord]

List of low-trust PatternRecord objects.

Source code in src/marianne/learning/store/patterns_trust.py
def get_low_trust_patterns(
    self,
    threshold: float = 0.3,
    limit: int = 50,
) -> list[PatternRecord]:
    """Get patterns with low trust scores.

    Args:
        threshold: Maximum trust score to include.
        limit: Maximum number of patterns to return.

    Returns:
        List of low-trust PatternRecord objects.
    """
    return self.get_patterns(
        min_priority=0.0,
        max_trust=threshold,
        limit=limit,
    )
get_patterns_for_auto_apply
get_patterns_for_auto_apply(trust_threshold=0.85, require_validated=True, limit=3, context_tags=None)

Get patterns eligible for autonomous application.

Criteria:

1. Trust score >= trust_threshold (default 0.85)
2. Quarantine status == VALIDATED (if require_validated=True)
3. Pattern is not retired

Parameters:

Name Type Description Default
trust_threshold float

Minimum trust score for auto-apply.

0.85
require_validated bool

Require VALIDATED quarantine status.

True
limit int

Maximum patterns to return.

3
context_tags list[str] | None

Optional context tags to filter by relevance.

None

Returns:

Type Description
list[PatternRecord]

List of PatternRecord objects eligible for auto-apply.

Source code in src/marianne/learning/store/patterns_trust.py
def get_patterns_for_auto_apply(
    self,
    trust_threshold: float = 0.85,
    require_validated: bool = True,
    limit: int = 3,
    context_tags: list[str] | None = None,
) -> list[PatternRecord]:
    """Get patterns eligible for autonomous application.

    Criteria:
    1. Trust score >= trust_threshold (default 0.85)
    2. Quarantine status == VALIDATED (if require_validated=True)
    3. Pattern is not retired

    Args:
        trust_threshold: Minimum trust score for auto-apply.
        require_validated: Require VALIDATED quarantine status.
        limit: Maximum patterns to return.
        context_tags: Optional context tags to filter by relevance.

    Returns:
        List of PatternRecord objects eligible for auto-apply.
    """
    with self._get_connection() as conn:
        query = """
            SELECT * FROM patterns
            WHERE trust_score >= ?
            AND quarantine_status != 'retired'
        """
        params: list[SQLParam] = [trust_threshold]

        if require_validated:
            query += " AND quarantine_status = 'validated'"

        query += " ORDER BY trust_score DESC, priority_score DESC"
        query += " LIMIT ?"
        params.append(limit * 2)

        cursor = conn.execute(query, params)
        rows = cursor.fetchall()

    patterns = [self._row_to_pattern_record(row) for row in rows]
    if context_tags:
        tags_set = set(context_tags)
        patterns = [
            p for p in patterns
            if tags_set.intersection(set(p.context_tags))
        ]

    patterns = patterns[:limit]

    _logger.debug(
        "auto_apply_patterns_found",
        count=len(patterns),
        threshold=trust_threshold,
        require_validated=require_validated,
    )
    return patterns
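The over-fetch-then-filter step above (the SQL `LIMIT` takes `limit * 2` rows so tag filtering still leaves enough candidates) can be sketched on its own. Plain dicts stand in for `PatternRecord` here; names are illustrative.

```python
def filter_for_auto_apply(candidates, context_tags, limit):
    # Keep candidates sharing at least one context tag, then truncate.
    if context_tags:
        tags = set(context_tags)
        candidates = [c for c in candidates if tags & set(c["context_tags"])]
    return candidates[:limit]

candidates = [  # already sorted by trust DESC, as the query guarantees
    {"id": "p1", "context_tags": ["sql", "migration"]},
    {"id": "p2", "context_tags": ["frontend"]},
    {"id": "p3", "context_tags": ["sql"]},
    {"id": "p4", "context_tags": ["sql"]},
]
print([c["id"] for c in filter_for_auto_apply(candidates, ["sql"], limit=2)])  # ['p1', 'p3']
```

Because truncation happens after the tag filter, the result preserves the trust-descending order of the SQL query.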
recalculate_all_trust_scores
recalculate_all_trust_scores()

Recalculate trust scores for all patterns.

Returns:

Type Description
int

Number of patterns updated.

Source code in src/marianne/learning/store/patterns_trust.py
def recalculate_all_trust_scores(self) -> int:
    """Recalculate trust scores for all patterns.

    Returns:
        Number of patterns updated.
    """
    with self._get_connection() as conn:
        cursor = conn.execute("SELECT id FROM patterns")
        pattern_ids = [row["id"] for row in cursor.fetchall()]

    updated = 0
    for pattern_id in pattern_ids:
        result = self.calculate_trust_score(pattern_id)
        if result is not None:
            updated += 1

    _logger.info("trust_scores_recalculated", count=updated)
    return updated

PatternMixin

Bases: PatternCrudMixin, PatternQueryMixin, PatternQuarantineMixin, PatternTrustMixin, PatternSuccessFactorsMixin, PatternBroadcastMixin, PatternLifecycleMixin

Mixin providing all pattern-related methods for GlobalLearningStore.

This mixin requires that the composed class provides:

- _get_connection(): Context manager yielding sqlite3.Connection
- _logger: Logger instance for logging

Composed from focused sub-mixins:

- PatternQueryMixin: get_patterns, get_pattern_by_id, get_pattern_provenance
- PatternCrudMixin: record_pattern, record_pattern_application, effectiveness
- PatternQuarantineMixin: quarantine_pattern, validate_pattern, retire_pattern
- PatternTrustMixin: calculate_trust_score, get_high/low_trust_patterns
- PatternSuccessFactorsMixin: update_success_factors, analyze_pattern_why
- PatternBroadcastMixin: record_pattern_discovery, check_recent_pattern_discoveries
- PatternLifecycleMixin: promote_ready_patterns, update_quarantine_status (v25)
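The composition pattern itself is ordinary Python multiple inheritance: each mixin assumes the host class supplies the shared plumbing it needs. A toy sketch (the class names and `_rows()` helper below are illustrative; in the real store the shared dependency is `_get_connection()`):

```python
class HighTrustMixin:
    def high_trust(self, threshold=0.7):
        # Relies on the host class providing _rows().
        return [r["id"] for r in self._rows() if r["trust"] >= threshold]

class LowTrustMixin:
    def low_trust(self, threshold=0.3):
        return [r["id"] for r in self._rows() if r["trust"] <= threshold]

class Store(HighTrustMixin, LowTrustMixin):
    # The composed class supplies the shared dependency both mixins use.
    def _rows(self):
        return [{"id": "p1", "trust": 0.9}, {"id": "p2", "trust": 0.1}]

s = Store()
print(s.high_trust())  # ['p1']
print(s.low_trust())   # ['p2']
```

Because the sub-mixins define disjoint method sets, the MRO order of the bases is largely a matter of documentation rather than conflict resolution, and the original public API is preserved unchanged.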