global_store¶
Global learning store - re-exports from modular package.
This module provides backward-compatible imports for the GlobalLearningStore class
and all related models. The implementation has been modularized into the
marianne.learning.store package.
The original monolithic implementation (~5136 LOC) has been split into focused mixins for better maintainability:

- models.py: All dataclasses and enums
- base.py: Core connection and schema management
- patterns.py: Pattern recording and quarantine lifecycle
- executions.py: Execution outcome recording
- rate_limits.py: Cross-workspace rate limit coordination
- drift.py: Effectiveness and epistemic drift detection
- escalation.py: Escalation decision recording
- budget.py: Exploration budget management
Usage remains unchanged:

```python
from marianne.learning.global_store import GlobalLearningStore

store = GlobalLearningStore()
```
Classes¶
BudgetMixin
¶
Mixin providing exploration budget and entropy response functionality.
This mixin provides methods for managing the exploration budget (which controls how much the system explores vs exploits known patterns) and automatic entropy responses (which inject diversity when entropy drops).
The exploration budget uses a floor to ensure diversity never goes to zero, and a ceiling to prevent over-exploration. It adjusts dynamically based on measured entropy: low entropy triggers boosts, healthy entropy allows decay.
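The boost/decay/clamp behavior described above can be sketched as a small pure function. This is an illustrative sketch using the documented defaults, not the store's actual implementation:

```python
def adjust_budget(
    current: float,
    entropy: float,
    floor: float = 0.05,
    ceiling: float = 0.5,
    decay_rate: float = 0.95,
    boost_amount: float = 0.10,
    entropy_threshold: float = 0.3,
) -> float:
    """Boost the budget when entropy is low, decay it when entropy is
    healthy, and always clamp the result into [floor, ceiling]."""
    if entropy < entropy_threshold:
        proposed = current + boost_amount  # low entropy: inject diversity
    else:
        proposed = current * decay_rate  # healthy entropy: exploit more
    return min(ceiling, max(floor, proposed))
```

The clamp guarantees the two invariants stated above: exploration never reaches zero (floor) and never dominates pattern selection (ceiling).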
Requires the following from the composed class:

- _get_connection() -> context manager yielding sqlite3.Connection
Functions¶
get_exploration_budget
¶
Get the most recent exploration budget record.
v23 Evolution: Exploration Budget Maintenance - returns the current exploration budget state for pattern selection modulation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `job_hash` | `str \| None` | Optional job hash to filter by specific job. | `None` |
Returns:

| Type | Description |
|---|---|
| `ExplorationBudgetRecord \| None` | The most recent ExplorationBudgetRecord, or None if no budget recorded. |
Source code in src/marianne/learning/store/budget.py
get_exploration_budget_history
¶
Get exploration budget history for analysis.
v23 Evolution: Exploration Budget Maintenance - returns historical budget records for visualization and trend analysis.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `job_hash` | `str \| None` | Optional job hash to filter by specific job. | `None` |
| `limit` | `int` | Maximum number of records to return. | `50` |
Returns:

| Type | Description |
|---|---|
| `list[ExplorationBudgetRecord]` | List of ExplorationBudgetRecord objects, most recent first. |
Source code in src/marianne/learning/store/budget.py
update_exploration_budget
¶
update_exploration_budget(job_hash, budget_value, adjustment_type, entropy_at_time=None, adjustment_reason=None, floor=0.05, ceiling=0.5)
Update the exploration budget with floor and ceiling enforcement.
v23 Evolution: Exploration Budget Maintenance - records budget adjustments while enforcing floor (never go to zero) and ceiling limits.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `job_hash` | `str` | Hash of the job updating the budget. | required |
| `budget_value` | `float` | Proposed new budget value. | required |
| `adjustment_type` | `str` | Type: 'initial', 'decay', 'boost', 'floor_enforced'. | required |
| `entropy_at_time` | `float \| None` | Optional entropy measurement at adjustment time. | `None` |
| `adjustment_reason` | `str \| None` | Human-readable reason for adjustment. | `None` |
| `floor` | `float` | Minimum allowed budget (default 0.05 = 5%). | `0.05` |
| `ceiling` | `float` | Maximum allowed budget (default 0.50 = 50%). | `0.5` |
Returns:

| Type | Description |
|---|---|
| `ExplorationBudgetRecord` | The new ExplorationBudgetRecord. |
Source code in src/marianne/learning/store/budget.py
calculate_budget_adjustment
¶
calculate_budget_adjustment(job_hash, current_entropy, floor=0.05, ceiling=0.5, decay_rate=0.95, boost_amount=0.1, entropy_threshold=0.3, initial_budget=0.15)
Calculate and record the next budget adjustment based on entropy.
v23 Evolution: Exploration Budget Maintenance - implements the core budget adjustment logic:

- When entropy < threshold: boost budget by boost_amount
- When entropy >= threshold: decay budget by decay_rate
- Budget never drops below floor or exceeds ceiling
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `job_hash` | `str` | Hash of the job. | required |
| `current_entropy` | `float` | Current pattern entropy (0.0-1.0). | required |
| `floor` | `float` | Minimum budget floor (default 0.05). | `0.05` |
| `ceiling` | `float` | Maximum budget ceiling (default 0.50). | `0.5` |
| `decay_rate` | `float` | Decay multiplier when entropy healthy (default 0.95). | `0.95` |
| `boost_amount` | `float` | Amount to add when entropy low (default 0.10). | `0.1` |
| `entropy_threshold` | `float` | Entropy level that triggers boost (default 0.3). | `0.3` |
| `initial_budget` | `float` | Starting budget if no history (default 0.15). | `0.15` |
Returns:

| Type | Description |
|---|---|
| `ExplorationBudgetRecord` | The new ExplorationBudgetRecord after adjustment. |
Source code in src/marianne/learning/store/budget.py
get_exploration_budget_statistics
¶
Get statistics about exploration budget usage.
v23 Evolution: Exploration Budget Maintenance - provides aggregate statistics for monitoring and reporting.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `job_hash` | `str \| None` | Optional job hash to filter by specific job. | `None` |
Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | Dict with budget statistics. |
Source code in src/marianne/learning/store/budget.py
check_entropy_response_needed
¶
Check if an entropy response is needed based on current conditions.
v23 Evolution: Automatic Entropy Response - evaluates whether the current entropy level warrants a diversity injection response.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `job_hash` | `str` | Hash of the job to check. | required |
| `entropy_threshold` | `float` | Entropy below this triggers response. | `0.3` |
| `cooldown_seconds` | `int` | Minimum seconds since last response. | `3600` |
Returns:

| Type | Description |
|---|---|
| `tuple[bool, float \| None, str]` | Tuple of (needs_response, current_entropy, reason). |
Source code in src/marianne/learning/store/budget.py
trigger_entropy_response
¶
trigger_entropy_response(job_hash='', entropy_at_trigger=0.0, threshold_used=0.0, *, trigger=None, config=None, boost_budget=None, revisit_quarantine=None, max_quarantine_revisits=None, budget_floor=None, budget_ceiling=None, budget_boost_amount=None)
Execute an entropy response by boosting budget and/or revisiting quarantine.
v23 Evolution: Automatic Entropy Response - performs the actual response actions when entropy has dropped below threshold.
Supports two calling conventions:

- Positional (legacy): trigger_entropy_response(job_hash, entropy, threshold, ...)
- Bundled (preferred): trigger_entropy_response(trigger=ctx, config=cfg)
When trigger is supplied, its fields take precedence over positional arguments.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `job_hash` | `str` | Hash of the job triggering response (legacy positional). | `''` |
| `entropy_at_trigger` | `float` | Entropy value that triggered this response (legacy positional). | `0.0` |
| `threshold_used` | `float` | The threshold that was crossed (legacy positional). | `0.0` |
| `trigger` | `EntropyTriggerContext \| None` | Bundled trigger context (preferred over positional args). | `None` |
| `config` | `EntropyResponseConfig \| None` | Configuration object grouping all response tuning params. Individual keyword arguments override config values when both are provided. | `None` |
| `boost_budget` | `bool \| None` | Whether to boost exploration budget. | `None` |
| `revisit_quarantine` | `bool \| None` | Whether to revisit quarantined patterns. | `None` |
| `max_quarantine_revisits` | `int \| None` | Maximum patterns to revisit. | `None` |
| `budget_floor` | `float \| None` | Floor for budget enforcement. | `None` |
| `budget_ceiling` | `float \| None` | Ceiling for budget enforcement. | `None` |
| `budget_boost_amount` | `float \| None` | Amount to boost budget by. | `None` |
Returns:

| Type | Description |
|---|---|
| `EntropyResponseRecord` | The EntropyResponseRecord documenting the response. |
Source code in src/marianne/learning/store/budget.py
get_last_entropy_response
¶
Get the most recent entropy response record.
v23 Evolution: Automatic Entropy Response - used for cooldown checking.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `job_hash` | `str \| None` | Optional job hash to filter by. | `None` |
Returns:

| Type | Description |
|---|---|
| `EntropyResponseRecord \| None` | The most recent EntropyResponseRecord, or None if none found. |
Source code in src/marianne/learning/store/budget.py
get_entropy_response_history
¶
Get entropy response history for analysis.
v23 Evolution: Automatic Entropy Response - returns historical response records for visualization and trend analysis.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `job_hash` | `str \| None` | Optional job hash to filter by. | `None` |
| `limit` | `int` | Maximum number of records to return. | `50` |
Returns:

| Type | Description |
|---|---|
| `list[EntropyResponseRecord]` | List of EntropyResponseRecord objects, most recent first. |
Source code in src/marianne/learning/store/budget.py
get_entropy_response_statistics
¶
Get statistics about entropy responses.
v23 Evolution: Automatic Entropy Response - provides aggregate statistics for monitoring and reporting.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `job_hash` | `str \| None` | Optional job hash to filter by. | `None` |
Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | Dict with response statistics. |
Source code in src/marianne/learning/store/budget.py
calculate_pattern_entropy
¶
Calculate current Shannon entropy of the pattern population.
Queries all patterns with at least one application and computes
Shannon entropy over the application-count distribution. This
reuses the same algorithm as check_entropy_response_needed
but returns a structured result for CLI display and recording.
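A minimal sketch of that entropy computation, assuming the result is normalized to [0, 1] (which matches the threshold values used elsewhere in this mixin):

```python
import math


def pattern_entropy(application_counts: list[int]) -> float:
    """Normalized Shannon entropy of the application-count distribution:
    0.0 when one pattern dominates, 1.0 when usage is perfectly even."""
    total = sum(application_counts)
    if total == 0 or len(application_counts) < 2:
        return 0.0
    probs = [c / total for c in application_counts if c > 0]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(application_counts))  # normalize to [0, 1]
```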
Returns:

| Type | Description |
|---|---|
| `PatternEntropyMetrics` | PatternEntropyMetrics with the current entropy snapshot. |
Source code in src/marianne/learning/store/budget.py
record_pattern_entropy
¶
Persist a pattern entropy snapshot for historical trend analysis.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `metrics` | `PatternEntropyMetrics` | The entropy metrics to record. | required |
Returns:

| Type | Description |
|---|---|
| `str` | The record ID of the persisted snapshot. |
Source code in src/marianne/learning/store/budget.py
get_pattern_entropy_history
¶
Retrieve historical entropy snapshots for trend analysis.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `limit` | `int` | Maximum number of records to return. | `50` |
Returns:

| Type | Description |
|---|---|
| `list[PatternEntropyMetrics]` | List of PatternEntropyMetrics, most recent first. |
Source code in src/marianne/learning/store/budget.py
DriftMetrics
dataclass
¶
DriftMetrics(pattern_id, pattern_name, window_size, effectiveness_before, effectiveness_after, grounding_confidence_avg, drift_magnitude, drift_direction, applications_analyzed, threshold_exceeded=False)
Metrics for pattern effectiveness drift detection.
v12 Evolution: Goal Drift Detection - tracks how pattern effectiveness changes over time to detect drifting patterns that may need attention.
Attributes¶
effectiveness_before
instance-attribute
¶
Effectiveness score in the older window (applications N-2W to N-W).
effectiveness_after
instance-attribute
¶
Effectiveness score in the recent window (applications N-W to N).
grounding_confidence_avg
instance-attribute
¶
Average grounding confidence across all applications in analysis.
drift_magnitude
instance-attribute
¶
Absolute magnitude of drift: |effectiveness_after - effectiveness_before|.
drift_direction
instance-attribute
¶
Direction of drift: 'positive', 'negative', or 'stable'.
applications_analyzed
instance-attribute
¶
Total number of applications analyzed (should be 2 × window_size).
threshold_exceeded
class-attribute
instance-attribute
¶
Whether drift_magnitude exceeds the alert threshold.
DriftMixin
¶
Mixin providing drift detection and pattern retirement functionality.
This mixin provides methods for detecting effectiveness drift and epistemic drift in patterns, as well as automatic retirement of drifting patterns.
Effectiveness drift tracks changes in success rates over time. Epistemic drift tracks changes in confidence/belief levels over time.
Requires the following from the composed class:

- _get_connection() -> context manager yielding sqlite3.Connection
Functions¶
calculate_effectiveness_drift
¶
Calculate effectiveness drift for a pattern.
Compares the effectiveness of a pattern in its recent applications vs older applications to detect drift. Patterns that were once effective but are now declining may need investigation.
v12 Evolution: Goal Drift Detection - enables proactive pattern health monitoring.
Formula:

    drift = effectiveness_after - effectiveness_before
    drift_magnitude = |drift|
    weighted_drift = drift_magnitude / avg_grounding_confidence
A positive drift means the pattern is improving, negative means declining. The weighted drift amplifies the signal when grounding confidence is low.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `pattern_id` | `str` | Pattern to analyze. | required |
| `window_size` | `int` | Number of applications per window (default 5). Total applications needed = 2 × window_size. | `5` |
| `drift_threshold` | `float` | Threshold for flagging drift (default 0.2 = 20%). | `0.2` |
Returns:

| Type | Description |
|---|---|
| `DriftMetrics \| None` | DriftMetrics if enough data exists, None otherwise. |
Source code in src/marianne/learning/store/drift.py
get_drifting_patterns
¶
Get all patterns with significant drift.
Scans all patterns with enough application history and returns those that exceed the drift threshold.
v12 Evolution: Goal Drift Detection - enables CLI display of drifting patterns for operator review.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `drift_threshold` | `float` | Minimum drift to include (default 0.2). | `0.2` |
| `window_size` | `int` | Applications per window (default 5). | `5` |
| `limit` | `int` | Maximum patterns to return. | `20` |
Returns:

| Type | Description |
|---|---|
| `list[DriftMetrics]` | List of DriftMetrics for drifting patterns, sorted by drift_magnitude descending. |
Source code in src/marianne/learning/store/drift.py
get_pattern_drift_summary
¶
Get a summary of pattern drift across all patterns.
Provides aggregate statistics for monitoring pattern health.
v12 Evolution: Goal Drift Detection - supports dashboard/reporting.
Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | Dict with drift statistics. |
Source code in src/marianne/learning/store/drift.py
calculate_epistemic_drift
¶
Calculate epistemic drift for a pattern - how belief/confidence changes over time.
Unlike effectiveness drift (which tracks outcome success rates), epistemic drift tracks how our CONFIDENCE in the pattern changes. This enables detecting belief degradation before effectiveness actually declines.
v21 Evolution: Epistemic Drift Detection - complements effectiveness drift with belief-level monitoring.
Formula:

    belief_change = avg_confidence_after - avg_confidence_before
    belief_entropy = std_dev(all_confidence_values) / mean(all_confidence_values)
    weighted_change = |belief_change| × (1 + belief_entropy)
A positive belief_change means growing confidence, negative means declining. High entropy indicates unstable beliefs (variance in confidence).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `pattern_id` | `str` | Pattern to analyze. | required |
| `window_size` | `int` | Number of applications per window (default 5). Total applications needed = 2 × window_size. | `5` |
| `drift_threshold` | `float` | Threshold for flagging epistemic drift (default 0.15 = 15%). | `0.15` |
Returns:

| Type | Description |
|---|---|
| `EpistemicDriftMetrics \| None` | EpistemicDriftMetrics if enough data exists, None otherwise. |
Source code in src/marianne/learning/store/drift.py
get_epistemic_drifting_patterns
¶
Get all patterns with significant epistemic drift.
Scans all patterns with enough application history and returns those that exceed the epistemic drift threshold.
v21 Evolution: Epistemic Drift Detection - enables CLI display of patterns with changing beliefs for operator review.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `drift_threshold` | `float` | Minimum epistemic drift to include (default 0.15). | `0.15` |
| `window_size` | `int` | Applications per window (default 5). | `5` |
| `limit` | `int` | Maximum patterns to return. | `20` |
Returns:

| Type | Description |
|---|---|
| `list[EpistemicDriftMetrics]` | List of EpistemicDriftMetrics for drifting patterns, sorted by belief_change magnitude descending. |
Source code in src/marianne/learning/store/drift.py
get_epistemic_drift_summary
¶
Get a summary of epistemic drift across all patterns.
Provides aggregate statistics for monitoring belief/confidence health.
v21 Evolution: Epistemic Drift Detection - supports dashboard/reporting.
Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | Dict with epistemic drift statistics. |
Source code in src/marianne/learning/store/drift.py
retire_drifting_patterns
¶
Retire patterns that are drifting negatively.
Connects the drift detection infrastructure (DriftMetrics) to action. Patterns that have drifted significantly AND in a negative direction are retired by setting their priority_score to 0.
v14 Evolution: Pattern Auto-Retirement - enables automated pattern lifecycle management based on empirical effectiveness drift.
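The retirement gate reduces to a simple predicate. A sketch of the documented behavior, not the store's actual implementation:

```python
def should_retire(
    drift_magnitude: float,
    drift_direction: str,
    drift_threshold: float = 0.2,
    require_negative_drift: bool = True,
) -> bool:
    """Retire only on significant drift; by default, only when declining."""
    if drift_magnitude < drift_threshold:
        return False
    if require_negative_drift:
        return drift_direction == "negative"
    # When anomalous positive drift should also be retired, accept any
    # non-stable direction that exceeded the threshold.
    return drift_direction != "stable"
```

Patterns passing this gate have their priority_score set to 0, which removes them from selection without deleting their history.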
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `drift_threshold` | `float` | Minimum drift magnitude to consider (default 0.2). | `0.2` |
| `window_size` | `int` | Applications per window for drift calculation. | `5` |
| `require_negative_drift` | `bool` | If True, only retire patterns with negative drift (getting worse). If False, also retire patterns with positive anomalous drift. | `True` |
Returns:

| Type | Description |
|---|---|
| `list[tuple[str, str, float]]` | List of (pattern_id, pattern_name, drift_magnitude) tuples for patterns that were retired. |
Source code in src/marianne/learning/store/drift.py
get_retired_patterns
¶
Get patterns that have been retired (priority_score = 0).
Returns patterns that were retired through auto-retirement or manual deprecation, useful for review and potential recovery.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `limit` | `int` | Maximum number of patterns to return. | `50` |
Returns:

| Type | Description |
|---|---|
| `list[PatternRecord]` | List of PatternRecord objects with priority_score = 0. |
Source code in src/marianne/learning/store/drift.py
record_evolution_entry
¶
record_evolution_entry(cycle=None, evolutions_completed=None, evolutions_deferred=None, issue_classes=None, cv_avg=None, implementation_loc=None, test_loc=None, loc_accuracy=None, research_candidates_resolved=0, research_candidates_created=0, notes='', *, entry=None)
Record an evolution cycle entry to the trajectory.
v16 Evolution: Evolution Trajectory Tracking - enables Marianne to track its own evolution history for recursive self-improvement analysis.
Accepts either individual keyword args (backward compatible) or
a bundled EvolutionEntryInput dataclass via the entry kwarg.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `cycle` | `int \| None` | Evolution cycle number (e.g., 16 for v16). | `None` |
| `evolutions_completed` | `int \| None` | Number of evolutions completed in this cycle. | `None` |
| `evolutions_deferred` | `int \| None` | Number of evolutions deferred in this cycle. | `None` |
| `issue_classes` | `list[str] \| None` | Issue classes addressed (e.g., ['infrastructure_activation']). | `None` |
| `cv_avg` | `float \| None` | Average Consciousness Volume of selected evolutions. | `None` |
| `implementation_loc` | `int \| None` | Total implementation LOC for this cycle. | `None` |
| `test_loc` | `int \| None` | Total test LOC for this cycle. | `None` |
| `loc_accuracy` | `float \| None` | LOC estimation accuracy (actual/estimated as ratio). | `None` |
| `research_candidates_resolved` | `int` | Number of research candidates resolved. | `0` |
| `research_candidates_created` | `int` | Number of new research candidates created. | `0` |
| `notes` | `str` | Optional notes about this evolution cycle. | `''` |
| `entry` | `EvolutionEntryInput \| None` | Bundled input parameters (overrides individual args if provided). | `None` |
Returns:

| Type | Description |
|---|---|
| `str` | The ID of the created trajectory entry. |
Raises:

| Type | Description |
|---|---|
| `IntegrityError` | If an entry for this cycle already exists. |
Source code in src/marianne/learning/store/drift.py
get_trajectory
¶
Retrieve evolution trajectory history.
v16 Evolution: Evolution Trajectory Tracking - enables analysis of Marianne's evolution history over time.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `start_cycle` | `int \| None` | Optional minimum cycle number to include. | `None` |
| `end_cycle` | `int \| None` | Optional maximum cycle number to include. | `None` |
| `limit` | `int` | Maximum number of entries to return (default: 50). | `50` |
Returns:

| Type | Description |
|---|---|
| `list[EvolutionTrajectoryEntry]` | List of EvolutionTrajectoryEntry objects, ordered by cycle descending. |
Source code in src/marianne/learning/store/drift.py
get_recurring_issues
¶
Identify recurring issue classes across evolution cycles.
v16 Evolution: Evolution Trajectory Tracking - enables identification of patterns in what types of issues Marianne addresses repeatedly.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `min_occurrences` | `int` | Minimum number of occurrences to consider recurring. | `2` |
| `window_cycles` | `int \| None` | Optional limit to analyze only recent N cycles. | `None` |
Returns:

| Type | Description |
|---|---|
| `dict[str, list[int]]` | Dict mapping issue class names to lists of cycles where they appeared. Only includes issue classes that meet the min_occurrences threshold. |
Source code in src/marianne/learning/store/drift.py
record_evolution_cycle
¶
record_evolution_cycle(cycle_number, candidates_generated, candidates_applied, changes_summary, outcome, learning_snapshot)
Record evolution cycle metadata to trajectory table.
v25 Evolution: Simplified wrapper for recording evolution cycles with essential metadata. Maps to the more detailed record_evolution_entry() internal method.
This method provides a simpler interface focused on what evolution cycles need to record: how many candidates were generated/applied, what changed, and the outcome.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `cycle_number` | `int` | Evolution cycle number (e.g., 25 for v25). | required |
| `candidates_generated` | `int` | Number of evolution candidates generated. | required |
| `candidates_applied` | `int` | Number of candidates successfully applied. | required |
| `changes_summary` | `str` | Git diff summary or description of changes. | required |
| `outcome` | `Literal['SUCCESS', 'PARTIAL', 'DEFERRED']` | Evolution outcome - SUCCESS, PARTIAL, or DEFERRED. | required |
| `learning_snapshot` | `dict[str, Any]` | Dict containing learning metrics at time of cycle. | required |
Returns:

| Type | Description |
|---|---|
| `str` | The ID of the created trajectory entry. |
Raises:

| Type | Description |
|---|---|
| `IntegrityError` | If an entry for this cycle already exists. |
Example:

```python
store = GlobalLearningStore()
entry_id = store.record_evolution_cycle(
    cycle_number=25,
    candidates_generated=5,
    candidates_applied=3,
    changes_summary="Fixed learning export, wired pattern lifecycle",
    outcome="SUCCESS",
    learning_snapshot={
        "patterns": 6,
        "entropy": 0.000,
        "recovery_rate": 0.0,
    },
)
```
Source code in src/marianne/learning/store/drift.py
get_evolution_history
¶
Retrieve last N evolution cycles for context.
v25 Evolution: Simplified wrapper for retrieving evolution history. Maps to the more detailed get_trajectory() method.
This provides a simpler interface focused on getting recent evolution history for context in future cycles.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `last_n` | `int` | Number of recent cycles to retrieve (default: 10). | `10` |
Returns:

| Type | Description |
|---|---|
| `list[EvolutionTrajectoryEntry]` | List of EvolutionTrajectoryEntry objects, ordered by cycle descending (most recent first). |
Example:

```python
store = GlobalLearningStore()
recent_cycles = store.get_evolution_history(last_n=5)
for entry in recent_cycles:
    print(f"Cycle {entry.cycle}: {entry.evolutions_completed} completed")
```
Source code in src/marianne/learning/store/drift.py
EntropyResponseRecord
dataclass
¶
EntropyResponseRecord(id, job_hash, recorded_at, entropy_at_trigger, threshold_used, actions_taken, budget_boosted=False, quarantine_revisits=0, patterns_revisited=list())
A record of an automatic entropy response event.
v23 Evolution: Automatic Entropy Response - records when the system automatically responded to low entropy conditions by injecting diversity.
Attributes¶
entropy_at_trigger
instance-attribute
¶
The entropy value that triggered this response.
actions_taken
instance-attribute
¶
List of actions taken: 'budget_boost', 'quarantine_revisit', etc.
budget_boosted
class-attribute
instance-attribute
¶
Whether the exploration budget was boosted.
quarantine_revisits
class-attribute
instance-attribute
¶
Number of quarantined patterns revisited.
patterns_revisited
class-attribute
instance-attribute
¶
IDs of patterns that were marked for revisit.
EpistemicDriftMetrics
dataclass
¶
EpistemicDriftMetrics(pattern_id, pattern_name, window_size, confidence_before, confidence_after, belief_change, belief_entropy, applications_analyzed, threshold_exceeded=False, drift_direction='stable')
Metrics for epistemic drift detection - tracking belief changes about patterns.
v21 Evolution: Epistemic Drift Detection - tracks how confidence/belief in patterns changes over time, complementing effectiveness drift. While effectiveness drift measures outcome changes, epistemic drift measures belief evolution.
This enables detection of belief degradation before effectiveness actually declines.
Attributes¶
confidence_before
instance-attribute
¶
Average grounding confidence in the older window (applications N-2W to N-W).
confidence_after
instance-attribute
¶
Average grounding confidence in the recent window (applications N-W to N).
belief_change
instance-attribute
¶
Change in belief/confidence: confidence_after - confidence_before.
belief_entropy
instance-attribute
¶
Entropy of confidence values (0 = consistent beliefs, 1 = high variance).
applications_analyzed
instance-attribute
¶
Total number of applications analyzed (should be 2 × window_size).
threshold_exceeded
class-attribute
instance-attribute
¶
Whether belief_change magnitude exceeds the alert threshold.
drift_direction
class-attribute
instance-attribute
¶
Direction of belief drift: 'strengthening', 'weakening', or 'stable'.
ErrorRecoveryRecord
dataclass
¶
ErrorRecoveryRecord(id, error_code, suggested_wait, actual_wait, recovery_success, recorded_at, model, time_of_day)
A record of error recovery timing for learning adaptive waits.
EscalationDecisionRecord
dataclass
¶
EscalationDecisionRecord(id, job_hash, sheet_num, confidence, action, guidance, validation_pass_rate, retry_count, outcome_after_action=None, recorded_at=(lambda: now(tz=UTC))(), model=None)
A record of a human/AI escalation decision.
Evolution v11: Escalation Learning Loop - records escalation decisions to learn from feedback over time and potentially suggest actions for similar future escalations.
Attributes¶
confidence
instance-attribute
¶
Aggregate confidence score at time of escalation (0.0-1.0).
validation_pass_rate
instance-attribute
¶
Pass percentage of validations at escalation time.
outcome_after_action
class-attribute
instance-attribute
¶
What happened after the action: success, failed, aborted, skipped.
recorded_at
class-attribute
instance-attribute
¶
When the escalation decision was recorded.
EscalationMixin
¶
Mixin providing escalation decision functionality.
This mixin provides methods for recording and querying escalation decisions. When a sheet triggers escalation and receives a response, the decision is recorded so Marianne can learn from it and potentially suggest similar actions for future escalations with similar contexts.
Requires the following from the composed class
- _get_connection() -> context manager yielding sqlite3.Connection
- hash_job(job_id: str) -> str (static method)
Functions¶
record_escalation_decision
¶
record_escalation_decision(job_id, sheet_num, confidence, action, validation_pass_rate, retry_count, guidance=None, outcome_after_action=None, model=None)
Record an escalation decision for learning.
When a sheet triggers escalation and receives a response from a human or AI handler, this method records the decision so that Marianne can learn from it and potentially suggest similar actions for future escalations with similar contexts.
Evolution v11: Escalation Learning Loop - closes the loop between escalation handlers and learning system.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| job_id | str | ID of the job that triggered escalation. | required |
| sheet_num | int | Sheet number that triggered escalation. | required |
| confidence | float | Aggregate confidence score at escalation time (0.0-1.0). | required |
| action | str | Action taken (retry, skip, abort, modify_prompt). | required |
| validation_pass_rate | float | Pass percentage at escalation time. | required |
| retry_count | int | Number of retries before escalation. | required |
| guidance | str \| None | Optional guidance/notes from the handler. | None |
| outcome_after_action | str \| None | What happened after (success, failed, etc.). | None |
| model | str \| None | Optional model name used for execution. | None |

Returns:

| Type | Description |
|---|---|
| str | The escalation decision record ID. |
Source code in src/marianne/learning/store/escalation.py
get_escalation_history
¶
Get historical escalation decisions.
Retrieves past escalation decisions for analysis or display. Can filter by job or action type.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| job_id | str \| None | Optional job ID to filter by. | None |
| action | str \| None | Optional action type to filter by. | None |
| limit | int | Maximum number of records to return. | 20 |

Returns:

| Type | Description |
|---|---|
| list[EscalationDecisionRecord] | List of EscalationDecisionRecord objects. |
Source code in src/marianne/learning/store/escalation.py
get_similar_escalation
¶
get_similar_escalation(confidence, validation_pass_rate, confidence_tolerance=0.15, pass_rate_tolerance=15.0, limit=5)
Get similar past escalation decisions for guidance.
Finds historical escalations with similar context (confidence and pass rate) to help inform the current escalation decision. Can be used to suggest actions or provide guidance to human operators.
Evolution v11: Escalation Learning Loop - enables pattern-based suggestions for similar escalation contexts.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| confidence | float | Current confidence level (0.0-1.0). | required |
| validation_pass_rate | float | Current validation pass percentage. | required |
| confidence_tolerance | float | How much confidence can differ (default 0.15). | 0.15 |
| pass_rate_tolerance | float | How much pass rate can differ (default 15%). | 15.0 |
| limit | int | Maximum number of similar records to return. | 5 |

Returns:

| Type | Description |
|---|---|
| list[EscalationDecisionRecord] | List of EscalationDecisionRecord from similar past escalations, ordered by outcome success (successful outcomes first). |
Source code in src/marianne/learning/store/escalation.py
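The matching behaviour described above can be sketched as a plain tolerance filter with successful outcomes sorted first. This is an illustrative re-implementation, not the store's actual SQLite query; the record dicts and the helper name `similar_escalations` are assumptions for the sketch.

```python
def similar_escalations(records, confidence, pass_rate,
                        confidence_tolerance=0.15, pass_rate_tolerance=15.0, limit=5):
    """Filter past escalations whose context is within the given tolerances."""
    matches = [
        r for r in records
        if abs(r["confidence"] - confidence) <= confidence_tolerance
        and abs(r["validation_pass_rate"] - pass_rate) <= pass_rate_tolerance
    ]
    # Successful outcomes first, mirroring the documented ordering.
    matches.sort(key=lambda r: r.get("outcome_after_action") != "success")
    return matches[:limit]

history = [
    {"confidence": 0.62, "validation_pass_rate": 70.0, "outcome_after_action": "failed"},
    {"confidence": 0.58, "validation_pass_rate": 75.0, "outcome_after_action": "success"},
    {"confidence": 0.95, "validation_pass_rate": 99.0, "outcome_after_action": "success"},
]
close = similar_escalations(history, confidence=0.60, pass_rate=72.0)
```

Here only the first two history entries fall within both tolerances, and the successful one is returned first.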
update_escalation_outcome
¶
Update the outcome of an escalation decision.
Called after an escalation action is taken and the result is known. This closes the feedback loop by recording whether the action led to success or failure.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| escalation_id | str | The escalation record ID to update. | required |
| outcome_after_action | str | What happened (success, failed, aborted, skipped). | required |

Returns:

| Type | Description |
|---|---|
| bool | True if the record was updated, False if not found. |
Source code in src/marianne/learning/store/escalation.py
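The update reduces to a single SQL `UPDATE` whose rowcount reports whether the record existed. A minimal `sqlite3` sketch, assuming a simplified table (the real schema, table name, and column names may differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified stand-in for the real escalation table (assumption).
conn.execute(
    "CREATE TABLE escalation_decisions (id TEXT PRIMARY KEY, outcome_after_action TEXT)"
)
conn.execute("INSERT INTO escalation_decisions VALUES ('esc-1', NULL)")

def update_escalation_outcome(conn, escalation_id, outcome):
    cur = conn.execute(
        "UPDATE escalation_decisions SET outcome_after_action = ? WHERE id = ?",
        (outcome, escalation_id),
    )
    conn.commit()
    # rowcount > 0 means a matching record was found and updated.
    return cur.rowcount > 0

updated = update_escalation_outcome(conn, "esc-1", "success")
missing = update_escalation_outcome(conn, "no-such-id", "failed")
```

This mirrors the documented contract: True when the record was updated, False when not found.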
EvolutionTrajectoryEntry
dataclass
¶
EvolutionTrajectoryEntry(id, cycle, recorded_at, evolutions_completed, evolutions_deferred, issue_classes, cv_avg, implementation_loc, test_loc, loc_accuracy, research_candidates_resolved=0, research_candidates_created=0, notes='')
A record of a single evolution cycle in Marianne's self-improvement trajectory.
v16 Evolution: Evolution Trajectory Tracking - enables Marianne to track its own evolution history, identifying recurring issue classes and measuring improvement over time.
Attributes¶
evolutions_completed
instance-attribute
¶
Number of evolutions completed in this cycle.
evolutions_deferred
instance-attribute
¶
Number of evolutions deferred in this cycle.
issue_classes
instance-attribute
¶
Issue classes addressed (e.g., 'infrastructure_activation', 'epistemic_drift').
research_candidates_resolved
class-attribute
instance-attribute
¶
Number of research candidates resolved in this cycle.
research_candidates_created
class-attribute
instance-attribute
¶
Number of new research candidates created in this cycle.
ExecutionMixin
¶
Mixin providing execution-related methods for GlobalLearningStore.
This mixin requires that the composed class provides:

- _get_connection(): Context manager yielding sqlite3.Connection
- _logger: Logger instance for logging
- hash_workspace(workspace_path): Static method to hash workspace paths
- hash_job(job_name, config_hash): Static method to hash job identifiers

Execution Recording Methods:

- record_outcome: Record a sheet execution outcome
- _extract_sheet_num: Helper to parse sheet numbers from IDs
- _calculate_confidence: Calculate confidence score for an outcome

Execution Statistics Methods:

- get_execution_stats: Get aggregate statistics from the global store
- get_recent_executions: Get recent execution records

Similar Executions Methods (Learning Activation):

- get_similar_executions: Find similar historical executions
- get_optimal_execution_window: Analyze optimal times for execution

Workspace Clustering Methods:

- get_workspace_cluster: Get cluster ID for a workspace
- assign_workspace_cluster: Assign a workspace to a cluster
- get_similar_workspaces: Get workspaces in the same cluster
Functions¶
record_outcome
¶
Record a sheet outcome to the global store.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| outcome | SheetOutcome | The SheetOutcome to record. | required |
| workspace_path | Path | Path to the workspace for hashing. | required |
| model | str \| None | Optional model name used for execution. | None |
| error_codes | list[str] \| None | Optional list of error codes encountered. | None |

Returns:

| Type | Description |
|---|---|
| str | The execution record ID. |
Source code in src/marianne/learning/store/executions.py
get_execution_stats
¶
Get aggregate statistics from the global store.
Returns:
| Type | Description |
|---|---|
| dict[str, Any] | Dictionary with stats like total_executions, success_rate, etc. |
Source code in src/marianne/learning/store/executions.py
get_recent_executions
¶
Get recent execution records.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| limit | int | Maximum number of records to return. | 20 |
| workspace_hash | str \| None | Optional filter by workspace. | None |

Returns:

| Type | Description |
|---|---|
| list[ExecutionRecord] | List of ExecutionRecord objects. |
Source code in src/marianne/learning/store/executions.py
get_similar_executions
¶
Get similar historical executions for learning.
Learning Activation: Enables querying executions that are similar to the current context, supporting pattern-based decision making.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| job_hash | str \| None | Optional job hash to filter by similar jobs. | None |
| workspace_hash | str \| None | Optional workspace hash to filter by. | None |
| sheet_num | int \| None | Optional sheet number to filter by. | None |
| limit | int | Maximum number of records to return. | 10 |

Returns:

| Type | Description |
|---|---|
| list[ExecutionRecord] | List of ExecutionRecord objects matching the criteria. |
Source code in src/marianne/learning/store/executions.py
get_optimal_execution_window
¶
Analyze historical data to find optimal execution windows.
Learning Activation: Identifies times of day when executions are most likely to succeed, enabling time-aware scheduling recommendations.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| error_code | str \| None | Optional error code to analyze (e.g., for rate limits). | None |
| model | str \| None | Optional model to filter by. | None |

Returns:

| Type | Description |
|---|---|
| dict[str, Any] | Dict with optimal window analysis. |
Source code in src/marianne/learning/store/executions.py
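The time-of-day analysis can be sketched as a per-hour success-rate aggregation. This is a hypothetical simplification: the real method reads from SQLite, and the return keys below (`best_hour`, `success_rate`) are assumptions, since the actual dict keys are not documented here.

```python
from collections import defaultdict

def optimal_hour(executions):
    """executions: list of (hour_of_day, succeeded) tuples."""
    by_hour = defaultdict(lambda: [0, 0])  # hour -> [successes, total]
    for hour, ok in executions:
        by_hour[hour][1] += 1
        if ok:
            by_hour[hour][0] += 1
    rates = {h: s / t for h, (s, t) in by_hour.items()}
    best = max(rates, key=rates.get)
    return {"best_hour": best, "success_rate": rates[best]}

result = optimal_hour([(9, True), (9, True), (14, True), (14, False)])
```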
get_workspace_cluster
¶
Get the cluster ID for a workspace.
Learning Activation: Supports workspace similarity by grouping workspaces with similar patterns into clusters.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| workspace_hash | str | Hash of the workspace to query. | required |

Returns:

| Type | Description |
|---|---|
| str \| None | Cluster ID if assigned, None otherwise. |
Source code in src/marianne/learning/store/executions.py
assign_workspace_cluster
¶
Assign a workspace to a cluster.
Learning Activation: Groups workspaces with similar execution patterns for targeted pattern recommendations.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| workspace_hash | str | Hash of the workspace. | required |
| cluster_id | str | ID of the cluster to assign to. | required |
Source code in src/marianne/learning/store/executions.py
get_similar_workspaces
¶
Get workspace hashes in the same cluster.
Learning Activation: Enables cross-workspace learning by identifying workspaces with similar patterns.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| cluster_id | str | Cluster ID to query. | required |
| limit | int | Maximum number of workspace hashes to return. | 10 |

Returns:

| Type | Description |
|---|---|
| list[str] | List of workspace hashes in the cluster. |
Source code in src/marianne/learning/store/executions.py
record_error_recovery
¶
Record an error recovery for learning adaptive wait times.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| error_code | str | The error code (e.g., 'E103'). | required |
| suggested_wait | float | The initially suggested wait time in seconds. | required |
| actual_wait | float | The actual wait time used in seconds. | required |
| recovery_success | bool | Whether recovery after waiting succeeded. | required |
| model | str \| None | Optional model name. | None |

Returns:

| Type | Description |
|---|---|
| str | The recovery record ID. |
Source code in src/marianne/learning/store/executions.py
get_learned_wait_time
¶
Get the learned optimal wait time for an error code.
Analyzes past error recoveries to suggest an adaptive wait time.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| error_code | str | The error code to look up. | required |
| model | str \| None | Optional model to filter by. | None |
| min_samples | int | Minimum samples required before learning. | 3 |

Returns:

| Type | Description |
|---|---|
| float \| None | Suggested wait time in seconds, or None if not enough data. |
Source code in src/marianne/learning/store/executions.py
get_learned_wait_time_with_fallback
¶
get_learned_wait_time_with_fallback(error_code, static_delay, model=None, min_samples=3, min_confidence=0.7)
Get learned wait time with fallback to static delay and confidence.
This method bridges the global learning store with retry strategies. It returns a delay value along with a confidence score indicating how much to trust the learned value.
Evolution #3: Learned Wait Time Injection - provides the bridge between global_store's cross-workspace learned delays and retry_strategy's blend_historical_delay() method.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| error_code | str | The error code to look up (e.g., 'E101'). | required |
| static_delay | float | Fallback static delay if no learned data available. | required |
| model | str \| None | Optional model to filter by. | None |
| min_samples | int | Minimum samples required for learning. | 3 |
| min_confidence | float | Minimum confidence threshold for using learned delay. | 0.7 |

Returns:

| Type | Description |
|---|---|
| tuple[float, float, str] | Tuple of (delay_seconds, confidence, strategy_name). |
Source code in src/marianne/learning/store/executions.py
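The fallback logic can be sketched as: use the learned delay only when both the sample count and a derived confidence clear their thresholds, otherwise return the static delay. The confidence formula and the strategy names below are assumptions for illustration, not the real implementation.

```python
def wait_time_with_fallback(learned_delay, sample_count, static_delay,
                            min_samples=3, min_confidence=0.7):
    """Return (delay_seconds, confidence, strategy_name)."""
    # Confidence grows with sample count, capped at 1.0 (assumed formula).
    confidence = min(1.0, sample_count / 10.0)
    if learned_delay is None or sample_count < min_samples or confidence < min_confidence:
        return (static_delay, confidence, "static_fallback")
    return (learned_delay, confidence, "learned")
```

With 8 samples the learned delay wins; with only 2 samples the static delay is kept, which is the documented fallback behaviour.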
get_error_recovery_sample_count
¶
Get the number of successful recovery samples for an error code.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| error_code | str | The error code to query. | required |

Returns:

| Type | Description |
|---|---|
| int | Number of successful recovery samples. |
Source code in src/marianne/learning/store/executions.py
ExecutionRecord
dataclass
¶
ExecutionRecord(id, workspace_hash, job_hash, sheet_num, started_at, completed_at, duration_seconds, status, retry_count, success_without_retry, validation_pass_rate, confidence_score, model, error_codes=list())
A record of a sheet execution stored in the global database.
Functions¶
__post_init__
¶
Clamp fields to valid ranges.
Source code in src/marianne/learning/store/models.py
ExplorationBudgetRecord
dataclass
¶
ExplorationBudgetRecord(id, job_hash, recorded_at, budget_value, entropy_at_time, adjustment_type, adjustment_reason=None)
A record of exploration budget state over time.
v23 Evolution: Exploration Budget Maintenance - tracks the dynamic exploration budget to prevent convergence to zero. The budget adjusts based on pattern entropy observations.
Attributes¶
entropy_at_time
instance-attribute
¶
Pattern entropy at time of recording (if measured).
adjustment_type
instance-attribute
¶
Type of adjustment: 'initial', 'decay', 'boost', 'floor_enforced'.
adjustment_reason
class-attribute
instance-attribute
¶
Human-readable reason for this adjustment.
GlobalLearningStore
¶
Bases: PatternMixin, ExecutionMixin, RateLimitMixin, DriftMixin, EscalationMixin, BudgetMixin, PatternLifecycleMixin, GlobalLearningStoreBase
Global learning store combining all mixins.
This is the primary interface for Marianne's cross-workspace learning system. It provides persistent storage for execution outcomes, detected patterns, error recovery data, and learning metrics across all Marianne workspaces.
The class is composed from multiple mixins, each providing domain-specific functionality. The base class (listed last for proper MRO) provides:

- SQLite connection management with WAL mode for concurrent access
- Schema creation and version migration
- Hashing utilities for workspace and job identification
Mixin Capabilities
PatternMixin:

- record_pattern(), get_patterns(), get_pattern_by_id()
- record_pattern_application(), update_pattern_effectiveness()
- Quarantine lifecycle: quarantine_pattern(), validate_pattern()
- Trust scoring, success factor analysis
- Pattern discovery broadcasting

ExecutionMixin:

- record_outcome() for sheet execution outcomes
- get_execution_stats(), get_recent_executions()
- get_similar_executions() for learning activation
- Workspace clustering for cross-workspace correlation

RateLimitMixin:

- record_rate_limit_event() for cross-workspace coordination
- get_recent_rate_limits() to check before API calls
- Enables parallel jobs to avoid hitting the same limits

DriftMixin:

- calculate_drift_metrics() for effectiveness drift
- detect_epistemic_drift() for belief-level monitoring
- auto_retire_drifting_patterns() for lifecycle management
- get_pattern_evolution_trajectory() for historical analysis

EscalationMixin:

- record_escalation_decision() when handlers respond
- get_similar_escalation() for pattern-based suggestions
- Closes the learning loop for escalation handling

BudgetMixin:

- get_exploration_budget(), update_exploration_budget()
- record_entropy_response() for diversity injection
- Dynamic budget with floor/ceiling to prevent over-convergence
Example:

    from marianne.learning.store import GlobalLearningStore
    store = GlobalLearningStore()

    # Record a pattern
    store.record_pattern(
        pattern_type="rate_limit_recovery",
        pattern_content={"action": "exponential_backoff"},
        context={"error_code": "E101"},
        source_job="job-123",
    )

    # Query execution statistics
    stats = store.get_execution_stats()
    print(f"Total executions: {stats['total_executions']}")
Attributes:
| Name | Type | Description |
|---|---|---|
| db_path | | Path to the SQLite database file. |
| _logger | MarianneLogger | Module logger instance for consistent logging. |
Note
The database uses WAL mode for safe concurrent access from multiple Marianne jobs. Schema migrations are applied automatically when the store is initialized.
Source code in src/marianne/learning/store/base.py
GlobalLearningStoreBase
¶
SQLite-based global learning store base class.
Provides persistent storage infrastructure for execution outcomes, detected patterns, and error recovery data across all Marianne workspaces. Uses WAL mode for safe concurrent access.
This base class handles:

- Database connection lifecycle
- Schema version management
- Migration and schema creation
- Hashing utilities
Subclasses (via mixins) add domain-specific methods for patterns, executions, rate limits, drift detection, escalation, and budget management.
Attributes:
| Name | Type | Description |
|---|---|---|
| db_path | | Path to the SQLite database file. |
| _logger | | Module logger instance for consistent logging. |
Initialize the global learning store.
Creates the database directory if needed, establishes the connection, and runs any necessary migrations.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| db_path | Path \| None | Path to the SQLite database file. Defaults to ~/.marianne/global-learning.db | None |
Source code in src/marianne/learning/store/base.py
Functions¶
batch_connection
¶
Reuse a single connection across multiple operations.
While this context manager is active, all _get_connection() calls
will reuse the same connection, avoiding repeated open/close overhead.
The connection is committed once on successful exit or rolled back on error.
Example::
with store.batch_connection():
patterns = store.get_patterns(min_priority=0.01)
for p in patterns:
store.update_trust_score(p.pattern_id, ...)
Yields:
| Type | Description |
|---|---|
| Connection | The shared sqlite3.Connection instance. |
Source code in src/marianne/learning/store/base.py
close
¶
Close any persistent resources.
No-op: connections are managed per-operation via _get_connection().
This method exists for API compatibility so callers can unconditionally
call store.close() without checking the backend type.
Source code in src/marianne/learning/store/base.py
hash_workspace
staticmethod
¶
Generate a stable hash for a workspace path.
Creates a reproducible 16-character hex hash from the resolved absolute path. This allows pattern matching across sessions while preserving privacy (paths are not stored directly).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| workspace_path | Path | The absolute path to the workspace. | required |

Returns:

| Type | Description |
|---|---|
| str | A hex string hash of the workspace path (16 characters). |
Source code in src/marianne/learning/store/base.py
hash_job
staticmethod
¶
Generate a stable hash for a job.
Creates a reproducible 16-character hex hash from the job name and optional config hash. The config hash enables version-awareness: the same job with different configs will have different hashes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| job_name | str | The job name. | required |
| config_hash | str \| None | Optional hash of the job config for versioning. | None |

Returns:

| Type | Description |
|---|---|
| str | A hex string hash of the job (16 characters). |
Source code in src/marianne/learning/store/base.py
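A stable 16-character hex hash like the one described can be derived from a digest prefix. This is a sketch under the assumption that SHA-256 is the underlying digest; the docs only guarantee a reproducible 16-character hex hash keyed on the job name plus optional config hash.

```python
import hashlib

def hash_job(job_name, config_hash=None):
    """Reproducible 16-char hex hash; config-aware when config_hash is given."""
    key = job_name if config_hash is None else f"{job_name}:{config_hash}"
    # SHA-256 here is an assumption; only the 16-char hex output is documented.
    return hashlib.sha256(key.encode()).hexdigest()[:16]
```

The config hash changes the result, giving the documented version-awareness: the same job with different configs hashes differently.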
clear_all
¶
Clear all data from the global store.
WARNING: This is destructive and should only be used for testing.
Source code in src/marianne/learning/store/base.py
PatternDiscoveryEvent
dataclass
¶
PatternDiscoveryEvent(id, pattern_id, pattern_name, pattern_type, source_job_hash, recorded_at, expires_at, effectiveness_score, context_tags=list())
A pattern discovery event for cross-job broadcasting.
v14 Evolution: Real-time Pattern Broadcasting - enables jobs to share newly discovered patterns with other concurrent jobs, so knowledge propagates across the ecosystem without waiting for aggregation.
PatternMixin
¶
Bases: PatternCrudMixin, PatternQueryMixin, PatternQuarantineMixin, PatternTrustMixin, PatternSuccessFactorsMixin, PatternBroadcastMixin, PatternLifecycleMixin
Mixin providing all pattern-related methods for GlobalLearningStore.
This mixin requires that the composed class provides:

- _get_connection(): Context manager yielding sqlite3.Connection
- _logger: Logger instance for logging

Composed from focused sub-mixins:

- PatternQueryMixin: get_patterns, get_pattern_by_id, get_pattern_provenance
- PatternCrudMixin: record_pattern, record_pattern_application, effectiveness
- PatternQuarantineMixin: quarantine_pattern, validate_pattern, retire_pattern
- PatternTrustMixin: calculate_trust_score, get_high/low_trust_patterns
- PatternSuccessFactorsMixin: update_success_factors, analyze_pattern_why
- PatternBroadcastMixin: record_pattern_discovery, check_recent_pattern_discoveries
- PatternLifecycleMixin: promote_ready_patterns, update_quarantine_status (v25)
PatternRecord
dataclass
¶
PatternRecord(id, pattern_type, pattern_name, description, occurrence_count, first_seen, last_seen, last_confirmed, led_to_success_count, led_to_failure_count, effectiveness_score, variance, suggested_action, context_tags, priority_score, quarantine_status=PENDING, provenance_job_hash=None, provenance_sheet_num=None, quarantined_at=None, validated_at=None, quarantine_reason=None, trust_score=0.5, trust_calculation_date=None, success_factors=None, success_factors_updated_at=None, active=True, content_hash=None, instrument_name=None)
A pattern record stored in the global database.
v19 Evolution: Extended with quarantine_status, provenance, and trust_score fields to support the Pattern Quarantine & Provenance and Pattern Trust Scoring evolutions.
Attributes¶
quarantine_status
class-attribute
instance-attribute
¶
quarantine_status = PENDING
Current status in the quarantine lifecycle.
provenance_job_hash
class-attribute
instance-attribute
¶
Hash of the job that first created this pattern.
provenance_sheet_num
class-attribute
instance-attribute
¶
Sheet number where this pattern was first observed.
quarantined_at
class-attribute
instance-attribute
¶
When the pattern was moved to QUARANTINED status.
validated_at
class-attribute
instance-attribute
¶
When the pattern was moved to VALIDATED status.
quarantine_reason
class-attribute
instance-attribute
¶
Reason for quarantine (if quarantined).
trust_score
class-attribute
instance-attribute
¶
Trust score (0.0-1.0). 0.5 is neutral, >0.7 is high trust.
trust_calculation_date
class-attribute
instance-attribute
¶
When trust_score was last calculated.
success_factors
class-attribute
instance-attribute
¶
WHY this pattern succeeds - captured context conditions and factors.
success_factors_updated_at
class-attribute
instance-attribute
¶
When success_factors were last updated.
active
class-attribute
instance-attribute
¶
Whether this pattern is active (False = soft-deleted).
content_hash
class-attribute
instance-attribute
¶
SHA-256 hash of pattern content for cross-name deduplication.
instrument_name
class-attribute
instance-attribute
¶
Backend instrument that produced this pattern (e.g., 'claude_cli').
Functions¶
__post_init__
¶
Clamp scored fields to valid ranges.
Source code in src/marianne/learning/store/models.py
QuarantineStatus
¶
Bases: str, Enum
Status of a pattern in the quarantine lifecycle.
v19 Evolution: Pattern Quarantine & Provenance - patterns transition through these states as they are validated through successful applications:
- PENDING: New patterns start here, awaiting initial validation
- QUARANTINED: Explicitly marked for review due to concerns
- VALIDATED: Proven effective through repeated successful applications
- RETIRED: No longer active, kept for historical reference
Attributes¶
PENDING
class-attribute
instance-attribute
¶
New pattern awaiting validation through application.
QUARANTINED
class-attribute
instance-attribute
¶
Pattern under review - may have caused issues or needs investigation.
VALIDATED
class-attribute
instance-attribute
¶
Pattern has proven effective and is trusted for autonomous application.
RETIRED
class-attribute
instance-attribute
¶
Pattern no longer in active use, retained for history.
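As a str-based Enum, the lifecycle above can be sketched as follows. The member values and the transition map are assumptions for illustration: only the four state names and their roles are documented here.

```python
from enum import Enum

class QuarantineStatus(str, Enum):
    PENDING = "pending"          # new pattern awaiting validation
    QUARANTINED = "quarantined"  # under review
    VALIDATED = "validated"      # proven effective, trusted
    RETIRED = "retired"          # kept for history only

# Plausible forward transitions implied by the lifecycle description (assumption).
TRANSITIONS = {
    QuarantineStatus.PENDING: {QuarantineStatus.VALIDATED, QuarantineStatus.QUARANTINED},
    QuarantineStatus.QUARANTINED: {QuarantineStatus.VALIDATED, QuarantineStatus.RETIRED},
    QuarantineStatus.VALIDATED: {QuarantineStatus.QUARANTINED, QuarantineStatus.RETIRED},
    QuarantineStatus.RETIRED: set(),  # terminal state
}
```

The str mixin means members compare equal to their string values, which is convenient for SQLite storage.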
RateLimitEvent
dataclass
¶
A rate limit event for cross-workspace coordination.
Evolution #8: Tracks rate limit events across workspaces so that parallel jobs can coordinate and avoid hitting the same rate limits.
RateLimitMixin
¶
Mixin providing rate limit event functionality.
This mixin provides methods for recording and querying rate limit events across workspaces. When one job hits a rate limit, it records the event so that other parallel jobs can check and avoid hitting the same limit.
Requires the following from the composed class
- _get_connection() -> context manager yielding sqlite3.Connection
- hash_job(job_id: str) -> str (static method)
Functions¶
record_rate_limit_event
¶
Record a rate limit event for cross-workspace coordination.
When one job hits a rate limit, it records the event so that other parallel jobs can check and avoid hitting the same limit.
Evolution #8: Cross-Workspace Circuit Breaker - enables jobs running in different workspaces to share rate limit awareness.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| error_code | str | The error code (e.g., 'E101', 'E102'). | required |
| duration_seconds | float | Expected rate limit duration in seconds. | required |
| job_id | str | ID of the job that encountered the rate limit. | required |
| model | str \| None | Optional model name that triggered the limit. | None |

Returns:

| Type | Description |
|---|---|
| str | The rate limit event record ID. |
Source code in src/marianne/learning/store/rate_limits.py
is_rate_limited
¶
Check if there's an active rate limit from another job.
Queries the rate_limit_events table to see if any unexpired rate limit events exist that would affect this job.
Evolution #8: Cross-Workspace Circuit Breaker - allows jobs to check if another job has already hit a rate limit.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| error_code | str \| None | Optional error code to filter by. If None, checks any. | None |
| model | str \| None | Optional model to filter by. If None, checks any. | None |

Returns:

| Type | Description |
|---|---|
| tuple[bool, float \| None] | Tuple of (is_limited, seconds_until_expiry). If is_limited is True, seconds_until_expiry indicates when it clears. |
Source code in src/marianne/learning/store/rate_limits.py
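The check reduces to filtering unexpired events and reporting when the latest one clears. An in-memory sketch — the real method queries the rate_limit_events table, and the event shape here (dicts with an epoch-seconds expires_at) is an assumption.

```python
import time

def is_rate_limited(events, now=None, error_code=None, model=None):
    """Return (is_limited, seconds_until_expiry) over in-memory event dicts."""
    now = time.time() if now is None else now
    active = [
        e for e in events
        if e["expires_at"] > now
        and (error_code is None or e["error_code"] == error_code)
        and (model is None or e["model"] == model)
    ]
    if not active:
        return (False, None)
    # Report the latest-clearing limit so callers wait long enough.
    return (True, max(e["expires_at"] for e in active) - now)

events = [{"error_code": "E101", "model": "claude", "expires_at": 100.0}]
limited, wait = is_rate_limited(events, now=50.0)
```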
get_active_rate_limits
¶
Get all active (unexpired) rate limit events.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| model | str \| None | Optional model to filter by. | None |

Returns:

| Type | Description |
|---|---|
| list[RateLimitEvent] | List of RateLimitEvent objects that haven't expired yet. |
Source code in src/marianne/learning/store/rate_limits.py
cleanup_expired_rate_limits
¶
Remove expired rate limit events from the database.
This is a housekeeping method that can be called periodically to prevent the rate_limit_events table from growing unbounded.
Returns:
| Type | Description |
|---|---|
| int | Number of expired records deleted. |
Source code in src/marianne/learning/store/rate_limits.py
SuccessFactors
dataclass
¶
SuccessFactors(validation_types=list(), error_categories=list(), prior_sheet_status=None, time_of_day_bucket=None, retry_iteration=0, escalation_was_pending=False, grounding_confidence=None, occurrence_count=1, success_rate=1.0)
Captures WHY a pattern succeeded - the context conditions and factors.
v22 Evolution: Metacognitive Pattern Reflection - patterns now capture not just WHAT happened but WHY it worked. This enables better pattern selection by understanding causality, not just correlation.
Success factors include:

- Context conditions: validation types, error categories, execution phase
- Timing factors: time of day, retry iteration, prior sheet outcomes
- Prerequisite states: prior sheet completion, escalation status
Attributes¶
validation_types
class-attribute
instance-attribute
¶
Validation types that were active: file, regex, artifact, etc.
error_categories
class-attribute
instance-attribute
¶
Error categories present in the execution: rate_limit, auth, validation, etc.
prior_sheet_status
class-attribute
instance-attribute
¶
Status of the immediately prior sheet: completed, failed, skipped.
time_of_day_bucket
class-attribute
instance-attribute
¶
Time bucket: morning, afternoon, evening, night.
retry_iteration
class-attribute
instance-attribute
¶
Which retry attempt this success occurred on (0 = first attempt).
escalation_was_pending
class-attribute
instance-attribute
¶
Whether an escalation was pending when pattern succeeded.
grounding_confidence
class-attribute
instance-attribute
¶
Grounding confidence score if external validation was present.
occurrence_count
class-attribute
instance-attribute
¶
How often this factor combination has been observed.
success_rate
class-attribute
instance-attribute
¶
Success rate when these factors are present (0.0-1.0).
Functions¶
__post_init__
¶
Clamp success_rate and occurrence_count to valid bounds.
to_dict
¶
Serialize to dictionary for JSON storage.
Source code in src/marianne/learning/store/models.py
from_dict
classmethod
¶
Deserialize from dictionary.
Source code in src/marianne/learning/store/models.py
get_time_bucket
staticmethod
¶
Get time bucket for an hour (0-23).
Source code in src/marianne/learning/store/models.py
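A plausible sketch of the bucketing; the exact hour boundaries are an assumption, since only the four bucket names (morning, afternoon, evening, night) are documented.

```python
def get_time_bucket(hour):
    """Map an hour (0-23) to a time-of-day bucket. Boundaries are assumed."""
    if 6 <= hour < 12:
        return "morning"
    if 12 <= hour < 18:
        return "afternoon"
    if 18 <= hour < 22:
        return "evening"
    return "night"
```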
Functions¶
get_global_store
¶
Get or create the global learning store singleton.
This function provides a convenient singleton accessor for the GlobalLearningStore. It ensures only one store instance exists per database path, avoiding the overhead of creating multiple connections to the same SQLite database.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| db_path | Path \| None | Optional custom database path. If None, uses the default path at ~/.marianne/global-learning.db. | None |

Returns:

| Type | Description |
|---|---|
| GlobalLearningStore | The GlobalLearningStore singleton instance. |
Example:

    store = get_global_store()                         # Uses default path
    store = get_global_store(Path("/custom/path.db"))  # Custom path