Capture failed approaches in the QoreLogic Shadow Genome to prevent repeat failures and improve governance learning loops.
Skill Name: qore-meta-track-shadow
Version: 1.0
Purpose: Implement the QoreLogic Shadow Genome for meta-governance: learn from failures
/qore-meta-track-shadow <context> <attempted_solution> <failure_mode>
Or invoke in conversation:
"Let's track this failed approach in the Shadow Genome..."
Implements QoreLogic's Shadow Genome principle: treating failures as data rather than mistakes. Records failed approaches with context, failure mode analysis, and lessons learned to prevent repetition.
When this skill is invoked, you should:
Collect comprehensive details about the failed approach:
Required Information: the context (what was being built), the attempted solution, and the failure mode (from the taxonomy below).
Optional Information: impact, root-cause analysis, the correct approach that worked instead, related entries, and preventability.
Use QoreLogic failure taxonomy:
| Failure Mode | Description | Example |
|---|---|---|
| COMPLEXITY_VIOLATION | Violated KISS principle | Added ORM when sqlite3 sufficed |
| PREMATURE_OPTIMIZATION | Optimized without data | Implemented caching before bottleneck proven |
| HALLUCINATION | Claimed capability not validated | "Z3 provides 100% coverage" (unproven) |
| SECURITY_REGRESSION | Introduced vulnerability | Broke keyfile integrity validation |
| SCOPE_CREEP | Added unplanned features | Built features for hypothetical use cases |
| TECHNICAL_DEBT | Quick fix created larger problem | Skipped tests to meet deadline |
| DEPENDENCY_BLOAT | Added unnecessary dependencies | 100MB library for one function |
| ARCHITECTURE_MISMATCH | Solution incompatible with design | Synchronous code in async system |
| VALIDATION_GAP | Insufficient testing/verification | Deployed without integration tests |
| DOCUMENTATION_DRIFT | Docs diverged from reality | Spec claimed features not implemented |
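The taxonomy above can be captured as a simple lookup table so entries are validated against it. A minimal sketch (the dict name and validator function are illustrative, not part of the skill):

```python
# Failure taxonomy mirrored from the table above; descriptions are the
# short-form ones, not full definitions.
FAILURE_MODES = {
    "COMPLEXITY_VIOLATION": "Violated KISS principle",
    "PREMATURE_OPTIMIZATION": "Optimized without data",
    "HALLUCINATION": "Claimed capability not validated",
    "SECURITY_REGRESSION": "Introduced vulnerability",
    "SCOPE_CREEP": "Added unplanned features",
    "TECHNICAL_DEBT": "Quick fix created larger problem",
    "DEPENDENCY_BLOAT": "Added unnecessary dependencies",
    "ARCHITECTURE_MISMATCH": "Solution incompatible with design",
    "VALIDATION_GAP": "Insufficient testing/verification",
    "DOCUMENTATION_DRIFT": "Docs diverged from reality",
}

def validate_failure_mode(mode: str) -> str:
    """Reject failure modes that are not in the taxonomy."""
    if mode not in FAILURE_MODES:
        raise ValueError(f"Unknown failure_mode: {mode!r}")
    return mode
```

Keeping the taxonomy closed like this is what makes the repeated-failure analysis later in this skill meaningful: free-form mode strings would never aggregate.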
Formulate actionable insight:
Bad Lesson (too vague):
"Be more careful with dependencies"
Good Lesson (actionable):
"Before adding dependencies >10MB, require: (1) measured bottleneck, (2) no stdlib alternative, (3) usage in 3+ places"
Document what worked instead (if known):
Append entry to docs/SHADOW_GENOME.md:
- id: "SG-{sequential_number}"
timestamp: "{ISO 8601 timestamp}"
context: "{What we were building}"
attempted_solution: "{What we tried}"
failure_mode: "{From taxonomy above}"
why_failed: "{Root cause analysis}"
impact: "{Time lost, technical debt created, etc.}"
lesson_learned: "{Actionable principle}"
correct_approach: "{What worked instead}"
related_entries: ["{Links to similar failures if applicable}"]
preventability: "{Could this have been caught earlier? How?}"
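The append step could be sketched roughly as follows, assuming the YAML-list entry format shown above. Function names, and the fact that only the required fields are written, are illustrative choices, not the skill's actual implementation:

```python
import datetime

def next_entry_id(genome_text: str) -> str:
    """Derive the next sequential SG-### id from existing entries."""
    ids = [int(line.split("SG-")[1].strip('"'))
           for line in genome_text.splitlines()
           if line.strip().startswith('- id: "SG-')]
    return f"SG-{max(ids, default=0) + 1:03d}"

def append_entry(genome_text: str, context: str, attempted: str,
                 mode: str, why: str, lesson: str) -> str:
    """Return the genome text with a new entry appended (required fields only)."""
    ts = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    entry = (
        f'- id: "{next_entry_id(genome_text)}"\n'
        f'  timestamp: "{ts}"\n'
        f'  context: "{context}"\n'
        f'  attempted_solution: "{attempted}"\n'
        f'  failure_mode: "{mode}"\n'
        f'  why_failed: "{why}"\n'
        f'  lesson_learned: "{lesson}"\n'
    )
    return genome_text.rstrip("\n") + "\n\n" + entry
```

In practice the caller would read docs/SHADOW_GENOME.md, pass its text through `append_entry`, and write the result back.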
After adding the entry, scan the genome for repeated failure modes.
If 3+ entries share the same failure_mode, propose a systemic preventive action (a rule, checklist item, or CI check).
Example:
"We've added 3 DEPENDENCY_BLOAT entries. Let's add a CI check that fails on dependencies >50MB without explicit justification."
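The repeated-mode check can be sketched as a plain scan of the genome file's `failure_mode:` lines (a sketch only; a real implementation might parse the YAML properly):

```python
import collections

def repeated_modes(genome_text: str, threshold: int = 3) -> list[str]:
    """Return failure modes appearing at least `threshold` times in the genome."""
    counts = collections.Counter(
        line.split(":", 1)[1].strip().strip('"')
        for line in genome_text.splitlines()
        if line.strip().startswith("failure_mode:")
    )
    return [mode for mode, n in counts.items() if n >= threshold]
```

Any mode this returns is a candidate for a preventive action like the DEPENDENCY_BLOAT CI check suggested above.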
Report what was learned from the failure:
## Shadow Genome Entry: SG-{number}
**Failure Mode:** {mode}
**Impact:** {impact}
**What We Tried:**
{attempted_solution}
**Why It Failed:**
{why_failed}
**Lesson Learned:**
{lesson_learned}
**Moving Forward:**
{correct_approach}
**Prevention:**
{How to avoid this in the future}
## Shadow Genome Entry: SG-001
- id: "SG-001"
timestamp: "2025-12-24T15:30:00Z"
context: "Week 2 - Implementing database transaction safety"
attempted_solution: "Use SQLAlchemy ORM for transaction management"
failure_mode: "COMPLEXITY_VIOLATION"
why_failed: "Added 5 new dependencies (50MB), introduced complexity in simple use case. Standard sqlite3 library has built-in transaction support."
impact: "2 hours evaluating, 3 hours testing, 15MB production binary increase"
lesson_learned: "Check stdlib first before adding dependencies. SQLite transactions are simple: conn.execute('BEGIN'), conn.commit(), conn.rollback()"
correct_approach: "Manual transaction wrapper using stdlib sqlite3 - 10 lines of code, zero dependencies"
preventability: "Could have been caught in architecture review with KISS checklist"
Preventive Action Created:
Added rule: "Before adding ORM dependency, require proof that raw SQL is insufficient"
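The stdlib transaction wrapper that SG-001 settled on might look roughly like this (a sketch of the idea, not the project's actual code):

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def transaction(conn: sqlite3.Connection):
    """Commit on success, roll back on any exception. ~10 lines, zero deps."""
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise

# Usage: the insert below is committed atomically.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
with transaction(conn):
    conn.execute("INSERT INTO t VALUES (1)")
```

With sqlite3's default isolation level, the first write statement opens an implicit transaction, so the wrapper only has to decide between commit and rollback.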
## Shadow Genome Entry: SG-002
- id: "SG-002"
timestamp: "2025-12-26T10:00:00Z"
context: "Week 3 - Validation dataset construction"
attempted_solution: "Implement distributed processing with Celery for dataset generation"
failure_mode: "PREMATURE_OPTIMIZATION"
why_failed: "Dataset is 1000 examples, processes in 10 minutes single-threaded. Celery adds Redis dependency, deployment complexity. No measured bottleneck."
impact: "1 day implementing Celery, 4 hours debugging Redis, added 200MB+ dependencies"
lesson_learned: "Measure first, optimize second. 10 minutes is acceptable for weekly task. Only parallelize if >1 hour or run frequently."
correct_approach: "Simple for-loop with tqdm progress bar. Fast enough, zero complexity."
related_entries: ["SG-001"]
preventability: "Pre-mortem would have identified: 'What if generation is fast enough without optimization?'"
Preventive Action Created:
Added rule: "Performance optimizations require benchmark proving >30min latency or >10 requests/sec load"
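SG-002's replacement could be as simple as the loop below (the payload and function name are placeholders; tqdm is treated as optional since it is a third-party package):

```python
try:
    from tqdm import tqdm  # optional progress bar
except ImportError:
    tqdm = lambda it, **kw: it  # plain iteration if tqdm is absent

def generate_dataset(n_examples: int = 1000) -> list[dict]:
    """Single-threaded generation: fast enough at this scale (see SG-002)."""
    examples = []
    for i in tqdm(range(n_examples), desc="generating"):
        examples.append({"id": i, "text": f"example {i}"})  # placeholder payload
    return examples
```

Ten minutes single-threaded for 1000 examples costs nothing in code; Celery plus Redis cost a day plus 200MB of dependencies.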
## Shadow Genome Entry: SG-003
- id: "SG-003"
timestamp: "2025-12-28T14:00:00Z"
context: "Week 4 - Tier 3 formal verification design"
attempted_solution: "Document that PyVeritas provides 100% verification coverage"
failure_mode: "HALLUCINATION"
why_failed: "The PyVeritas research paper states ~80% accuracy. We claimed 100% without validation. Would have misled users about system capabilities."
impact: "Documentation would have been dishonest, violating Divergence Doctrine"
lesson_learned: "ALWAYS cite exact numbers from the source. Never round up. 80% ≠ 100%. Honest limitations build trust."
correct_approach: "Document 'PyVeritas provides ~80% verification accuracy (per original research), complemented by Z3 for critical paths'"
preventability: "Sentinel validation caught this before publication. Need to enforce citation accuracy checks."
Preventive Action Created:
Added rule: "All quantitative claims must have citation with exact number. No rounding 80→100%."
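A citation-accuracy check like the one SG-003 calls for might start as a crude lint (entirely hypothetical; real enforcement would need a richer notion of what counts as a citation):

```python
import re

def uncited_percent_claims(doc: str) -> list[str]:
    """Flag lines making a percentage claim with no citation marker nearby.

    Treats "(per ...)" and bracketed references as citations; anything else
    with a bare percentage is flagged for review.
    """
    flagged = []
    for line in doc.splitlines():
        if re.search(r"\b\d{1,3}%", line) and "(per " not in line and "[" not in line:
            flagged.append(line.strip())
    return flagged
```

This would not prove a number is correct, only force every quantitative claim past a human reviewer with its source attached.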
The Shadow Genome lives at: docs/SHADOW_GENOME.md
# Q-DNA Development Shadow Genome
# Failed approaches archived for learning