Ralph Wiggum Loop methodology for autonomous iterative development. Use this skill when executing multi-step feature development, running iterative build-test-fix cycles, managing task prioritization during implementation, or when the user says "ralph", "loop", or asks for autonomous/iterative execution of a plan. Provides the structured loop pattern for high-quality agent-driven code delivery.
The Ralph Loop is an autonomous AI coding pattern where the agent works through tasks iteratively in a structured loop. Each iteration: pick one task, implement it, validate with feedback loops, commit, and repeat. Named after Ralph Wiggum — it looks simple, but it ships code.
"Software is clay on the pottery wheel — when something fails, it returns to the loop for refinement."
The agent CHOOSES which task to work on next based on priority, not necessarily the first in the list. The loop runs until all tasks pass or the iteration cap is reached.
┌─────────────────────────────────────────┐
│ RALPH LOOP START │
├─────────────────────────────────────────┤
│ │
│ 1. READ scope (plan/PRD/task list) │
│ 2. READ progress.md (if exists) │
│ 3. CHOOSE highest priority task │
│ 4. SEARCH codebase before implementing │
│ 5. IMPLEMENT one thing only │
│ 6. RUN feedback loops (build/test) │
│ 7. FIX if feedback loops fail │
│ 8. COMMIT with descriptive message │
│ 9. UPDATE progress tracking │
│ 10. LOOP or STOP │
│ │
└─────────────────────────────────────────┘
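The ten steps above can be sketched as a driver script. This is a minimal sketch, assuming a hypothetical `agent run` command and a `.ralph_done` sentinel file as the stop signal; swap in your own tooling.

```shell
# Ralph loop driver (sketch). The agent command, prompt path, and the
# .ralph_done sentinel are assumptions, not a real CLI.
MAX_ITERATIONS=10
i=0
while [ "$i" -lt "$MAX_ITERATIONS" ]; do
  i=$((i + 1))
  echo "--- iteration $i ---"
  # Steps 1-9: the agent reads scope/progress, picks ONE task,
  # implements, validates, commits, updates progress. For example:
  #   agent run --prompt prompts/ralph.md
  # Step 10: stop early when the plan is complete (sentinel file here).
  [ -e .ralph_done ] && break
done
echo "stopped after $i iterations"
```

The cap is the outer safety net; the sentinel is the happy-path exit. Both belong in any AFK run.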
The most critical constraint. Each iteration implements ONE focused change.
Why: Context windows degrade with size ("context rot"). Smaller focused tasks keep the agent sharp.
NEVER assume something doesn't exist. Always search the codebase before implementing.
- `rg` / `grep` for existing implementations

Why: Non-deterministic behavior can cause duplicate implementations. Searching is cheap; rewriting is expensive.
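A minimal search-first guard, sketched in shell. The `demo/Sources` layout and the `SyncService` symbol are invented for this example:

```shell
# Search-first guard: look for an existing implementation before writing one.
# demo/Sources and SyncService are hypothetical.
mkdir -p demo/Sources
printf 'protocol SyncService {}\n' > demo/Sources/SyncService.swift

if grep -rq "SyncService" demo/Sources; then
  verdict="exists: extend it, do not duplicate it"
else
  verdict="absent: safe to implement"
fi
echo "$verdict"
```

In a real loop, the equivalent `rg "SyncService"` check runs against the actual source tree before any file is created.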
Before committing, ALL feedback loops must pass:
| Feedback Loop | What It Catches | Priority |
|---|---|---|
| Build/Compile | Syntax errors, type mismatches | MUST pass |
| Tests | Broken logic, regressions | MUST pass |
| Linting | Code style, potential bugs | SHOULD pass |
| Type checking | Type safety violations | MUST pass |
DO NOT commit if any mandatory feedback loop fails. Fix issues first, then commit.
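The commit gate can be sketched as a short script. The `build_ok`/`tests_ok` functions below are stand-ins for real commands (e.g. `xcodebuild`), not actual checks:

```shell
# Commit gate sketch: mandatory feedback loops gate the commit.
# true/false stand in for real build and test commands.
build_ok() { true; }    # stand-in for: build/compile (passing here)
tests_ok() { false; }   # stand-in for: test suite (failing in this demo)

if build_ok && tests_ok; then
  status="ok to commit"
else
  status="blocked: fix feedback loops first"
fi
echo "$status"
```

Linting can hang off the same gate as a non-blocking warning, matching its SHOULD-pass priority in the table.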
"Great programmers don't trust their own code. They build automations and checks to verify what they ship."
For Tailnote (Xcode/Swift):
- `xcodebuild` must succeed

Between iterations, maintain a progress file so the next cycle has full context.
What to track:
Keep entries concise. Sacrifice grammar for brevity — this is for the agent, not humans.
Cleanup: Progress files are session-specific. Delete after the sprint is complete.
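One way to keep entries terse, sketched in shell. The file name, iteration number, and field layout are conventions, not requirements:

```shell
# Append one terse progress entry per iteration; agent-facing, not prose.
{
  echo "## iter 3"
  echo "done: sftp list dir | next: file download | blocker: none"
} >> progress.md
tail -n 2 progress.md
```

Appending (rather than rewriting) keeps the file a cheap chronological log the next iteration can skim.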
Without guidance, agents pick easy wins. Force the hard stuff first:
| Priority | Task Type | Why |
|---|---|---|
| 1 (HIGH) | Architectural decisions & core abstractions | Cascades through entire codebase |
| 2 (HIGH) | Integration points between modules | Reveals incompatibilities early |
| 3 (HIGH) | Unknown unknowns / spike work | Better to fail fast than fail late |
| 4 (MED) | Standard features & implementation | Solid foundation makes these easy |
| 5 (LOW) | Polish, cleanup, quick wins | Can be done anytime |
For Tailnote: SSH/SFTP integration (risky) before UI polish (easy). Core data flow before edge case handling.
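Picking the riskiest work first can be sketched as sorting a numbered task list; the tasks below are invented examples matching the table's priority tiers:

```shell
# Lower number = higher priority = riskier work done first.
cat > tasks.txt <<'EOF'
4 add settings screen polish
1 design SSH connection abstraction
3 spike: SFTP resume semantics
EOF
next_task=$(sort -n tasks.txt | head -n 1)
echo "next: $next_task"
```

The point is not the mechanism but the ordering: the agent should never be free to grab the priority-4 item while a priority-1 item is open.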
Instructions compete with existing code patterns. The agent sees two truth sources: your prompt vs. thousands of lines of existing code. The codebase typically prevails.
Implications:
When something fails, DON'T just "try harder." Ask:
"What capability is missing, and how do we make it legible and enforceable?"
HITL (human-in-the-loop): run once, watch, intervene. Best for:
AFK (away-from-keyboard): run in a loop with an iteration cap. Best for:
Progression: Always start HITL → build confidence → transition to AFK.
Iteration Caps: Always limit iterations (5-10 for small tasks, 30-50 for larger ones). Never run infinite loops with probabilistic systems.
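A cap-sizing sketch; the size categories and numbers simply mirror the guidance above:

```shell
# Size the iteration cap to the task; never leave it unbounded.
task_size="large"
case "$task_size" in
  small) cap=10 ;;   # 5-10 for small tasks
  large) cap=50 ;;   # 30-50 for larger ones
  *)     cap=20 ;;   # fallback for unclassified work
esac
echo "iteration cap: $cap"
```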
Read plan → Pick task → Implement → Test → Commit → Repeat
Read coverage report → Find uncovered paths → Write tests →
Run coverage → Update report → Repeat until target reached
Scan for code smells → Fix ONE issue → Verify build →
Document change → Repeat
Run linter → Fix ONE error → Re-run linter → Verify → Repeat
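The linting workflow above, sketched with a stand-in linter that reports a shrinking error count (no real linter is invoked):

```shell
# Fix ONE error per pass, re-run, stop when clean or the cap is hit.
errors=3   # stand-in for: initial linter error count
cap=10
passes=0
while [ "$errors" -gt 0 ] && [ "$passes" -lt "$cap" ]; do
  passes=$((passes + 1))
  errors=$((errors - 1))   # stand-in for: fix one error, re-run linter
done
echo "clean after $passes passes"
```

The same skeleton fits the coverage and refactoring loops: a measurable signal, one fix per pass, a hard cap.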
Agent writes code → Agent reviews own changes →
Agent requests peer agent review → Agent responds to feedback →
Iterate until all reviewers satisfied
When implementing a new Tailnote feature:
- Read relevant docs (docs/, README.md, existing module code)
- Read progress.md for prior context (if exists)

This is a production iOS app. Quality expectations:
- Protocol-driven services for testability
- Typed errors per domain (SSHError, FileError, etc.)
- Parse SSH/SFTP responses into strict Swift types at boundaries
- Views are pure renderers — no business logic
- Every public API has a clear contract
- Mock implementations exist for every service protocol
| Anti-Pattern | Why It Fails | Better Approach |
|---|---|---|
| Implementing multiple features per loop | Context rot, lower quality | One thing per loop |
| Skipping feedback loops to go faster | Broken code compounds | Always validate before commit |
| Starting with easy tasks | Hard tasks get harder over time | Risky tasks first |
| Not tracking progress | Agent wastes tokens re-exploring | Maintain progress.md |
| Assuming code doesn't exist | Creates duplicates | Always search first |
| Running infinite AFK loops | Probabilistic systems drift | Set iteration caps |
| Placeholder/minimal implementations | Tech debt compounds instantly | Full implementations or nothing |
| Ignoring existing patterns | Fighting the codebase | Read and follow existing conventions |
When you wake up to a broken codebase:
- `git log` to see what happened, `git diff` to see the damage
- Can it be rolled back with `git reset`?

"Any problem created by AI can be resolved through a different series of prompts."
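The inspect-then-rollback steps, exercised in a throwaway repo (repo name, identity, and commit messages are invented for the demo):

```shell
# Inspect what the loop did, then roll back to the last good commit.
git init -q demo-repo && cd demo-repo
git -c user.email=r@w -c user.name=ralph commit -q --allow-empty -m "good state"
git -c user.email=r@w -c user.name=ralph commit -q --allow-empty -m "broken AFK run"
git log --oneline            # see what happened
git reset -q --hard HEAD~1   # roll back the damage
git log --oneline -n 1      # back at the good state
```

`git reset --hard` is the blunt instrument; when only part of the run is bad, prefer fixing forward with a new, more specific prompt.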
When running a Ralph loop, the agent prompt should include:
Context files: [plan file] [progress file] [relevant specs]
1. Read the plan and progress files
2. Decide which task has the HIGHEST priority — not necessarily the first
3. SEARCH the codebase before implementing anything
4. Implement ONE task fully (no placeholders, no shortcuts)
5. Run ALL feedback loops (build, test, lint)
6. Fix any failures before committing
7. Commit with a descriptive message
8. Update progress tracking
9. If all tasks complete, STOP. Otherwise, continue to next task.
DO NOT implement placeholder or minimal implementations.
DO NOT assume something doesn't exist without searching.
DO NOT commit if feedback loops fail.
DO NOT work on more than one task per iteration.