Fully autonomous job application pipeline. Discovers new jobs, ranks them, and applies to the top matches. Chains /find-jobs → /mass-apply with jobs.db as the bridge. This is the skill for cron automation. Triggers on: 'auto apply', 'start applying', 'run the pipeline', 'apply to new jobs'.
Read this file fresh every time. Also read /find-jobs/SKILL.md and /mass-apply/SKILL.md before executing. Do not rely on memory.
Fully autonomous pipeline: discover → rank → apply → track.
Prerequisites:
- Chrome running with remote debugging: google-chrome-stable --remote-debugging-port=9223 --user-data-dir=$HOME/.config/google-chrome-debug &
- agent-browser CLI on PATH
- jobs.db initialized: python3 ~/work/personal/jobs/.claude/scripts/jobs_db.py init
- form_profile.md exists
- .env exists with credentials

You MUST send progress updates throughout execution. Do NOT wait until the end to report. The user needs to see what's happening in real time.
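The prerequisites above can be verified before the pipeline starts. A minimal sketch (paths and the CDP port are taken from this skill; the blocker messages are illustrative):

```shell
# Preflight sketch: returns non-zero on the first missing prerequisite
# so the caller can emit a 🚨 BLOCKER status and stop.
preflight() {
  dir="$1"  # directory holding form_profile.md and .env
  command -v agent-browser >/dev/null 2>&1 || { echo "agent-browser not on PATH"; return 1; }
  [ -f "$dir/form_profile.md" ] || { echo "form_profile.md missing"; return 1; }
  [ -f "$dir/.env" ] || { echo ".env missing"; return 1; }
  # Chrome's DevTools endpoint answers on the debug port when it is up.
  curl -sf http://localhost:9223/json/version >/dev/null || { echo "Chrome CDP not reachable on 9223"; return 1; }
  echo "preflight ok"
}
```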
How: After completing each major step, IMMEDIATELY output a status message before continuing to the next step. Do not batch all output to the end. Each message below should be a separate response/output:
Pipeline start (before doing anything else):
🚀 Auto-apply starting — {date} {time}
Checking Chrome connection and sources...
After discovery (after find-jobs completes):
📋 Discovery complete — {N} new jobs found
Top picks:
1. {Company} — {Role} (score: {N})
2. {Company} — {Role} (score: {N})
...
Starting applications...
Before each application:
📝 [{N}/{total}] Applying: {Company} — {Role} ({platform})
After each application:
✅ Submitted: {Company} — {Role}
or
⏭️ Skipped: {Company} — {reason}
Any blocker (IMMEDIATELY, don't continue silently):
🚨 BLOCKER: {description}
Action needed: {what the user should do}
Waiting for resolution...
Pipeline complete:
📊 Auto-apply complete — {date} {time}
Applied: {N} | Skipped: {N} | Failed: {N}
Companies: {list}
Next run: {time}
When running in an isolated or cron session, you do NOT have implicit Discord access. You MUST call:
sessions_send(
sessionKey="agent:main:discord:channel:1485007288799596794",
message="<your status update>"
)
Call after EVERY step above. Each status = one sessions_send call. Do not batch.
If sessions_send errors, STOP the pipeline. Do not continue silently.
For blockers needing manual intervention, include <@461881815959994368> in the message to ping Simon.
If already running inside the Discord channel session (you have deliveryContext with channel: "discord"), your text output is automatically delivered. Just output the status messages as normal text.
Read and execute /find-jobs skill:
Read ~/.openclaw/workspace/.claude/skills/find-jobs/SKILL.md and ALL source guides in sources/
Execute it — scrape all sources, insert into jobs.db
Connection: agent-browser --cdp 9223
Recompute scores (recency decays daily):
python3 ~/work/personal/jobs/.claude/scripts/jobs_db.py rescore
Cap at 5 jobs per run to control token costs (~100-150k tokens per job across tailoring + form-filling). Two runs per day = 10 jobs/day max.
python3 ~/work/personal/jobs/.claude/scripts/jobs_db.py top --limit 5
Extract the URLs from the output. If 0 jobs found, report "No new jobs to apply to" and stop.
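A hypothetical sketch of the URL extraction, assuming each listing line of the `top --limit 5` output embeds one plain http(s) URL (verify against the real jobs_db.py output format first):

```shell
# extract_urls reads the `top` listing on stdin and prints one URL per
# line; `|| true` keeps the pipeline alive when no URLs are found.
extract_urls() {
  grep -oE 'https?://[^[:space:]]+' || true
}
```

Usage: `python3 ~/work/personal/jobs/.claude/scripts/jobs_db.py top --limit 5 | extract_urls`. An empty result means "No new jobs to apply to".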
Before sending jobs to mass-apply, fetch each JD and check for sponsorship blockers. Rejecting early saves the tokens that tailoring would otherwise burn.
For each of the top 5 job URLs, fetch the JD. If it contains a sponsorship blocker:
python3 ~/work/personal/jobs/.claude/scripts/jobs_db.py mark --url "{url}" --status rejected
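The blocker check above can be sketched as a phrase match on the JD text. The phrase list is illustrative, not exhaustive; tune it against real postings before trusting it:

```shell
# has_sponsorship_blocker reads JD text on stdin and succeeds (exit 0)
# when a known no-sponsorship phrase appears, case-insensitively.
has_sponsorship_blocker() {
  grep -qiE 'no (visa )?sponsorship|without sponsorship|unable to sponsor|cannot sponsor'
}
```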
Remove it from the batch and report: "Rejected: {Company} — sponsorship blocker in JD"

Check today's LinkedIn Easy Apply count:
python3 ~/work/personal/jobs/.claude/scripts/jobs_db.py list --source linkedin --status applied --days 1 --limit 100
Count results. If >= 20 LinkedIn Easy Apply today, exclude LinkedIn Easy Apply URLs from the batch.
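A sketch of the daily-cap filter, assuming the batch is a list of URLs, one per line, and the count comes from piping the `list` command above through `wc -l`:

```shell
# filter_batch reads batch URLs on stdin and drops LinkedIn URLs once
# today's Easy Apply count (first argument) reaches the cap of 20.
filter_batch() {
  if [ "$1" -ge 20 ]; then
    grep -v 'linkedin\.com' || true
  else
    cat
  fi
}
```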
Read and execute /mass-apply skill with the top job URLs:
Read ~/.openclaw/workspace/.claude/skills/mass-apply/SKILL.md and ALL platform guides in platforms/
Execute it with the filtered top-job URLs from the steps above
After each job in the mass-apply batch:
If submitted:
python3 ~/work/personal/jobs/.claude/scripts/jobs_db.py mark --url "{url}" --status applied
If skipped (sponsorship blocker, CAPTCHA timeout, form error):
python3 ~/work/personal/jobs/.claude/scripts/jobs_db.py mark --url "{url}" --status skipped
If rejected (auto-reject by validation):
python3 ~/work/personal/jobs/.claude/scripts/jobs_db.py mark --url "{url}" --status rejected
If posting expired (404 / "no longer available"):
python3 ~/work/personal/jobs/.claude/scripts/jobs_db.py mark --url "{url}" --status closed
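The outcome-to-status mapping above can be centralized in one place. A sketch (the outcome names on the left are this sketch's assumption, not mass-apply's actual vocabulary):

```shell
# Maps a mass-apply outcome to the jobs.db status used in the
# mark commands above; unknown outcomes fall through to "unknown".
status_for() {
  case "$1" in
    submitted)     echo applied ;;
    skipped)       echo skipped ;;
    auto_rejected) echo rejected ;;
    expired)       echo closed ;;
    *)             echo unknown ;;
  esac
}
# Usage: python3 ~/work/personal/jobs/.claude/scripts/jobs_db.py \
#   mark --url "$url" --status "$(status_for "$outcome")"
```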
python3 ~/work/personal/jobs/.claude/scripts/jobs_db.py stats
Report to user:
AUTO-APPLY RUN — {date} {time}
Discovered: {N} new jobs across {sources}
Applied: {N} ({list companies})
Skipped: {N} ({reasons})
Rejected: {N} ({reasons})
DB total: {N} jobs | Applied: {N} | New: {N}
Next run: {next_cron_time}
NEVER run agent-browser close or any command that kills the Chrome process. Simon's Chrome has logged-in sessions across many sites; killing it destroys all state. Only close individual tabs after successful submission.

Schedule: runs at 8am and 6pm Pacific daily. Configured in ~/.openclaw/cron/jobs.json.
Between runs, jobs accumulate in the database from all sources. Each run processes the freshest, highest-scoring matches.