Automatically find and apply to jobs that match the user's resume using the resumex.dev API, web search, and browser automation. Use this skill whenever the user asks to: search for jobs, find job matches, auto-apply to jobs, apply for jobs automatically, find jobs based on my resume, or any variant of job hunting, job searching, or job application automation. Fetches the user's full resume data from resumex.dev, extracts skills/experience/preferences, searches for matching jobs on the web, presents an approval list, then automatically fills and submits applications using browser control. Logs all applications to the user's resumex.dev job tracker. Always use this skill when the user mentions "apply to jobs", "job search", "find jobs for me", "auto apply", or "job hunting" — even if they haven't explicitly mentioned resumex.
This skill connects to the user's resumex.dev account, reads their resume data, matches them to relevant jobs via web search, presents an approval list, then automatically applies to approved jobs using browser automation — filling forms, answering screening questions, and submitting applications. All applications are logged to the resumex.dev job tracker.
Architecture: ResumeX stores resume data and the job tracker. OpenClaw's built-in AI does all the thinking and text generation (cover letters, screening answers, scoring). Web search finds jobs. Browser tool fills and submits applications.
`user_preferences.json` remembers extra info between sessions.
No third-party AI API keys required. This skill uses only OpenClaw's built-in LLM for all AI tasks (cover letter drafting, screening question answers, job scoring). The only external API key needed is `RESUMEX_API_KEY`.
| Variable | Required | Description |
|---|---|---|
| `RESUMEX_API_KEY` | ✅ Required | API key from resumex.dev → Dashboard → Resumex API |
| `JOB_SEARCH_LOCATION` | Optional | Override city/country for job search |
| `JOB_TYPE` | Optional | `full-time` \| `part-time` \| `contract` \| `internship` |
| `REMOTE_ONLY` | Optional | `true` \| `false` (default: `false`) |
| `MAX_APPLICATIONS` | Optional | Max jobs to apply to per session (default: 5) |
How to set `RESUMEX_API_KEY` in OpenClaw: set the environment variable `RESUMEX_API_KEY` to the copied key value.

No other keys are needed. There is no Anthropic key, no OpenAI key, and no other third-party service.
Read this before using the skill.
| Data | Sent To | Why |
|---|---|---|
| Resume data (API read) | resumex.dev | To fetch your resume for job matching |
| Application logs (company, role, URL, status) | resumex.dev | To track your job applications |
| Your name, email, phone, LinkedIn | Job application websites | To fill application forms |
| Cover letter (generated text) | Job application websites | Submitted as part of each application |
Data sent to resumex.dev is governed by the resumex.dev Privacy Policy.
| Data | Location | What It Contains |
|---|---|---|
| `data/user_preferences.json` | Skill directory only | Salary expectation, visa status, notice period, address, gender, date of birth, ethnicity, veteran status, disability status, screening question answers |
⚠️ `user_preferences.json` may contain sensitive personal data including date of birth, gender, ethnicity, veteran status, and disability status (only if you choose to save these when prompted during a form fill). This file is stored locally only and is never sent to resumex.dev or any other server. Review and restrict its filesystem permissions if needed:

chmod 600 data/user_preferences.json

To clear all saved preferences at any time:

python3 scripts/manage_preferences.py reset
The following fields are only saved if you explicitly provide them when the agent encounters a form field that requires them. You can decline to answer, skip the field, or delete a saved value at any time:
- `gender` — only for diversity/EEO forms
- `ethnicity` — only for diversity/EEO forms (optional, you may leave blank)
- `veteran_status` — only for U.S. government/contractor compliance forms
- `disability_status` — only for compliance forms (optional)
- `date_of_birth` — only for forms that legally require it

The agent will always tell you which form requires a sensitive field before asking for the value.
The agent NEVER submits an application without your explicit approval. Step 6 of the workflow always presents a formatted approval list and waits for your response before any browser interaction begins. If you are testing, set `MAX_APPLICATIONS=1` in your environment.
1. Fetch resume data from resumex.dev API
2. Load saved user preferences (salary, visa, screening answers)
3. Build a job-match profile (skills, roles, seniority, preferences)
4. Search the web for matching jobs (3–5 query permutations)
5. Score & rank each job against the resume (0–100)
6. ⛔ APPROVAL GATE — Present formatted list → wait for user selection
7. For each approved job:
a. Generate a tailored cover letter (via OpenClaw's built-in AI)
b. Navigate to application page via browser
c. Fill form fields using resume data + preferences
d. If a required field is unknown → ask the user → save to preferences
e. Submit the application
f. Log to resumex.dev job tracker
8. Present final summary with statuses
Use the agent endpoint. All calls require `Authorization: Bearer $RESUMEX_API_KEY`.
# Fetch full resume data (GET /api/v1/agent — the correct endpoint)
curl -s -X GET "https://resumex.dev/api/v1/agent" \
-H "Authorization: Bearer $RESUMEX_API_KEY" \
-H "Content-Type: application/json"
Note: The endpoint is `/api/v1/agent` (NOT `/api/v1/agent/resume`, which is deprecated). The helper scripts handle retries with exponential backoff automatically. Install dependencies first:

pip3 install -r requirements.txt
Or via the helper script:
# Full resume JSON
python3 scripts/fetch_resume.py
# Extract a specific field for form filling
python3 scripts/fetch_resume.py --field email
python3 scripts/fetch_resume.py --json-path profile.phone
Expected response shape:
{
"success": true,
"data": {
"activeResumeId": "...",
"resumes": [{
"id": "...",
"data": {
"profile": {
"fullName": "...", "email": "...", "phone": "...",
"location": "...", "summary": "...",
"linkedin": "...", "github": "...", "website": "..."
},
"skills": [{"category": "...", "skills": ["...", "..."]}],
"experience": [
{
"role": "...", "company": "...", "location": "...",
"startDate": "...", "endDate": "...", "description": "..."
}
],
"education": [{"degree": "...", "institution": "...", "endDate": "...", "score": "..."}],
"projects": [{"name": "...", "description": "...", "tags": ["..."]}],
"achievements": [{"title": "...", "year": "..."}]
}
}]
}
}
Parse the active resume:
workspace = response["data"]
active_resume = next(r for r in workspace["resumes"] if r["id"] == workspace["activeResumeId"])
resume_data = active_resume["data"]
Error handling:
| HTTP Code | Cause | Fix |
|---|---|---|
| `401` | `RESUMEX_API_KEY` is missing or invalid | Go to resumex.dev → Dashboard → Resumex API → generate a new key |
| `404` | Resume not created yet | Go to resumex.dev → create and publish your resume |
| `429` | Rate limited | Wait 10 seconds, retry once |
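The error-handling table can be sketched as a small pure helper like this (the function name and return strings are illustrative, not part of the shipped scripts):

```python
def handle_api_error(status, attempt=0):
    """Map an HTTP status from the agent endpoint to the corrective action in the table above."""
    if status == 401:
        return "regenerate key at resumex.dev -> Dashboard -> Resumex API"
    if status == 404:
        return "create and publish your resume on resumex.dev"
    if status == 429:
        # Rate limited: wait 10 seconds and retry once, then give up.
        return "wait 10s and retry" if attempt == 0 else "give up"
    return f"unhandled HTTP {status}"
```

The caller would sleep for 10 seconds before the single 429 retry.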
Check for previously saved preferences that supplement the resume data:
python3 scripts/manage_preferences.py list
This returns any saved answers like salary expectation, visa status, notice period, etc.
If user_preferences.json doesn't exist yet, that's fine — it will be created when the
user is first asked for missing information.
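A saved preferences file might look like this (field values are purely illustrative):

```json
{
  "salary_expectation": "8-12 LPA",
  "currency": "INR",
  "notice_period": "30 days",
  "visa_status": "No visa required (Indian citizen)",
  "work_authorization": "Authorized to work in India",
  "willing_to_relocate": false,
  "preferred_work_type": "remote",
  "screening_answers": {}
}
```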
Preference fields to look for:
- `salary_expectation` — e.g. "8-12 LPA" or "$80,000-$100,000"
- `currency` — e.g. "INR" or "USD"
- `notice_period` — e.g. "30 days" or "Immediate"
- `visa_status` — e.g. "No visa required (Indian citizen)"
- `work_authorization` — e.g. "Authorized to work in India"
- `willing_to_relocate` — true/false
- `preferred_work_type` — "remote" | "hybrid" | "onsite"
- `screening_answers` — dict of previously answered screening questions

From the resume JSON and user preferences, extract and infer:
| Field | How to Derive |
|---|---|
| Target roles | Latest experience[0].role + adjacent titles (e.g. "Software Engineer" → "Backend Developer", "Full Stack Developer") |
| Key skills | Top 5–8 from flattened skills[].skills arrays + tech stack from experience[].description |
| Seniority | Years of experience calculated from earliest startDate to today |
| Location | profile.location (override with JOB_SEARCH_LOCATION env var if set) |
| Job type | JOB_TYPE env var or preferred_work_type from preferences (default: full-time) |
| Remote | REMOTE_ONLY env var (default: false) |
| Industry | Infer from company names / job descriptions in experience |
Example derived profile:
Roles: Software Engineer, Backend Developer, Full Stack Developer
Skills: Python, Django, React, PostgreSQL, Docker, AWS
Seniority: Mid-level (3 years)
Location: Pune, India
Type: Full-time
Remote: No preference
Salary: 8-12 LPA (from preferences)
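The seniority derivation above (years from the earliest `startDate` to today) could be sketched like this, assuming `startDate` strings shaped like `"2022-06"` or `"2022"` (the function name and bucket thresholds are illustrative):

```python
from datetime import date

def seniority_from_experience(experience, today=None):
    """Years from the earliest startDate to today, bucketed into a seniority label."""
    today = today or date.today()
    starts = []
    for job in experience:
        raw = (job.get("startDate") or "").strip()
        if raw:
            parts = raw.split("-")
            # Treat "YYYY" as January of that year; ignore entries without a start date.
            starts.append(date(int(parts[0]), int(parts[1]) if len(parts) > 1 else 1, 1))
    if not starts:
        return "Unknown"
    years = (today - min(starts)).days / 365.25
    if years < 2:
        label = "Junior"
    elif years < 6:
        label = "Mid-level"
    else:
        label = "Senior"
    return f"{label} ({years:.0f} years)"
```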
Use web_search to find real, current job postings. Run 3–5 targeted searches using different query permutations to maximize coverage.
Query templates:
"{role}" "{top_skill}" jobs "{location}" site:linkedin.com OR site:naukri.com OR site:indeed.com
"{role}" "{top_skill}" "{second_skill}" hiring 2026
"{role}" remote jobs "{top_skill}" "{seniority}"
"{role}" "{top_skill}" jobs "{location}" "apply now" site:wellfound.com OR site:internshala.com
See references/job_boards.md for complete query patterns per board.
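The templates above can be instantiated with a small helper along these lines (a hypothetical sketch, not one of the shipped scripts):

```python
def build_queries(role, skills, location, seniority):
    """Expand the query templates into concrete search strings for web_search."""
    top = skills[0]
    second = skills[1] if len(skills) > 1 else skills[0]
    return [
        f'"{role}" "{top}" jobs "{location}" site:linkedin.com OR site:naukri.com OR site:indeed.com',
        f'"{role}" "{top}" "{second}" hiring 2026',
        f'"{role}" remote jobs "{top}" "{seniority}"',
        f'"{role}" "{top}" jobs "{location}" "apply now" site:wellfound.com OR site:internshala.com',
    ]
```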
For each search result URL, use web_fetch to extract the posting details, including the apply method: `form` | `easy-apply` | `email` | `redirect`.

Aim to collect 10–20 raw job postings before scoring.
Score each job 0–100 against the resume profile:
| Factor | Max Points |
|---|---|
| Skill overlap (required skills matched) | 40 |
| Role title match | 20 |
| Seniority match | 15 |
| Location / remote match | 15 |
| Industry familiarity | 10 |
Formula:
score = (skills_matched / skills_required) * 40
+ role_title_match * 20 # 20 if exact, 10 if adjacent, 0 if unrelated
+ seniority_match * 15 # 15 if exact, 8 if ±1 level, 0 if 2+ off
+ location_match * 15 # 15 if match, 8 if remote, 0 if mismatch
+ industry_match * 10 # 10 if same industry, 5 if adjacent
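As a sketch, the formula maps directly to a small function (the name and argument convention are illustrative; the non-skill factors are passed as their table point values):

```python
def score_job(skills_matched, skills_required, role_pts, seniority_pts, location_pts, industry_pts):
    """Score a job 0-100: skill overlap is proportional (max 40); the other
    factors take their table values (20/10/0, 15/8/0, 15/8/0, 10/5/0)."""
    skill_pts = (skills_matched / skills_required) * 40 if skills_required else 0
    return round(skill_pts + role_pts + seniority_pts + location_pts + industry_pts)
```

For example, a job matching 4 of 5 required skills with an exact title, exact seniority, remote location, and same industry scores `score_job(4, 5, 20, 15, 8, 10)`.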
Present the top 10 matches in a formatted table. The user MUST approve before any applications are submitted. Never auto-apply without explicit approval. Never skip this step.
Format:
🎯 Job Match Results for [Name]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# Score Company Role Location Apply Method
── ───── ─────── ──── ──────── ────────────
1 92 Acme Corp Software Engineer Pune (On-site) 🤖 Auto-apply
2 87 TechStartup Backend Developer Remote 🤖 Auto-apply
3 81 MegaCorp India Full Stack Engineer Mumbai 🤖 Auto-apply
4 76 DevShop Python Developer Pune (Hybrid) 🤖 Auto-apply
5 73 CloudCo API Engineer Remote 🤖 Auto-apply
6 70 DataInc Backend Engineer Bangalore 🔗 Manual (LinkedIn)
7 68 StartupXYZ Software Developer Remote 🤖 Auto-apply
8 65 BigTech Junior SWE Hyderabad 🤖 Auto-apply
9 62 ConsultFirm Technical Consultant Pune 📧 Email apply
10 58 SmallCo Full Stack Developer Remote 🤖 Auto-apply
🤖 = Agent will fill and submit the application automatically
🔗 = LinkedIn — agent will open the page, you submit manually
📧 = Email — agent will draft the email, you review and send
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Which jobs would you like to apply to?
Options: "all", "1,3,5", "1-5", "none", or "1-5 except 3"
Apply method classification:
- 🤖 Auto-apply — Standard form-based application. Agent fills and submits.
- 🔗 Manual (LinkedIn) — LinkedIn Easy Apply. Agent navigates to the page, but the user must submit. (LinkedIn automated submission is disabled by default due to ToS concerns.)
- 📧 Email apply — Agent drafts the application email for user review.
- 🔗 Manual (redirect) — Redirects to an external ATS. Agent navigates; the user may need to complete.

Wait for the user to respond with their selection before proceeding.
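Parsing the user's reply ("all", "1,3,5", "1-5", "none", "1-5 except 3") could look like this hypothetical helper:

```python
def parse_selection(reply, total=10):
    """Turn an approval reply into a sorted list of 1-based job numbers."""
    reply = reply.strip().lower()
    if reply == "none":
        return []
    if reply == "all":
        return list(range(1, total + 1))
    picked, excluded = set(), set()
    part, _, excl = reply.partition("except")
    for text, bucket in ((part, picked), (excl, excluded)):
        for token in text.replace(",", " ").split():
            if "-" in token:
                lo, hi = token.split("-")
                bucket.update(range(int(lo), int(hi) + 1))
            elif token.isdigit():
                bucket.add(int(token))
    return sorted(picked - excluded)
```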
For each job the user approved, execute the following sub-steps:
No external AI API is used. Cover letters are generated by OpenClaw's own LLM.
The draft_cover_letter.py script reads resume data and job details, then outputs a structured
prompt. OpenClaw's agent uses that prompt with its built-in AI to generate the cover letter.
python3 scripts/draft_cover_letter.py \
--resume /tmp/resume.json \
--job_title "Software Engineer" \
--company "Acme Corp" \
--job_description "We are looking for..." \
--output /tmp/cover_letter_acme.txt
The script outputs the generation prompt to stdout. The agent then generates the cover letter with its built-in AI and writes the result to the `--output` path if specified.

Cover letter structure:
Keep it under 200 words. Professional but human tone.
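The generation prompt could be assembled along these lines (a sketch under assumptions; the actual `draft_cover_letter.py` output format is not specified here, and the function name is hypothetical):

```python
def build_cover_letter_prompt(name, role, company, job_description, highlights):
    """Compose a prompt for the built-in LLM, enforcing the under-200-word constraint."""
    return (
        f"Write a cover letter for {name} applying to the {role} role at {company}.\n"
        f"Job description:\n{job_description}\n\n"
        f"Candidate highlights: {', '.join(highlights)}\n"
        "Constraints: under 200 words, professional but human tone, no placeholders."
    )
```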
Use the browser tool to navigate to the job's application URL: