Research North American mountain peaks and generate comprehensive route beta reports
Research mountain peaks across North America and generate comprehensive route beta reports combining data from multiple sources including PeakBagger, SummitPost, WTA, AllTrails, weather forecasts, avalanche conditions, and trip reports.
Data Sources: This skill aggregates information from specialized mountaineering websites (PeakBagger, SummitPost, Washington Trails Association, AllTrails, The Mountaineers, and regional avalanche centers). The quality of the generated report depends on the availability of information on these sources. If your target peak lacks coverage on these websites, the report may contain limited details. The skill works best for well-documented peaks in North America.
Use this skill when the user requests:
Examples:
Research Progress:
Goal: Identify and validate the specific peak to research.
Extract Peak Name from user message
Search PeakBagger using peakbagger-cli:
uvx --from git+https://github.com/dreamiurg/[email protected] peakbagger peak search "{peak_name}" --format json
Handle Multiple Matches:
If multiple peaks found: Use AskUserQuestion to present options
If single match found: Confirm with user
If no matches found:
Extract Peak ID:
Found in the `peak_id` field of the search results.

Goal: Get detailed peak information and coordinates needed for location-based data gathering.
This phase must complete before Phase 3, as coordinates are required for weather, daylight, and avalanche data.
Retrieve detailed peak information using the peak ID from Phase 1:
uvx --from git+https://github.com/dreamiurg/[email protected] peakbagger peak show {peak_id} --format json
This returns structured JSON with:
Error Handling:
Once coordinates are obtained from this step, immediately proceed to Phase 3.
Goal: Gather comprehensive route information from all available sources.
Execution Strategy: Run Python script for deterministic API data + dispatch specialized agents in parallel for web research. This hybrid approach minimizes token usage while maximizing parallelism.
Run the conditions fetcher script to gather all API-based data:
cd "{repo_root}/skills/route-researcher/tools"
uv run python fetch_conditions.py \
--coordinates "{latitude},{longitude}" \
--elevation {elevation_m} \
--peak-name "{peak_name}" \
--peak-id {peak_id}
This returns JSON with:
Run this in parallel with Step 3B — include both the Bash command for fetch_conditions.py and all 3 Task calls in the same response turn to maximize parallelism.
Dispatch 3 Researcher agents in a single message (all Task calls together). Each agent researches assigned sources and fetches trip report content directly.
Agent 1: PeakBagger + SummitPost
Task(
subagent_type="general-purpose",
model="sonnet",
prompt="""You are a route researcher gathering mountaineering data for {peak_name}.
## Your Assignment
Research from these sources: PeakBagger, SummitPost
## PeakBagger Research
1. Search: "{peak_name} site:peakbagger.com"
2. Extract route descriptions from peak page
3. List recent ascents with trip reports:
```bash
uvx --from git+https://github.com/dreamiurg/[email protected] peakbagger peak ascents {peak_id} --format json --with-tr --limit 20
```
4. Identify trip reports with content (word_count > 0)
5. Fetch content for up to 5 recent trip reports using:
```bash
uvx --from git+https://github.com/dreamiurg/[email protected] peakbagger ascent show {ascent_id} --format json
```

## SummitPost Research
1. Search: "{peak_name} site:summitpost.org"
2. Use WebFetch to extract: route name, difficulty, approach, description, hazards
3. If WebFetch fails, use:
```bash
uv run python {repo_root}/skills/route-researcher/tools/cloudscrape.py "{url}"
```

For each report fetched, extract: date, author, route conditions, gear mentioned, hazards.

## Output Format (return EXACTLY this JSON)
```json
{
"sources": ["PeakBagger", "SummitPost"],
"route_info": [
{"source": "...", "name": "...", "difficulty": "...", "description": "...", "hazards": [...]}
],
"trip_reports": [
{"source": "...", "date": "...", "author": "...", "url": "...", "summary": "...", "conditions": "...", "has_gpx": false}
],
"gaps": ["what couldn't be fetched and why"]
}
```"""
)
Agent 2: WTA + Mountaineers
Task(
subagent_type="general-purpose",
model="sonnet",
prompt="""You are a route researcher gathering mountaineering data for {peak_name}.
## Your Assignment
Research from these sources: WTA, Mountaineers.org
## WTA Research
1. Search: "{peak_name} site:wta.org"
2. Find the hike page and extract: trail name, difficulty, distance, elevation gain, hazards
3. Get trip reports from AJAX endpoint: {wta_url}/@@related_tripreport_listing
4. Fetch content for up to 5 recent trip reports using:
```bash
uv run python {repo_root}/skills/route-researcher/tools/cloudscrape.py "{trip_report_url}"
```

If WebFetch fails for any page, use cloudscrape.py as shown above.

## Output Format (return EXACTLY this JSON)
```json
{
"sources": ["WTA", "Mountaineers"],
"route_info": [
{"source": "...", "name": "...", "difficulty": "...", "description": "...", "hazards": [...]}
],
"trip_reports": [
{"source": "...", "date": "...", "author": "...", "url": "...", "summary": "...", "conditions": "...", "has_gpx": false}
],
"gaps": ["what couldn't be fetched and why"]
}
```"""
)
Agent 3: AllTrails
Task(
subagent_type="general-purpose",
model="sonnet",
prompt="""You are a route researcher gathering mountaineering data for {peak_name}.
## Your Assignment
Research from AllTrails
## AllTrails Research
1. Search: "{peak_name} site:alltrails.com"
2. Use WebFetch to extract: trail name, difficulty, distance, elevation gain, route type, best season, hazards
3. If WebFetch fails, use:
```bash
uv run python {repo_root}/skills/route-researcher/tools/cloudscrape.py "{url}"
```

## Output Format (return EXACTLY this JSON)
```json
{
"sources": ["AllTrails"],
"route_info": [
{"source": "...", "name": "...", "difficulty": "...", "distance_miles": N, "elevation_gain_ft": N, "description": "...", "hazards": [...]}
],
"trip_reports": [],
"gaps": ["what couldn't be fetched and why"]
}
```"""
)
Execute all 3 agents in parallel by including all Task calls in a single response.
After Python script and all agents return, aggregate into unified data structure:
```json
{
  "conditions": { /* from fetch_conditions.py */ },
  "route_data": {
    "sources": [ /* merged from all 3 agents */ ],
    "trip_reports": [ /* merged from all agents */ ]
  },
  "gaps": [ /* merged gaps from all sources */ ]
}
```
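The merge above can be sketched in Python. This is an illustrative sketch only; the function name, variable names, and the newest-first sort are assumptions, not part of the skill.

```python
# Illustrative sketch of the aggregation step. Assumes each researcher agent
# returned the {"sources", "route_info", "trip_reports", "gaps"} contract
# shown in its prompt; names here are hypothetical.
def aggregate(conditions: dict, agent_outputs: list[dict]) -> dict:
    sources, trip_reports, gaps = [], [], []
    for out in agent_outputs:
        sources.extend(out.get("route_info", []))
        trip_reports.extend(out.get("trip_reports", []))
        gaps.extend(out.get("gaps", []))
    # Newest-first so synthesis favors recent conditions (assumed ordering)
    trip_reports.sort(key=lambda r: r.get("date") or "", reverse=True)
    return {
        "conditions": conditions,
        "route_data": {"sources": sources, "trip_reports": trip_reports},
        "gaps": gaps,
    }
```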
Partial Failure Handling:
Run WebSearch for access information:
WebSearch queries:
1. "{peak_name} trailhead access"
2. "{peak_name} permit requirements"
3. "{peak_name} forest service road conditions"
Extract trailhead names, required permits, access notes. Add to route_data.
Goal: Analyze gathered data to determine route characteristics and synthesize information.
Based on route descriptions, elevation, and gear mentions, classify the route as one of: hike, scramble, technical, or glacier.
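One way to sketch this classification is a keyword heuristic over the gathered text. The keyword lists below are illustrative assumptions, not part of the skill:

```python
# Hypothetical keyword heuristic for route-type classification.
# Checked from most to least demanding so glacier/technical cues win.
GEAR_HINTS = {
    "glacier": ("crampons", "crevasse", "rope team", "glacier travel"),
    "technical": ("belay", "rappel", "pitch", "5."),
    "scramble": ("class 3", "class 4", "hands-on", "exposure"),
}

def classify_route(text: str) -> str:
    lowered = text.lower()
    for route_type in ("glacier", "technical", "scramble"):
        if any(hint in lowered for hint in GEAR_HINTS[route_type]):
            return route_type
    return "hike"  # default when no harder-terrain cues are present
```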
Goal: Combine trip reports and route descriptions from Step 3B researcher agents, plus conditions data from Step 3A, into comprehensive route beta.
Source Priority:
Synthesis Pattern for Route, Crux, and Hazards:
Example (Route Description):
"The standard route follows the East Ridge (Class 3). Multiple trip reports mention a well-cairned use trail branching right at 4,800 ft—this is the correct turn. The use trail climbs through talus (described as 'tedious' and 'ankle-rolling'). In early season, this section may be snow-covered, requiring microspikes."
Apply this pattern to:
Extract Key Information:
From all synthesized data, identify:
Explicitly document what data was not found or unreliable:
Goal: Create comprehensive Markdown document by dispatching Report Writer agent.
Organize all gathered and analyzed data into structured JSON:
```json
{
  "peak": {
    "name": "{peak_name}",
    "id": {peak_id},
    "elevation_ft": {elevation},
    "coordinates": [{latitude}, {longitude}],
    "location": "{location}",
    "peakbagger_url": "{url}"
  },
  "conditions": {
    // From fetch_conditions.py output
    "weather": {...},
    "air_quality": {...},
    "daylight": {...},
    "avalanche": {...},
    "peakbagger": {...}
  },
  "route_data": {
    // Merged from all Researcher agents
    "sources": [...],
    "trip_reports": [...]
  },
  "analysis": {
    // From Phase 4
    "route_type": "{hike|scramble|technical|glacier}",
    "difficulty": "{rating}",
    "crux": "{description}",
    "hazards": [...],
    "time_estimates": {...},
    "access": {...}
  },
  "gaps": [...]
}
```
Task(
subagent_type="general-purpose",
model="sonnet",
prompt="""You are a Report Writer generating a mountaineering route report.
## Instructions
1. **Read the report template:**
Use the Read tool to read: {repo_root}/skills/route-researcher/assets/report-template.md
2. **Generate report following template structure exactly:**
- Header with peak name, elevation, location, date
- AI disclaimer (prominent safety warning)
- Overview: route type, difficulty, distance/gain, time estimates
- Route Description: synthesized from sources, include landmarks
- Crux: describe hardest section with specifics
- Known Hazards: comprehensive list
- Current Conditions: weather forecast, freezing levels, air quality, daylight
- Trip Reports: links organized by source with dates
- Information Gaps: explicitly list missing data
- Data Sources: links to all sources used
3. **Markdown Formatting Rules:**
- ALWAYS add blank line before lists
- ALWAYS add blank line after section headers
- Use `-` for bullets (not `*` or `+`)
- Use `**text**` for bold emphasis
- Break paragraphs >4 sentences
4. **Save the report:**
Use the Write tool to save to the user's current working directory: {date}-{peak-name-slug}.md
## Data Package
{data_package_json}
## Output Format (return EXACTLY this JSON)
```json
{
"status": "SUCCESS",
"file_path": "/absolute/path/to/report.md",
"filename": "YYYY-MM-DD-peak-name.md",
"sections_generated": N
}
```"""
)
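The {date}-{peak-name-slug}.md filename convention can be sketched with a small helper. The slugify logic here is a hypothetical implementation, shown only to pin down the format:

```python
import re
from datetime import date

def report_filename(peak_name: str, when: date) -> str:
    # Lowercase the name, collapse runs of non-alphanumerics into hyphens
    slug = re.sub(r"[^a-z0-9]+", "-", peak_name.lower()).strip("-")
    return f"{when.isoformat()}-{slug}.md"

# report_filename("Mount Baker", date(2025, 10, 20)) -> "2025-10-20-mount-baker.md"
```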
Extract file_path from agent's JSON response for use in Phase 6.
Goal: Validate report quality by dispatching Report Reviewer agent.
Task(
subagent_type="general-purpose",
model="opus",
prompt="""You are a Report Reviewer validating a mountaineering route report.
## Instructions
1. **Read the report:**
Use the Read tool to read: {report_file_path}
2. **Perform systematic quality checks:**
**Factual Consistency:**
- Dates match their stated day-of-week (e.g., "Thu Nov 6, 2025" is actually Thursday)
- Coordinates, elevations, distances consistent across all mentions
- Weather forecasts align logically (freezing levels match precipitation types)
**Mathematical Accuracy:**
- Elevation gains add up correctly
- Time estimates reasonable given distance and elevation gain
- Unit conversions correct (feet to meters, etc.)
**Internal Logic:**
- Hazard warnings align with route descriptions
- Recommendations match current conditions
- Crux descriptions match overall difficulty rating
**Completeness:**
- No placeholder text like {{peak_name}} or {{YYYY-MM-DD}}
- All referenced links actually provided
- Mandatory sections present: Overview, Route, Current Conditions, Trip Reports, Information Gaps, Data Sources
**Formatting:**
- Markdown headers properly structured
- Lists have blank lines before them
- Tables properly formatted
**Safety & Responsibility:**
- AI disclaimer present and prominent
- Critical hazards properly emphasized
- Users directed to verify information from primary sources
3. **Fix issues:**
- **Critical** (safety errors, factual errors, missing disclaimers): MUST fix using Edit tool
- **Important** (completeness, consistency): SHOULD fix
- **Minor** (formatting, polish): FIX if quick
## Output Format (return EXACTLY this JSON)
```json
{
"status": "PASS" | "PASS_WITH_FIXES" | "FAIL",
"issues_found": N,
"fixes_applied": ["description of fix 1", "description of fix 2"],
"remaining_issues": ["issues that couldn't be fixed"],
"report_path": "/absolute/path/to/report.md"
}
```"""
)
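A few of the reviewer's checks can be sketched mechanically. This is illustrative Python, not the reviewer's actual tooling; the date format and the use of Naismith's rule are assumptions:

```python
from datetime import datetime

def weekday_matches(text: str) -> bool:
    """Check a 'Thu Nov 6, 2025'-style date against the real weekday."""
    claimed, rest = text.split(" ", 1)
    return datetime.strptime(rest, "%b %d, %Y").strftime("%a") == claimed

def feet_meters_consistent(feet: float, meters: float, tol: float = 0.01) -> bool:
    """Check a feet/meters pair agrees within tol (1 ft = 0.3048 m)."""
    return abs(feet * 0.3048 - meters) <= tol * max(abs(meters), 1.0)

def naismith_hours(distance_miles: float, gain_ft: float) -> float:
    """Naismith's rule of thumb: ~3 mph on the flat plus 1 h per 2,000 ft of gain."""
    return distance_miles / 3 + gain_ft / 2000
```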
Handle the reviewer agent's response:
- If status is PASS or PASS_WITH_FIXES: proceed with the returned report_path
- If status is FAIL: present remaining_issues to the user and ask for guidance

The Report Reviewer automatically fixes issues and returns the corrected file path.
Goal: Inform user of completion and next steps.
Report to user:
Example completion message:
Route research complete for Mount Baker!
Report saved to: 2025-10-20-mount-baker.md
Summary: Mount Baker via Coleman-Deming route is a moderate glacier climb (Class 3) with significant crevasse hazards. The route involves 5,000+ ft elevation gain and typically requires an alpine start. Weather and avalanche forecasts are included.
Next steps: Review the report and verify current conditions before your climb. Remember that mountain conditions change rapidly - check recent trip reports and weather forecasts immediately before your trip.
Throughout execution, follow these error handling guidelines:
Every generated report must:
The route-researcher skill uses a hybrid architecture combining Python scripts and LLM agents:
Components:
- Python script (tools/fetch_conditions.py) - Deterministic API calls for weather, air quality, daylight, avalanche, and PeakBagger data
- Researcher agents (3, in parallel) - Web research across PeakBagger/SummitPost, WTA/Mountaineers, and AllTrails
- Report Writer agent - Generates the Markdown report from the data package
- Report Reviewer agent - Validates and fixes the report

Benefits:
See skills/route-researcher/docs/architecture.md for detailed execution flow and data contracts.
Implemented:
- WTA trip report listing endpoint ({wta_url}/@@related_tripreport_listing)
- Avalanche region lookup (fetch_conditions.py) - NWAC region and URL by coordinates

Pending Implementation:
When Python scripts are not yet implemented:
All commands use --format json for structured output. Run via:
uvx --from git+https://github.com/dreamiurg/[email protected] peakbagger <command> --format json
Available Commands:
- `peak search <query>` - Search for peaks by name
- `peak show <peak_id>` - Get detailed peak information (coordinates, elevation, routes)
- `peak stats <peak_id>` - Get ascent statistics and temporal patterns
  - `--within <period>` - Filter by period (e.g., '1y', '5y')
  - `--after <YYYY-MM-DD>` / `--before <YYYY-MM-DD>` - Date filters
- `peak ascents <peak_id>` - List individual ascents with trip report links
  - `--within <period>` - Filter by period (e.g., '1y', '5y')
  - `--with-gpx` - Only ascents with GPS tracks
  - `--with-tr` - Only ascents with trip reports
  - `--limit <n>` - Max ascents to return (default: 100)
- `ascent show <ascent_id>` - Get detailed ascent information

Note: For comprehensive command options, run `peakbagger --help` or `peakbagger <command> --help`
Common variations to try if initial search fails:
Google Maps (for summit coordinates):