Visually review SAR drone-image tiles for signs of a lost person. Use when the user asks to "triage", "search", "scan", "analyze", or "review" prepared SAR tiles — e.g. "start the search on incident013", "keep triaging", "scan the tiles for the hunter". Reads the ingest manifest, batches ~6 keep-tiles at a time, shows each tile to you for ghillie-suit-aware visual analysis, and appends one JSON record per tile to findings.jsonl. Resumable — safe to interrupt and re-invoke; previously-reviewed tiles are skipped automatically.
Perform the Stage-2 multimodal review of SAR drone tiles.
Argument: `incident_id` (e.g. `incident013`), used to locate the manifest. Derived: `manifest = /home/sarocu/Projects/nova/runs/<incident_id>/manifest.json`. If that file doesn't exist, run `sar-ingest` first.
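The manifest derivation above can be sketched in a few lines (the path layout is from this doc; the helper name is hypothetical):

```python
# Sketch: derive the manifest path for an incident (layout from this doc).
from pathlib import Path

RUNS_DIR = Path("/home/sarocu/Projects/nova/runs")

def manifest_path(incident_id: str) -> Path:
    # e.g. incident013 -> /home/sarocu/Projects/nova/runs/incident013/manifest.json
    return RUNS_DIR / incident_id / "manifest.json"

p = manifest_path("incident013")
if not p.exists():
    print("Manifest missing -- run sar-ingest first")
```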
Read the triage prompt — the criteria you will apply to every tile:
/home/sarocu/Projects/nova/.claude/skills/sar-triage/prompts/triage_prompt.md
Fetch the next batch of unreviewed keep-tiles:
```shell
python3 /home/sarocu/Projects/nova/.claude/skills/sar-triage/scripts/next_batch.py \
  --manifest /home/sarocu/Projects/nova/runs/<INCIDENT_ID>/manifest.json \
  --batch-size 6 \
  > /tmp/sar_batch.json
```
Inspect `/tmp/sar_batch.json`. If `batch_id` is null, there is nothing left to process — skip ahead to the Done step.
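A minimal sketch of that termination check, assuming `next_batch.py` emits a JSON object whose top-level `batch_id` key is null when no tiles remain (the field name is from this doc; the rest of the object's shape is an assumption):

```python
# Sketch: stop condition for the triage loop. Assumes next_batch.py writes
# a JSON object with batch_id == null when nothing remains (per this doc).
import json

def has_work(batch: dict) -> bool:
    return batch.get("batch_id") is not None

# Shape beyond batch_id is an assumption.
batch = json.loads('{"batch_id": null, "tiles": []}')
print(has_work(batch))  # False
```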
View each tile listed in the batch. For every tile entry, use the Read tool on its tile_path. You see the tiles as images. Apply the criteria from the triage prompt to each one.
Emit verdicts as JSON. Produce a JSON array with one object per tile, in the order the batch listed them, using the schema in the triage prompt. Use the exact tile_name from the batch. Write the array to /tmp/sar_verdicts.json.
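An illustrative verdict array written to `/tmp/sar_verdicts.json` (the tile name and all values here are hypothetical; the schema in the triage prompt is authoritative):

```python
# Illustrative only: one verdict object per tile, in batch order.
# "DJI_0401_t05" is a hypothetical tile_name; copy the real one from the batch verbatim.
import json

verdicts = [
    {
        "tile_name": "DJI_0401_t05",
        "verdict": "possible",        # clear | possible | probable | high_confidence
        "confidence": 0.55,
        "cues": ["color_anomaly"],    # controlled vocabulary from the triage prompt
        "pixel_location_tile": {"x": 128, "y": 96, "w": 24, "h": 40},
        "description": "Small warm-toned patch partially occluded by brush.",
        "alternative_explanation": "Sunlit dead foliage.",
    }
]
with open("/tmp/sar_verdicts.json", "w") as f:
    json.dump(verdicts, f, indent=2)
```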
Persist findings — run the appender. It joins your verdicts with manifest-side GPS/crop_box and atomically appends to findings.jsonl:
```shell
cat /tmp/sar_verdicts.json | \
  python3 /home/sarocu/Projects/nova/.claude/skills/sar-triage/scripts/append_findings.py \
    --manifest /home/sarocu/Projects/nova/runs/<INCIDENT_ID>/manifest.json \
    --batch-file /tmp/sar_batch.json \
    --model claude-opus-4-7
```
Check the appender's output: the `appended` count should equal the batch size, and `unmatched_tile_names` should be `[]`. If a `tile_name` is unmatched, your JSON used a different string than the batch provided — fix and retry.
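That success check can be sketched as follows (the field names `appended` and `unmatched_tile_names` are taken from this doc; the exact summary shape is an assumption):

```python
# Sketch: validate the appender summary before moving on.
def append_ok(summary: dict, batch_size: int) -> bool:
    # Every verdict matched a manifest tile and was written out.
    return (summary.get("appended") == batch_size
            and summary.get("unmatched_tile_names") == [])

print(append_ok({"appended": 6, "unmatched_tile_names": []}, 6))        # True
print(append_ok({"appended": 5, "unmatched_tile_names": ["t_99"]}, 6))  # False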
Loop. Repeat the fetch → view → verdict → append sequence until `next_batch.py` reports nothing remaining.
Done. Report back to the user:
- Total tiles reviewed and appended to `findings.jsonl`
- Verdict counts (`clear`, `possible`, `probable`, `high_confidence`)
- Any `high_confidence` or `blaze_orange`-cued hits, flagged immediately
- A suggestion to run `sar-report` to generate the ranked ground-team report

**Resumability.** `findings.jsonl` is the source of truth. Safe to Ctrl-C mid-run. On re-invocation, `next_batch.py` walks the manifest, skips any `(frame_id, tile_index)` already present in `findings.jsonl`, and returns only unprocessed tiles. Tiles with `stage1.verdict == "skip"` are never sent to review; to re-enable skipped tiles, re-run `sar-ingest` after tuning the Stage-1 thresholds.

**Parallelism.** For large incidents, consider launching parallel sub-agents to process batches concurrently. Each sub-agent runs the fetch → review → append sequence on its own batch. Coordinate with one `next_batch.py` call per sub-agent; calls can overlap, since `findings.jsonl` dedup plus `flock` handle contention correctly.
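The skip logic described for resumption can be sketched as follows (a reading of the behavior this doc attributes to `next_batch.py`, not its actual source):

```python
# Sketch: resumability via (frame_id, tile_index) dedup against findings.jsonl.
import json

def reviewed_keys(findings_lines):
    """Collect (frame_id, tile_index) pairs already reviewed."""
    return {
        (rec["frame_id"], rec["tile_index"])
        for rec in map(json.loads, findings_lines)
    }

done = reviewed_keys(['{"frame_id": "DJI_0401", "tile_index": 5}'])
print(("DJI_0401", 5) in done)  # True
```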
**Record schema (`findings.jsonl`).** One JSON object per line:

| Field | Value |
|---|---|
| `frame_id` | e.g. `"DJI_0401"` |
| `tile_index` | e.g. `5` |
| `tile_path` | absolute path |
| `gps` | `{lat, lon, alt_m, timestamp_utc}` |
| `crop_box` | `{x, y, w, h}` |
| `verdict` | one of `"clear"`, `"possible"`, `"probable"`, `"high_confidence"` |
| `confidence` | float in `[0, 1]` |
| `cues` | list of controlled-vocabulary strings |
| `pixel_location_tile` | `{x, y, w, h}` in tile-local coords, or `null` |
| `pixel_location_frame` | `{x, y, w, h}` in parent-frame coords (auto-computed) |
| `description` | one or two sentences from the model |
| `alternative_explanation` | non-human explanation the model considered |
| `raw_response` | verbatim model output object |
| `reviewed_utc` | ISO 8601 Zulu timestamp |
| `model` | `"claude-opus-4-7"` or similar |
| `batch_id` | `"b_<ts>_<rand>"` |
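A worked example record matching the schema above (all values are hypothetical; `pixel_location_frame` is the tile-local box offset by the `crop_box` origin):

```python
# Hypothetical findings.jsonl record; field names follow the schema in this doc.
import json

record = {
    "frame_id": "DJI_0401",
    "tile_index": 5,
    "tile_path": "/home/sarocu/Projects/nova/runs/incident013/tiles/DJI_0401_t05.jpg",  # hypothetical
    "gps": {"lat": 46.91, "lon": -121.64, "alt_m": 118.0, "timestamp_utc": "2025-06-14T18:22:05Z"},
    "crop_box": {"x": 2560, "y": 1024, "w": 512, "h": 512},
    "verdict": "possible",
    "confidence": 0.55,
    "cues": ["color_anomaly"],
    "pixel_location_tile": {"x": 128, "y": 96, "w": 24, "h": 40},
    # frame coords = crop_box origin + tile-local coords
    "pixel_location_frame": {"x": 2560 + 128, "y": 1024 + 96, "w": 24, "h": 40},
    "description": "Small warm-toned patch partially occluded by brush.",
    "alternative_explanation": "Sunlit dead foliage.",
    "raw_response": {},
    "reviewed_utc": "2025-06-14T19:03:11Z",
    "model": "claude-opus-4-7",
    "batch_id": "b_1718391600_4f2a",
}
line = json.dumps(record)  # one line, ready to append to findings.jsonl
```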