Deep multi-LLM research pipeline that produces validated ship and port pages. Four research phases — reconnaissance → triage → deep research → synthesis — followed by page generation.
Research thoroughly. Build with evidence. Every fact traceable. Soli Deo Gloria.
```
/investigate "Allure of the Seas"
/investigate "St Lucia"
/investigate --parallel --budget 3.00 "Norwegian Prima"
```
Mode: cruising (auto-detected in this repository)
Output: A validated ship page or port page, built from multi-LLM research
Output file: `ships/[line]/[slug].html` (ships) or `ports/[slug].html` (ports).

```bash
bash /home/user/ken/orchestrator/bootstrap-env.sh 2>/dev/null
pip3 install -q -r /home/user/ken/orchestrator/requirements.txt 2>/dev/null
cd /home/user/ken/orchestrator && python3 investigate.py cruising "<subject>"
```
| Phase | What Happens |
|---|---|
| 1. RECON | Fan-out to 5 models (GPT, Gemini, Perplexity, You.com, Grok). Each researches independently — specs, dining, entertainment, reviews, history, logistics. Claude + GPT deliberate. |
| 2. TRIAGE | Score threads by composite: confidence (40%) + citation density (30%) + multi-model agreement (30%). Drop below threshold. |
| 3. DEEP RESEARCH | Staged deep dives on top 3 threads. Research models verify → Claude synthesizes → analysts evaluate. |
| 4. SYNTHESIS | Cross-thread integration. All citations collected. Conflicts flagged. |
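The phase-2 composite can be sketched as a weighted sum. This is an illustrative sketch only: the thread field names (`confidence`, `citation_density`, `agreement`) and the 0.6 threshold are assumptions, not the pipeline's actual schema.

```python
# Sketch of the phase-2 triage score: confidence 40%, citation
# density 30%, multi-model agreement 30%. Field names are assumed;
# each metric is normalized to the 0..1 range.
def triage_score(thread: dict) -> float:
    return round(
        0.40 * thread["confidence"]          # model self-reported confidence
        + 0.30 * thread["citation_density"]  # citations per claim
        + 0.30 * thread["agreement"],        # share of models concurring
        4,
    )

def keep_threads(threads: list[dict], threshold: float = 0.6) -> list[dict]:
    """Drop threads below the threshold; best-first for deep research."""
    survivors = [t for t in threads if triage_score(t) >= threshold]
    return sorted(survivors, key=triage_score, reverse=True)
```

A well-cited thread that only one model surfaced still scores at most 0.70, which is why multi-model agreement carries real weight here.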
Flags:
- `--threads N` — max threads for deep research (default: 3)
- `--parallel` — run deep dives concurrently
- `--budget N.NN` — cost ceiling in USD (default: $1.50)
- `--exhaustive` — 10 threads, $5 budget

Claude reads `state/investigate.json` — the structured output with all findings, citations, and cross-thread analysis.
Before generating anything, check if the page already exists.
Does ships/[line]/[slug].html exist?
→ YES: Path B — Merge Mode (enrich existing page)
→ NO: Path A — New Page (build from reference template)
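The routing above reduces to a single existence check. A minimal sketch; the function name and argument shape are illustrative, not part of the pipeline:

```python
from pathlib import Path

def choose_path(repo: Path, line: str, slug: str) -> str:
    """Return 'merge' (Path B) if the ship page already exists,
    else 'new' (Path A)."""
    page = repo / "ships" / line / f"{slug}.html"
    return "merge" if page.exists() else "new"
```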
Path B — Merge Mode (existing page):
Read the existing page and extract everything that must be preserved:
- Keep every existing `<div class="swiper-slide">` block verbatim.
- Links to `/restaurants/*.html` are already wired. Preserve them. Only add new ones if the investigation found venues that are both real AND have pages.

Then diff investigation findings against existing content:
- Run `validate-ship-page.sh` after merge to catch any inconsistencies.

Cross-field consistency rules (the validator checks these, but the merge must set them correctly):
- `related:` field — must be present. Standard set: `/ships.html`, `/cruise-lines/royal-caribbean.html`, `/ports.html`, `/drink-calculator.html`, `/restaurants.html` (per Radiance reference)
- `btn-deck-plans` link to the official ship page must appear between the Logbook and Video sections

Path A — New Page (no existing page):
Build from reference template + investigation output. But still run these pre-generation checks:
Before writing ANY page (new or merge), verify:
- Every `src="/assets/..."` path must resolve to a real file on disk. Run: `ls -la [path]`. No phantom references.
- Check `assets/data/venues.json` for the ship slug. If the FAQ mentions a venue (150 Central Park, Johnny Rockets, Bionic Bar, etc.) that isn't in the database, flag it as a warning — the dining loader won't render it.
- Check sister ships in `/ships/[line]/`. If a sister exists as a page but wasn't in the investigation output, include it anyway.
- The skip link's `href` must match the `<main>` element's `id`.

After every edit — before `git add` — run the validator:
```bash
bash admin/validate-ship-page.sh ships/[line]/[slug].html
```
Do NOT commit if the validator reports errors. Fix first, then commit.
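One way to make that rule mechanical is to gate `git add` on the validator's exit code. A sketch, assuming the validator exits non-zero when it reports errors; the function name is illustrative:

```python
import subprocess

def page_passes_validation(
    page: str,
    validator: tuple = ("bash", "admin/validate-ship-page.sh"),
) -> bool:
    """True only when the validator exits 0; stage the file only then."""
    result = subprocess.run([*validator, page], capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout or result.stderr)  # surface the errors to fix
        return False
    return True
```

Only call `git add` on a page when this returns True; otherwise fix and re-run.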
This catches:
- Broken carousel markup (missing `</div>` on slides, orphaned slides outside `swiper-wrapper`)

The validator is the last gate. No matter how confident you are in a regex/Python fix, the validator catches what you miss. A broken carousel that looks fine in the Edit tool will render as a wall of overlapping images in a browser.
Ship pages — follow new-standards/v3.010/SHIP_PAGE_CHECKLIST_v3.010.md:
- Reference page: `ships/rcl/radiance-of-the-seas.html`
- Page data: `assets/data/ships/[line]/[slug].page.json`

Port pages — follow `new-standards/v3.010/PORT_PAGE_STANDARD_v3.010.md`:
- Reference page: `ports/nassau.html`
- Registry: `assets/data/ports/port-registry.json`

Write section-by-section to prevent timeouts (matching the person-page anti-timeout pattern). Every fact must be traceable to a citation from the investigation.
Run `admin/validate-ship-page.sh` after generation. The validator (v3.010.301+) covers all standing checks: SDG invocation, AI-breadcrumbs, ICP-Lite, JSON-LD, WCAG, images, performance, and required sections.
- `port-content-voice-hook` injects the Like-a-human guide on every Edit/Write
- `voice-audit-hook` reminds you to audit voice before committing

Tier 1: Official cruise line data (fleet pages, press releases, deck plans)
Tier 2: Industry databases (IMO registry, classification societies, CruiseMapper)
Tier 3: Travel review sites (Cruise Critic, cruise blogs, travel magazines)
Tier 4: User-generated content (forums, social media, Reddit)
Rule: A lower-tier source never overrides a higher-tier source without explicit flagging. Conflicts are documented, not silently resolved.
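That rule can be sketched as a resolver that keeps the highest-tier (lowest-numbered) claim and records, rather than discards, any disagreement. The claim dict shape here is an assumption for illustration:

```python
def resolve(claims: list[dict]) -> dict:
    """claims: [{'value': ..., 'tier': 1..4, 'source': ...}, ...].
    Returns the highest-tier value plus the conflicting lower-tier
    claims, so disagreement is documented instead of silently dropped."""
    best = min(claims, key=lambda c: c["tier"])
    conflicts = [
        c for c in claims
        if c is not best and c["value"] != best["value"]
    ]
    return {"value": best["value"],
            "source": best["source"],
            "conflicts": conflicts}
```

A lower-tier claim that merely agrees with the winner is not flagged; only genuine disagreement survives into `conflicts`.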
Never say "search for information about X." Give each model specific queries.
The pipeline injects required queries automatically via `ship_required_queries` in `cruising.yaml`. Each research model (Perplexity, You.com, Gemini) receives mandatory research tasks. These are not optional — they fire on every ship investigation to prevent gaps like missing venue names or stale dining data.
Port investigation example:
```
Perplexity: "Search for [PORT] cruise terminal: tender or dock? Distance
             to town center? Taxi costs? Recent infrastructure changes?"

You.com: "Search for [PORT] top excursions, local restaurants near
          the pier, beach access, accessibility for wheelchair users."
```
Before generating, scan for missing data:
- Ship pages: IMO number, guest capacity, crew count, gross tonnage, year built, registry, class, sister ships, deck count, dining venue count, video sources
- Port pages: Tender/dock status, distance to town, currency, language, timezone, seasonal info, transport costs, top 3-5 excursions, accessibility notes
Flag gaps explicitly — honest gaps are better than fabricated data.
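The gap scan can be a simple required-field sweep over the findings. The field keys below paraphrase the ship checklist above and are assumptions about the findings dict, not the pipeline's real schema:

```python
SHIP_REQUIRED = [
    "imo_number", "guest_capacity", "crew_count", "gross_tonnage",
    "year_built", "registry", "class", "sister_ships", "deck_count",
    "dining_venue_count", "video_sources",
]

def missing_fields(findings: dict, required: list = SHIP_REQUIRED) -> list:
    """Return the required fields that are absent or empty.
    Flag these explicitly; honest gaps beat fabricated data."""
    return [f for f in required if not findings.get(f)]
```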
- `ship-page-validator` — automatic validation on ship page writes
- `port-content-builder` — port page structure and POI schema knowledge
- `icp-2` — AI meta tags and answer engine optimization
- `accessibility-audit` — WCAG 2.1 AA compliance
- `voice-dna` — voice pattern enforcement
- `Like-a-human` — humanization guide (injected by hook)

Typical: $1-3 per investigation. Exhaustive mode: $3-5.
Soli Deo Gloria — Research thoroughly because these pages help real travelers plan real voyages.