Help WEPPcloud users set up and run projects, understand and interpret WEPP results (runoff, sediment delivery, peak flows, water balance), compare scenarios, and draft a clear technical report. Use when a user shares a WEPPcloud run link or run ID and asks what the outputs mean, how to download/organize results, how to summarize findings, or how to write a report based on WEPPcloud results.
If any critical context is missing (area, time window, scenario differences), ask 1–2 targeted questions and proceed with a provisional interpretation.
Keep this user-facing and UI-agnostic: see references/user-workflows.md.
Ask the user to download/export the key artifacts WEPPcloud provides (tables + maps) and point you to them (upload, paste snippets, or describe where they are in the UI).
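Once the user has downloaded the artifacts, a per-scenario folder layout keeps scenarios comparable. A minimal sketch (the scenario and subfolder names here are illustrative, not a WEPPcloud convention):

```python
from pathlib import Path

def make_workspace(root: str, scenarios=("undisturbed", "burned")) -> list[Path]:
    """Create a per-scenario folder tree for downloaded WEPPcloud artifacts.

    Folder names are placeholders; rename them to match the runs being
    compared. Tables and maps are kept apart so scenario comparisons can
    glob matching files by relative path.
    """
    created = []
    for scenario in scenarios:
        for sub in ("tables", "maps"):
            d = Path(root) / scenario / sub
            d.mkdir(parents=True, exist_ok=True)
            created.append(d)
    return created
```

For example, `make_workspace("wepp_results")` creates `tables/` and `maps/` under both `undisturbed/` and `burned/`, so the same artifact lands at the same relative path in each scenario.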
Minimum recommended working set for interpretation: see references/outputs-and-units.md.
When explaining results, always state:
Use references/comparison-checklist.md when comparing scenarios.
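When walking through the comparison checklist, the core arithmetic is usually an absolute and a percent change per matched metric. A hedged sketch (the function name and the sample numbers are hypothetical, not from any WEPPcloud run):

```python
def scenario_delta(burned: float, undisturbed: float) -> dict:
    """Absolute and relative change between two scenario values.

    The inputs stand in for any matched metric (e.g. annual sediment
    delivery in the same units for both runs). Percent change is
    undefined when the baseline is zero, so NaN is returned in that case.
    """
    abs_change = burned - undisturbed
    pct_change = (abs_change / undisturbed * 100.0) if undisturbed else float("nan")
    return {"absolute": abs_change, "percent": pct_change}

# Illustrative placeholder values only:
print(scenario_delta(burned=4.2, undisturbed=1.4))
```

Reporting both numbers matters: a large percent change on a tiny baseline can mislead, while the absolute change keeps the magnitude honest.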
Use assets/report-outline.md as the default structure. Populate with:
Always include:
If the user asks about short return periods (2/5/10-year) and wants event-date comparisons, request the event-by-event export (ebe_pw0.txt) for each scenario and run:
scripts/return_period_compare.py --burned-ebe <path> --undisturbed-ebe <path> --start-year <yyyy>
This produces CSV tables (CTA + AM) plus an overlay histogram of peak discharge events.
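The script's internals are not shown here, but the standard annual-maximum (AM) approach it refers to can be sketched independently using the Weibull plotting position, T = (n + 1) / rank. This is an assumption about the method, not the script's actual implementation:

```python
def am_return_periods(annual_peaks):
    """Empirical return periods from a series of annual maximum peaks.

    Peaks are ranked largest-first; rank 1 gets the longest return
    period, T = (n + 1) / 1 years, so a 9-year record can at best
    characterize the ~10-year event. Short records cannot support
    claims about long return periods.
    """
    peaks = sorted(annual_peaks, reverse=True)
    n = len(peaks)
    return [(peak, (n + 1) / rank) for rank, peak in enumerate(peaks, start=1)]

# Nine illustrative annual peak discharges (units are whatever the export uses):
for peak, T in am_return_periods([12.0, 3.5, 7.1, 20.4, 5.0, 9.9, 2.2, 15.3, 4.4]):
    print(f"{peak:6.1f}  T = {T:5.1f} yr")
```

Running this for each scenario and joining on rank gives the kind of side-by-side AM table the comparison script emits, which is where burned-vs-undisturbed shifts in the 2/5/10-year events become visible.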
If (and only if) the user is an internal operator with filesystem or stack access, the following references describe backend-level resolution and report tooling:
references/operator-wctl-workflows.md
references/operator-run-directory-layout.md
references/operator-wepppy-reports.md
references/operator-weppcloudr-rendering.md
Resolve run_dir only when you actually have backend access.