Use when the simulation produced outputs or failure evidence and the user wants to interpret results against the original experiment goal and investigate likely causes.
Interpret completed run outputs against the original experiment goal. Make claims only to the extent supported by output files, case inputs, and repo code.
Do not use this skill to initialize the repo or to define a new experiment from scratch. Do not use this skill for pre-run readiness, case-file gating, or execution-path validation.
Stay in this skill when:
- Results need to be interpreted against the goal recorded in experiment-spec.md
- Logs show known failure signatures (`buffer full`, `Potential Deadlock`, `No task completed`, etc.)
- runlog/network_attribute.txt should be compared against a known-good baseline case
Hand off to openusim-plan-experiment when:
Before handoff, record in experiment-spec.md:
Invoke after openusim-run-experiment, or for direct requests about existing outputs; defining a new experiment belongs to openusim-plan-experiment.

References:
- ../openusim-references/throughput-evidence.md
- ../openusim-references/trace-observability.md
- ../openusim-references/queue-backpressure-vs-topology.md
- ../openusim-references/transport-channel-modes.md

When the run failed or produced unexpected results, check in this order:
1. Is it a simulation result or a toolchain bug?
   - Messages such as `buffer full. Packet Dropped!`, `Potential Deadlock`, or `No task completed` → simulation result, not a bug
2. Compare against baseline:
   - Use a known-good case (e.g., scratch/2nodes_single-tp)
   - Diff network_attribute.txt to identify parameter differences: FlowControl, buffer sizes, EnableRetrans, routing mode
3. Form a single-variable hypothesis:
4. Record durable findings in experiment-spec.md:
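The step-1 triage above can be sketched as a small log scan. This is a minimal sketch, not repo code: the function name `classify_log` is illustrative, and it assumes the failure signatures appear verbatim in the run log text.

```python
import re

# Known simulator messages that indicate a *simulation* outcome rather than a
# toolchain bug (signatures taken from the checklist above).
SIM_RESULT_PATTERNS = [
    r"buffer full\. Packet Dropped!",
    r"Potential Deadlock",
    r"No task completed",
]

def classify_log(log_text: str) -> str:
    """Return 'simulation-result' if any known signature matches,
    else 'investigate-toolchain' (meaning: keep digging, it may be a bug)."""
    for pattern in SIM_RESULT_PATTERNS:
        if re.search(pattern, log_text):
            return "simulation-result"
    return "investigate-toolchain"
```

If no signature matches, treat the failure as unclassified and inspect the toolchain path before blaming the experiment configuration.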
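For step 2, the baseline comparison can be sketched as a key-level diff. This assumes network_attribute.txt holds one `Key = Value` pair per line — the real file layout may differ, so adjust the parsing accordingly; `diff_attributes` is a hypothetical helper, not part of the repo.

```python
def diff_attributes(baseline_text: str, case_text: str) -> dict:
    """Return {key: (baseline_value, case_value)} for keys whose values differ.

    Assumes each line looks like 'Key = Value'; lines without '=' are skipped.
    """
    def parse(text: str) -> dict:
        attrs = {}
        for line in text.splitlines():
            if "=" in line:
                key, _, value = line.partition("=")
                attrs[key.strip()] = value.strip()
        return attrs

    base, case = parse(baseline_text), parse(case_text)
    # Missing keys show up as None on one side of the pair.
    return {k: (base.get(k), case.get(k))
            for k in sorted(set(base) | set(case))
            if base.get(k) != case.get(k)}
```

A single differing key (e.g., FlowControl or EnableRetrans) is a good candidate for the single-variable hypothesis in step 3.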