Use when running or rerunning a Messy Virgo sleeve token screen for one fund and sleeve, when persisting a sleeve/day screening run with run_date, or when inspecting stored runs or historical indicator details.
screening context get is the source of truth for the workflow. Persist one sleeve/day run row with screening runs create before reporting success.
Out of scope: editing saved queries, workflow, or sleeve instructions — use mv-screening-configuration.
- kind: "template" steps resolve from mv screening templates get <id> --json or a prior mv screening templates list --json result.
- kind: "query" steps resolve from custom_queries[] already returned by screening context get. There is no separate query-get command.
- Required inputs: fund_id and sleeve_id. If sleeve_id is unknown, run mv funds sleeves list <fund_id> --json.
- Derive run_date yourself; omit snapshot_date for ordinary "screen now" flows.
- Snapshot inspection: mv screening snapshot get <fund_id> <sleeve_id> --json

Helper commands for shapes and options:
- mv screening --help
- mv screening runs --help
- mv screening templates --help
- mv screening screen --example
- mv screening screen --schema
- mv screening runs create --example
- mv screening runs create --schema
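The template/query resolution rules above can be sketched as a small dispatcher. This is an illustrative sketch only — the step, template, and query shapes are assumptions, not the real CLI payloads:

```python
def resolve_step(step, templates_by_id, custom_queries):
    """Resolve one run-catalog step per the rules above (shapes hypothetical)."""
    if step["kind"] == "template":
        # Template steps resolve from a fetched template (templates get/list).
        return templates_by_id[step["ref"]]
    if step["kind"] == "query":
        # Query steps resolve ONLY from custom_queries[] already in context;
        # there is no separate query-get command to call.
        by_id = {q["query_id"]: q for q in custom_queries}
        return by_id[step["ref"]]
    raise ValueError(f"unknown step kind: {step['kind']}")
```

A KeyError here corresponds to a missing ref, which the trace rules below map to failed_validation.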
Workflow:
1. Load context: mv screening context get <fund_id> <sleeve_id> --json.
2. Resolve template steps via mv screening templates get <id> --json or an already-loaded template list.
3. Resolve query steps via custom_queries[].query_id from context.
4. Confirm the request shape with mv screening screen --example and mv screening screen --schema.
5. Confirm the persistence shape with mv screening runs create --example and mv screening runs create --schema.
6. Check the snapshot: mv screening snapshot get <fund_id> <sleeve_id> --json.
7. Screen: mv screening screen <fund_id> <sleeve_id> --file <request.json> --json.
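When the screen response comes back with empty results, the coverage counters decide what the emptiness means. A minimal sketch, assuming the response exposes the indicator_rows_joined and rows_after_filters counters named in this document:

```python
def classify_empty_results(coverage):
    """Distinguish why a screen returned zero rows, using coverage counters."""
    if coverage.get("indicator_rows_joined", 0) == 0:
        # No indicator rows joined at all: data was unavailable.
        return "no_joined_indicator_rows"
    if coverage.get("rows_after_filters", 0) == 0:
        # Rows existed but every one was filtered out.
        return "filters_removed_all_rows"
    return "results_present"
```

Reference whichever counter drove the outcome in the narrative/meta, so an empty shortlist is never mistaken for a clean success.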
Screening behavior and response handling:
- If the request omits snapshot_date, screening targets today's UTC date.
- The snapshot is built from token_universe_runs for the sleeve universe/date.
- A missing snapshot returns SNAPSHOT_NOT_READY (409); do not treat that as an empty-result success.
- Record the snapshot_date returned by each screen response.
- Keep universe_run and coverage from the screen response. When results is empty, use these fields to distinguish "no joined indicator rows" (indicator_rows_joined=0) vs "filters removed all rows" (rows_after_filters=0 with indicator_rows_joined>0), and reference the counters in narrative/meta.
- Trace each step with its ref, status, intent, and resolved_request. Missing refs → failed_validation; missing runtime inputs → skipped_missing_input; command errors → failed_error.
- Write candidate_reason text for each shortlisted token.
- Report run_date (UTC day from run start), snapshot_date, and shortlist candidates.

Persistence and inspection:
- Persist with mv screening runs create <fund_id> --file <payload.json> --json — success requires a returned screen_run_id.
- Inspect a run: mv screening runs get <fund_id> <screen_run_id> --json
- List runs: mv screening runs list <fund_id> --sleeve-id <sleeve_id> --json
- Indicator history: mv screening indicators get <fund_id> --snapshot-date <YYYY-MM-DD> --chain <chain> --contract-address <address> --json

Payload rules:
- Include sleeve_id, run_date, process_narrative, structured execution_trace, run_catalog, and candidate rows with token_id, rank, and candidate_reason.
- run_date is the UTC day fixed at run start (business key with sleeve_id); do not recalculate at completion time.
- Persist the snapshot_date in the run payload; do not substitute screened_at.
- Carry coverage/universe_run evidence in process_narrative and/or payload meta so users can see whether data was unavailable or filtered out.
- execution_trace must be JSON-shaped (dict or list, not plain text).
- Rank candidates 1..10.
- candidate_reason: name the 1–3 most relevant DD indicators with actual scores, explain why they matter for this sleeve, and state why the token is worth follow-up. Use full indicator names (Relative Strength, not RS).
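The payload rules above can be sketched as a small assembly helper. Field names follow this document, but the authoritative schema comes from mv screening runs create --schema; treat this as a hedged illustration:

```python
from datetime import datetime, timezone

def start_run_date():
    """Fix run_date as the UTC day at run START; never recompute at completion."""
    return datetime.now(timezone.utc).date().isoformat()

def build_run_payload(sleeve_id, run_date, snapshot_date,
                      narrative, trace, catalog, candidates):
    """Assemble the persistence payload per the payload rules (shape assumed)."""
    if not isinstance(trace, (dict, list)):
        raise TypeError("execution_trace must be JSON-shaped (dict or list)")
    return {
        "sleeve_id": sleeve_id,
        "run_date": run_date,            # UTC day from run start, not finish time
        "snapshot_date": snapshot_date,  # from the screen response, not screened_at
        "process_narrative": narrative,
        "execution_trace": trace,
        "run_catalog": catalog,
        "candidates": candidates,        # rows with token_id, rank, candidate_reason
    }
```

Capture start_run_date() once before screening begins, then pass that same value at persistence time.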
Pitfalls:
- Never put a raw token_id in user-facing prose.
- Do not look for a query-get command; query steps resolve from custom_queries[] from context.
- Do not skip screening context get before screen or runs create.
- Do not recompute run_date or set it from finish time instead of the run-start UTC day.
- Do not use chain or contract_address as machine identity when persistence expects token_id.
- Do not report success before runs create returns a screen_run_id.

Scope note: mv-screening-configuration changes what future runs execute; this skill records one completed sleeve/day run and can optionally inspect run/indicator history.
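One of the pitfalls above — raw token_ids leaking into user-facing prose — is easy to guard mechanically. A hypothetical check (the token_id format shown in the test is invented for illustration):

```python
def leaked_token_ids(prose, token_ids):
    """Return any raw token_ids that appear verbatim in user-facing text."""
    return [t for t in token_ids if t in prose]
```

Run it over candidate_reason and process_narrative before persisting; a non-empty result means the prose must be rewritten.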