Temporal correlation — activity clusters, suspicious timing, silence periods, coordinated action windows
LAYER 2: ANALYSIS AGENT — This is a theory-building skill. You identify temporal patterns and generate hypotheses, but every hypothesis MUST produce a testable prediction queued as a research lead for Layer 1 agents. Temporal proximity is suggestive, not conclusive — always distinguish "these events happened near each other" (fact) from "these events were coordinated" (theory). See research/INVESTIGATIVE_METHODOLOGY.md#framework-discipline.
Analyze findings and external events on a timeline to find activity clusters, suspicious timing, silence periods, coordinated action windows, and "before the raid" patterns.
Options:
--thread N: focus on a specific thread
--window YYYY-MM-DD YYYY-MM-DD: analyze a specific date range

Load the active investigation context before executing:
uv run python tools/investigation_context.py show
This provides: primary_subject, key_persons, threads, corpus_tools, key_dates, known_addresses. Use these values instead of hardcoded names throughout this skill.
WORKDIR=$(mktemp -d /tmp/osint-XXXXXXXX)
echo "Session workdir: $WORKDIR"
uv run python -c "
from tools.analysis_export import start_analysis_run
run_id = start_analysis_run('timeline-analysis')
print(f'Analysis run #{run_id}')
"
uv run python tools/analysis_export.py timeline-export --output $WORKDIR/timeline.json
uv run python tools/analysis_export.py findings-dump --output $WORKDIR/findings.json
uv run python tools/event_timeline.py list --limit 200 -v --output $WORKDIR/events.json
Many findings have NULL date_of_event but mention dates in their summary/detail text. Scan findings for date patterns:
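A minimal sketch of the scan (the regex covers ISO dates and "Month DD, YYYY" forms; the `summary` field name and sample rows below are assumptions, not the actual findings schema):

```python
import re

# ISO dates plus "Month DD, YYYY"; extend the pattern as needed
DATE_RE = re.compile(
    r"\b(\d{4}-\d{2}-\d{2})\b"
    r"|\b((?:January|February|March|April|May|June|July|August|"
    r"September|October|November|December) \d{1,2}, \d{4})\b"
)

def extract_dates(text):
    """Return all date strings mentioned in a finding's text."""
    return [m.group(0) for m in DATE_RE.finditer(text or "")]

# Stand-ins for rows loaded from $WORKDIR/findings.json
findings = [
    {"id": 1, "date_of_event": None,
     "summary": "Shell company formed 2019-03-14, dissolved May 2, 2020"},
    {"id": 2, "date_of_event": "2021-06-01", "summary": "Wire transfer"},
]

for f in findings:
    if f["date_of_event"] is None:
        print(f["id"], extract_dates(f["summary"]))
# -> 1 ['2019-03-14', 'May 2, 2020']
```

Extracted strings still need normalization to YYYY-MM-DD before any backfill.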
For each finding with an extractable date, note it for analysis. Optionally backfill date_of_event:
uv run python -c "
import sqlite3
db = sqlite3.connect('investigation.db')
db.execute('UPDATE findings SET date_of_event = ? WHERE id = ?', ('YYYY-MM-DD', FINDING_ID))
db.commit()
"
Only backfill when extraction is high confidence (exact date mentioned, not inferred).
a) Activity bursts
Find clusters where 3+ findings occur within a 14-day window:
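One way to sketch the burst detection (a two-pointer scan over sorted finding dates; overlapping qualifying windows are merged into one cluster — this helper is an illustration, not part of the tooling):

```python
from datetime import date, timedelta

def activity_bursts(dates, min_count=3, window=timedelta(days=14)):
    """Find spans of <= window containing min_count+ finding dates,
    merged into non-overlapping (start, end, count) clusters."""
    dates = sorted(dates)
    clusters, i = [], 0
    for j, d in enumerate(dates):
        while d - dates[i] > window:  # shrink window from the left
            i += 1
        if j - i + 1 >= min_count:
            if clusters and dates[i] <= clusters[-1][1]:
                # overlaps the previous cluster: extend it
                s = clusters[-1][0]
                n = sum(1 for x in dates if s <= x <= d)
                clusters[-1] = (s, d, n)
            else:
                clusters.append((dates[i], d, j - i + 1))
    return clusters
```

Feed it the `date_of_event` values per target from the findings dump.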
b) Cross-reference with events
For each activity cluster, check the event timeline:
uv run python tools/event_timeline.py window --start YYYY-MM-DD --end YYYY-MM-DD
What external events coincide? Arrests, filings, media reports, elections?
c) Pre-event activity
Look for activity spikes in the 30 days BEFORE major events (arrests, lawsuits, media exposure). Use the investigation context (uv run python tools/investigation_context.py show --json) to identify the critical dates to check.

d) Silence periods
For active targets (10+ findings), find gaps of 30+ days with no findings. Compare against:
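The gap detection for silence periods can be sketched as follows (a minimal helper over a target's sorted finding dates; illustrative only):

```python
from datetime import date

def silence_periods(dates, min_gap_days=30):
    """Gaps of min_gap_days+ between consecutive finding dates,
    as (last_seen, next_seen, gap_days) tuples."""
    dates = sorted(dates)
    return [(a, b, (b - a).days)
            for a, b in zip(dates, dates[1:])
            if (b - a).days >= min_gap_days]
```

Each reported gap should then be compared against the external event timeline for that window.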
e) Coordinated action windows
Find weeks where 2+ unrelated targets show activity simultaneously:
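A sketch of the week-bucketing step (groups findings by ISO week and keeps weeks with 2+ distinct targets; filtering out known-related targets is a separate step and is not shown):

```python
from collections import defaultdict
from datetime import date

def coordinated_weeks(findings, min_targets=2):
    """findings: iterable of (target, date). Returns
    {(iso_year, iso_week): {targets}} for weeks where
    min_targets+ distinct targets were active."""
    by_week = defaultdict(set)
    for target, d in findings:
        year, week, _ = d.isocalendar()
        by_week[(year, week)].add(target)
    return {wk: targets for wk, targets in sorted(by_week.items())
            if len(targets) >= min_targets}
```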
Load key dates and time periods from the active investigation profile:
uv run python tools/investigation_context.py show --json
The profile's key_dates field contains investigatively significant periods and events. For each period, check for patterns including:
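A minimal sketch for pulling the findings inside each key period (assumes key_dates entries carry start/end dates and findings carry a parsed date_of_event; both field shapes are assumptions):

```python
from datetime import date

def findings_in_period(findings, start, end):
    """Findings whose event date falls inside [start, end], inclusive.
    Findings with no date are skipped, not matched."""
    return [f for f in findings
            if f.get("date_of_event") and start <= f["date_of_event"] <= end]
```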
For temporal patterns discovered:
uv run python tools/findings_tracker.py add \
--target "TARGET_NAME" \
--type financial \
--summary "TEMPORAL PATTERN" \
--detail "DETAIL with dates and cross-references" \
--confidence medium \
--claim-type synthesis \
--evidence "analysis-run-{RUN_ID}" \
--source-quote "timeline analysis: N events in WINDOW"
uv run python tools/tag_manager.py bulk-tag --table findings --ids ID1,ID2 \
--type temporal --value "PATTERN_NAME" --created-by "agent:timeline-analysis"
For unexplained timing correlations, record a hypothesis. Every hypothesis MUST include:
uv run python tools/hypothesis_tracker.py add \
--title "TEMPORAL HYPOTHESIS" \
--pattern-type temporal \
--description "PATTERN observed in WINDOW. Involves: TARGETS. Correlation with: EVENT. INNOCENT EXPLANATION: [best alternative]. FALSIFICATION: [what would disprove this]." \
--predicted-evidence "If coordinated, expect shared communication or intermediary" \
--search-plan "1. Check email corpus for TARGETS in WINDOW 2. Check entity formations 3. Cross-ref financial records" \
--originated-from "analysis:timeline-analysis"
Each hypothesis should spawn at least one Layer 1 research lead to test it:
uv run python tools/lead_tracker.py add \
--title "Test temporal hypothesis: [BRIEF DESCRIPTION]" \
--category connection \
--priority medium \
--source "agent:timeline-analysis" \
--description "Hypothesis: [DESCRIPTION]. Search plan: [PLAN]. Falsification: [CRITERION]."
If analysis reveals important external events not in the timeline, add them:
uv run python tools/event_timeline.py add \
--date YYYY-MM-DD \
--name "EVENT_NAME" \
--category legal \
--description "DESCRIPTION" \
--relevance "WHY IT MATTERS"
Write to $WORKDIR/report-timeline-analysis.md:
# Timeline Analysis Report — [DATE]
## Activity Clusters
[List clusters with dates, finding counts, coinciding events]
## Pre-Event Activity Patterns
[Activity spikes before major events]
## Silence Periods
[Gaps in activity for active targets]
## Coordinated Action Windows
[Multiple targets active in same window]
## Key Period Analysis
[Analysis of each significant time period]
## Temporal Hypotheses Generated
[List hypotheses with IDs]
## Events Added to Timeline
[New events discovered during analysis]
uv run python -c "
from tools.analysis_export import complete_analysis_run
complete_analysis_run(RUN_ID, findings_created=N, hypotheses_created=M,
leads_created=L, tags_created=T,
report_path='$WORKDIR/report-timeline-analysis.md')
"