Court case deep analysis — read full opinions, extract parties/allegations/amounts, trace related litigation
TIER 1: DEPTH ANALYSIS — This skill reads full court opinions and extracts structured intelligence from complex litigation. LLMs can process 10-50KB opinion texts, identify every named party and corporate entity, extract specific factual allegations and monetary figures, and cross-reference everything against the investigation database. Record every factual discovery separately. Do not assess the legal merits — extract facts, parties, and money.
/analyze-case "Palantir Technologies" — search CourtListener for cases by party name
/analyze-case --docket-id 67890123 — analyze a specific docket
/analyze-case --court nysd — filter by court (e.g., scotus, ca2, nysd, cacd)
uv run python tools/investigation_context.py show
WORKDIR=$(mktemp -d /tmp/osint-XXXXXXXX)
echo "Session workdir: $WORKDIR"
# Search by party name (uses search API field operators — no 403)
uv run python tools/query_courtlistener.py party "<PARTY_NAME>" --court <COURT> --output $WORKDIR/cl-party.json
# Search RECAP dockets
uv run python tools/query_courtlistener.py cases "<PARTY_NAME>" --court <COURT> --output $WORKDIR/cl-cases.json
# Or get specific docket by ID
uv run python tools/query_courtlistener.py docket <DOCKET_ID> --output $WORKDIR/cl-docket.json
# Search with field operators (combine multiple)
uv run python tools/query_courtlistener.py search --party "<PARTY>" --court <COURT> --output $WORKDIR/cl-search.json
uv run python tools/query_courtlistener.py search --attorney "<ATTORNEY>" --output $WORKDIR/cl-attorney.json
uv run python tools/query_courtlistener.py search --firm "<FIRM>" --output $WORKDIR/cl-firm.json
# Search opinions
uv run python tools/query_courtlistener.py search "<QUERY>" --type o --output $WORKDIR/cl-opinions.json
uv run python tools/query_courtlistener.py search "<QUERY>" --type o --semantic --output $WORKDIR/cl-semantic.json
Review the results and decide which cases merit full-opinion analysis.
Prioritize cases that are: (a) filed in the investigation's time window, (b) involve investigation-linked parties, (c) allege fraud, corruption, or regulatory violations.
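This triage can be scripted as a first pass. A minimal sketch, assuming the saved JSON carries a `results` list with `caseName` and `dateFiled` fields (CourtListener search responses use these names, but verify against your tool's actual output); the window, party list, and keywords below are placeholders:

```python
import json
from datetime import date

# Placeholders -- substitute the investigation's window, parties, and terms
WINDOW = (date(2018, 1, 1), date(2023, 12, 31))
LINKED_PARTIES = {"palantir", "acme holdings"}
RISK_TERMS = ("fraud", "corruption", "racketeer", "securities")

def triage(path):
    """Score each search result: +1 per prioritization criterion met."""
    with open(path) as f:
        results = json.load(f).get("results", [])
    for case in results:
        caption = case.get("caseName", "").lower()
        filed = case.get("dateFiled") or ""
        score = 0
        if filed and WINDOW[0] <= date.fromisoformat(filed[:10]) <= WINDOW[1]:
            score += 1  # (a) filed in the investigation window
        if any(p in caption for p in LINKED_PARTIES):
            score += 1  # (b) investigation-linked party in caption
        if any(t in caption for t in RISK_TERMS):
            score += 1  # (c) fraud/corruption signal in caption
        yield score, case.get("caseName", ""), filed
```

Sort descending by score and analyze the top cases first.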
Use the opinion command to fetch full opinion text directly from the API:
# Fetch by opinion/cluster ID (from search results or docket clusters field)
uv run python tools/query_courtlistener.py opinion <OPINION_ID> --lines 1000
If no opinion ID is available, search for opinions by case name:
uv run python tools/query_courtlistener.py search "<CASE_NAME>" --type o --limit 5
This is the core LLM advantage. A court opinion may be 20-50 pages (100K+ chars). Read it fully and extract every named party and corporate entity, each factual allegation, and every monetary figure.
RECAP documents contain filed court documents (memoranda, transcripts, exhibits, motions) — often richer than the opinion alone. Search for them:
# Search for RECAP documents related to this case
uv run python tools/query_courtlistener.py recap-search "<CASE_NAME> <KEY_TERMS>" --court <COURT> --limit 20
For each valuable document found (memoranda, government proffers, exhibit lists, sentencing memos):
# Download the PDF and extract text
uv run python tools/query_courtlistener.py download "<DOWNLOAD_URL>" $WORKDIR/doc-<NUM>.pdf --extract-text
Then read the extracted text file ($WORKDIR/doc-<NUM>.txt) for analysis. RECAP documents are free to download — the PDFs are hosted on storage.courtlistener.com.
Priority documents to look for: memoranda of law, government proffers, exhibit lists, and sentencing memoranda.
DB-first principle: Record findings from each document as you read it, not after reading all documents. If you run out of context mid-analysis, unrecorded observations are lost.
Read the opinion text systematically. Do not skim — process the entire document.
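A regex sweep can supplement (not replace) the full read by flagging every dollar figure and long-form date with surrounding context. A sketch; the patterns are illustrative, not exhaustive, and the LLM read remains the authoritative pass:

```python
import re

MONEY = re.compile(r"\$[\d,]+(?:\.\d+)?(?:\s*(?:million|billion))?", re.I)
DATE = re.compile(
    r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?"
    r"\s+\d{1,2},\s+\d{4}\b"
)

def sweep(opinion_text):
    """First-pass sweep of an opinion: every dollar figure and long-form
    date, each with ~60 chars of context for manual verification."""
    hits = []
    for pattern, kind in ((MONEY, "amount"), (DATE, "date")):
        for m in pattern.finditer(opinion_text):
            ctx = opinion_text[max(0, m.start() - 60):m.end() + 60]
            hits.append((kind, m.group(), ctx.strip()))
    return hits
```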
For each name, immediately check:
uv run python tools/entity_tracker.py lookup --name "<NAME>"
uv run python tools/findings_tracker.py search "<NAME>" --output $WORKDIR/xref-<slug>.json
Once you know the parties, search for their other litigation and trace the citation graph:
# Other cases by same defendant (field operator)
uv run python tools/query_courtlistener.py party "<DEFENDANT>" --output $WORKDIR/cl-related-def.json
# Other cases by same plaintiff
uv run python tools/query_courtlistener.py party "<PLAINTIFF>" --output $WORKDIR/cl-related-plt.json
# Citation graph — what does this opinion cite and what cites it?
uv run python tools/query_courtlistener.py citations <OPINION_ID> --output $WORKDIR/cl-citations.json
# Resolve a specific citation to a cluster ID
uv run python tools/query_courtlistener.py resolve-cite "<CITATION_TEXT>" --output $WORKDIR/cl-resolve.json
# Get opinion cluster detail (panel composition, citation count)
uv run python tools/query_courtlistener.py cluster <CLUSTER_ID> --output $WORKDIR/cl-cluster.json
# Semantic search for conceptually related opinions
uv run python tools/query_courtlistener.py search "<LEGAL_THEORY>" --type o --semantic --output $WORKDIR/cl-related-opinions.json
# FJC database — search federal case metadata by defendant
uv run python tools/query_courtlistener.py fjc --defendant "<DEFENDANT>" --output $WORKDIR/cl-fjc.json
Look for patterns: the same defendant or plaintiff recurring across dockets, shared attorneys or firms, and dense clusters in the citation graph.
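One pattern that scripts well is recurring parties across the related-case result files. A sketch, assuming each JSON has a `results` list with `caseName` captions in "A v. B" form:

```python
import json
import re
from collections import Counter

def co_parties(*result_files):
    """Count how often each party recurs across case captions; repeat
    players (shell entities, serial litigants) stand out immediately."""
    counts = Counter()
    for path in result_files:
        with open(path) as f:
            for case in json.load(f).get("results", []):
                caption = case.get("caseName", "")
                # Split "A v. B" captions into the two party sides
                for side in re.split(r"\s+v\.?\s+", caption, flags=re.I):
                    if side.strip():
                        counts[side.strip().lower()] += 1
    return counts.most_common()
```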
If the case involves a judge whose impartiality matters:
# Full career timeline (positions, education, political affiliations, appointer)
uv run python tools/query_courtlistener.py career "<JUDGE_NAME>" --output $WORKDIR/cl-career.json
# Check investments for conflicts with case parties (1.9M records searchable)
uv run python tools/query_courtlistener.py investments "<COMPANY_NAME>" --output $WORKDIR/cl-investments.json
# Check travel reimbursements (who paid for judge's travel?)
uv run python tools/query_courtlistener.py reimbursements "<SOURCE>" --output $WORKDIR/cl-reimb.json
# Financial disclosures by person ID
uv run python tools/query_courtlistener.py disclosures --person-id <JUDGE_ID> --output $WORKDIR/cl-disclosures.json
Investment conflict check: For each party in the case, search judge investments:
investments "<PARTY_COMPANY>" — does the judge hold stock in a party?
reimbursements "<PARTY>" — did a party pay for judge travel/speaking?
# Every party name → check entities and findings
uv run python tools/entity_tracker.py lookup --name "<PARTY>"
uv run python tools/findings_tracker.py search "<PARTY>" --output $WORKDIR/xref-<slug>.json
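Party names rarely match disclosure or entity records verbatim ("Palantir Technologies Inc." vs "Palantir"), so normalize before comparing. A sketch; the suffix list is illustrative:

```python
import re

# Corporate suffixes to strip before matching (illustrative, not complete)
SUFFIXES = re.compile(
    r"\b(?:inc|incorporated|corp|corporation|co|company|llc|llp|ltd|plc|"
    r"holdings|technologies)\b\.?", re.I
)

def norm(name):
    """Lowercase, strip corporate suffixes and punctuation."""
    n = SUFFIXES.sub(" ", name.lower())
    return re.sub(r"[^a-z0-9]+", " ", n).strip()

def conflict_overlaps(party, records):
    """Records whose normalized name equals or contains (or is contained
    by) the normalized party name."""
    p = norm(party)
    return [r for r in records
            if p and norm(r) and (norm(r) == p or p in norm(r) or norm(r) in p)]
```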
# Case timeline dates → compare against key_dates
uv run python -c "
import yaml
with open('investigations/<ACTIVE>/config.yaml') as f:
    dates = yaml.safe_load(f).get('key_dates', [])
for d in dates: print(d)
"
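When comparing case-timeline dates against key_dates, proximity matters more than exact equality. A sketch with a configurable window, assuming ISO-format date strings:

```python
from datetime import date

def near_key_dates(case_dates, key_dates, window_days=30):
    """Flag case-timeline dates that fall within `window_days` of an
    investigation key date -- proximity, not exact match, is the signal."""
    keys = [date.fromisoformat(str(d)[:10]) for d in key_dates]
    hits = []
    for cd in case_dates:
        c = date.fromisoformat(str(cd)[:10])
        for k in keys:
            delta = abs((c - k).days)
            if delta <= window_days:
                hits.append((cd, str(k), delta))
    return hits
```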
# Monetary figures → compare against known financial flows
uv run python -c "
import sqlite3
db = sqlite3.connect('investigation.db')
db.row_factory = sqlite3.Row
rows = db.execute(\"SELECT * FROM findings WHERE finding_type='financial' AND target_name LIKE ?\", ('%<PARTY>%',)).fetchall()
for r in rows: print(f'#{r[\"id\"]} {r[\"summary\"][:100]}')
"
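Monetary cross-references are rarely exact (fees, partial transfers, rounding), so compare with a fractional tolerance rather than equality. A sketch; the 5% default is an assumption to tune per investigation:

```python
def amount_matches(opinion_amount, known_amounts, tolerance=0.05):
    """Known financial-flow amounts within `tolerance` (fractional) of a
    figure extracted from the opinion -- 5% absorbs fees and rounding."""
    return [a for a in known_amounts
            if a and abs(a - opinion_amount) / max(a, opinion_amount) <= tolerance]
```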
One finding per factual discovery from the opinion:
# Direct quote from court opinion (highest confidence)
uv run python tools/findings_tracker.py add \
--target "<PARTY>" \
--summary "<Factual finding stated by the court>" \
--type legal \
--evidence "CourtListener:<DOCKET_ID>" \
--claim-type direct_quote \
--source-quote "CourtListener:<DOCKET_ID>:exact text from opinion" \
--sources courtlistener \
--confidence confirmed
# Paraphrased allegation
uv run python tools/findings_tracker.py add \
--target "<PARTY>" \
--summary "<Summary of allegation from complaint/opinion>" \
--type legal \
--evidence "CourtListener:<DOCKET_ID>" \
--claim-type paraphrase \
--source-quote "CourtListener:<DOCKET_ID>:relevant text" \
--sources courtlistener \
--confidence high
# Cross-case inference
uv run python tools/findings_tracker.py add \
--target "<PARTY>" \
--summary "<Observation from comparing related cases>" \
--type legal \
--evidence "CourtListener:<DOCKET_ID_1>;CourtListener:<DOCKET_ID_2>" \
--claim-type inference \
--source-quote "CourtListener:<ID>:relevant text" \
--sources courtlistener \
--confidence medium
Register discovered entities and relationships:
uv run python tools/entity_tracker.py add-entity --name "<ENTITY>" --entity-type <TYPE> --source "CourtListener:<DOCKET_ID>"
uv run python tools/findings_tracker.py connect --person-a "<PLAINTIFF>" --person-b "<DEFENDANT>" --type legal --strength strong --evidence "CourtListener:<DOCKET_ID>"
# New party discovered
uv run python tools/lead_tracker.py add \
--title "Investigate <PARTY> — named in <CASE_NAME>, alleged <ALLEGATION>" \
--category person --priority medium \
--target "<PARTY>" --source "agent:analyze-case"
# Related case worth analyzing
uv run python tools/lead_tracker.py add \
--title "Analyze case: <RELATED_CASE_NAME> — related to <ORIGINAL_CASE>" \
--category case --priority medium \
--target "<RELATED_CASE>" --source "agent:analyze-case"
# Referenced regulatory proceeding
uv run python tools/lead_tracker.py add \
--title "Investigate <AGENCY> proceeding against <PARTY> — referenced in court opinion" \
--category legal --priority medium \
--target "<PARTY>" --source "agent:analyze-case"
# Judge conflict of interest
uv run python tools/lead_tracker.py add \
--title "Judge <NAME> financial conflict — disclosed investments overlap with <PARTY>" \
--category person --priority high \
--target "<JUDGE_NAME>" --source "agent:analyze-case"
A human reading a court opinion focuses on the holding — did the plaintiff win? An investigation agent reads the factual background section, which is where the court lays out exactly what happened: who met whom, what was said, what money moved, what documents exist. Judges are required to be specific about facts. These factual findings are often the most reliable narrative of events available anywhere — more detailed than news reports, more specific than regulatory filings.
An LLM agent reads the entire opinion, extracts every named party and entity, maps every date and dollar amount, and cross-references everything against the investigation. This surfaces what a holding-focused read misses: previously unknown parties, corroborating dates and dollar amounts, and links to related litigation.