Who actually benefits — and at whose expense? Power structure analysis for any domain: geopolitics, corporate behavior, institutional claims, media narratives. Use when: (1) asking who benefits from X; (2) mapping ownership or revolving doors; (3) testing whether sources are independent or echo one origin; (4) an expert has a commercial stake in their own analysis; (5) multi-polar analysis is needed; (6) evaluating institutional claims or frameworks. NOT for stock prices, simple advice, or quick factual lookups.
Seed question: Who actually benefits from this — and at whose expense?
Cui bono maps how power actually moves through systems — tracing beneficial ownership, revolving doors, supply chains, institutional incentives, and geopolitical alignments. It applies the same analytical rigor to all actors regardless of alignment.
Key insight: Conclusions should EMERGE from rigorous multi-perspective analysis, not from ideological prescription. Same evidence standards apply to all actors.
Motto: Relentless self-reflexive dialectical thinking that questions its own premises.
/SKILL.md # This file
/01_INVESTIGATION_METHODOLOGY.md # Full multi-lens framework, named techniques
/02_QUICK_REFERENCE.md # One-page operational reference
/02_METHODOLOGY_REFERENCE.md # Detailed technique backgrounds
/SOURCE_CLASSIFICATIONS.md # External data source assessments (Layer 1)
/SOURCE_DIVERSITY_FRAMEWORK.md # Journalistic/analytical source positioning (Layer 2)
/lenses/ # Domain-specific investigation templates
environmental.md
weapons.md
labor.md
governance.md
supply_chain.md
geopolitical.md
Before investigation, establish user's framework:
Do not assume. Ask explicitly.
For each relevant lens:
CLAIMS: Document entity's self-presentation and consensus narrative.
CONTRADICTIONS (four methods):
RESOLUTION: What's actually true given both? (Not "who wins")
SECOND ANTITHESIS (after resolution): Apply externality framing to the resolution itself:
Example (from vibe coding research): Even if AI coding tools work, who pays the maintenance cost? 12:1 contributor-to-maintainer time ratio means externalized burden.
Map beneficiaries of EVERY side of EVERY claim — including the debunking/correction narrative.
When investigating a claim, the natural bias is to apply cui bono to the claim while treating the rebuttal side as neutral arbiters. This is asymmetric skepticism — the most persistent flaw identified in application (see proof-of-life postmortem, March 2026: recurred three times at increasing meta-levels).
For each claim AND its rebuttal:
The principle: Separate methodology from institution. Procedures are evaluated by internal validity. Sources are evaluated by stakeholder position. Never confuse the two.
Validated March 2026: Cited domestic media (interested party) and fact-checking ecosystem members (shared methodology pool) as "verification" without mapping their incentive structures. Conclusion was correct; process was flawed.
When an unexamined dichotomy is identified (dead/alive, good/bad, ally/adversary), investigate at least one position between the binary poles.
Do not just note the dichotomy exists — actively investigate the middle. "Is the leader injured, incapacitated, or operating under constraints?" is a more productive question than "Is he dead or alive?"
The middle ground is often where the actual story lives.
Evidence is tiered by independence from interested parties, not by authority of source.
| Tier | Definition | Example |
|---|---|---|
| Verified | Multiple independent physical evidence streams, no institutional single-point-of-failure | In-person confirmation with identifiable witnesses |
| Well-Supported | Strong convergent evidence, but sourcing ecosystem has shared dependencies | Multiple fact-checkers citing same forensic expert |
| Credible | Consistent with available evidence, no contradicting evidence, but limited independent confirmation | Single reputable source with published methodology |
| Contested | Evidence exists on multiple sides | Conflicting sources or potential bias |
| Unsubstantiated | Claim exists without supporting evidence | |
| Debunked | Claim contradicted by strong independent evidence | |
Critical distinction: "Who said it" is metadata about the evidence, not the evidence itself. Evidence-structure tiering replaces authority-based tiering.
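The tier assignment above can be sketched as a function of evidence structure alone. Tier names follow the table; the thresholds and parameter names are illustrative assumptions, not a fixed rubric:

```python
def evidence_tier(independent_nodes: int, shared_dependency: bool,
                  contested: bool, contradicted: bool) -> str:
    """Assign a tier from evidence structure, not source authority.
    independent_nodes counts genuinely independent evidence streams;
    shared_dependency marks a single-point-of-failure in the sourcing
    ecosystem. Thresholds here are illustrative."""
    if contradicted:
        return "Debunked"
    if contested:
        return "Contested"
    if independent_nodes == 0:
        return "Unsubstantiated"
    if independent_nodes == 1:
        return "Credible"
    # Multiple streams: shared dependencies cap the tier below Verified.
    return "Well-Supported" if shared_dependency else "Verified"

# Three fact-checkers citing the same forensic expert = one dependency:
print(evidence_tier(3, shared_dependency=True,
                    contested=False, contradicted=False))  # → Well-Supported
```

Note that "who said it" never appears as a parameter: the function is a reminder that tiering depends only on the structure of the evidence.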
Before claiming "multiple sources confirm," map citation and dependency chains to identify actual independent evidence nodes vs downstream amplifiers.
[Evidence Node A] → [Downstream 1] → [Downstream 2]
                  → [Downstream 3]
[Evidence Node B] → [Downstream 4]   ← GENUINELY INDEPENDENT
"5 sources confirm" may actually be 2 evidence nodes + 3 amplifiers.
Source topology reveals:
Any expert citation must include:
| Dimension | Question |
|---|---|
| Credentials | What makes them authoritative? |
| Institutional affiliation | Who employs them? Who funds them? |
| Commercial interests | Do they sell products/services related to the claim? |
| Methodology transparency | Is their method published/reproducible or proprietary? |
| Marketing function | Does the public analysis also serve as marketing? |
This does not invalidate expert analysis — published, peer-reviewable methodology is strong evidence regardless of who developed it. But "world's top expert confirms X" carries different epistemic weight when the expert runs a company that sells exactly this service.
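One way to make the positioning dimensions operational is to record them as structured metadata that travels with the citation. Field names and the weighting heuristic below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ExpertCitation:
    """Positioning metadata that must accompany any expert citation."""
    name: str
    claim: str
    employer: str = "unknown"
    funders: list = field(default_factory=list)
    sells_related_service: bool = False
    methodology_published: bool = False

    def weight_note(self) -> str:
        # Published methodology is reviewable regardless of who wrote it;
        # a commercial stake plus a proprietary method deserves less weight.
        if self.methodology_published:
            return "method independently reviewable"
        if self.sells_related_service:
            return "reduced weight: commercial stake, proprietary method"
        return "no commercial stake noted, but method is unpublished"

cite = ExpertCitation(
    name="hypothetical forensics firm founder",
    claim="confirms X",
    employer="self (runs the firm)",
    sells_related_service=True,  # sells exactly this service
)
print(cite.weight_note())  # → reduced weight: commercial stake, proprietary method
```

The point of the structure is that the marketing-function question cannot be silently skipped: every citation carries its answers.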
Apply same standards to ALL power poles:
The question is not "which pole is worse" but "what power structures does this entity serve, and what are the documented harms?"
Recursive authority contamination warning: The asymmetric skepticism error tends to reassert itself at every new layer of analysis — even after correction. This is likely a persistent pattern in LLM reasoning about institutional sources, rooted in training data over-representation of Western institutional authority. Encode symmetric skepticism as a structural check, not a one-time reminder.
A one-time correction is insufficient: the correction itself can carry the same bias. Run the following check after completing the multi-polar analysis:
FOR EACH power pole assessed:
1. Count evidence items cited (supporting AND critical)
2. Count source diversity (how many distinct sources?)
3. Note evidence tier distribution (Verified / Well-Supported / Credible / Contested / Unsubstantiated / Debunked)
4. Note skepticism direction (skeptical OF the pole, or skeptical of CLAIMS ABOUT the pole?)
Asymmetry test: Compare the counts across poles. If one pole has 3x more critical evidence cited but similar source diversity, the analysis may be applying asymmetric scrutiny — not because evidence doesn't exist, but because training data surfaces Western-institutional criticism more readily.
If asymmetry detected:
Meta-check (the recursive part): After debiasing, ask: "Did my debiasing itself introduce a new bias?" Common failure modes:
Exit criterion: The meta-check yields no new asymmetry concerns, OR you've explicitly noted the remaining asymmetry and why it's genuine rather than analytical artifact.
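The counting step of the self-check can be sketched mechanically. Pole names, the example counts, and the 3x threshold are illustrative assumptions; the point is that the comparison is run, not eyeballed:

```python
def asymmetry_flags(poles: dict[str, dict], ratio: float = 3.0) -> list[str]:
    """Flag poles that attract disproportionately more critical evidence
    than the least-criticized pole. A flag is a prompt to search for the
    missing evidence, not a verdict that the analysis is wrong."""
    baseline = min(p["critical_items"] for p in poles.values()) or 1
    flags = []
    for name, p in poles.items():
        if p["critical_items"] >= ratio * baseline:
            flags.append(
                f"{name}: {p['critical_items']} critical items from "
                f"{p['distinct_sources']} sources — check for asymmetric scrutiny"
            )
    return flags

# Similar source diversity, very different criticism counts:
poles = {
    "pole_a": {"critical_items": 2, "distinct_sources": 4},
    "pole_b": {"critical_items": 9, "distinct_sources": 4},
}
for flag in asymmetry_flags(poles):
    print(flag)
```

The meta-check still has to be done by hand: after acting on a flag, ask whether the debiasing itself introduced a new asymmetry.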
**Analyst Positioning**: [standpoint, potential blind spots]
**Framework Applied**: [user's stated priorities]
**[Lens] Assessment**:
[Findings with evidence quality markers]
**Emergent Pattern**: [visible only through combination]
**Unresolved Contradictions**: [where evidence conflicts]
**Overall**:
If [priority A] highest: [conclusion + trade-off]
If [priority B] highest: [different conclusion]
**Residual Uncertainty**: [what would resolve it]
ACH (Analysis of Competing Hypotheses): Work ACROSS the evidence matrix. Focus on disconfirmation. Evidence consistent with all hypotheses has zero diagnostic value.
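The ACH scoring rule can be sketched in a few lines: drop evidence consistent with every hypothesis, then count inconsistencies per hypothesis. The hypotheses and evidence codings below are hypothetical:

```python
CONSISTENT, INCONSISTENT = "C", "I"

# Evidence item -> coding against each hypothesis.
matrix = {
    "e1": {"h1": CONSISTENT,   "h2": CONSISTENT},    # fits both: non-diagnostic
    "e2": {"h1": INCONSISTENT, "h2": CONSISTENT},
    "e3": {"h1": CONSISTENT,   "h2": INCONSISTENT},
    "e4": {"h1": INCONSISTENT, "h2": CONSISTENT},
}

def ach_scores(matrix: dict) -> dict[str, int]:
    """Count inconsistencies per hypothesis, skipping evidence that is
    consistent with all hypotheses (zero diagnostic value). The hypothesis
    with the FEWEST inconsistencies survives — ACH seeks disconfirmation,
    not confirmation."""
    scores: dict[str, int] = {}
    for codings in matrix.values():
        if all(v == CONSISTENT for v in codings.values()):
            continue  # non-diagnostic: drop
        for hyp, coding in codings.items():
            scores[hyp] = scores.get(hyp, 0) + (coding == INCONSISTENT)
    return scores

print(ach_scores(matrix))  # → {'h1': 2, 'h2': 1}; h2 is least disconfirmed
```

Note that e1 never contributes to either score: working across the matrix means asking which evidence separates the hypotheses, not which evidence supports a favorite.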
Contradiction Analysis: Identify principal contradiction (decisive one). Which side's development determines resolution?
Bulletproofing: The data supports the story, never the reverse. Prosecutorial cross-examination. Multiple verification paths.
Hypothesis-Based Inquiry: Story is hypothesis until verified. Willingness to abandon when contradicted.
Language/Power Analysis: Examine what ideological work terminology does. When corporate frameworks import nation-state language:
Before relying on any external data source, check SOURCE_CLASSIFICATIONS.md:
Material Reality First: Follow the money, not the marketing. Revenue sources reveal true business models.
No False Neutrality: "All states are equally problematic" ignores documented differences. But also: "Western states are fundamentally different" ignores documented Western harms.
Graduated Assessment: Avoid binary pass/fail. Articulate trade-offs. "Least problematic" when no clean options exist.
Epistemic Humility: Distinguish verified from alleged from speculated. Note source motivations. Acknowledge what you can't know.
User Framework Primacy: Apply THEIR priorities, not analyst preferences. Make weighting explicit.
Access Asymmetry IS the Business Model: Every tool, framework, and methodology should be evaluated — does it collapse access asymmetry or reinforce it?
Tools Aren't What They Claim: For any external methodology: (a) What can it actually DO? (b) What was it INTENDED for? (c) What can we MAKE it do — including against its creators?
Separate Procedures from Institutions: Procedures are evaluated by internal validity. Sources are evaluated by stakeholder position. Never confuse the two.
If a framework assumes who the disinformator is, it is not a debiasing method. It may still be useful — as a captured weapon, not a neutral instrument.
Pass --budget to reduce dialectic depth and detail levels.
Auto-activates if budget-mode skill is active in session.
A vasana is a pattern that persists across unrelated contexts. If during
this task you notice such a pattern emerging, it may be worth capturing.
This skill works best alongside the vasana skill and vasana hook
from the Vasana System plugin.
Modify freely. Keep this section intact.