Assess content for manipulation tactics, source credibility, and bias before presenting to the user. Use when evaluating news, research findings, social media posts, or any content that will inform the user's decisions.
Critical analysis layer for content you gather or receive. Before presenting research, news, or external content to the user, run it through this assessment. The goal is not paranoia — it is calibrated skepticism.
Scan content for these tactics. Any single one is a yellow flag; three or more in the same piece is a red flag.
| Pattern | What It Looks Like | Example |
|---|---|---|
| Manufactured urgency | "Act now!", artificial deadlines, countdown pressure | "You have 24 hours before this opportunity disappears" |
| False authority | Unnamed experts, fake credentials, appeal to vague institutions | "Top scientists agree...", "Studies show..." (no citation) |
| Social proof manipulation | Fake consensus, bandwagon pressure, inflated numbers | "Everyone is switching to...", "Millions already know..." |
| FUD (Fear, Uncertainty, Doubt) | Catastrophizing, worst-case framing, vague threats | "If you don't X, you could lose everything" |
| Grandiosity | Superlatives, revolutionary claims, zero nuance | "The most important breakthrough in history" |
| Us-vs-them framing | Enemy construction, tribal division, loyalty tests | "Real patriots know...", "They don't want you to see this" |
| Emotional hijacking | Guilt, shame, fear, outrage as primary appeal | Leading with shocking images/stories, no substance behind them |
| Missing attribution | Claims without sources, "people are saying", circular citations | Statements presented as fact with no origin |
| Loaded language | Emotionally charged words where neutral ones would work | "Scheme" instead of "plan", "regime" instead of "government" |
| False equivalence | Framing fringe positions as equal to mainstream consensus | "Some say the earth is round, others disagree" |
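The scan-and-count step above can be sketched as a crude keyword heuristic. This is illustrative only: the cue phrases and names below are hypothetical, and real assessment means reading the content in context, not grepping for strings.

```python
import re

# Illustrative cue phrases for a few patterns from the table above.
# These are assumptions for the sketch, not an exhaustive detector.
PATTERN_CUES = {
    "manufactured urgency": [r"act now", r"\b24 hours\b", r"before it'?s too late"],
    "false authority": [r"experts agree", r"studies show"],
    "social proof manipulation": [r"everyone is", r"millions already"],
}

def count_yellow_flags(text: str) -> int:
    """Count distinct manipulation patterns whose cues appear in the text."""
    lower = text.lower()
    return sum(
        any(re.search(cue, lower) for cue in cues)
        for cues in PATTERN_CUES.values()
    )
```

Three or more distinct patterns in one piece crosses the red-flag threshold described above.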
For every source, quickly assess:
| Factor | Strong | Weak |
|---|---|---|
| Who wrote it? | Named author with verifiable expertise | Anonymous, no byline, "staff writer" |
| Who published it? | Established outlet, .edu, .gov, known organization | Unknown domain, no about page, recently created |
| When? | Dated, recent for time-sensitive topics | Undated, or old content presented as new |
| Citations? | Links to primary sources, data, studies | No references, circular links, "studies show" |
| Tone? | Measured, acknowledges complexity and counterarguments | Absolutist, emotionally charged, no nuance |
| Corrections? | Has a corrections policy, updates errors | Never corrects, deletes instead of updating |
After scanning, assign one of three levels:
| Level | Meaning | Action |
|---|---|---|
| Clean | No manipulation patterns, credible source, well-cited | Present to user normally, cite the source |
| Flagged | 1-2 yellow flags or weak sourcing | Present with a note: "This source has [specific concern]. Cross-referenced with [other source]." |
| Suspect | 3+ manipulation patterns or unverifiable claims | Do not present as reliable. Either find a better source for the same information, or tell the user: "Found claims about X but the source uses [specific tactics] — take with skepticism." |
This skill integrates into the research and web-research workflows at Step 3 (Evaluate Sources).
When gathering news for proactive delivery:
When the user shares a link or content and asks you to evaluate it:

```
Source: [name/domain]
Credibility: [assessment]
Manipulation patterns: [list or "none detected"]
Key claims verified: [which claims checked out, which didn't]
Verdict: [Clean / Take with caution / Unreliable — with specific reasons]
```
Do not just say "looks fine" or "seems fake." Be specific about what you found and why.