Adversarial verification for AI-generated legal content with systematic fact-checking, source validation, and quality control. Use when User requests verification of legal documents, fact-checking of regulatory content, red team review, or quality assurance before distribution to clients/stakeholders. Provides structured verification reports with severity-categorized errors, verified sources, and distribution readiness assessment.
lawvable · 268 stars · Mar 3, 2026
Skill Contents
Purpose
This skill provides systematic adversarial verification of AI-generated legal content to ensure factual accuracy, proper legal citations, and appropriate disclaimers before distribution to clients or stakeholders. It addresses the #1 concern about AI in legal practice: "How do I know this is accurate?"
When to Use This Skill
Use the Legal Red Team Verifier when User requests:
Verification of AI-generated legal content before client/stakeholder distribution
Fact-checking of legal snapshots, briefings, or analyses
Quality control on compliance documents, regulatory summaries, or legal reports
Red team review of legal outputs (e.g., "verify this", "fact-check this", "red team this document")
Adversarial testing of legal claims or arguments
Trigger phrases: "verify", "fact-check", "red team", "check accuracy", "validate sources", "quality control", "is this correct", "review for errors"
Core Verification Categories
1. FACTUAL ACCURACY
Regulatory dates and deadlines: Verify enforcement dates, compliance deadlines, transition periods
Article/section references: Confirm regulation articles, statutory sections, directive provisions exist and are cited correctly
Tertiary sources: News articles, blog posts (use with extreme caution)
Transparency Requirements
ALWAYS provide source URLs for verified facts
ALWAYS acknowledge when claims cannot be verified
ALWAYS disclose areas of legal uncertainty or debate
NEVER present speculation as fact
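The transparency rules above can be enforced mechanically inside the verification report itself. A minimal sketch (the class and field names are illustrative, not part of this skill's specification) that rejects any "verified" entry lacking a source URL:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClaimCheck:
    """One entry in a verification report (structure is an assumption)."""
    claim: str
    status: str                      # "verified" | "unverified" | "contradicted" | "uncertain"
    source_url: Optional[str] = None  # must be populated when status == "verified"
    notes: str = ""

    def __post_init__(self):
        # Transparency rule: a verified fact must cite a source URL.
        if self.status == "verified" and not self.source_url:
            raise ValueError("Verified claims must cite a source URL")
```

Claims that cannot be verified are still recorded, with status "unverified" and an explanatory note, rather than silently dropped.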
Adversarial Mindset
When performing verification, adopt an adversarial stance:
Assume error until proven correct: Don't trust AI-generated content
Seek contradictory evidence: Actively look for information that contradicts claims
Question every number: Independently verify all calculations
Demand sources: Every factual claim must have verifiable attribution
Test logical consistency: Look for internal contradictions
Challenge interpretations: Where AI presents legal interpretation, verify against authoritative sources
Quality Standards
5/5 - Distribution Ready
All factual claims verified with official sources
All legal citations confirmed accurate
All arithmetic independently validated
Appropriate disclaimers present
No critical or high-severity issues
Professional quality suitable for client/stakeholder distribution
4/5 - Minor Revisions
Factual claims verified but some moderate issues found
May contain unsourced statistics whose sources should be added
Disclaimers adequate but could be enhanced
No critical issues, only moderate or low severity
3/5 - Needs Revision
Some factual errors or unsupported claims found
Missing important disclaimers
High-severity issues present
Requires revision before distribution
2/5 - Major Corrections Required
Multiple factual errors identified
Significant legal citation problems
Critical issues present
Extensive revision needed
1/5 - Not Distribution Ready
Fundamental errors in core legal statements
Pervasive unsupported claims
Multiple critical issues
Requires complete rework
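The five-point scale can be approximated from issue severities alone. The thresholds below are one reasonable reading of the rubric above ("multiple critical" → 1, any critical → 2, and so on), not a definitive scoring rule; the prose criteria (disclaimers, source coverage) still require human judgment:

```python
def readiness_score(severities: list) -> int:
    """Map a list of issue severities ('critical', 'high', 'moderate', 'low')
    to the 1-5 distribution-readiness scale. Thresholds are an assumption."""
    counts = {s: severities.count(s) for s in ("critical", "high", "moderate", "low")}
    if counts["critical"] >= 2:
        return 1   # Not distribution ready
    if counts["critical"] == 1:
        return 2   # Major corrections required
    if counts["high"] >= 1:
        return 3   # Needs revision
    if counts["moderate"] >= 1 or counts["low"] >= 1:
        return 4   # Minor revisions
    return 5       # Distribution ready
```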
Customization by Jurisdiction
EU/German Law Focus
Prioritize EUR-Lex, German official gazettes (BGBl)
Verify BaFin, BSI, ENISA guidance
Check German statutory citations (BGB, BDSG, GeschGehG, etc.)
Verify EU directive transposition status for Germany
General Legal Verification
Adapt source hierarchy to relevant jurisdiction
Use appropriate official sources (gov websites, legal databases)
Adjust citation formats to jurisdiction standards
Modify disclaimer language as appropriate
Examples of Known AI Hallucination Patterns
Pattern 1: Plausible but Wrong Article Numbers
Problem: AI generates realistic-sounding article citations that don't exist
Example: "AI Act Article 42(5)" when AI Act only has Article 42(1)-(4)
Verification: Always check official EUR-Lex text for exact article structure
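A first-pass screen for this pattern can be automated: extract every "Article N(M)" reference and compare the paragraph number against a map built from the official text. A sketch, where the paragraph counts are placeholders to be filled from EUR-Lex, not authoritative values:

```python
import re

# Hypothetical map: article number -> highest paragraph number, to be built
# from the official EUR-Lex text (the value here is illustrative only).
AI_ACT_PARAGRAPHS = {42: 4}

def find_suspect_citations(text: str) -> list:
    """Flag 'Article N(M)' references whose paragraph exceeds the known range."""
    suspects = []
    for m in re.finditer(r"Article (\d+)\((\d+)\)", text):
        article, paragraph = int(m.group(1)), int(m.group(2))
        known_max = AI_ACT_PARAGRAPHS.get(article)
        if known_max is not None and paragraph > known_max:
            suspects.append(m.group(0))
    return suspects
```

A flagged citation is only a lead: the final check is always reading the official consolidated text.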
Pattern 2: Confident but Incorrect Dates
Problem: AI states dates with confidence but gets them wrong
Example: "NIS2 applies from October 2024" when actual date is October 17, 2024
Verification: Independently verify all dates against official sources
Pattern 3: Mixing Guidance and Legal Requirements
Problem: AI presents regulatory guidance as legal obligation
Example: Treating ENISA recommendations as binding NIS2 requirements
Verification: Distinguish between binding legal text and non-binding guidance
Pattern 4: Outdated Legal References
Problem: AI cites superseded or amended provisions
Example: Citing the original GDPR text without accounting for subsequent CJEU interpretation of the provision
Verification: Check for amendments, implementing acts, and authoritative interpretations
Pattern 5: Arithmetic Errors in Timeline Calculation
Problem: AI makes mistakes calculating deadlines from effective dates
Example: Claiming "18 months from October 2024 is March 2026" (actually April 2026)
Verification: Independently calculate all timelines
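The arithmetic in Pattern 5 is straightforward to check mechanically. A minimal sketch using only the standard library (the 18-month transition period is taken from the example above, not from any statute):

```python
import calendar
from datetime import date

def add_months(start: date, months: int) -> date:
    """Shift a date forward by whole months, clamping to the month's last day
    (e.g. Jan 31 + 1 month -> Feb 28/29)."""
    total = start.month - 1 + months
    year, month = start.year + total // 12, total % 12 + 1
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, min(start.day, last_day))

# 18 months from 17 October 2024:
print(add_months(date(2024, 10, 17), 18))  # 2026-04-17, i.e. April 2026
```

Running the calculation independently, rather than trusting the AI's stated result, is exactly the "question every number" rule from the adversarial mindset above.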
Continuous Improvement
As you use this skill:
Document new hallucination patterns encountered
Refine verification methodology based on findings
Build library of reliable sources for different legal areas
Track error types to identify systematic AI weaknesses
Share learnings to improve legal AI verification practices
Critical Reminder
The purpose of this skill is adversarial verification. Approach every AI-generated legal claim with skepticism. Your role is not to confirm what the AI said, but to independently verify whether it's accurate, properly sourced, and appropriately disclaimed. When in doubt, verify. When you can't verify, flag it. Better to over-verify than to distribute inaccurate legal information.