This skill should be used when the user asks to "review this patent", "check the claims", "patent draft review", "is this claim accurate", "what's missing from this patent", or when a user pastes a patent draft or provides a .docx file path with patent content. Use this skill whenever a scientist needs to review a patent draft written by a lawyer — to check technical accuracy, identify gaps in claim coverage, or propose revised or new claim language. This is a multi-phase interactive workflow: ingest → triage → section-by-section review → structured report. Do not use this skill for legal advice, filing assistance, or non-scientific patent tasks.
Guide a scientist through structured review of a lawyer-drafted patent. The goal is to catch technical errors, identify coverage gaps, and produce better claims — not to give legal advice. The scientist is the domain expert; this workflow provides the structure.
Work through five phases in order. Surface one issue at a time, get a decision, log it, move on.
Ask the user to paste the patent text or provide a .docx file path.
For a .docx file: extract the text using markitdown (preferred) or python-docx as a fallback; if neither is available, ask the user to paste the text manually. Once the text is in hand, identify and label these sections (use only what's present; not all sections appear in every patent):
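The extraction fallback chain above can be sketched in Python. This is a minimal illustration, not part of the skill itself; the function name is made up here, and it assumes markitdown's `MarkItDown.convert` and python-docx's `Document` APIs:

```python
def extract_patent_text(path):
    """Extract plain text from a .docx patent draft.

    Tries markitdown first, then python-docx as a fallback.
    Returns None on any failure, signalling that the user
    should be asked to paste the text manually instead.
    """
    try:
        from markitdown import MarkItDown  # preferred converter
        return MarkItDown().convert(path).text_content
    except Exception:
        pass  # markitdown missing, or conversion failed
    try:
        import docx  # python-docx fallback
        return "\n".join(p.text for p in docx.Document(path).paragraphs)
    except Exception:
        return None  # neither library worked: ask for a paste
```

A `None` return maps to the "ask the user to paste the text manually" branch of the workflow.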
Record the total claim count (independent + dependent separately).
Give the scientist a quick orientation before diving in:
Ask the scientist to confirm or correct before proceeding. A wrong domain inference can skew the entire review.
Work through the document section by section. For each issue:
(a) accept: use the suggested fix as-is
(b) modify: the scientist provides the corrected version
(c) dismiss: skip this issue (log as dismissed, with the reason if given)
Log every decision, including dismissals. Be explicit: "This is a critical issue because…" / "This is a minor issue — worth fixing but not legally significant."
Run in order. Don't conflate them — each has a distinct goal.
Accuracy pass: Go through each claim. Flag technical errors: wrong terminology, impossible limitations, overly broad scientific assertions unsupported by the description.
Gap pass: Ask: what aspects of this invention are NOT claimed? Think about alternative embodiments, different methods of making or using it, different materials, compositions, use cases. Prompt the scientist with questions if the space of variations is unclear.
Language pass: For each claim flagged in the accuracy or gap passes, propose specific revised claim language or draft new claims. Be concrete — provide actual claim text, not just "fix the wording."
Once the claims are settled, verify the abstract accurately reflects the agreed-upon scope. A patent's abstract should match its claims, not the other way around.
At the end of each sub-section (3.1, 3.2, 3.3 accuracy, 3.3 gap, 3.3 language, 3.4), output a checkpoint block:
--- CHECKPOINT START ---
[All logged findings so far, with labels, severities, and accepted/modified/dismissed status]
--- CHECKPOINT END ---
Then prompt: "Checkpoint above — copy this block if you want a record. Type continue to proceed."
If the user doesn't respond and the session ends, it is simply over. There is no auto-save; the final report is only written in Phase 4.
Before compiling, present this prompt:
"Ready to generate the report. Where should I save it? [default: ./] — or type literature to run a literature search first."
If the user answers with anything other than literature, Phase 5 is no longer available. If they type literature, run Phase 5 first, then compile.
Compile the report using the structure in references/report-template.md.
Filename: YYYY-MM-DD-<slug>-review.md
"Compositions and Methods for Treatment of Inflammatory Disease" →
2026-04-14-compositions-and-methods-for-treatment-of-inflammato-review.md
Save path behavior: use the user's specified path, or default to ./. If the path is unwritable, ask once for an alternative. If that also fails, print the full report as a fenced Markdown block so the user can copy it.
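The filename rule can be sketched in Python. The 52-character slug cutoff is inferred from the truncated example above ("…-inflammato"), not a stated spec, and the function name is illustrative:

```python
import re
from datetime import date

def report_filename(title, today=None):
    """Build the report filename YYYY-MM-DD-<slug>-review.md.

    The slug is the lowercased title with runs of non-alphanumeric
    characters collapsed to single hyphens, truncated to 52 chars
    (an assumption inferred from the worked example).
    """
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    slug = slug[:52].rstrip("-")  # avoid a trailing hyphen after truncation
    d = (today or date.today()).isoformat()
    return f"{d}-{slug}-review.md"
```

For the example title above, `report_filename(title, date(2026, 4, 14))` reproduces the filename shown.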
When to trigger: During Phase 3 (user can say "check the literature on X" at any point), or at
the Phase 4 entry prompt (user types literature). Once the Phase 4 prompt is answered with
anything else, Phase 5 is no longer available.
On trigger, ask two quick questions:
Run the search, summarize relevant hits, and for each result note whether it materially affects a specific claim under review. Append findings to the Literature section of the report.
This skill does NOT give legal advice, assist with filing, or handle non-scientific patent tasks.
references/report-template.md — exact Markdown template for the Phase 4 report, including the claims analysis table format and section structure
evals/evals.json — benchmark test cases (DMD gene therapy, KRAS inhibitor, SCD base editing) for evaluating skill behavior across domains