Interactive skill verification — assess accuracy of parameters, citations, and methodology through structured expert review
This meta-skill guides domain experts through a structured verification of any skill in this repository. It produces a detailed verification report that records the expert's assessment of parameters, citations, and methodology — then submits the report to GitHub Discussions for community knowledge building.
Verification is the highest-impact contribution a domain expert can make. Every skill starts as `ai-generated` and needs human verification to progress to `community-reviewed` or `expert-verified`.
Activate when the user:
Before starting the verification process, you MUST:
For detailed methodology guidance, see skills/research-literacy/SKILL.md.
This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.
Before running this skill, verify:
1. **`gh` CLI is installed and authenticated** — run `gh auth status`. If not authenticated, tell the user:
   > To submit verification reports, you need the GitHub CLI. Install it from https://cli.github.com/ and run `gh auth login`. Alternatively, I can save the report as a local markdown file.
2. **The target skill exists** — the skill directory must exist under `skills/` in the repository.
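The prerequisite checks above can be sketched as a short pre-flight script. This is a minimal sketch, not part of the skill itself; `SKILL_NAME` is a placeholder for the skill actually under review:

```shell
# Hypothetical pre-flight sketch; SKILL_NAME is a placeholder for the real skill name.
SKILL_NAME="${SKILL_NAME:-example-skill}"

# Check that the GitHub CLI is present and authenticated.
if ! command -v gh >/dev/null 2>&1; then
  echo "gh CLI not found. Install it from https://cli.github.com/ (or fall back to a local report)" >&2
elif ! gh auth status >/dev/null 2>&1; then
  echo "gh is installed but not authenticated. Run: gh auth login" >&2
fi

# Check that the target skill directory exists.
if [ ! -d "skills/$SKILL_NAME" ]; then
  echo "Target skill directory skills/$SKILL_NAME not found" >&2
fi
```

If either check fails, surface the corresponding message to the user before continuing.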
At the start of the verification, you MUST create a task list (using TodoWrite or equivalent) with these items:
Mark each item as you complete it. Do NOT consider the verification complete until ALL items are checked, especially Phase 6 (GitHub submission).
Goal: Ensure the reviewer understands what the skill does before assessing it.
Read the target skill's SKILL.md in full.
Present a summary to the reviewer:
Ask the reviewer: "Does this summary match your understanding of the skill? Anything to clarify before we proceed?"
Wait for confirmation before moving to Phase 2.
Next → Phase 2: Experience Collection (4 remaining phases until GitHub submission)
Goal: Understand the reviewer's domain knowledge to calibrate the verification.
Present these questions:
Q1: How familiar are you with this domain? (1-5)
Q2: Have you used these methods in your research?
Q3: For the key parameters listed in Phase 1, what values do you typically use?
Q4: What pitfalls have you encountered that this skill should mention?
Next → Phase 3: Test Scenario Construction (3 remaining phases until GitHub submission)
Goal: Test the skill against a realistic scenario to evaluate its practical advice.
Ask the reviewer to describe a scenario:
"Describe a real or realistic dataset and research question where you would use the methods covered by this skill. Include: modality, sample size, conditions, and what you are trying to find."
Run the target skill against this scenario (simulate how the skill would respond to the described research question).
Present the skill's recommendations for this scenario.
Ask the reviewer to evaluate:
Next → Phase 4: Item-by-Item Assessment (2 remaining phases until GitHub submission)
Goal: Systematic parameter-by-parameter verification with structured scoring.
For each key claim in the skill (parameters, thresholds, citations, methodological recommendations), present a table row and ask the reviewer to assess:
Assessment format:
| # | Parameter | Skill Says | Citation | Your Verdict | Notes |
|---|---|---|---|---|---|
| 1 | [param name] | [value from skill] | [cited source] | ✅ / ⚠️ / ❌ / ❓ | [reviewer's explanation] |
| 2 | ... | ... | ... | ... | ... |
Verdict options:
- ✅ Correct
- ⚠️ Context-dependent
- ❌ Incorrect
- ❓ Cannot assess (outside the reviewer's expertise)
Process each parameter interactively — present 3-5 parameters at a time, get the reviewer's verdicts, then continue with the next batch.
After all parameters are assessed, collect overall ratings (1-5 stars each):
⚠️ CRITICAL: Do NOT stop here. Next → Phase 5: Apply Corrections, then Phase 6: Submit to GitHub Discussions. The verification is NOT complete without submission.
Goal: Update the skill based on verification findings.
If Phase 4 produced any ❌ (Incorrect) or ⚠️ (Context-dependent) verdicts:
List all corrections needed — Summarize what parameters, citations, or methodology need updating based on the reviewer's verdicts.
Apply corrections to the skill's SKILL.md:
Update the skill's `review_status` in the YAML frontmatter: set it to `expert-verified` or `community-reviewed` as appropriate for the reviewer's expertise (skills start as `ai-generated`).

Commit the changes with a descriptive message, e.g.: `fix: update [skill-name] parameters per expert verification`
Present the diff to the reviewer for confirmation.
If Phase 4 produced NO corrections needed (all ✅), skip to Phase 6 but still update review_status if appropriate.
⚠️ CRITICAL: You are NOT done. You MUST proceed to Phase 6 to submit the verification report to GitHub Discussions.
Goal: Generate a structured verification report and submit it to GitHub Discussions.
Generate the verification report using the format below.
Present the complete report to the reviewer and offer the options: submit as-is, edit before submitting, or save locally instead.
Wait for explicit confirmation before submitting.
Submit to GitHub Discussions in the "Verification" category.
Submission command:
```shell
gh api graphql -f query='
mutation {
  createDiscussion(input: {
    repositoryId: "REPO_ID",
    categoryId: "VERIFICATION_CATEGORY_ID",
    title: "Verification Report: SKILL_NAME",
    body: "REPORT_BODY_HERE"
  }) {
    discussion {
      url
    }
  }
}'
```
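Inlining REPORT_BODY_HERE directly into the query string breaks as soon as the report contains quotes, backslashes, or newlines. A safer sketch (same mutation, still with placeholder IDs) passes the values as GraphQL variables instead, reading the body from a file with `-F body=@report.md`:

```shell
# Sketch: pass report fields as GraphQL variables so quoting in the report
# body cannot break the mutation. REPO_ID / VERIFICATION_CATEGORY_ID are
# placeholders; report.md is an assumed filename for the generated report.
gh api graphql \
  -f query='
    mutation($repoId: ID!, $catId: ID!, $title: String!, $body: String!) {
      createDiscussion(input: {
        repositoryId: $repoId,
        categoryId: $catId,
        title: $title,
        body: $body
      }) {
        discussion { url }
      }
    }' \
  -f repoId="REPO_ID" \
  -f catId="VERIFICATION_CATEGORY_ID" \
  -f title="Verification Report: SKILL_NAME" \
  -F body=@report.md
```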
To get the required IDs:
```shell
gh api graphql -f query='
{
  repository(owner: "HaoxuanLiTHUAI", name: "awesome_cognitive_and_neuroscience_skills") {
    id
    discussionCategories(first: 10) {
      nodes {
        id
        name
      }
    }
  }
}'
```
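As a convenience, `gh`'s built-in `--jq` flag can extract just the IDs from the response. This sketch assumes the discussion category is literally named "Verification":

```shell
# Repository node ID only
gh api graphql --jq '.data.repository.id' -f query='
{
  repository(owner: "HaoxuanLiTHUAI", name: "awesome_cognitive_and_neuroscience_skills") { id }
}'

# ID of the category named "Verification" only
gh api graphql \
  --jq '.data.repository.discussionCategories.nodes[] | select(.name == "Verification") | .id' \
  -f query='
{
  repository(owner: "HaoxuanLiTHUAI", name: "awesome_cognitive_and_neuroscience_skills") {
    discussionCategories(first: 10) { nodes { id name } }
  }
}'
```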
After successful submission, display the Discussion URL to the reviewer.
If submission fails: save the report to `~/.cache/awesome-neuro-skills/pending-verifications/YYYY-MM-DD-skill-name.md` and provide manual submission instructions.
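The fallback save can be sketched as follows, assuming `REPORT_BODY` holds the generated report and `SKILL_NAME` the skill's name (both placeholders here); the directory and date format follow the path above:

```shell
# Placeholders; in practice these hold the real report text and skill name.
REPORT_BODY="${REPORT_BODY:-<verification report body>}"
SKILL_NAME="${SKILL_NAME:-skill-name}"

PENDING_DIR="$HOME/.cache/awesome-neuro-skills/pending-verifications"
mkdir -p "$PENDING_DIR"

# date +%F expands to YYYY-MM-DD
REPORT_PATH="$PENDING_DIR/$(date +%F)-$SKILL_NAME.md"
printf '%s\n' "$REPORT_BODY" > "$REPORT_PATH"
echo "Saved pending report to $REPORT_PATH"
```

After saving, tell the user where the file is and how to post it manually to the Verification category.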
## Verification Report: [skill-name]
### Reviewer Profile
- **Domain**: [e.g., "cognitive neuroscience"]
- **Experience**: [e.g., "5 years EEG research"]
- **Familiarity with this topic**: [1-5 from Q1]
- **Context**: [e.g., "currently running oddball paradigm study"]
### Verification Scenario
> [Description of the test scenario used in Phase 3]
### Skill Evaluation Against Scenario
> [Summary of how well the skill performed on the test scenario]
### Parameter Review
| # | Parameter | Skill Says | Citation | Verdict | Notes |
|---|-----------|-----------|----------|---------|-------|
| 1 | [param] | [value] | [citation] | ✅/⚠️/❌/❓ | [explanation] |
### Expert Insights
> [Reviewer's professional knowledge that supplements or corrects the skill — from Q3, Q4, and Phase 3 feedback]
### Overall Scores
| Dimension | Score |
|-----------|-------|
| Parameter accuracy | [1-5 stars] |
| Completeness | [1-5 stars] |
| Practical usefulness | [1-5 stars] |
| Pitfall awareness | [1-5 stars] |
### Suggested Improvements
- [Concrete suggestions for updates to the skill]
---
*Submitted via the `verify-skill` meta-skill.*
Flag this distinction clearly in the report.
Before considering this verification COMPLETE, you MUST confirm ALL of the following:
- `review_status` has been updated in the skill's YAML frontmatter
- The verification report has been submitted to GitHub Discussions (or saved locally if `gh` is unavailable)

If any item above is unchecked, GO BACK and complete it now. Do NOT end the conversation.