Citation Management workflow skill. Use this skill when the user needs to manage citations systematically throughout the research and writing process and the operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
This public intake copy packages plugins/antigravity-awesome-skills-claude/skills/citation-management from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.
Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.
This intake keeps the copied upstream files intact and uses EXTERNAL_SOURCE.json plus ORIGIN.md as the provenance anchor for review.
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Visual Enhancement with Scientific Schematics, Search Strategies, Tools and Scripts, Common Pitfalls to Avoid, Integration with Other Skills, Dependencies.
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
| Situation | Start here | Why it matters |
|---|---|---|
| First-time use | EXTERNAL_SOURCE.json | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | ORIGIN.md | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | SKILL.md | Starts with the smallest copied file that materially changes execution |
| Supporting context | SKILL.md | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | ## Related Skills | Helps the operator switch to a stronger native skill when the task drifts |
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
Citation management follows a systematic process:
Goal: Find relevant papers using academic search engines.
Google Scholar provides the most comprehensive coverage across disciplines.
Basic Search:
# Search for papers on a topic
python scripts/search_google_scholar.py "CRISPR gene editing" \
--limit 50 \
--output results.json
# Search with year filter
python scripts/search_google_scholar.py "machine learning protein folding" \
--year-start 2020 \
--year-end 2024 \
--limit 100 \
--output ml_proteins.json
Advanced Search Strategies (see references/google_scholar_search.md):
"deep learning"author:LeCunintitle:"neural networks"machine learning -surveyBest Practices:
PubMed specializes in biomedical and life sciences literature (35+ million citations).
Basic Search:
# Search PubMed
python scripts/search_pubmed.py "Alzheimer's disease treatment" \
--limit 100 \
--output alzheimers.json
# Search with MeSH terms and filters
python scripts/search_pubmed.py \
--query '"Alzheimer Disease"[MeSH] AND "Drug Therapy"[MeSH]' \
--date-start 2020 \
--date-end 2024 \
--publication-types "Clinical Trial,Review" \
--output alzheimers_trials.json
Advanced PubMed Queries (see references/pubmed_search.md):
"Diabetes Mellitus"[MeSH]"cancer"[Title], "Smith J"[Author]AND, OR, NOT2020:2024[Publication Date]"Review"[Publication Type]Best Practices:
Goal: Convert paper identifiers (DOI, PMID, arXiv ID) to complete, accurate metadata.
For single DOIs, use the quick conversion tool:
# Convert single DOI
python scripts/doi_to_bibtex.py 10.1038/s41586-021-03819-2
# Convert multiple DOIs from a file
python scripts/doi_to_bibtex.py --input dois.txt --output references.bib
# Different output formats
python scripts/doi_to_bibtex.py 10.1038/nature12345 --format json
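Under the hood, DOI-to-BibTeX conversion is commonly done with DOI content negotiation: requesting https://doi.org/&lt;doi&gt; with an Accept header of application/x-bibtex returns a BibTeX record. A minimal sketch of that request (not the actual doi_to_bibtex.py internals; fetching requires network access):

```python
import urllib.request

# Build a content-negotiation request against doi.org; the Accept header
# asks the resolver to return BibTeX instead of redirecting to the landing page.
def bibtex_request(doi):
    return urllib.request.Request(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/x-bibtex"},
    )

req = bibtex_request("10.1038/s41586-021-03819-2")
print(req.get_header("Accept"))  # → application/x-bibtex
# with urllib.request.urlopen(req) as r:  # uncomment when online
#     print(r.read().decode())
```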
For DOIs, PMIDs, arXiv IDs, or URLs:
# Extract from DOI
python scripts/extract_metadata.py --doi 10.1038/s41586-021-03819-2
# Extract from PMID
python scripts/extract_metadata.py --pmid 34265844
# Extract from arXiv ID
python scripts/extract_metadata.py --arxiv 2103.14030
# Extract from URL
python scripts/extract_metadata.py --url "https://www.nature.com/articles/s41586-021-03819-2"
# Batch extraction from file (mixed identifiers)
python scripts/extract_metadata.py --input identifiers.txt --output citations.bib
Metadata Sources (see references/metadata_extraction.md):
CrossRef API: Primary source for DOIs
PubMed E-utilities: Biomedical literature
arXiv API: Preprints in physics, math, CS, q-bio
DataCite API: Research datasets, software, other resources
What Gets Extracted:
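As an illustration of the fields such a pipeline typically normalizes from CrossRef, PubMed, or arXiv responses (a sketch, not the actual extract_metadata.py record type):

```python
from dataclasses import dataclass

# Normalized citation record: the core fields most metadata APIs agree on,
# plus optional fields that may be absent for preprints or datasets.
@dataclass
class CitationMetadata:
    authors: list      # "Last, First" strings
    title: str
    venue: str         # journal, conference, or repository
    year: int
    doi: str = ""
    volume: str = ""
    pages: str = ""
    abstract: str = ""

record = CitationMetadata(
    authors=["Jumper, John", "Evans, Richard"],
    title="Highly accurate protein structure prediction with AlphaFold",
    venue="Nature",
    year=2021,
    doi="10.1038/s41586-021-03819-2",
)
print(record.doi)  # → 10.1038/s41586-021-03819-2
```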
Goal: Generate clean, properly formatted BibTeX entries.
See references/bibtex_formatting.md for complete guide.
Common Entry Types:
- @article: Journal articles (most common)
- @book: Books
- @inproceedings: Conference papers
- @incollection: Book chapters
- @phdthesis: Dissertations
- @misc: Preprints, software, datasets

Required Fields by Type:
@article{citationkey,
author = {Last1, First1 and Last2, First2},
title = {Article Title},
journal = {Journal Name},
year = {2024},
volume = {10},
number = {3},
pages = {123--145},
doi = {10.1234/example}
}
@inproceedings{citationkey,
author = {Last, First},
title = {Paper Title},
booktitle = {Conference Name},
year = {2024},
pages = {1--10}
}
@book{citationkey,
author = {Last, First},
title = {Book Title},
publisher = {Publisher Name},
year = {2024}
}
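The required-field rules above can be checked mechanically. A minimal sketch following common BibTeX conventions (not the logic of a specific validator; real use would parse the .bib file first, e.g. with bibtexparser):

```python
# Required fields per entry type, mirroring the templates above.
REQUIRED_FIELDS = {
    "article": {"author", "title", "journal", "year"},
    "inproceedings": {"author", "title", "booktitle", "year"},
    "book": {"author", "title", "publisher", "year"},
}

def missing_fields(entry_type, fields):
    """Return required fields absent from a parsed entry, sorted for stable output."""
    return sorted(REQUIRED_FIELDS.get(entry_type, set()) - set(fields))

entry = {"author": "Last, First", "title": "Paper Title", "year": "2024"}
print(missing_fields("article", entry))  # → ['journal']
```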
Use the formatter to standardize BibTeX files:
# Format and clean BibTeX file
python scripts/format_bibtex.py references.bib \
--output formatted_references.bib
# Sort entries by citation key
python scripts/format_bibtex.py references.bib \
--sort key \
--output sorted_references.bib
# Sort by year (newest first)
python scripts/format_bibtex.py references.bib \
--sort year \
--descending \
--output sorted_references.bib
# Remove duplicates
python scripts/format_bibtex.py references.bib \
--deduplicate \
--output clean_references.bib
# Validate and report issues
python scripts/format_bibtex.py references.bib \
--validate \
--report validation_report.txt
Formatting Operations:
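One such operation is deduplication. A minimal sketch of DOI-based deduplication (illustrative only; BibTeX parsing and title-similarity matching for entries without DOIs are omitted):

```python
# Keep the first entry per normalized DOI; entries without a DOI pass through.
def deduplicate(entries):
    seen, unique = set(), []
    for e in entries:
        doi = e.get("doi", "").strip().lower()  # DOIs are case-insensitive
        if doi and doi in seen:
            continue
        if doi:
            seen.add(doi)
        unique.append(e)
    return unique

entries = [
    {"key": "Smith2024", "doi": "10.1234/example"},
    {"key": "Smith2024dup", "doi": "10.1234/EXAMPLE"},  # same DOI, different case
]
print([e["key"] for e in deduplicate(entries)])  # → ['Smith2024']
```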
Goal: Verify all citations are accurate and complete.
# Validate BibTeX file
python scripts/validate_citations.py references.bib
# Validate and fix common issues
python scripts/validate_citations.py references.bib \
--auto-fix \
--output validated_references.bib
# Generate detailed validation report
python scripts/validate_citations.py references.bib \
--report validation_report.json \
--verbose
Validation Checks (see references/citation_validation.md):
DOI Verification:
Required Fields:
Data Consistency:
Duplicate Detection:
Format Compliance:
Validation Output:
{
"total_entries": 150,
"valid_entries": 145,
"errors": [
{
"citation_key": "Smith2023",
"error_type": "missing_field",
"field": "journal",
"severity": "high"
},
{
"citation_key": "Jones2022",
"error_type": "invalid_doi",
"doi": "10.1234/broken",
"severity": "high"
}
],
"warnings": [
{
"citation_key": "Brown2021",
"warning_type": "possible_duplicate",
"duplicate_of": "Brown2021a",
"severity": "medium"
}
]
}
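As a sketch of one check behind an invalid_doi error, DOI syntax can be tested against the common "10.&lt;registrant&gt;/&lt;suffix&gt;" shape; confirming that a syntactically valid DOI actually resolves would additionally require a request to doi.org:

```python
import re

# Common DOI shape: "10." + 4-9 digit registrant code + "/" + non-space suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def doi_syntax_ok(doi):
    return bool(DOI_PATTERN.match(doi))

print(doi_syntax_ok("10.1038/s41586-021-03819-2"))  # → True
print(doi_syntax_ok("not-a-doi"))                   # → False
```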
Complete workflow for creating a bibliography:
# 1. Search for papers on your topic
python scripts/search_pubmed.py \
'"CRISPR-Cas Systems"[MeSH] AND "Gene Editing"[MeSH]' \
--date-start 2020 \
--limit 200 \
--output crispr_papers.json
# 2. Extract DOIs from search results and convert to BibTeX
python scripts/extract_metadata.py \
--input crispr_papers.json \
--output crispr_refs.bib
# 3. Add specific papers by DOI
python scripts/doi_to_bibtex.py 10.1038/nature12345 >> crispr_refs.bib
python scripts/doi_to_bibtex.py 10.1126/science.abcd1234 >> crispr_refs.bib
# 4. Format and clean the BibTeX file
python scripts/format_bibtex.py crispr_refs.bib \
--deduplicate \
--sort year \
--descending \
--output references.bib
# 5. Validate all citations
python scripts/validate_citations.py references.bib \
--auto-fix \
--report validation.json \
--output final_references.bib
# 6. Review validation report and fix any remaining issues
cat validation.json
# 7. Use in your LaTeX document
# \bibliography{final_references}
This skill complements the literature-review skill:
Literature Review Skill → Systematic search and synthesis
Citation Management Skill → Technical citation handling
Combined Workflow:
1. literature-review for comprehensive multi-database search
2. citation-management to extract and validate all citations
3. literature-review to synthesize findings thematically
4. citation-management to verify final bibliography accuracy

# After completing literature review
# Verify all citations in the review document
python scripts/validate_citations.py my_review_references.bib --report review_validation.json
# Format for specific citation style if needed
python scripts/format_bibtex.py my_review_references.bib \
--style nature \
--output formatted_refs.bib
# Step 1: Find key papers on your topic
python scripts/search_google_scholar.py "transformer neural networks" \
--year-start 2017 \
--limit 50 \
--output transformers_gs.json
python scripts/search_pubmed.py "deep learning medical imaging" \
--date-start 2020 \
--limit 50 \
--output medical_dl_pm.json
# Step 2: Extract metadata from search results
python scripts/extract_metadata.py \
--input transformers_gs.json \
--output transformers.bib
python scripts/extract_metadata.py \
--input medical_dl_pm.json \
--output medical.bib
# Step 3: Add specific papers you already know
python scripts/doi_to_bibtex.py 10.1038/s41586-021-03819-2 >> specific.bib
python scripts/doi_to_bibtex.py 10.1126/science.aam9317 >> specific.bib
# Step 4: Combine all BibTeX files
cat transformers.bib medical.bib specific.bib > combined.bib
# Step 5: Format and deduplicate
python scripts/format_bibtex.py combined.bib \
--deduplicate \
--sort year \
--descending \
--output formatted.bib
# Step 6: Validate
python scripts/validate_citations.py formatted.bib \
--auto-fix \
--report validation.json \
--output final_references.bib
# Step 7: Review any issues
cat validation.json | grep -A 3 '"errors"'
# Step 8: Use in LaTeX
# \bibliography{final_references}
# You have a text file with DOIs (one per line)
# dois.txt contains:
# 10.1038/s41586-021-03819-2
# 10.1126/science.aam9317
# 10.1016/j.cell.2023.01.001
# Convert all to BibTeX
python scripts/doi_to_bibtex.py --input dois.txt --output references.bib
# Validate the result
python scripts/validate_citations.py references.bib --verbose
# You have a messy BibTeX file from various sources
# Clean it up systematically
# Step 1: Format and standardize
python scripts/format_bibtex.py messy_references.bib \
--output step1_formatted.bib
# Step 2: Remove duplicates
python scripts/format_bibtex.py step1_formatted.bib \
--deduplicate \
--output step2_deduplicated.bib
# Step 3: Validate and auto-fix
python scripts/validate_citations.py step2_deduplicated.bib \
--auto-fix \
--output step3_validated.bib
# Step 4: Sort by year
python scripts/format_bibtex.py step3_validated.bib \
--sort year \
--descending \
--output clean_references.bib
# Step 5: Final validation report
python scripts/validate_citations.py clean_references.bib \
--report final_validation.json \
--verbose
# Review report
cat final_validation.json
# Find highly cited papers on a topic
python scripts/search_google_scholar.py "AlphaFold protein structure" \
--year-start 2020 \
--year-end 2024 \
--sort-by citations \
--limit 20 \
--output alphafold_seminal.json
# Extract the top 10 by citation count
# (script will have included citation counts in JSON)
# Convert to BibTeX
python scripts/extract_metadata.py \
--input alphafold_seminal.json \
--output alphafold_refs.bib
# The BibTeX file now contains the most influential papers
Manage citations systematically throughout the research and writing process. This skill provides tools and strategies for searching academic databases (Google Scholar, PubMed), extracting accurate metadata from multiple sources (CrossRef, PubMed, arXiv), validating citation information, and generating properly formatted BibTeX entries.
Critical for maintaining citation accuracy, avoiding reference errors, and ensuring reproducible research. Integrates seamlessly with the literature-review skill for comprehensive research workflows.
The citation-management skill provides:
Use this skill to maintain accurate, complete citations throughout your research and ensure publication-ready bibliographies.
When creating documents with this skill, always consider adding scientific diagrams and schematics to enhance visual communication.
If your document does not already contain schematics or diagrams:
For new documents: Scientific schematics should be generated by default to visually represent key concepts, workflows, architectures, or relationships described in the text.
How to generate schematics:
python scripts/generate_schematic.py "your diagram description" -o figures/output.png
The AI will automatically:
When to add schematics:
For detailed guidance on creating schematics, refer to the scientific-schematics skill documentation.
Use @citation-management to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
Review @citation-management against EXTERNAL_SOURCE.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.
Use @citation-management for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.
Review @citation-management using the copied upstream files plus provenance, then summarize any gaps before merge.
Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.
Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.
Start broad, then narrow:
Use multiple sources:
Leverage citations:
Document your searches:
Always use DOIs when available:
Verify extracted metadata:
Handle edge cases:
Maintain consistency:
Follow conventions:
Keep it clean:
Organize systematically:
Validate early and often:
Fix issues promptly:
Manual review for critical citations:
Symptoms: The result ignores the upstream workflow in plugins/antigravity-awesome-skills-claude/skills/citation-management, fails to mention provenance, or does not use any copied source files at all.
Solution: Re-open EXTERNAL_SOURCE.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.
Symptoms: Reviewers can see the generated SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.
Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.
Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.
Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
- @burp-suite-testing - Use when the work is better handled by that native specialization after this imported skill establishes context.
- @burpsuite-project-parser - Use when the work is better handled by that native specialization after this imported skill establishes context.
- @business-analyst - Use when the work is better handled by that native specialization after this imported skill establishes context.
- @busybox-on-windows - Use when the work is better handled by that native specialization after this imported skill establishes context.

Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.
| Resource family | What it gives the reviewer | Example path |
|---|---|---|
references | copied reference notes, guides, or background material from upstream | references/n/a |
examples | worked examples or reusable prompts copied from upstream | examples/n/a |
scripts | upstream helper scripts that change execution or validation | scripts/n/a |
agents | routing or delegation notes that are genuinely part of the imported package | agents/n/a |
assets | supporting assets or schemas copied from the source package | assets/n/a |
References (in references/):
- google_scholar_search.md: Complete Google Scholar search guide
- pubmed_search.md: PubMed and E-utilities API documentation
- metadata_extraction.md: Metadata sources and field requirements
- citation_validation.md: Validation criteria and quality checks
- bibtex_formatting.md: BibTeX entry types and formatting rules

Scripts (in scripts/):
- search_google_scholar.py: Google Scholar search automation
- search_pubmed.py: PubMed E-utilities API client
- extract_metadata.py: Universal metadata extractor
- validate_citations.py: Citation validation and verification
- format_bibtex.py: BibTeX formatter and cleaner
- doi_to_bibtex.py: Quick DOI to BibTeX converter

Assets (in assets/):
- bibtex_template.bib: Example BibTeX entries for all types
- citation_checklist.md: Quality assurance checklist

Search Engines:
Metadata APIs:
Tools and Validators:
Citation Styles:
Finding Seminal and High-Impact Papers (CRITICAL):
Always prioritize papers based on citation count, venue quality, and author reputation:
Citation Count Thresholds:
| Paper Age | Citations | Classification |
|---|---|---|
| 0-3 years | 20+ | Noteworthy |
| 0-3 years | 100+ | Highly Influential |
| 3-7 years | 100+ | Significant |
| 3-7 years | 500+ | Landmark Paper |
| 7+ years | 500+ | Seminal Work |
| 7+ years | 1000+ | Foundational |
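The thresholds in the table above can be expressed as a simple classifier. These tiers are heuristics from this guide, not a formal standard:

```python
# Classify a paper by age and citation count, mirroring the threshold table.
def classify(age_years, citations):
    if age_years >= 7:
        if citations >= 1000:
            return "Foundational"
        if citations >= 500:
            return "Seminal Work"
    elif age_years >= 3:
        if citations >= 500:
            return "Landmark Paper"
        if citations >= 100:
            return "Significant"
    else:  # 0-3 years old
        if citations >= 100:
            return "Highly Influential"
        if citations >= 20:
            return "Noteworthy"
    return "Unclassified"

print(classify(2, 150))    # → Highly Influential
print(classify(10, 1200))  # → Foundational
```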
Venue Quality Tiers:
Author Reputation Indicators:
Search Strategies for High-Impact Papers:
- source:Nature or source:Science to filter by venue
- author:LastName to filter by author

Advanced Operators (full list in references/google_scholar_search.md):
"exact phrase" # Exact phrase matching