Use when drafting, restructuring, or polishing Chinese NSFC proposals (2026 template), especially when strict section-by-section gating, hypothesis-objective-content-problem consistency, literature verification via paper-search MCP, and anti-AI Chinese academic writing constraints are required.
This skill manages end-to-end NSFC proposal writing and polishing under the 2026 structure. It enforces section-level gates, cross-section consistency, literature verification, and restrained academic Chinese style.
Use two modes:

- Write Mode (from scratch)
- Polish Mode (revise existing draft)

Hard rules:

- Before any drafting/revision action, the assistant must ask exactly one mode-selection question and wait for the user's answer.
- If the user already specified Write Mode or Polish Mode in the opening message, do not ask again; proceed directly with the specified mode.

Collect before execution:
Academic literature retrieval must use paper-search MCP tools:

- paper-search_search_pubmed (primary)
- paper-search_search_semantic (supplement)
- paper-search_search_google_scholar (coverage)
- paper-search_search_biorxiv / paper-search_search_medrxiv (recent preprints)

Runtime name mapping (for environments using MCP namespaced tools):

- paper-search_search_pubmed -> mcp__paper-search__search_pubmed
- paper-search_search_semantic -> mcp__paper-search__search_semantic
- paper-search_search_google_scholar -> mcp__paper-search__search_google_scholar
- paper-search_search_biorxiv -> mcp__paper-search__search_biorxiv
- paper-search_search_medrxiv -> mcp__paper-search__search_medrxiv

Do not use generic web search/fetch tools for citation evidence in proposal claims.
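The runtime name mapping above can be applied mechanically when selecting a tool. A minimal sketch, assuming only the mapping table itself; the `resolve_tool` helper is illustrative and not part of the skill's scripts:

```python
# Canonical paper-search tool names mapped to their MCP-namespaced
# variants, exactly as listed in the runtime name mapping above.
CANONICAL_TO_MCP = {
    "paper-search_search_pubmed": "mcp__paper-search__search_pubmed",
    "paper-search_search_semantic": "mcp__paper-search__search_semantic",
    "paper-search_search_google_scholar": "mcp__paper-search__search_google_scholar",
    "paper-search_search_biorxiv": "mcp__paper-search__search_biorxiv",
    "paper-search_search_medrxiv": "mcp__paper-search__search_medrxiv",
}


def resolve_tool(name: str, namespaced: bool = True) -> str:
    """Return the runtime tool name; unknown names pass through unchanged."""
    return CANONICAL_TO_MCP.get(name, name) if namespaced else name
```

Unknown names falling through unchanged keeps the resolver safe to apply to every tool call, not just the paper-search ones.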
Apply these resolutions when references conflict:

- mapped_to_h fields on SQ nodes. Validate by traversing H and KSQ links from SQ references.

Follow phased gates in order:
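The H/KSQ/SQ round-trip check can be pictured with a toy map. The node shapes below (`ksq`, `sq`, and `mapped_to_h` keys) are assumptions for illustration only; the real schema lives in data/consistency_map.json and is validated by consistency_mapper.py:

```python
# Toy consistency map; the key names ("ksq", "sq", "mapped_to_h") are
# assumed shapes for illustration, not the real schema.
toy_map = {
    "H-01": {"ksq": ["KSQ-01"]},
    "KSQ-01": {"sq": ["SQ-01"]},
    "SQ-01": {"mapped_to_h": "H-01"},
}


def sq_round_trips(cmap: dict, sq_id: str) -> bool:
    """Follow mapped_to_h from an SQ node, then the H -> KSQ -> SQ links,
    and confirm the walk returns to the same SQ node."""
    h_id = cmap.get(sq_id, {}).get("mapped_to_h")
    if h_id is None:
        return False
    return any(
        sq_id in cmap.get(ksq_id, {}).get("sq", [])
        for ksq_id in cmap.get(h_id, {}).get("ksq", [])
    )
```

A node that fails the round trip points at a hypothesis it is not actually linked back to, which is exactly the conflict this resolution rule is meant to catch.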
At each phase:

- Run the self-review module before advancing (in Polish Mode this produces polish_review_report).

Maintain and sync after each section edit:

- data/consistency_map.json
- data/literature_index.json
- context_memory.md
- project_state.json
- history_log.json

Any missing sync blocks phase progression.
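Because any missing sync artifact blocks phase progression, it is cheap to enumerate the gaps before invoking the gate. A sketch of such a pre-check; the `missing_sync_files` helper is illustrative, and state_manager.py's sync-all and gate-check remain the authoritative checks:

```python
from pathlib import Path

# The artifacts that must be kept in sync after every section edit,
# exactly as listed above.
SYNC_FILES = [
    "data/consistency_map.json",
    "data/literature_index.json",
    "context_memory.md",
    "project_state.json",
    "history_log.json",
]


def missing_sync_files(root: str = ".") -> list[str]:
    """Return the sync artifacts missing under root; a non-empty result
    means phase progression must be blocked."""
    return [f for f in SYNC_FILES if not (Path(root) / f).exists()]
```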
Keep data/mcp_literature_cache.json fresh and run online validation without --offline whenever network is available. The final gate should enforce --require-mcp.

Block progression when any of the following fails:
Use the atomic gate command for final checks:

```bash
python scripts/state_manager.py --root . gate-check --sections-dir sections --index data/literature_index.json --p1 sections/P1_立项依据.md --ref sections/REF_参考文献.md --mcp-cache data/mcp_literature_cache.json --mcp-ttl-days 30 --require-mcp
```

Failure handling playbook:
- failed_at=sync: run sync-all --auto-fix, then re-run gate-check.
- failed_at=citation: repair the index/cache, re-run verify-all --require-mcp, then gate-check.
- failed_at=matrix: run matrix-check and reorder, then gate-check.
- failed_at=review: fix the D/C dimensions flagged in the review report, then gate-check.

Load only what is needed:
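The playbook above can be scripted as a dispatcher keyed on failed_at. The table and helper below are hypothetical; gate-check's exact failure-reporting format is not specified here, and the remediation strings abbreviate the full commands listed later in this document:

```python
# Hypothetical dispatcher: map a gate-check failed_at value to the
# remediation step from the playbook above. After running the returned
# step, re-run gate-check.
PLAYBOOK = {
    "sync": "python scripts/state_manager.py --root . sync-all --auto-fix",
    "citation": "python scripts/citation_validator.py verify-all --require-mcp",
    "matrix": "python scripts/citation_validator.py matrix-check",
    "review": "fix D/C dimensions from the review report",
}


def remediation_for(failed_at: str) -> str:
    """Return the remediation step for a gate-check failure category."""
    try:
        return PLAYBOOK[failed_at]
    except KeyError:
        raise ValueError(f"unhandled failed_at: {failed_at}") from None
```

Raising on an unknown category keeps an unexpected gate failure loud instead of silently skipping remediation.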
- references/00_设计方案_总览.md
- references/01_目录结构与配置.md
- references/02_核心机制.md
- references/03_写作规范与反AI.md
- references/04_文献管理.md
- references/05_Write_Mode流程.md
- references/06_Polish_Mode流程.md
- references/07_自审与评审模块.md
- references/08_脚本清单与合并规则.md
- references/09_交互规范与回复模板.md

Deliverables should include:

- sections/
- data/
- output/ (md/docx if requested)

When reporting to the user, state:
Use scripts under scripts/ from the proposal project root:

```bash
python scripts/state_manager.py --root . init
python scripts/state_manager.py --root . load --section P1 --minimal
python scripts/state_manager.py --root . write-cycle --section P1 --token-budget 4000
python scripts/state_manager.py --root . sync-all --auto-fix
python scripts/state_manager.py --root . gate-check --sections-dir sections --index data/literature_index.json --p1 sections/P1_立项依据.md --ref sections/REF_参考文献.md --mcp-cache data/mcp_literature_cache.json --mcp-ttl-days 30 --require-mcp
python scripts/consistency_mapper.py --path data/consistency_map.json validate
python scripts/consistency_mapper.py --path data/consistency_map.json validate-one V-01
python scripts/citation_validator.py verify-all --index data/literature_index.json --p1 sections/P1_立项依据.md --mcp-cache data/mcp_literature_cache.json --mcp-ttl-days 30 --require-mcp --manual-review data/manual_review_queue.json --log data/verification_run_log.json
python scripts/citation_validator.py verify-entry --index data/literature_index.json --p1 sections/P1_立项依据.md --ref-number 1 --mcp-cache data/mcp_literature_cache.json --require-mcp
python scripts/citation_validator.py matrix-check --p1 sections/P1_立项依据.md --index data/literature_index.json --ref sections/REF_参考文献.md
python scripts/humanizer_zh.py scan-all sections
python scripts/diagnosis_engine.py full-review --sections-dir sections --consistency data/consistency_map.json --index data/literature_index.json --p1 sections/P1_立项依据.md --ref sections/REF_参考文献.md --output data/diagnosis_report.json
python scripts/diagnosis_engine.py polish-review --sections-dir sections --consistency data/consistency_map.json --index data/literature_index.json --p1 sections/P1_立项依据.md --ref sections/REF_参考文献.md --json-output data/diagnosis_report.json --md-output data/polish_review_report.md
python scripts/section_merger.py validate-order --sections-dir sections
python scripts/section_merger.py merge --sections-dir sections --output output/申请书_合并.md
python scripts/section_merger.py merge --sections-dir sections --only P1_立项依据.md,P2,REF_参考文献.md --output output/阶段稿_合并.md
python scripts/word_counter.py summary sections
```

These scripts are production-ready workflow utilities for iterative proposal drafting and polishing.
Run the test suite from the project root:

```bash
python3 -m unittest discover -s tests -p 'test_*.py'
```