Systematically evaluate scholarly and research work using the ScholarEval framework. Use when assessing academic papers, research proposals, literature reviews, or scholarly writing for quality, rigor, and publication readiness. Triggers: evaluate paper, scholar evaluation, research quality assessment, peer review scoring, publication readiness, academic paper review, rate research quality, ScholarEval.
Apply the ScholarEval framework to systematically evaluate scholarly and research work. This skill provides a structured evaluation methodology based on peer-reviewed research assessment criteria, enabling comprehensive analysis of academic papers, research proposals, literature reviews, and scholarly writing across multiple quality dimensions.
Mindset
Evaluation is a service to the author, not a verdict. Three principles govern every assessment:
Evidence first — every strength and weakness claim must cite a specific section, figure, or sentence from the work. Generic statements ("the methodology is weak") without evidence are useless.
Stage-appropriate expectations — a first draft is not a submission-ready manuscript. ALWAYS adjust thresholds to the work's stated stage and purpose before scoring.
Constructive framing — identify what needs improving and why it matters, not just what is wrong. The evaluation is complete only when the author knows what to do next.
Related Skills
Load the evaluation framework (references/evaluation_framework.md) before scoring any dimension; it contains the detailed rubrics.
When to Use This Skill
Use this skill when:
Evaluating research papers for quality and rigor
Assessing literature review comprehensiveness and quality
Reviewing research methodology design
Scoring data analysis approaches
Evaluating scholarly writing and presentation
Providing structured feedback on academic work
Benchmarking research quality against established criteria
Assessing publication readiness for target venues
Providing quantitative evaluation to complement qualitative peer review
When Not to Use
The goal is factual verification (claim checking), not quality assessment — use a fact-check or reproducibility workflow instead
The work requires domain-specific technical review beyond the ScholarEval dimensions (e.g., clinical safety review, legal analysis)
The author has requested a positive endorsement rather than an honest evaluation — do not produce a biased report
Visual Enhancement with Scientific Schematics
When creating documents with this skill, always consider adding scientific diagrams and schematics to enhance visual communication.
If your document does not already contain schematics or diagrams:
Use the scientific-schematics skill to generate AI-powered publication-quality diagrams
Simply describe your desired diagram in natural language
Claude will automatically generate, review, and refine the schematic
For new documents: Scientific schematics should be generated by default to visually represent key concepts, workflows, architectures, or relationships described in the text.
Create publication-quality images with proper formatting
Review and refine through multiple iterations
Ensure accessibility (colorblind-friendly, high contrast)
Save outputs in the figures/ directory
When to add schematics:
Evaluation framework diagrams
Quality assessment criteria decision trees
Scholarly workflow visualizations
Assessment methodology flowcharts
Scoring rubric visualizations
Evaluation process diagrams
Any complex concept that benefits from visualization
For detailed guidance on creating schematics, refer to the scientific-schematics skill documentation.
Evaluation Workflow
Step 1: Initial Assessment and Scope Definition
Begin by identifying the type of scholarly work being evaluated and the evaluation scope:
Work Types:
Full research paper (empirical, theoretical, or review)
Research proposal or protocol
Literature review (systematic, narrative, or scoping)
Thesis or dissertation chapter
Conference abstract or short paper
Evaluation Scope:
Comprehensive (all dimensions)
Targeted (specific aspects like methodology or writing)
Comparative (benchmarking against other work)
Ask the user to clarify if the scope is ambiguous.
Step 2: Dimension-Based Evaluation
Systematically evaluate the work across the ScholarEval dimensions. For each applicable dimension, assess quality, identify strengths and weaknesses, and provide scores where appropriate.
Refer to references/evaluation_framework.md for detailed criteria and rubrics for each dimension.
Core Evaluation Dimensions:
Problem Formulation & Research Questions
Clarity and specificity of research questions
Theoretical or practical significance
Feasibility and scope appropriateness
Novelty and contribution potential
Literature Review
Comprehensiveness of coverage
Critical synthesis vs. mere summarization
Identification of research gaps
Currency and relevance of sources
Proper contextualization
Methodology & Research Design
Appropriateness for research questions
Rigor and validity
Reproducibility and transparency
Ethical considerations
Limitations acknowledgment
Data Collection & Sources
Quality and appropriateness of data
Sample size and representativeness
Data collection procedures
Source credibility and reliability
Analysis & Interpretation
Appropriateness of analytical methods
Rigor of analysis
Logical coherence
Alternative explanations considered
Results-claims alignment
Results & Findings
Clarity of presentation
Statistical or qualitative rigor
Visualization quality
Interpretation accuracy
Implications discussion
Scholarly Writing & Presentation
Clarity and organization
Academic tone and style
Grammar and mechanics
Logical flow
Accessibility to target audience
Citations & References
Citation completeness
Source quality and appropriateness
Citation accuracy
Balance of perspectives
Adherence to citation standards
Step 3: Scoring and Rating
For each evaluated dimension, provide:
Qualitative Assessment:
Key strengths (2-3 specific points)
Areas for improvement (2-3 specific points)
Critical issues (if any)
Quantitative Scoring (Optional):
Use a 5-point scale where applicable:
5: Excellent - Exemplary quality, publishable in top venues
4: Good - Strong quality with minor improvements needed
3: Adequate - Acceptable quality with notable areas for improvement
2: Weak - Significant issues requiring substantial revision
1: Poor - Fundamental issues requiring major revision
To calculate aggregate scores programmatically, use scripts/calculate_scores.py.
Step 4: Synthesize Overall Assessment
Provide an integrated evaluation summary:
Overall Quality Assessment - Holistic judgment of the work's scholarly merit
Major Strengths - 3-5 key strengths across dimensions
Critical Weaknesses - 3-5 primary areas requiring attention
Priority Recommendations - Ranked list of improvements by impact
Publication Readiness (if applicable) - Assessment of suitability for target venues
Step 5: Provide Actionable Feedback
Transform evaluation findings into constructive, actionable feedback:
Feedback Structure:
Specific - Reference exact sections, paragraphs, or page numbers
Actionable - Provide concrete suggestions for improvement
Prioritized - Rank recommendations by importance and feasibility
Balanced - Acknowledge strengths while addressing weaknesses
Evidence-based - Ground feedback in evaluation criteria
Feedback Format Options:
Structured report with dimension-by-dimension analysis
Annotated comments mapped to specific document sections
Executive summary with key findings and recommendations
Comparative analysis against benchmark standards
Step 6: Contextual Considerations
Adjust evaluation approach based on:
Stage of Development:
Early draft: Focus on conceptual and structural issues
Advanced draft: Focus on refinement and polish
Final submission: Comprehensive quality check
Purpose and Venue:
Journal article: High standards for rigor and contribution
Conference paper: Balance novelty with presentation clarity
Student work: Educational feedback with developmental focus
Grant proposal: Emphasis on feasibility and impact
Discipline-Specific Norms:
STEM fields: Emphasis on reproducibility and statistical rigor
Social sciences: Balance quantitative and qualitative standards
Humanities: Focus on argumentation and scholarly interpretation
Resources
references/evaluation_framework.md
Detailed evaluation criteria, rubrics, and quality indicators for each ScholarEval dimension. Load this reference when conducting evaluations to access specific assessment guidelines and scoring rubrics.
Search patterns for quick access:
"Problem Formulation criteria"
"Literature Review rubric"
"Methodology assessment"
"Data quality indicators"
"Analysis rigor standards"
"Writing quality checklist"
scripts/calculate_scores.py
Python script for calculating aggregate evaluation scores from dimension-level ratings. Supports weighted averaging, threshold analysis, and score visualization.
Best Practices
Maintain Objectivity - Base evaluations on established criteria, not personal preferences
Be Comprehensive - Evaluate all applicable dimensions systematically
Provide Evidence - Support assessments with specific examples from the work
Stay Constructive - Frame weaknesses as opportunities for improvement
Consider Context - Adjust expectations based on work stage and purpose
Document Rationale - Explain the reasoning behind assessments and scores
Encourage Strengths - Explicitly acknowledge what the work does well
Prioritize Feedback - Focus on high-impact improvements first
Anti-Patterns
NEVER score without evidence
WHY: Unsupported scores are worse than no scores — they create false confidence and give the author nothing actionable to fix.
BAD — no evidence cited:
Methodology: 3/5 — adequate.
GOOD — specific evidence supports the score:
Methodology: 3/5 — the cross-validation protocol is sound (Section 3.2),
but the train/test split is not stratified, risking class-imbalance leakage (Section 3.3).
NEVER skip dimensions because they appear weak
WHY: Omitted dimensions are not "neutral" — they read as tacit approval and leave gaps the author cannot address.
BAD: Skip "Data Quality" because the paper's data section is thin.
GOOD: Evaluate and score it low, with specific notes on what is missing.
NEVER treat all work types as equivalent
WHY: A first-year PhD proposal and a Nature submission are held to different standards. Applying journal-level criteria to a draft proposal produces misleading, demoralizing feedback.
BAD: Apply publication-readiness criteria to a draft proposal.
GOOD: ALWAYS confirm the work type and adjust thresholds before scoring.
NEVER produce a purely negative report
WHY: Feedback without acknowledged strengths demotivates authors and misses the full picture of the work's quality.
BAD: List only weaknesses without acknowledging what the work does well.
GOOD: ALWAYS lead with concrete strengths before addressing areas for improvement.