Use when determining author order on research manuscripts, assigning CRediT contributor roles for transparency, documenting individual contributions to collaborative projects, or resolving authorship disputes in multi-institutional research. Generates fair, transparent authorship assignments following ICMJE guidelines and the CRediT taxonomy, helping research teams ensure equitable credit distribution in academic publications.
- scripts/main.py: packaged entry point.
- references/: task-specific guidance.
- Python: 3.10+ (repository baseline for current packaged skills).
- dataclasses: version unspecified; declared in requirements.txt.

cd "20260318/scientific-skills/Academic Writing/authorship-credit-gen"
python -m py_compile scripts/main.py
python scripts/main.py --help
Example run plan:
1. Review the CONFIG block or documented parameters if the script uses fixed settings.
2. Run python scripts/main.py with the validated inputs.
3. See the "Workflow" section above for related details.
references/ contains supporting rules, prompts, or checklists. Use this command to verify that the packaged script entry point can be parsed before deeper execution:
python -m py_compile scripts/main.py
Use these concrete commands for validation. They are intentionally self-contained and avoid placeholder paths.
python -m py_compile scripts/main.py
python scripts/main.py --help
# High-level tool interface used in the workflow examples below
from scripts.main import AuthorshipCreditGen

tool = AuthorshipCreditGen()

# Generator for authorship ordering and CRediT statements
from scripts.authorship_credit import AuthorshipCreditGenerator

generator = AuthorshipCreditGenerator(guidelines="ICMJE")
# Document contributions
contributions = {
"Dr. Sarah Chen": [
"Conceptualization",
"Methodology",
"Writing - Original Draft",
"Supervision"
],
"Dr. Michael Roberts": [
"Data Curation",
"Formal Analysis",
"Writing - Review & Editing"
],
"Dr. Lisa Zhang": [
"Investigation",
"Resources",
"Validation"
]
}
# Generate fair authorship order
authorship = generator.determine_order(
contributions=contributions,
criteria=["intellectual_input", "execution", "writing", "supervision"],
weights={"intellectual_input": 0.4, "execution": 0.3, "writing": 0.2, "supervision": 0.1}
)
print(f"First author: {authorship.first_author}")
print(f"Corresponding: {authorship.corresponding_author}")
print(f"Author order: {authorship.ordered_list}")
# Generate CRediT statement
credit_statement = generator.generate_credit_statement(
contributions=contributions,
format="journal_submission"
)
# Check for disputes
dispute_check = generator.check_equity_issues(authorship)
if dispute_check.has_issues:
    print(f"Recommendations: {dispute_check.recommendations}")
Analyze contributions using weighted criteria to determine equitable author ranking.
# Define weighted contribution criteria
weights = {
"conceptualization": 0.25,
"methodology_design": 0.20,
"data_collection": 0.15,
"analysis": 0.15,
"manuscript_writing": 0.15,
"supervision": 0.10
}
# Calculate contribution scores
scores = tool.calculate_contribution_scores(
contributions=team_contributions,
weights=weights
)
# Generate ordered author list
authorship_order = tool.generate_author_order(scores)
print(f"Recommended order: {authorship_order}")
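The weighted-scoring idea above can be sketched as plain Python. This is a minimal, self-contained illustration, not the packaged implementation: `score_contributions` and `order_authors` are hypothetical helper names, and the sample data mirrors the examples in this document.

```python
# Weights for each contribution criterion (sum to 1.0)
weights = {
    "conceptualization": 0.25,
    "methodology_design": 0.20,
    "data_collection": 0.15,
    "analysis": 0.15,
    "manuscript_writing": 0.15,
    "supervision": 0.10,
}

# Which criteria each author satisfied
team_contributions = {
    "Dr. Sarah Chen": ["conceptualization", "methodology_design",
                       "manuscript_writing", "supervision"],
    "Dr. Michael Roberts": ["data_collection", "analysis", "manuscript_writing"],
    "Dr. Lisa Zhang": ["data_collection", "analysis"],
}

def score_contributions(contribs, weights):
    # An author's score is the sum of the weights of their criteria.
    return {author: sum(weights[c] for c in criteria)
            for author, criteria in contribs.items()}

def order_authors(scores):
    # Highest score first; ties broken alphabetically for determinism.
    return sorted(scores, key=lambda a: (-scores[a], a))

scores = score_contributions(team_contributions, weights)
print(order_authors(scores))
# → ['Dr. Sarah Chen', 'Dr. Michael Roberts', 'Dr. Lisa Zhang']
```

Explicit weights make the ranking auditable: any author can recompute their own score from the documented criteria.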
Map contributions to official CRediT (Contributor Roles Taxonomy) categories.
# Map contributions to CRediT roles
credit_roles = tool.assign_credit_roles(
contributions=contributions,
version="CRediT_2021"
)
# Generate CRediT statement for journal
statement = tool.generate_credit_statement(
roles=credit_roles,
format="JATS_XML"
)
# Validate role assignments
validation = tool.validate_credit_roles(credit_roles)
if validation.is_valid:
    print("CRediT roles properly assigned")
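The validation step can be sketched as a simple membership check against the 14 CRediT roles. This standalone example is illustrative (role spellings follow this document's examples; the official taxonomy uses slightly different capitalization), and `validate_roles` is a hypothetical helper, not the packaged API.

```python
# The 14 contributor roles of the CRediT taxonomy
CREDIT_ROLES = {
    "Conceptualization", "Data Curation", "Formal Analysis",
    "Funding Acquisition", "Investigation", "Methodology",
    "Project Administration", "Resources", "Software", "Supervision",
    "Validation", "Visualization", "Writing - Original Draft",
    "Writing - Review & Editing",
}

def validate_roles(contributions):
    # Return, per author, any role strings that are not official CRediT roles.
    return {author: [r for r in roles if r not in CREDIT_ROLES]
            for author, roles in contributions.items()
            if any(r not in CREDIT_ROLES for r in roles)}

contributions = {
    "Dr. Sarah Chen": ["Conceptualization", "Methodology", "Writing - Original Draft"],
    "Dr. Lisa Zhang": ["Investigation", "Stats"],  # "Stats" is not a CRediT role
}
print(validate_roles(contributions))  # flags "Stats" for Dr. Lisa Zhang
```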
Identify potential authorship disputes before submission.
# Analyze contribution distribution
equity_analysis = tool.analyze_equity(
contributions=contributions,
thresholds={"min_substantial": 0.15}
)
# Flag potential issues
if equity_analysis.has_inequities:
    for issue in equity_analysis.issues:
        print(f"Warning: {issue.description}")
        print(f"Recommendation: {issue.recommendation}")
# Generate equity report
report = tool.generate_equity_report(equity_analysis)
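One plausible reading of the `min_substantial` threshold is a floor on each author's share of the total weighted contribution. The sketch below illustrates that logic; it is an assumption for clarity, and the packaged tool may apply different rules.

```python
def check_equity(scores, min_substantial=0.15):
    # Flag authors whose share of the total contribution score falls
    # below the "substantial contribution" threshold.
    total = sum(scores.values())
    return [author for author, s in scores.items() if s / total < min_substantial]

scores = {"Dr. Sarah Chen": 0.70, "Dr. Michael Roberts": 0.45, "Dr. Lisa Zhang": 0.10}
print(check_equity(scores))  # → ['Dr. Lisa Zhang'] (share ≈ 0.08 < 0.15)
```

Flagging marginal contributors before submission gives the team a chance to either document additional work or move that contributor to the acknowledgments, per ICMJE authorship criteria.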
Create formatted contributor statements for various journal requirements.
# Generate for Nature-style statement
nature_statement = tool.generate_contributor_statement(
style="Nature",
include_competing_interests=True
)
# Generate for Science-style statement
science_statement = tool.generate_contributor_statement(
style="Science",
include_author_contributions=True
)
# Export in multiple formats
tool.export_statement(
statement=nature_statement,
formats=["docx", "pdf", "txt"]
)
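For intuition, a journal-style contributor statement is essentially a formatted join over the contributions mapping. This sketch shows one possible rendering; the exact wording each journal requires differs, and `nature_style_statement` is a hypothetical helper.

```python
def nature_style_statement(contributions):
    # Render "Author contributions: A: role, role. B: role." style text.
    parts = [f"{author}: {', '.join(roles)}" for author, roles in contributions.items()]
    return "Author contributions: " + ". ".join(parts) + "."

contributions = {
    "S. Chen": ["Conceptualization", "Supervision"],
    "M. Roberts": ["Formal Analysis"],
}
print(nature_style_statement(contributions))
# → Author contributions: S. Chen: Conceptualization, Supervision. M. Roberts: Formal Analysis.
```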
python scripts/main.py --contributions contributions.json --guidelines ICMJE --output authorship_order.json
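The `--contributions` file is assumed to follow the same author-to-roles mapping used in the Python examples above; the exact schema is not documented here, so treat this sample as illustrative:

```json
{
  "Dr. Sarah Chen": ["Conceptualization", "Methodology", "Writing - Original Draft", "Supervision"],
  "Dr. Michael Roberts": ["Data Curation", "Formal Analysis", "Writing - Review & Editing"],
  "Dr. Lisa Zhang": ["Investigation", "Resources", "Validation"]
}
```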
Before using this skill, ensure you have:
After using this skill, verify:
- references/guide.md - Comprehensive user guide
- references/examples/ - Working code examples
- references/api-docs/ - Complete API documentation

Skill ID: 766 | Version: 1.0 | License: MIT
Every final response should make these items explicit when they are relevant:
If scripts/main.py fails, report the failure point, summarize what can still be completed safely, and provide a manual fallback.

This skill accepts requests that match the documented purpose of authorship-credit-gen and include enough context to complete the workflow safely.
Do not continue the workflow when the request is out of scope, missing a critical input, or would require unsupported assumptions. Instead respond:
authorship-credit-gen only handles its documented workflow. Please provide the missing required inputs or switch to a more suitable skill.
Use the following fixed structure for non-trivial requests:
If the request is simple, you may compress the structure, but still keep assumptions and limits explicit when they affect correctness.