Execute Sprint 1 tasks for FASA optimization and trade analytics. This skill should be used when the user requests execution of any Sprint 1 task (Tasks 1.1-1.4, 11-13, 2.1-2.4, 3.1-3.2), including cap space parsing, Sleeper integration, FASA target marts, FA acquisition history, roster depth analysis, enhanced FASA targets, notebooks, valuation models, trade analysis, automation workflows, or documentation. Each task is atomic, standalone, and designed for independent execution with built-in validation. Current focus: Phase 2 FASA Intelligence (Tasks 11-13, 1.4).
Execute Sprint 1 tasks for the Fantasy Football Analytics FASA optimization and trade intelligence sprint. This skill provides structured, atomic task execution with validation and progress tracking.
Use this skill proactively when the user requests execution of any Sprint 1 task.
Sprint Goal: Deliver actionable FASA bidding strategy for Week 9 + trade analysis infrastructure
Sprint End: Wednesday 2025-10-29, 11:59 PM EST
Current Status: 🟡 Phase 2 In Progress (Phase 1 ✅ Complete)
Sprint Structure:
When the user requests task execution:
- **Identify the task** - Determine which task (1.1 through 3.2) the user is requesting.
- **Load the task file** - Read the corresponding task file from references/:

  **Phase 1: Foundation (Complete ✅)**
  - references/01_task_cap_space_foundation.md
  - references/02_task_sleeper_production_integration.md
  - references/03_task_fasa_target_mart.md

  **Phase 2: FASA Intelligence (Current Focus 🟡)**
  - references/11_task_fa_acquisition_history.md
  - references/12_task_league_roster_depth.md
  - references/13_task_enhance_fasa_targets.md
  - references/04_task_fasa_strategy_notebook.md

  **Phase 3: Trade Intelligence (Deferred ⏸️)**
  - references/05_task_historical_backfill.md (🟡 partial)
  - references/06_task_baseline_valuation_model.md (❌ blocked)
  - references/07_task_trade_target_marts.md (❌ blocked)
  - references/08_task_trade_analysis_notebook.md (❌ blocked)

  **Phase 4: Automation (Future ⏸️)**
  - references/09_task_github_actions_workflows.md
  - references/10_task_documentation_polish.md

- **Check dependencies** - Review the "Dependencies" section in the task file.
- **Implement the task** - Follow the task file's implementation steps. Where a step references 00_SPRINT_PLAN.md, load references/00_SPRINT_PLAN.md and extract the full SQL/code from the specified line ranges.
- **Validate the implementation** - Run all validation commands from the task file:
  - make lintcheck, make typecheck, and make sqlcheck
  - if you run make lintfix or make sqlfix, re-run dbt tests afterward to confirm the linters didn't break anything
  - make dbt-test --select [models]
- **Report results** - Communicate the outcome clearly to the user.
- **Commit changes** - Use the suggested commit message from the task file.
- **Update progress tracking** - After a successful commit, update:
  - references/README.md (⬜ → ✅)
  - the references/00_SPRINT_PLAN.md Progress Tracking section

Each task file contains these sections (use them in order):
Guidance by task type:

- **Ingest scripts**: code lives in src/ingest/ or src/ff_analytics_utils/scripts/ingest/; validate with make lint and make typecheck.
- **dbt models**: models live in dbt/ff_data_transform/models/staging/, dbt/ff_data_transform/models/core/, or dbt/ff_data_transform/models/marts/; create paired .sql and .yml files, with tests declared in the .yml; set the EXTERNAL_ROOT environment variable before dbt commands; use make sqlcheck for SQL linting and make sqlfix for auto-formatting (always re-run dbt tests to confirm no functional changes); validate with make dbt-run --select and make dbt-test --select.
- **Notebooks**: live in the notebooks/ directory; execute with uv run jupyter nbconvert --execute --to notebook --inplace.
- **GitHub Actions workflows**: live in .github/workflows/; test the make targets first; include workflow_dispatch for manual triggering.
- **Documentation**: lives in docs/analytics/ or docs/dev/.

Before executing tasks, ensure:
```bash
# Set environment variables
export SLEEPER_LEAGUE_ID="1230330435511275520"
export EXTERNAL_ROOT="$PWD/data/raw"

# Verify project dependencies
uv sync

# Verify dbt is working
make dbt-run --select dim_player
```
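The preconditions above can also be checked programmatically before any task runs. A minimal sketch, assuming the two environment variables named in the setup block (the helper name and return shape are illustrative, not part of the repo):

```python
import os

# Environment variables required by the setup steps above
REQUIRED_ENV = ("SLEEPER_LEAGUE_ID", "EXTERNAL_ROOT")

def check_preconditions() -> list[str]:
    """Return a list of problems; an empty list means the environment is ready."""
    problems = [f"missing env var: {name}"
                for name in REQUIRED_ENV if not os.environ.get(name)]
    root = os.environ.get("EXTERNAL_ROOT", "")
    if root and not os.path.isdir(root):
        # EXTERNAL_ROOT should point at an existing directory (data/raw)
        problems.append(f"EXTERNAL_ROOT does not exist: {root}")
    return problems
```

Calling this at the start of a task run gives a single place to fail fast instead of hitting a confusing dbt error mid-task.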
```bash
# Ingest script smoke test + code quality
uv run python [script] --help
make lint
make typecheck

# dbt model validation
export EXTERNAL_ROOT="$PWD/data/raw"
make dbt-run --select [model_name]
make dbt-test --select [model_name]
dbt show --select [model_name] --limit 10

# SQL linting
make sqlcheck
# If running linter auto-fix:
make sqlfix
# IMPORTANT: Re-run tests after linter changes
make dbt-test --select [model_name]

# Verify raw data landed
ls -lh data/raw/[source]/[dataset]/dt=*/
uv run python -c "
import polars as pl
df = pl.read_parquet('data/raw/[source]/[dataset]/dt=*/*.parquet')
print(df.head())
print(f'Rows: {len(df)}')
"
```
Check current sprint progress in the references/README.md Task Index table.
The full sprint plan and technical specifications are in references/00_SPRINT_PLAN.md.
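The task-ID-to-file correspondence implied by the phase lists in this skill can be captured in a small lookup. The pairing below is an inference from this document, so verify it against the references/README.md Task Index before relying on it:

```python
# Inferred Sprint 1 task -> reference-file mapping; treat the pairing as an
# assumption and verify against the references/README.md Task Index.
TASK_FILES = {
    "1.1": "references/01_task_cap_space_foundation.md",
    "1.2": "references/02_task_sleeper_production_integration.md",
    "1.3": "references/03_task_fasa_target_mart.md",
    "11":  "references/11_task_fa_acquisition_history.md",
    "12":  "references/12_task_league_roster_depth.md",
    "13":  "references/13_task_enhance_fasa_targets.md",
    "1.4": "references/04_task_fasa_strategy_notebook.md",
    "2.1": "references/05_task_historical_backfill.md",
    "2.2": "references/06_task_baseline_valuation_model.md",
    "2.3": "references/07_task_trade_target_marts.md",
    "2.4": "references/08_task_trade_analysis_notebook.md",
    "3.1": "references/09_task_github_actions_workflows.md",
    "3.2": "references/10_task_documentation_polish.md",
}

def task_file(task_id: str) -> str:
    """Resolve a Sprint 1 task ID to its task file, failing loudly on unknowns."""
    if task_id not in TASK_FILES:
        raise ValueError(f"unknown Sprint 1 task: {task_id}")
    return TASK_FILES[task_id]
```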
If validation fails, report the specific failing checks, fix them before committing, and record the task status in references/README.md.

If blocked by dependencies, check the phase status:
Phase 1: Foundation (✅ COMPLETE)
Phase 2: FASA Intelligence (🟡 CURRENT FOCUS)
Phase 3: Trade Intelligence (⏸️ DEFERRED - blocked)
Phase 4: Automation (⏸️ FUTURE)
See references/README.md dependency diagram for full details.
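The phase gating above amounts to a dependency check. The edges below are illustrative guesses at the blockers described for Phase 3 (the authoritative graph is the references/README.md dependency diagram):

```python
# Illustrative dependency edges only; the real graph lives in
# references/README.md, so treat these as placeholders.
DEPENDENCIES: dict[str, set[str]] = {
    "2.2": {"2.1"},          # valuation model waits on the historical backfill
    "2.3": {"2.2"},          # trade target marts wait on the valuation model
    "2.4": {"2.2", "2.3"},   # trade analysis notebook waits on both
}

def is_unblocked(task_id: str, completed: set[str]) -> bool:
    """A task is unblocked once every declared dependency is complete."""
    return DEPENDENCIES.get(task_id, set()) <= completed
```

Tasks with no declared dependencies (Phase 1, Phase 2 here) are always unblocked under this sketch.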
When reporting task completion:

```markdown
✅ Sprint 1 Task X.Y Complete: [Task Name]

**Implemented:**
- [List of files created/modified]

**Validation Results:**
- ✅ All tests passing (X/X)
- ✅ Code quality checks passed
- ✅ Success criteria met

**Committed:**
- Commit message: [exact message from task file]
- Branch: [current branch]

**Next Steps:**
- Task X.Z: [Next task name] [priority level]
- [or] Sprint 1 Phase X complete! Ready for Phase Y
```
If issues are encountered:

```markdown
⚠️ Sprint 1 Task X.Y: Partial Completion / Issues

**Completed:**
- [What worked]

**Issues:**
- ❌ [Specific error/failure]
- ⚠️ [Warnings or concerns]

**Validation Status:**
- ✅ [Passed checks]
- ❌ [Failed checks]

**Recommended Action:**
- [Specific next steps to resolve]
```
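Both report formats are fixed templates, so they can be rendered mechanically. A sketch for the success case (the function and parameter names are assumptions, not an existing helper):

```python
def completion_report(task_id: str, task_name: str, files: list[str],
                      passed: int, total: int,
                      commit_msg: str, branch: str, next_step: str) -> str:
    """Fill the success template above; all field names here are assumptions."""
    lines = [
        f"✅ Sprint 1 Task {task_id} Complete: {task_name}",
        "",
        "**Implemented:**",
        *(f"- {path}" for path in files),
        "",
        "**Validation Results:**",
        f"- ✅ All tests passing ({passed}/{total})",
        "- ✅ Code quality checks passed",
        "- ✅ Success criteria met",
        "",
        "**Committed:**",
        f"- Commit message: {commit_msg}",
        f"- Branch: {branch}",
        "",
        "**Next Steps:**",
        f"- {next_step}",
    ]
    return "\n".join(lines)
```

Keeping the template in one function makes it easy to guarantee every completion report carries the same sections in the same order.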