Use this skill when updating dependencies managed by uv: bumping a package version, upgrading the uv tool itself, updating torch/CUDA stack, switching transformers version, or regenerating the lockfile. Trigger: 'update dependency', 'bump version', 'upgrade uv', 'update torch', 'update lockfile', 'uv sync fails'.
Read .agents/knowledge/uv.md for the full dependency architecture. The key things that make VeOmni's uv setup non-trivial:

- The hardware extras (gpu, npu, npu_aarch64) are mutually conflicting.
- uv itself is pinned to a specific version.

## Upgrading uv itself

Update all three locations together:
1. pyproject.toml -> [tool.uv] -> required-version = "==X.Y.Z"
2. docker/cuda/Dockerfile.cu129 -> COPY --from=ghcr.io/astral-sh/uv:X.Y.Z
3. docker/ascend/Dockerfile.ascend_* -> same pattern (if present)

Then regenerate the lockfile:

```bash
uv lock
uv sync --extra gpu --dev
```
Verify that the lockfile diff is reasonable: `git diff uv.lock` should only show version changes, not wholesale rewrites.
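To catch drift between the three locations, a small check can help. This is a sketch assuming the file paths listed above; `extract_pin` and `check_uv_pin` are hypothetical helpers, not part of the repo:

```bash
# Sketch: verify pyproject.toml and the Dockerfiles agree on the pinned uv version.
# Paths follow the layout described above.

extract_pin() {
  # Pull X.Y.Z out of a line like: required-version = "==X.Y.Z"
  grep -oE '"==[0-9]+\.[0-9]+\.[0-9]+"' | tr -d '"='
}

check_uv_pin() {
  local pinned
  pinned=$(extract_pin < pyproject.toml)
  echo "pyproject pins uv ${pinned}"
  for f in docker/cuda/Dockerfile.cu129 docker/ascend/Dockerfile.ascend_*; do
    [ -f "$f" ] || continue
    if grep -q "ghcr.io/astral-sh/uv:${pinned}" "$f"; then
      echo "OK: $f"
    else
      echo "MISMATCH: $f" >&2
    fi
  done
}
# Run check_uv_pin from the repo root before committing.
```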
## Bumping a regular package version

Edit the version constraint in pyproject.toml under [project.dependencies] or the relevant [project.optional-dependencies] extra, then relock and sync:

```bash
uv lock
uv sync --extra gpu --dev
```
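For a lockfile-only bump, where the pyproject constraint already allows the newer version, uv can re-resolve a single package instead of the whole lockfile. A sketch; `accelerate` is only an illustrative package name:

```bash
# Re-resolve just one package within the existing constraints
uv lock --upgrade-package accelerate
uv sync --extra gpu --dev
```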
```bash
pytest tests/
```

Commit pyproject.toml and uv.lock together.

## Updating the torch/CUDA stack

This is the most complex update. torch versions are pinned in multiple places:
For GPU (the gpu extra):

- pyproject.toml -> [project.optional-dependencies] -> the gpu list
- pyproject.toml -> [tool.uv] -> override-dependencies (the extra == 'gpu' entries)
- pyproject.toml -> [tool.uv.sources] -> torch (direct wheel URL; must be updated to the matching wheel)
- Companion packages: torchvision, torchaudio, torchcodec, nvidia-cudnn-cu12

For NPU (the npu / npu_aarch64 extras):
- torch versions carry the +cpu suffix or no suffix

Steps:

1. Update the pinned versions in pyproject.toml (extras, overrides, sources).
2. Check flash-attn / flash-attn-3 wheel compatibility: these are tied to specific torch versions via direct URLs in [tool.uv.sources].
3. Bump the torchcodec version if needed (see the compatibility note in pyproject.toml).
4. Relock and sync:

```bash
uv lock
uv sync --extra gpu --dev
```
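When editing the direct wheel URLs in [tool.uv.sources], it is easy to leave a stale version baked into a filename. A sketch for double-checking; `wheel_version` is a hypothetical helper and the URL is illustrative:

```bash
# Extract the "2.6.0+cu124"-style version from a wheel filename.
# Note: real download URLs may encode "+" as "%2B".
wheel_version() {
  basename "$1" | sed -E 's/^[A-Za-z0-9_]+-([^-]+)-.*/\1/'
}

# Example: confirm the wheel URL matches the version you pinned.
url="https://example.com/torch-2.6.0+cu124-cp311-cp311-linux_x86_64.whl"
echo "wheel carries torch $(wheel_version "$url")"
```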
```bash
pytest tests/
```

## Updating transformers

transformers uses a dual-track setup: the default 4.57.3 (dependency group transformers-stable) and the experimental 5.2.0 (extra transformers5-exp), declared as conflicts in [tool.uv.conflicts].
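The conflict declaration in pyproject.toml has roughly this shape. This is a sketch based on uv's conflicts syntax; only the group and extra names come from this document:

```toml
[tool.uv]
conflicts = [
    [
        { group = "transformers-stable" },
        { extra = "transformers5-exp" },
    ],
]
```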
Bump within a track (e.g. 4.57.3 → 4.58.0, or 5.2.0 → 5.3.0) by editing the version in pyproject.toml:

- [dependency-groups] -> transformers-stable (stable track)
- [project.optional-dependencies] -> transformers5-exp (experimental track)

Then relock and sync the track you need:

```bash
uv lock
# Stable:
uv sync --extra gpu --dev
# Or v5:
uv sync --no-group transformers-stable --extra transformers5-exp --extra gpu --dev
```
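Before running tests it can help to confirm which track is actually synced. A sketch; `version_ge` is a hypothetical helper, and the `uv run` line assumes the project environment has been synced:

```bash
# version_ge A B: succeed if dotted version A >= B (compare via sort -V)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Which transformers track is active?
tv=$(uv run python -c 'import transformers; print(transformers.__version__)' 2>/dev/null || echo 0)
if version_ge "$tv" "5.0.0"; then
  echo "experimental v5 track ($tv)"
else
  echo "stable v4 track ($tv)"
fi
```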
When switching between tracks, note the v4 -> v5 API changes handled in the codebase:

- AutoModelForVision2Seq was removed (use AutoModelForImageTextToText).
- no_init_weights moved from transformers.modeling_utils to transformers.initialization.
- The __init__.py version gates use is_transformers_version_greater_or_equal_to("5.0.0") to choose between generated/ (v5) and upstream + apply_*_patch() (v4).
- Some models (qwen3_5, glm_moe_dsa) only register on >= 5.2.0.

After switching, run pytest tests/models/ tests/e2e/ and make patchgen (with the target transformers installed).

## Regenerating the lockfile

When uv.lock is out of sync or corrupt:
```bash
uv lock
uv sync --extra gpu --dev
```
If uv lock fails due to version conflicts, check:

- the [tool.uv] -> conflicts declarations
- the override-dependencies markers

## Gotchas

- The pinned uv version must match the docker/ Dockerfiles, otherwise CI builds will fail.
- Updating torch but not torchvision/torchaudio/torchcodec to matching versions causes import errors.
- Commit pyproject.toml and uv.lock together. Docker builds use --locked, which requires the lockfile to match pyproject.toml.
- The extra == 'gpu' markers in the overrides are critical. Removing them causes uv to download the wrong torch variants from PyPI.
- flash-attn and flash-attn-3 are listed under no-build-isolation-package. They require torch to be installed first. If sync fails, try uv sync without these extras first, then add them back.
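The last gotcha suggests a two-phase sync when flash-attn fails to build. A sketch of that recovery path; whether torch lands in phase one depends on how the extras are structured in this repo:

```bash
# Phase 1: sync without the extra that pulls in flash-attn
uv sync --dev
# Phase 2: re-sync with the extra once torch is available in the environment
uv sync --extra gpu --dev
```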