Contribute to NVIDIA cuOpt codebase including C++/CUDA, Python, server, docs, and CI. Use when the user wants to modify solver internals, add features, submit PRs, or understand the codebase architecture.
Contribute to the NVIDIA cuOpt codebase. This skill is for modifying cuOpt itself, not for using it.
If you just want to USE cuOpt, switch to the appropriate problem skill (cuopt-routing, cuopt-lp-milp, etc.)
These rules are specific to development tasks. They differ from user rules.
Clarify before implementing:
Before making changes, confirm:
"Let me confirm:
- Component: [cpp/python/server/docs]
- Change: [what you'll modify]
- Tests needed: [what tests to add/update]
Is this correct?"
OK to run without asking (expected for dev work):
- `./build.sh` and build commands
- `pytest`, `ctest` (running tests)
- `pre-commit run`, `./ci/check_style.sh` (formatting)
- `git status`, `git diff`, `git log` (read-only git)

Set up pre-commit hooks (once per clone):
`pre-commit install` — hooks then run automatically on every `git commit`. If a hook fails, the commit is blocked until you fix the issue.

Still ask before:
- `git commit`, `git push` (write operations)
- Package installs (`pip`, `conda`, `apt`)

Same as user rules — never without explicit request:
- `sudo`

Ask these if not already clear:
- What are you trying to change?
- Do you have the development environment set up?
- Is this for contribution or local modification?
- Which branch should this target?
- Default target: `main`
- `release/YY.MM` (e.g., `release/26.06`) for the current release, `main` for the next
- List release branches: `git branch -r | grep release`

```
cuopt/
├── cpp/                   # Core C++ engine
│   ├── include/cuopt/     # Public C/C++ headers
│   ├── src/               # Implementation (CUDA kernels)
│   └── tests/             # C++ unit tests (gtest)
├── python/
│   ├── cuopt/             # Python bindings and routing API
│   ├── cuopt_server/      # REST API server
│   ├── cuopt_self_hosted/ # Self-hosted deployment
│   └── libcuopt/          # Python wrapper for C library
├── ci/                    # CI/CD scripts
├── docs/                  # Documentation source
└── datasets/              # Test datasets
```
| API Type | LP | MILP | QP | Routing |
|---|---|---|---|---|
| C API | ✓ | ✓ | ✓ | ✗ |
| C++ API | (internal) | (internal) | (internal) | (internal) |
| Python | ✓ | ✓ | ✓ | ✓ |
| Server | ✓ | ✓ | ✗ | ✓ |
Documentation source lives in `docs/cuopt/source/`.

Never bypass checks with `--no-verify` or by skipping hooks. Never use raw `new`/`delete` — use RMM allocators.

`PARALLEL_LEVEL` controls the number of parallel compile jobs. It defaults to `$(nproc)` (all cores), which can cause OOM on machines with limited RAM — CUDA compilation is memory-intensive. Set it based on your system's available RAM (roughly 4-8 GB per job):
```bash
export PARALLEL_LEVEL=8   # adjust based on available RAM

./build.sh                # build everything
./build.sh libcuopt       # C++ library
./build.sh cuopt          # Python package
./build.sh cuopt_server   # Server
./build.sh docs           # Documentation
```
```bash
# C++ tests
ctest --test-dir cpp/build

# Python tests
pytest -v python/cuopt/cuopt/tests

# Server tests
pytest -v python/cuopt_server/tests
```
cuOpt uses Cython to bridge Python and C++. See resources/python_bindings.md for the full architecture, parameter flow walkthrough, key files, and Cython patterns.
Run once per clone to have style checks run automatically on every git commit:
```bash
pre-commit install
```
If a hook fails, the commit is blocked — fix the issues and commit again. To check all files manually (e.g., before pushing), run `pre-commit run --all-files --show-diff-on-failure`.
Group related changes into logical commits rather than committing all files at once. Each commit should represent one coherent change (e.g., separate the C++ change from the Python binding update from the test addition). This makes git log and git bisect useful for debugging later.
```bash
git commit -s -m "Your message"
```
Never push branches directly to the main cuOpt repository. Use the fork workflow:
```bash
# 1. Clone the main repo
git clone git@github.com:NVIDIA/cuopt.git
cd cuopt

# 2. Add your fork as a remote
git remote add fork git@github.com:<your-username>/cuopt.git

# 3. Create a branch from the appropriate base (see branching strategy above)
git checkout -b my-feature-branch

# 4. Make changes, commit, then push to your fork
git push fork my-feature-branch

# 5. Create PR from your fork → upstream base branch
```
This applies to both human contributors and AI agents. Agents must never push to the upstream repo directly — provide the push command for the user to review and execute from their fork.
When an AI agent creates a pull request, it must be a draft PR (gh pr create --draft). This gives the developer time to review and iterate on the changes before any reviewers get pinged. The developer will mark it as ready for review when satisfied.
Keep PR summaries short and informative. State what changed and why in a few bullet points. Avoid verbose explanations, full file listings, or restating the diff. Reviewers read the code — the summary should give them context, not a transcript.
| Element | Convention | Example |
|---|---|---|
| Variables | snake_case | num_locations |
| Functions | snake_case | solve_problem() |
| Classes | snake_case | data_model |
| Test cases | PascalCase | SolverTest |
| Device data | d_ prefix | d_locations_ |
| Host data | h_ prefix | h_data_ |
| Template params | _t suffix | value_t |
| Private members | _ suffix | n_locations_ |
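To make the conventions concrete, here is a small stand-alone sketch. All names (`distance_store`, `set_distance`, etc.) are invented for illustration — they are not actual cuOpt classes — but they follow the table: snake_case class and function names, a `value_t` template parameter, trailing-underscore private members, and `h_`/`d_` prefixes for host/device data.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical helper illustrating cuOpt naming conventions.
template <typename value_t>  // _t suffix: template parameter
class distance_store {       // snake_case: class name
 public:
  explicit distance_store(std::size_t num_locations)  // snake_case: variable
    : n_locations_(num_locations),
      h_distances_(num_locations * num_locations) {}

  std::size_t n_locations() const { return n_locations_; }

  void set_distance(std::size_t i, std::size_t j, value_t d) {
    h_distances_[i * n_locations_ + j] = d;
  }

  value_t distance(std::size_t i, std::size_t j) const {
    return h_distances_[i * n_locations_ + j];
  }

 private:
  std::size_t n_locations_;           // _ suffix: private member
  std::vector<value_t> h_distances_;  // h_ prefix: host-side data
  // value_t* d_distances_;           // d_ prefix would mark the device copy
};
```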
| Extension | Usage |
|---|---|
| `.hpp` | C++ headers |
| `.cpp` | C++ source |
| `.cu` | CUDA source (nvcc required) |
| `.cuh` | CUDA headers with device code |
```cpp
CUOPT_EXPECTS(condition, "Error message");
CUOPT_FAIL("Unreachable code reached");
RAFT_CUDA_TRY(cudaMemcpy(...));
```
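For intuition, the check-and-throw behavior of an EXPECTS-style macro can be sketched with a simplified stand-in. `MY_EXPECTS` and `checked_divide` below are invented for illustration only — cuOpt's real `CUOPT_EXPECTS` is defined in its own headers and throws cuOpt-specific exception types.

```cpp
#include <stdexcept>
#include <string>

// Simplified stand-in for an EXPECTS-style macro: evaluate a condition
// and throw with a descriptive message when it fails.
#define MY_EXPECTS(cond, msg)                               \
  do {                                                      \
    if (!(cond)) throw std::logic_error(std::string(msg));  \
  } while (0)

int checked_divide(int a, int b) {
  MY_EXPECTS(b != 0, "divisor must be nonzero");  // precondition check
  return a / b;
}
```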
```cpp
// ❌ WRONG
int* data = new int[100];

// ✅ CORRECT - use RMM
rmm::device_uvector<int> data(100, stream);
```