Conditionally installing hardware-accelerated PyTorch with Poetry on different hardware using the same pyproject.toml can be tricky. This repo serves as a quick lookup for the configuration file and installation commands.
> [!NOTE]
> Dependencies updated to Torch 2.9, Python 3.12-3.14, and CUDA 13.0.
| Command | Behavior |
|---|---|
| `poetry sync` | Does not install PyTorch (import fails). |
| `poetry sync -E cpu` | Installs the CPU-only variant of PyTorch. |
| `poetry sync -E cuda --with cuda` | Installs the CUDA variant of PyTorch. Expects NVIDIA hardware. |
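These commands rely on the common Poetry pattern of optional extras plus explicit package sources. A rough sketch of the relevant `pyproject.toml` sections follows; the group and source names, version pins, and the `cu130` index path are illustrative assumptions, not necessarily this repo's exact file:

```toml
[tool.poetry.dependencies]
python = ">=3.12,<3.15"
# CPU wheel, only pulled in when the "cpu" extra is requested.
torch = { version = "~2.9", source = "pytorch-cpu", markers = "extra == 'cpu' and extra != 'cuda'", optional = true }

[tool.poetry.extras]
cpu = ["torch"]
cuda = ["torch"]

# Optional group so the CUDA wheel is only resolved with --with cuda.
[tool.poetry.group.cuda]
optional = true

[tool.poetry.group.cuda.dependencies]
torch = { version = "~2.9", source = "pytorch-cuda", markers = "extra == 'cuda'" }

[[tool.poetry.source]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"

[[tool.poetry.source]]
name = "pytorch-cuda"
url = "https://download.pytorch.org/whl/cu130"
priority = "explicit"
```

The `priority = "explicit"` setting keeps the PyTorch indexes from being searched for any other dependency; only packages that name the source pull from it.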
> [!WARNING]
> The example below is likely not what you want:
| Command | Behavior |
|---|---|
| `poetry sync -E cuda` | Silently installs the CPU variant of PyTorch, with no errors or warnings. |
The `sync` command behaves like the old `poetry install --sync`: it keeps the local environment in sync with the lock file, removing any dependencies not listed there. In most cases you should prefer it over `poetry install` to avoid leaving untracked, outdated packages in the environment.
To pick the right extra automatically, detect NVIDIA hardware first:

```shell
if lspci | grep -i nvidia; then
    poetry sync --extras=cuda --with cuda
else
    poetry sync --extras=cpu
fi
```

Verify the installation with

```shell
poetry run python check-cuda.py
```

or

```shell
poetry run python -c "import torch; print(torch.cuda.is_available())"
```

Remember to remove the `poetry.lock` file from `.gitignore`: it should be committed to the repository for consistent environments across machines.
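A `check-cuda.py` along these lines would do the job; this is a sketch of what such a script could contain, not necessarily the repo's actual file:

```python
# check-cuda.py (hypothetical sketch): report which torch variant is installed.
import importlib.util


def cuda_status() -> str:
    """Return a short report on the installed PyTorch variant."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch

    if torch.cuda.is_available():
        # CUDA wheel installed and a GPU is visible.
        return f"cuda available ({torch.cuda.get_device_name(0)})"
    # CPU wheel, or CUDA wheel without a usable GPU.
    return "cpu only"


if __name__ == "__main__":
    print(cuda_status())
```

Unlike the one-liner above, this also distinguishes a missing `torch` from a CPU-only install instead of crashing on the import.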
To inspect the resolved dependency tree and see why each package is included:

```shell
poetry show --why
poetry show --with cuda --why
```