
Commit 3a06681

Commit message: bitsandbytes

1 parent 2d8eb16 commit 3a06681

File tree: 3 files changed (+22 additions, -4 deletions)

.azure/gpu-tests-fabric.yml (11 additions, 0 deletions)

```diff
@@ -123,6 +123,17 @@ jobs:
       python requirements/pytorch/check-avail-extras.py
     displayName: "Env details"
+
+  - bash: |
+      # get pytorch version
+      PYTORCH_VERSION=$(python -c "import torch; print(torch.__version__.split('+')[0])")
+      # FixMe: uninstall bitsandbytes for pytorch 2.6 as it is not compatible with `triton.ops`
+      if [[ "${PYTORCH_VERSION}" == "2.6.0" ]]; then
+        pip uninstall -y bitsandbytes
+      else
+        python -c "import bitsandbytes"
+      fi
+    displayName: "Handle bitsandbytes"
   - bash: python -m pytest lightning_fabric
     workingDirectory: src
   # without succeeded this could run even if the job has already failed
```
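The new CI step above gates the `pip uninstall` on the PyTorch version with the local build tag stripped. A minimal sketch of that gating logic in plain Python (the function name is illustrative, not part of the commit):

```python
def should_uninstall_bitsandbytes(torch_version: str) -> bool:
    """Return True when bitsandbytes should be removed for this PyTorch build.

    Mirrors the bash step: strip any local build suffix
    ("2.6.0+cu124" -> "2.6.0") and compare against 2.6.0,
    the release whose removal of `triton.ops` breaks bitsandbytes.
    """
    base_version = torch_version.split("+")[0]
    return base_version == "2.6.0"


print(should_uninstall_bitsandbytes("2.6.0+cu124"))  # True
print(should_uninstall_bitsandbytes("2.5.1"))        # False
```

Splitting on `+` matches what the bash step does with `torch.__version__.split('+')[0]`, so CUDA-tagged builds of the same release are treated identically.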

.azure/gpu-tests-pytorch.yml (11 additions, 0 deletions)

```diff
@@ -137,6 +137,17 @@ jobs:
       python requirements/pytorch/check-avail-extras.py
     displayName: "Env details"
+
+  - bash: |
+      # get pytorch version
+      PYTORCH_VERSION=$(python -c "import torch; print(torch.__version__.split('+')[0])")
+      # FixMe: uninstall bitsandbytes for pytorch 2.6 as it is not compatible with `triton.ops`
+      if [[ "${PYTORCH_VERSION}" == "2.6.0" ]]; then
+        pip uninstall -y bitsandbytes
+      else
+        python -c "import bitsandbytes"
+      fi
+    displayName: "Handle bitsandbytes"
   - bash: python -m pytest pytorch_lightning
     workingDirectory: src
   # without succeeded this could run even if the job has already failed
```

requirements/pytorch/check-avail-extras.py (0 additions, 4 deletions)

```diff
@@ -4,7 +4,3 @@
 import matplotlib  # noqa: F401
 import omegaconf  # noqa: F401
 import rich  # noqa: F401
-import torch
-
-if torch.cuda.is_available():
-    import bitsandbytes  # noqa: F401
```
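The lines deleted above used a conditionally guarded import: `bitsandbytes` was only imported when CUDA was available, so the availability check would not fail on CPU-only machines. That responsibility now lives in the CI step instead. A generic sketch of the guarded-import pattern (the helper name is hypothetical, not from this commit):

```python
import importlib
import importlib.util


def optional_import(module_name: str, prerequisite: bool):
    """Import `module_name` only when its runtime prerequisite holds
    and the module is actually installed; otherwise return None
    instead of raising, mirroring the removed CUDA-gated import."""
    if not prerequisite:
        return None
    if importlib.util.find_spec(module_name) is None:
        return None
    return importlib.import_module(module_name)


# With the prerequisite unmet (e.g. no GPU), nothing is imported:
print(optional_import("bitsandbytes", prerequisite=False))  # None
```

The trade-off the commit makes is moving this guard out of the shared script and into the CI pipelines, where the PyTorch version (not just CUDA availability) decides whether the import should even be attempted.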
