
Commit a8d1bbd

Update transformers to avoid triton errors. (#2051)

1 parent: 7da367e

2 files changed: 3 additions, 7 deletions

articles/gpt-oss/run-colab.ipynb (2 additions, 2 deletions)

```diff
@@ -65,7 +65,7 @@
    },
    "outputs": [],
    "source": [
-     "!pip install -q git+https://github.com/huggingface/transformers triton==3.4 kernels"
+     "!pip install -q transformers triton==3.4 kernels"
    ]
   },
   {
@@ -244,4 +244,4 @@
   },
  "nbformat": 4,
  "nbformat_minor": 0
-}
+}
```
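
After the notebook's pinned install cell, a quick version check confirms the environment actually resolved to the pin the commit introduces. This is a sketch, not part of the commit; it assumes only the packages installed by the cell above.

```python
# Sanity-check sketch (assumed, not in the commit): verify the resolved
# versions after running the pinned install cell.
import transformers
import triton

print("transformers:", transformers.__version__)
print("triton:", triton.__version__)  # the commit pins triton==3.4
assert triton.__version__.startswith("3.4"), "expected the pinned triton 3.4"
```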

articles/gpt-oss/run-transformers.md (1 addition, 5 deletions)

````diff
@@ -29,11 +29,7 @@ If you use `bfloat16` instead of MXFP4, memory consumption will be larger (\~48
 It’s recommended to create a fresh Python environment. Install transformers, accelerate, as well as the Triton kernels for MXFP4 compatibility:

 ```bash
-pip install -U transformers accelerate torch triton kernels
-```
-
-```bash
-pip install git+https://github.com/triton-lang/triton.git@main#subdirectory=python/triton_kernels
+pip install -U transformers accelerate torch triton==3.4 kernels
 ```

 2. **(Optional) Enable multi-GPU**
````
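
For orientation, here is a minimal sketch of the usage this install enables: loading a gpt-oss checkpoint through the `transformers` pipeline API. The model id `openai/gpt-oss-20b`, the prompt, and the generation arguments are assumptions for illustration; only the package pins come from this commit.

```python
# Minimal sketch, assuming the pinned packages above are installed and a
# GPU is available. The model id "openai/gpt-oss-20b" is an assumption,
# not stated anywhere in this diff.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed gpt-oss checkpoint
    torch_dtype="auto",          # let transformers pick the weight dtype
    device_map="auto",           # requires accelerate, installed above
)

output = generator("Explain MXFP4 quantization in one sentence.", max_new_tokens=64)
print(output[0]["generated_text"])
```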
