Commit 1a9e106

Update README.md (#29)
- Put quotes around `pip -e` args
- Use `hf` over `huggingface-cli` (deprecated)
1 parent: 4589fbb

File tree

1 file changed: +7, −7 lines

README.md

Lines changed: 7 additions & 7 deletions
````diff
@@ -161,10 +161,10 @@ You can download the model weights from the [Hugging Face Hub](https://huggingfa
 
 ```shell
 # gpt-oss-120b
-huggingface-cli download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/
+hf download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/
 
 # gpt-oss-20b
-huggingface-cli download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
+hf download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
 ```
 
 ## Reference PyTorch implementation
````
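For context, `hf` is the newer command-line entry point shipped with recent `huggingface_hub` releases, which deprecated the `huggingface-cli` name. As a hedged sketch (not part of this commit), a small wrapper can keep scripts working on older installs by falling back to the deprecated entry point when `hf` is absent:

```shell
# Use `hf` when available; otherwise fall back to the deprecated
# `huggingface-cli` entry point (assumes at least one is installed
# when the wrapper is actually invoked).
hf_download() {
  if command -v hf >/dev/null 2>&1; then
    hf download "$@"
  else
    huggingface-cli download "$@"
  fi
}

# Example usage (a large download, so left commented out):
# hf_download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
```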
````diff
@@ -174,7 +174,7 @@ We include an inefficient reference PyTorch implementation in [gpt_oss/torch/mod
 To run the reference implementation. Install dependencies:
 
 ```shell
-pip install -e .[torch]
+pip install -e ".[torch]"
 ```
 
 And then run:
````
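Why the quotes matter: in zsh, square brackets form a glob character class, so an unquoted `.[torch]` is treated as a filename pattern and typically fails with `no matches found` before pip ever runs. A minimal illustration of the shell behavior (general shell semantics, not part of this commit):

```shell
# Quoting keeps the extras spec literal instead of letting the shell
# expand `[torch]` as a glob character class (zsh errors on no match).
spec=".[torch]"
printf '%s\n' "$spec"   # prints: .[torch]
```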
````diff
@@ -198,7 +198,7 @@ pip install -r python/requirements.txt
 pip install -e . --verbose --no-build-isolation
 
 # Install the gpt-oss triton implementation
-pip install -e .[triton]
+pip install -e ".[triton]"
 ```
 
 And then run:
````
````diff
@@ -218,7 +218,7 @@ Additionally we are providing a reference implementation for Metal to run on App
 The implementation will get automatically compiled when running the `.[metal]` installation on an Apple Silicon device:
 
 ```shell
-pip install -e .[metal]
+pip install -e ".[metal]"
 ```
 
 To perform inference you'll need to first convert the SafeTensor weights from Hugging Face into the right format using:
````
````diff
@@ -230,8 +230,8 @@ python gpt_oss/metal/scripts/create-local-model.py -s <model_dir> -d <output_fil
 Or downloaded the pre-converted weight:
 
 ```shell
-huggingface-cli download openai/gpt-oss-120b --include "metal/*" --local-dir gpt-oss-120b/metal/
-huggingface-cli download openai/gpt-oss-20b --include "metal/*" --local-dir gpt-oss-20b/metal/
+hf download openai/gpt-oss-120b --include "metal/*" --local-dir gpt-oss-120b/metal/
+hf download openai/gpt-oss-20b --include "metal/*" --local-dir gpt-oss-20b/metal/
 ```
 
 To test it you can run:
````
