
Commit 5618b40

palenciavik authored and Danztee committed
Update README.md (openai#71)
Addresses multiple small typos and grammatical errors in the main README.md and makes some improvements in phrasing for clarity.
1 parent 66165dc commit 5618b40

File tree

1 file changed: +2 -2 lines changed


README.md (2 additions, 2 deletions)

@@ -33,7 +33,7 @@ Both models were trained using our [harmony response format][harmony] and should
 
 #### Transformers
 
-You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template it will automatically apply the [harmony response format][harmony]. If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [`openai-harmony`][harmony] package.
+You can use `gpt-oss-120b` and `gpt-oss-20b` with the Transformers library. If you use Transformers' chat template, it will automatically apply the [harmony response format][harmony]. If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [`openai-harmony`][harmony] package.
 
 ```python
 from transformers import pipeline
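
As context for the paragraph changed above, the chat-template path it describes typically looks like the following. This is a minimal sketch rather than the README's own example: the model id `openai/gpt-oss-20b`, the prompt, and the generation settings are assumptions.

```python
# Minimal sketch of the chat-template flow described in the changed paragraph.
# Assumptions: the Hugging Face model id "openai/gpt-oss-20b" and standard
# transformers APIs; the README's own example may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain the harmony response format in one sentence."}]

# apply_chat_template renders the conversation into the model's expected
# (harmony) prompt format, so no manual formatting is needed on this path.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If you call `model.generate` on a manually built prompt instead, the harmony formatting has to be applied yourself, via the chat template or the `openai-harmony` package, as the paragraph notes.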
@@ -279,7 +279,7 @@ options:
 ```
 
 > [!NOTE]
-> The torch and triton implementation requires original checkpoint under `gpt-oss-120b/original/` and `gpt-oss-20b/original/` respectively. While vLLM uses the Hugging Face converted checkpoint under `gpt-oss-120b/` and `gpt-oss-20b/` root directory respectively.
+> The torch and triton implementations require original checkpoint under `gpt-oss-120b/original/` and `gpt-oss-20b/original/` respectively. While vLLM uses the Hugging Face converted checkpoint under `gpt-oss-120b/` and `gpt-oss-20b/` root directory respectively.
 
 ### Responses API
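
To make the path convention in that note concrete, here is a small illustrative helper. It is not repository code: the function name and backend strings are invented for this sketch; only the directory layout comes from the note.

```python
from pathlib import Path

# Illustrative only (not repository code): maps a backend name to the
# checkpoint directory layout described in the note above.
def checkpoint_dir(model_dir: str, backend: str) -> Path:
    root = Path(model_dir)  # e.g. "gpt-oss-120b" or "gpt-oss-20b"
    if backend in {"torch", "triton"}:
        # The torch/triton reference implementations load the original checkpoint.
        return root / "original"
    # vLLM loads the Hugging Face-converted checkpoint from the root directory.
    return root

print(checkpoint_dir("gpt-oss-20b", "triton"))  # gpt-oss-20b/original
print(checkpoint_dir("gpt-oss-20b", "vllm"))    # gpt-oss-20b
```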
