Commit ec7914d

Update README.md
1 parent 7e64492 commit ec7914d

File tree

1 file changed: +2 −2 lines changed


README.md

Lines changed: 2 additions & 2 deletions
@@ -15,7 +15,7 @@ Welcome to the gpt-oss series, [OpenAI's open-weight models](https://openai.com/
 
 We're releasing two flavors of these open models:
 
-- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single H100 GPU (117B parameters with 5.1B active parameters)
+- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
 - `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
 
 Both models were trained using our [harmony response format][harmony] and should only be used with this format; otherwise, they will not work correctly.

@@ -27,7 +27,7 @@ Both models were trained using our [harmony response format][harmony] and should
 - **Full chain-of-thought:** Provides complete access to the model's reasoning process, facilitating easier debugging and greater trust in outputs. This information is not intended to be shown to end users.
 - **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
 - **Agentic capabilities:** Use the models' native capabilities for function calling, [web browsing](#browser), [Python code execution](#python), and Structured Outputs.
-- **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, allowing `gpt-oss-120b` to run on a single H100 GPU and `gpt-oss-20b` to run within 16GB of memory.
+- **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, allowing `gpt-oss-120b` to run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and `gpt-oss-20b` to run within 16GB of memory.
 
 ### Inference examples
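The unchanged context lines stress that both models must be prompted with the harmony response format. As an illustrative sketch only, assuming the `<|start|>`/`<|message|>`/`<|end|>` special-token layout that the [harmony] link documents (this is not the official `openai-harmony` renderer, and the exact token names should be checked against that library), a chat turn can be laid out like this:

```python
# Hypothetical sketch of a harmony-style chat layout. Token names
# (<|start|>, <|message|>, <|end|>) are an assumption taken from the
# harmony format docs, not a pinned API of the official renderer.
def render_harmony(messages):
    """Render a list of {role, content} dicts into one harmony-style prompt string."""
    parts = []
    for msg in messages:
        parts.append(f"<|start|>{msg['role']}<|message|>{msg['content']}<|end|>")
    # Leave an open assistant turn for the model to complete.
    parts.append("<|start|>assistant")
    return "".join(parts)

prompt = render_harmony([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is MXFP4?"},
])
```

In practice the official harmony library should be used to build prompts, since the models were trained only on its exact token sequence.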
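The edited bullets claim `gpt-oss-120b` (117B parameters) fits on a single 80GB GPU thanks to MXFP4. A back-of-envelope check, ignoring MXFP4 block-scale overhead, activations, and the KV cache (so a rough lower bound, not an official sizing), supports the arithmetic:

```python
# Rough weight-memory estimate: params * bits / 8 bytes. This ignores
# MXFP4 scale metadata, non-MoE layers kept at higher precision, and
# runtime buffers, so real usage will be somewhat higher.
def approx_weight_gb(params_billions, bits_per_param):
    """Approximate weight memory in GB at a given precision."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

fp16_gb = approx_weight_gb(117, 16)   # ~234 GB: far beyond one 80GB card
mxfp4_gb = approx_weight_gb(117, 4)   # ~58.5 GB: fits, with headroom for KV cache
```

The same arithmetic for `gpt-oss-20b` (21B parameters) gives roughly 10.5 GB at 4 bits, consistent with the bullet's 16GB figure.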

Comments (0)