README.md: 4 additions & 4 deletions
@@ -18,16 +18,16 @@ We're releasing two flavors of these open models:
 - `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single H100 GPU (117B parameters with 5.1B active parameters)
 - `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)

-Both models were trained on our [harmony response format][harmony] and should only be used with the harmony format as it will not work correctly otherwise.
+Both models were trained using our [harmony response format][harmony] and should only be used with this format; otherwise, they will not work correctly.

 ### Highlights

 - **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
 - **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
-- **Full chain-of-thought:** Gain complete access to the model's reasoning process, facilitating easier debugging and increased trust in outputs. It's not intended to be shown to end users.
+- **Full chain-of-thought:** Provides complete access to the model's reasoning process, facilitating easier debugging and greater trust in outputs. This information is not intended to be shown to end users.
 - **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
 - **Agentic capabilities:** Use the models' native capabilities for function calling, [web browsing](#browser), [Python code execution](#python), and Structured Outputs.
-- **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single H100 GPU and the `gpt-oss-20b` model run within 16GB of memory.
+- **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, allowing `gpt-oss-120b` to run on a single H100 GPU and `gpt-oss-20b` to run within 16GB of memory.

 ### Inference examples
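The harmony requirement above is strict: prompts must be rendered, and completions parsed, with the harmony encoding. As a minimal sketch, assuming the Python API of the published [`openai-harmony`][harmony] package (names such as `HarmonyEncodingName`, `SystemContent`, and `ReasoningEffort` come from that package and may shift between versions), which also shows where the configurable reasoning effort from the highlights plugs in:

```python
from openai_harmony import (
    Conversation,
    HarmonyEncodingName,
    Message,
    ReasoningEffort,
    Role,
    SystemContent,
    load_harmony_encoding,
)

# Load the harmony encoding used by the gpt-oss models.
encoding = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)

# The system message carries the reasoning effort (low, medium, high).
system = SystemContent.new().with_reasoning_effort(ReasoningEffort.HIGH)

convo = Conversation.from_messages([
    Message.from_role_and_content(Role.SYSTEM, system),
    Message.from_role_and_content(Role.USER, "What is the capital of Australia?"),
])

# Token IDs to feed the model; completions should be parsed back with the
# same encoding (e.g. encoding.parse_messages_from_completion_tokens).
tokens = encoding.render_conversation_for_completion(convo, Role.ASSISTANT)
```

For rough intuition on the memory claim: MXFP4 stores each block of 32 weights as 4-bit values plus a shared 8-bit scale, roughly 4.25 bits per parameter, so the 21B-parameter model's MoE weights come to around 11GB and fit comfortably within 16GB.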
@@ -402,7 +402,7 @@ To improve performance the tool caches requests so that the model can revisit a
 
 ### Python
 
-The model was trained to use using a python tool to perform calculations and other actions as part of its chain-of-thought. During the training the model used a stateful tool which makes running tools between CoT loops easier. This reference implementation, however, uses a stateless mode. As a result the PythonTool defines its own tool description to override the definition in [`openai-harmony`][harmony].
+The model was trained to use a python tool to perform calculations and other actions as part of its chain-of-thought. During training, the model used a stateful tool, which makes running tools between CoT loops easier. This reference implementation, however, uses a stateless mode. As a result, the `PythonTool` defines its own tool description to override the definition in [`openai-harmony`][harmony].
 
 > [!WARNING]
 > This implementation runs in a permissive Docker container, which could be problematic in cases like prompt injections. It serves as an example; you should consider implementing your own container restrictions in production.
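A sketch of how this stateless tool might be attached to the system message, assuming the reference implementation's module layout and the harmony `with_tools` helper; the import path and attribute names are assumptions to verify against the current repository:

```python
# Sketch under assumptions: the import path and `tool_config` attribute are
# taken from the reference implementation's layout and may differ.
from gpt_oss.tools.python_docker.docker_tool import PythonTool
from openai_harmony import SystemContent

python_tool = PythonTool()

# Stateless mode: the tool ships its own description, overriding the
# stateful definition bundled with openai-harmony.
system = SystemContent.new().with_tools(python_tool.tool_config)
```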