README.md: 25 additions, 0 deletions

@@ -29,6 +29,31 @@ Both models were trained using our [harmony response format][harmony] and should
- **Agentic capabilities:** Use the models' native capabilities for function calling, [web browsing](#browser), [Python code execution](#python), and Structured Outputs.
- **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, allowing `gpt-oss-120b` to run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and `gpt-oss-20b` to run within 16GB of memory (a rough estimate of why these footprints work out follows this list).

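As a back-of-envelope check on the 80GB claim: MXFP4 stores 4-bit (E2M1) elements with one shared 8-bit scale per 32-element block, i.e. 4.25 effective bits per parameter. The parameter counts and the MoE/non-MoE split below are illustrative assumptions, not numbers taken from this README:

```python
# Rough weight-memory estimate for an MXFP4-quantized MoE model.
# MXFP4: 4-bit elements plus one 8-bit scale per 32-element block,
# so 4 + 8/32 = 4.25 bits per parameter.
BITS_PER_MXFP4_PARAM = 4 + 8 / 32

def weight_gb(params: float, bits_per_param: float) -> float:
    """Convert a parameter count to gigabytes of weight storage."""
    return params * bits_per_param / 8 / 1e9

# Assumed split for gpt-oss-120b (~117B parameters total): most weights
# sit in the MXFP4 MoE layers, the remainder (attention, embeddings,
# norms) kept in bf16. These counts are illustrative, not official.
moe_params = 110e9
other_params = 7e9

total = weight_gb(moe_params, BITS_PER_MXFP4_PARAM) + weight_gb(other_params, 16)
print(f"~{total:.0f} GB of weights")  # ~72 GB, fitting a single 80GB GPU
```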
## Table of Contents
- [Inference Examples](#inference-examples)
- [About this Repository](#about-this-repository)
- [Setup](#setup)
- [Requirements](#requirements)
- [Installation](#installation)
- [Download the model](#download-the-model)
- [Reference Implementations](#reference-implementations)
- [PyTorch](#reference-pytorch-implementation)
- [Triton](#reference-triton-implementation-single-gpu)
- [Metal (Apple Silicon)](#reference-metal-implementation)
- [Harmony format & tools](#harmony-format--tools)
- [Clients](#clients)
- [Terminal Chat](#terminal-chat)
- [Responses API](#responses-api)
- [Codex](#codex)
- [Tools](#tools)
- [Browser](#browser)
- [Python](#python)
- [Apply Patch](#apply-patch)
- [Other details](#other-details)
- [Precision format](#precision-format)
- [Recommended Sampling Parameters](#recommended-sampling-parameters)
- [Contributing](#contributing)

### Inference examples

#### Transformers
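A minimal sketch of chat-style generation with the Transformers `pipeline` API, assuming the weights are published on Hugging Face as `openai/gpt-oss-20b` (an assumed model ID; substitute the actual repository name if it differs):

```python
# Minimal sketch, not the repository's official example.
# Assumes the weights are hosted on Hugging Face as "openai/gpt-oss-20b".
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",   # let Transformers pick the checkpoint's dtype
    device_map="auto",    # spread layers across available GPUs/CPU
)

messages = [
    {"role": "user", "content": "Explain MXFP4 quantization in one sentence."},
]

result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"])
```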