
Commit 3e4bf74

add gpt-oss guides
1 parent 5567658 commit 3e4bf74

File tree

8 files changed: +2100 -1 lines changed


articles/gpt-oss/fine-tune-transfomers.ipynb

Lines changed: 674 additions & 0 deletions
Large diffs are not rendered by default.

articles/gpt-oss/handle-raw-cot.md

Lines changed: 123 additions & 0 deletions
@@ -0,0 +1,123 @@
# How to handle the raw chain of thought in gpt-oss

The [gpt-oss models](https://openai.com/open-models) provide access to a raw chain of thought (CoT) meant for analysis and safety research by model implementors, but it’s also crucial for the performance of tool calling, as tool calls can be performed as part of the CoT. At the same time, the raw CoT might contain potentially harmful content or could reveal information that the person implementing the model did not intend to expose (like rules specified in the instructions given to the model). You therefore should not show the raw CoT to end users.

## Harmony / chat template handling

The model encodes its raw CoT as part of our [harmony response format](https://cookbook.openai.com/articles/openai-harmony). If you are authoring your own chat templates or are handling tokens directly, make sure to [check out the harmony guide first](https://cookbook.openai.com/articles/openai-harmony).
To summarize a couple of things:

1. CoT will be issued to the `analysis` channel.
2. After a message is sent to the `final` channel, all `analysis` messages should be dropped in subsequent sampling turns. Function calls to the `commentary` channel can remain.
3. If the last message by the assistant was a tool call of any type, the `analysis` messages issued since the previous `final` message should be preserved on subsequent sampling turns until a new `final` message gets issued (see the sketch below).
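To make these pruning rules concrete, here is a minimal sketch of how a conversation history could be filtered before the next sampling turn. The dict-based message structure below is a simplified, hypothetical representation for illustration only, not the harmony token format itself; consult the harmony guide for the actual encoding.

```py
# Minimal sketch: prune `analysis` messages before re-sampling, following the
# rules above. The message/channel dict structure is illustrative, not harmony.

def prune_history(messages: list[dict]) -> list[dict]:
    """Drop analysis messages that precede the latest `final` message.

    Analysis messages after the last `final` message are kept, which covers the
    case where the last assistant message was a tool call and no new `final`
    message has been issued yet.
    """
    last_final = max(
        (i for i, m in enumerate(messages) if m.get("channel") == "final"),
        default=-1,
    )
    pruned = []
    for i, m in enumerate(messages):
        if m.get("channel") == "analysis" and i < last_final:
            continue  # analysis from already-finalized turns is dropped
        pruned.append(m)  # final, commentary (tool calls), user/dev messages stay
    return pruned


history = [
    {"role": "user", "content": "What's 2+2?"},
    {"role": "assistant", "channel": "analysis", "content": "Simple arithmetic..."},
    {"role": "assistant", "channel": "final", "content": "4"},
    {"role": "user", "content": "And 3+3?"},
    {"role": "assistant", "channel": "analysis", "content": "Also simple..."},  # kept: no new final yet
]
print(prune_history(history))
```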
## Chat Completions API

If you are implementing a Chat Completions API, there is no official spec for handling the chain of thought in the published OpenAI specs, as our hosted models will not offer this feature for the time being. Instead, we ask you to follow [the convention established by OpenRouter](https://openrouter.ai/docs/use-cases/reasoning-tokens). In particular:

1. Raw CoT is returned as part of the response unless `reasoning: { exclude: true }` is specified as part of the request. [See details here](https://openrouter.ai/docs/use-cases/reasoning-tokens#legacy-parameters)
2. The raw CoT is exposed as a `reasoning` property on the message in the output.
3. For delta events, the delta has a `reasoning` property.
4. On subsequent turns you should be able to receive the previous reasoning (as `reasoning`) and handle it in accordance with the behavior specified in the chat template section above.

When in doubt, please follow the convention / behavior of the OpenRouter implementation.
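For example, a client talking to a local Chat Completions-compatible server that follows this convention could read the `reasoning` property off the message, or opt out of it with the `reasoning: { exclude: true }` request option. This is a hedged sketch: the base URL and model name are placeholders, and whether the extra field is surfaced depends on the server you run.

```py
from openai import OpenAI

# Hypothetical local server implementing the OpenRouter-style convention.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[{"role": "user", "content": "Briefly explain MXFP4."}],
    # To drop the raw CoT from the response entirely, the convention is:
    # extra_body={"reasoning": {"exclude": True}},
)

message = response.choices[0].message
raw_cot = getattr(message, "reasoning", None)  # non-standard extra field, may be absent
print("final answer:", message.content)
print("raw CoT (do not show to end users):", raw_cot)
```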
## Responses API

For the Responses API, we augmented our Responses API spec to cover this case. Below are the changes to the spec as type definitions. At a high level we are:

1. Introducing a new `content` property on `reasoning` items. This allows a reasoning `summary` that could be displayed to the end user to be returned at the same time as the raw CoT (which should not be shown to the end user, but which might be helpful for interpretability research).
2. Introducing a new content type called `reasoning_text`.
3. Introducing two new events: `response.reasoning_text.delta` to stream the deltas of the raw CoT and `response.reasoning_text.done` to indicate that a turn of CoT is complete.
4. On subsequent turns you should be able to receive the previous reasoning and handle it in accordance with the behavior specified in the chat template section above.
**Item type changes**

```typescript
type ReasoningItem = {
  id: string;
  type: "reasoning";
  summary: SummaryContent[];
  // new
  content: ReasoningTextContent[];
};

type ReasoningTextContent = {
  type: "reasoning_text";
  text: string;
};

type ReasoningTextDeltaEvent = {
  type: "response.reasoning_text.delta";
  sequence_number: number;
  item_id: string;
  output_index: number;
  content_index: number;
  delta: string;
};

type ReasoningTextDoneEvent = {
  type: "response.reasoning_text.done";
  sequence_number: number;
  item_id: string;
  output_index: number;
  content_index: number;
  text: string;
};
```
**Event changes**

```typescript
...
{
  type: "response.content_part.added"
  ...
}
{
  type: "response.reasoning_text.delta",
  sequence_number: 14,
  item_id: "rs_67f47a642e788191aec9b5c1a35ab3c3016f2c95937d6e91",
  output_index: 0,
  content_index: 0,
  delta: "The "
}
...
{
  type: "response.reasoning_text.done",
  sequence_number: 18,
  item_id: "rs_67f47a642e788191aec9b5c1a35ab3c3016f2c95937d6e91",
  output_index: 0,
  content_index: 0,
  text: "The user asked me to think"
}
```
**Example responses output**

```typescript
"output": [
  {
    "type": "reasoning",
    "id": "rs_67f47a642e788191aec9b5c1a35ab3c3016f2c95937d6e91",
    "summary": [
      {
        "type": "summary_text",
        "text": "**Calculating volume of gold for Pluto layer**\n\nStarting with the approximation..."
      }
    ],
    "content": [
      {
        "type": "reasoning_text",
        "text": "The user asked me to think..."
      }
    ]
  }
]
```
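Given output shaped like the above, a client can surface the `summary` to the user while keeping the raw CoT for internal analysis only. A minimal sketch, assuming the response has already been parsed into a Python dict:

```py
# Minimal sketch: split a reasoning item into user-facing summary text and
# internal-only raw CoT, mirroring the example output above.

response = {
    "output": [
        {
            "type": "reasoning",
            "id": "rs_...",
            "summary": [{"type": "summary_text", "text": "**Calculating volume of gold for Pluto layer**..."}],
            "content": [{"type": "reasoning_text", "text": "The user asked me to think..."}],
        }
    ]
}

def split_reasoning(response: dict) -> tuple[str, str]:
    summary_parts, raw_cot_parts = [], []
    for item in response.get("output", []):
        if item.get("type") != "reasoning":
            continue
        summary_parts += [s["text"] for s in item.get("summary", []) if s.get("type") == "summary_text"]
        raw_cot_parts += [c["text"] for c in item.get("content", []) if c.get("type") == "reasoning_text"]
    return "\n".join(summary_parts), "\n".join(raw_cot_parts)

summary, raw_cot = split_reasoning(response)
print("show to user:", summary)
# raw_cot stays internal (logging, interpretability research); never display it.
```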
## Displaying raw CoT to end-users

If you are providing a chat interface to users, you should not show the raw CoT because it might contain potentially harmful content or other information that you might not intend to show to users (for example, instructions in the developer message). Instead, we recommend showing a summarized CoT, similar to our production implementations in the API and ChatGPT, where a summarizer model reviews and blocks harmful content from being shown.
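One way to implement this pattern is to run the raw CoT through a separate summarization call before anything is displayed. The sketch below is an illustration of the idea, not our production summarizer; the endpoint, model name, and prompt are placeholders you would replace with your own.

```py
from openai import OpenAI

# Placeholder endpoint: any Chat Completions-compatible deployment works here.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def summarize_cot_for_display(raw_cot: str) -> str:
    """Produce a short, user-safe summary of the raw CoT (illustrative only)."""
    result = client.chat.completions.create(
        model="gpt-oss-20b",  # placeholder; use whichever summarizer model you prefer
        messages=[
            {
                "role": "system",
                "content": "Summarize the following reasoning for an end user. "
                           "Omit anything harmful, private, or internal.",
            },
            {"role": "user", "content": raw_cot},
        ],
    )
    return result.choices[0].message.content

# Display summarize_cot_for_display(raw_cot) in the UI instead of the raw CoT itself.
```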
Lines changed: 163 additions & 0 deletions
@@ -0,0 +1,163 @@
# How to run gpt-oss locally with Ollama

Want to get [**OpenAI gpt-oss**](https://openai.com/open-models) running on your own hardware? This guide will walk you through how to use [Ollama](https://ollama.ai) to set up **gpt-oss-20b** or **gpt-oss-120b** locally, chat with it offline, use it through an API, and even connect it to the Agents SDK.

Note that this guide is meant for consumer hardware, like running a model on a PC or Mac. For server applications with dedicated GPUs like NVIDIA’s H100s, [check out our vLLM guide](https://cookbook.openai.com/articles/gpt-oss/run-vllm).
## Pick your model

Ollama supports both model sizes of gpt-oss:

- **`gpt-oss-20b`**
  - The smaller model
  - Best with **≥16GB VRAM** or **unified memory**
  - Perfect for higher-end consumer GPUs or Apple Silicon Macs
- **`gpt-oss-120b`**
  - Our larger full-sized model
  - Best with **≥60GB VRAM** or **unified memory**
  - Ideal for multi-GPU or beefy workstation setups

**A couple of notes:**

- These models ship **MXFP4 quantized** out of the box; no other quantization is currently available.
- You _can_ offload to CPU if you’re short on VRAM, but expect it to run slower.
## Quick setup

1. **Install Ollama:** [Get it here](https://ollama.com/download)
2. **Pull the model you want:**

```shell
# For 20B
ollama pull gpt-oss:20b

# For 120B
ollama pull gpt-oss:120b
```
## Chat with gpt-oss

Ready to talk to the model? You can fire up a chat in the app or the terminal:

```shell
ollama run gpt-oss:20b
```

Ollama applies a **chat template** out of the box that mimics the [OpenAI harmony format](https://cookbook.openai.com/articles/openai-harmony). Type your message and start the conversation.
## Use the API

Ollama exposes a **Chat Completions-compatible API**, so you can use the OpenAI SDK without changing much. Here’s a Python example:

```py
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Local Ollama API
    api_key="ollama"                       # Dummy key
)

response = client.chat.completions.create(
    model="gpt-oss:20b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what MXFP4 quantization is."}
    ]
)

print(response.choices[0].message.content)
```

If you’ve used the OpenAI SDK before, this will feel instantly familiar.

Alternatively, you can use the Ollama SDKs in [Python](https://github.com/ollama/ollama-python) or [JavaScript](https://github.com/ollama/ollama-js) directly.
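For reference, the same request through the Ollama Python SDK looks roughly like this (a sketch based on the ollama-python README; install it with `pip install ollama` first):

```py
from ollama import chat

# Same request as above, but via the native Ollama SDK instead of the OpenAI client.
response = chat(
    model="gpt-oss:20b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what MXFP4 quantization is."},
    ],
)

print(response["message"]["content"])  # or response.message.content on recent SDK versions
```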
## Using tools (function calling)

Ollama can:

- Call functions
- Use a **built-in browser tool** (in the app)

Example of invoking a function via Chat Completions:
```py
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather in a given city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"]
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-oss:20b",
    messages=[{"role": "user", "content": "What's the weather in Berlin right now?"}],
    tools=tools
)

print(response.choices[0].message)
```
Since the models can perform tool calling as part of the chain of thought (CoT), it’s important to pass the reasoning returned by the API back in the follow-up request in which you provide the tool call result, and to keep doing so until the model reaches a final answer. A sketch of this loop is shown below.
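The following is a minimal, hedged sketch of that loop, continuing from the example above. It assumes the server echoes any reasoning it returned as an extra field on the assistant message (per the OpenRouter-style convention described in the raw-CoT guide); the exact field name and behavior depend on your setup.

```py
import json

messages = [{"role": "user", "content": "What's the weather in Berlin right now?"}]
response = client.chat.completions.create(model="gpt-oss:20b", messages=messages, tools=tools)
assistant_msg = response.choices[0].message

while assistant_msg.tool_calls:
    # Feed the assistant message back into the conversation, including any
    # reasoning the server returned, so the model keeps its chain of thought.
    messages.append(assistant_msg.model_dump(exclude_none=True))
    for tool_call in assistant_msg.tool_calls:
        args = json.loads(tool_call.function.arguments)
        result = f"The weather in {args['city']} is sunny."  # call your real tool here
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": result,
        })
    response = client.chat.completions.create(model="gpt-oss:20b", messages=messages, tools=tools)
    assistant_msg = response.choices[0].message

print(assistant_msg.content)  # final answer
```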
## Responses API workarounds

Ollama doesn’t (yet) support the **Responses API** natively.

If you do want to use the Responses API, you can use [**Hugging Face’s `Responses.js` proxy**](https://github.com/huggingface/responses.js) to convert Chat Completions to Responses API.

For basic use cases you can also [**run our example Python server with Ollama as the backend.**](https://github.com/openai/gpt-oss?tab=readme-ov-file#responses-api) This server is a basic example server and does not have the full feature set or robustness of a production Responses API implementation.

```shell
pip install gpt-oss
python -m gpt_oss.responses_api.serve \
    --inference_backend=ollama \
    --checkpoint gpt-oss:20b
```
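Once one of these Responses-compatible endpoints is running, you can point the OpenAI SDK at it. The sketch below assumes the server listens on port 8000 locally and exposes a `/v1`-style path; adjust the base URL to match your proxy or example server.

```py
from openai import OpenAI

# Hypothetical local Responses-compatible endpoint (Responses.js proxy or the example server).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.responses.create(
    model="gpt-oss:20b",
    input="Explain what MXFP4 quantization is in one paragraph.",
)

print(response.output_text)
```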
## Agents SDK integration

Want to use gpt-oss with OpenAI’s **Agents SDK**?

Both Agents SDKs let you override the OpenAI base client to point to Ollama through Chat Completions, or to your Responses.js proxy, for your local models. Alternatively, you can use the built-in functionality to point the Agents SDK at third-party models:

- **Python:** Use [LiteLLM](https://openai.github.io/openai-agents-python/models/litellm/) to proxy to Ollama through LiteLLM
- **TypeScript:** Use [AI SDK](https://openai.github.io/openai-agents-js/extensions/ai-sdk/) with the [ollama adapter](https://ai-sdk.dev/providers/community-providers/ollama)

Here’s a Python Agents SDK example using LiteLLM:
```py
import asyncio
from agents import Agent, Runner, function_tool, set_tracing_disabled
from agents.extensions.models.litellm_model import LitellmModel

set_tracing_disabled(True)

@function_tool
def get_weather(city: str):
    print(f"[debug] getting weather for {city}")
    return f"The weather in {city} is sunny."


async def main():
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
        model=LitellmModel(model="ollama/gpt-oss:120b"),  # local Ollama model via LiteLLM
        tools=[get_weather],
    )

    result = await Runner.run(agent, "What's the weather in Tokyo?")
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```

0 commit comments
