
Commit 7706142

Merge branch 'main' into updating_function_calls_with_chat_models_cookbook
2 parents: b79069c + 872c3ec

2 files changed: +21 −8 lines changed

articles/openai-harmony.md

Lines changed: 8 additions & 8 deletions
````diff
@@ -6,7 +6,7 @@ The [`gpt-oss` models](https://openai.com/open-models) were trained on the harmo
 
 ### Roles
 
-Every message that the model processes has a role associated with it. The model knows about three types of roles:
+Every message that the model processes has a role associated with it. The model knows about five types of roles:
 
 | Role | Purpose |
 | :---------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -91,14 +91,14 @@ convo = Conversation.from_messages(
         Message.from_role_and_content(Role.USER, "What is the weather in Tokyo?"),
         Message.from_role_and_content(
             Role.ASSISTANT,
-            'User asks: "What is the weather in Tokyo?" We need to use get_weather tool.',
+            'User asks: "What is the weather in Tokyo?" We need to use get_current_weather tool.',
         ).with_channel("analysis"),
         Message.from_role_and_content(Role.ASSISTANT, '{"location": "Tokyo"}')
         .with_channel("commentary")
-        .with_recipient("functions.get_weather")
+        .with_recipient("functions.get_current_weather")
         .with_content_type("<|constrain|> json"),
         Message.from_author_and_content(
-            Author.new(Role.TOOL, "functions.lookup_weather"),
+            Author.new(Role.TOOL, "functions.get_current_weather"),
             '{ "temperature": 20, "sunny": true }',
         ).with_channel("commentary"),
     ]
@@ -376,7 +376,7 @@ If the model decides to call a tool it will define a `recipient` in the header o
 The model might also specify a `<|constrain|>` token to indicate the type of input for the tool call. In this case since it’s being passed in as JSON the `<|constrain|>` is set to `json`.
 
 ```
-<|channel|>analysis<|message|>Need to use function get_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|>
+<|channel|>analysis<|message|>Need to use function get_current_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_current_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|>
 ```
 
 #### Handling tool calls
@@ -392,7 +392,7 @@ A tool message has the following format:
 So in our example above
 
 ```
-<|start|>functions.get_weather to=assistant<|channel|>commentary<|message|>{"sunny": true, "temperature": 20}<|end|>
+<|start|>functions.get_current_weather to=assistant<|channel|>commentary<|message|>{"sunny": true, "temperature": 20}<|end|>
 ```
 
 Once you have gathered the output for the tool calls you can run inference with the complete content:
@@ -432,10 +432,10 @@ locations: string[],
 format?: "celsius" | "fahrenheit", // default: celsius
 }) => any;
 
-} // namespace functions<|end|><|start|>user<|message|>What is the weather like in SF?<|end|><|start|>assistant<|channel|>analysis<|message|>Need to use function get_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|><|start|>functions.get_weather to=assistant<|channel|>commentary<|message|>{"sunny": true, "temperature": 20}<|end|><|start|>assistant
+} // namespace functions<|end|><|start|>user<|message|>What is the weather like in SF?<|end|><|start|>assistant<|channel|>analysis<|message|>Need to use function get_current_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_current_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|><|start|>functions.get_current_weather to=assistant<|channel|>commentary<|message|>{"sunny": true, "temperature": 20}<|end|><|start|>assistant
 ```
 
-As you can see above we are passing not just the function out back into the model for further sampling but also the previous chain-of-thought (“Need to use function get_weather.”) to provide the model with the necessary information to continue its chain-of-thought or provide the final answer.
+As you can see above we are passing not just the function out back into the model for further sampling but also the previous chain-of-thought (“Need to use function get_current_weather.”) to provide the model with the necessary information to continue its chain-of-thought or provide the final answer.
 
 #### Preambles
 
````
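For context, the renamed tool-call turn in the diff above can be sketched as plain string assembly. This is only a minimal illustration of the harmony token layout shown in the changed lines; `build_tool_call` and `build_tool_response` are hypothetical helpers, not part of the openai-harmony library:

```python
import json


def build_tool_call(name: str, arguments: dict) -> str:
    """Render an assistant tool-call turn in the harmony layout shown above."""
    # Compact separators match the {"location":"San Francisco"} form in the diff.
    args = json.dumps(arguments, separators=(",", ":"))
    return (
        f"<|channel|>analysis<|message|>Need to use function {name}.<|end|>"
        f"<|start|>assistant<|channel|>commentary to=functions.{name} "
        f"<|constrain|>json<|message|>{args}<|call|>"
    )


def build_tool_response(name: str, result: dict) -> str:
    """Render the tool's reply addressed back to the assistant."""
    return (
        f"<|start|>functions.{name} to=assistant<|channel|>commentary"
        f"<|message|>{json.dumps(result)}<|end|>"
    )


call = build_tool_call("get_current_weather", {"location": "San Francisco"})
reply = build_tool_response("get_current_weather", {"sunny": True, "temperature": 20})
```

The point of the commit is visible here: the tool name appears in three places (the analysis message, the `to=functions.…` recipient, and the tool's own author header), so all three had to be renamed together.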

registry.yaml

Lines changed: 13 additions & 0 deletions
````diff
@@ -12,6 +12,7 @@
   tags:
     - gpt-oss
     - open-models
+    - gpt-oss-providers
 
 - title: How to run gpt-oss locally with LM Studio
   path: articles/gpt-oss/run-locally-lmstudio.md
@@ -21,6 +22,8 @@
   tags:
     - gpt-oss
     - open-models
+    - gpt-oss-local
+
 - title: GPT-5 Prompt Migration and Improvement Using the New Optimizer
   path: examples/gpt-5/prompt-optimization-cookbook.ipynb
   date: 2025-08-07
@@ -76,6 +79,7 @@
   tags:
     - gpt-oss
     - open-models
+    - gpt-oss-server
 
 - title: Using NVIDIA TensorRT-LLM to run gpt-oss-20b
   path: articles/gpt-oss/run-nvidia.ipynb
@@ -85,6 +89,7 @@
   tags:
     - gpt-oss
     - open-models
+    - gpt-oss-server
 
 - title: Fine-tuning with gpt-oss and Hugging Face Transformers
   path: articles/gpt-oss/fine-tune-transfomers.ipynb
@@ -96,6 +101,7 @@
   tags:
     - open-models
     - gpt-oss
+    - gpt-oss-fine-tuning
 
 - title: How to handle the raw chain of thought in gpt-oss
   path: articles/gpt-oss/handle-raw-cot.md
@@ -105,6 +111,8 @@
   tags:
     - open-models
     - gpt-oss
+    - gpt-oss-fine-tuning
+    - gpt-oss-providers
 
 - title: How to run gpt-oss with Transformers
   path: articles/gpt-oss/run-transformers.md
@@ -114,6 +122,7 @@
   tags:
     - open-models
     - gpt-oss
+    - gpt-oss-server
 
 - title: How to run gpt-oss with vLLM
   path: articles/gpt-oss/run-vllm.md
@@ -123,6 +132,7 @@
   tags:
     - open-models
     - gpt-oss
+    - gpt-oss-server
 
 - title: How to run gpt-oss locally with Ollama
   path: articles/gpt-oss/run-locally-ollama.md
@@ -132,6 +142,7 @@
   tags:
     - open-models
     - gpt-oss
+    - gpt-oss-local
 
 - title: OpenAI Harmony Response Format
   path: articles/openai-harmony.md
@@ -142,6 +153,8 @@
     - open-models
     - gpt-oss
     - harmony
+    - gpt-oss-providers
+    - gpt-oss-fine-tuning
 
 - title: Temporal Agents with Knowledge Graphs
   path: examples/partners/temporal_agents_with_knowledge_graphs/temporal_agents_with_knowledge_graphs.ipynb
````
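The registry.yaml additions are all new category tags (`gpt-oss-local`, `gpt-oss-server`, `gpt-oss-providers`, `gpt-oss-fine-tuning`) on existing entries, presumably so the cookbook site can group articles by how the model is run. A rough sketch of that kind of tag-based filtering, using an in-memory mirror of three entries from the diff (the loading and field names beyond `title`/`path`/`tags` are assumptions, not the real registry schema):

```python
# Hypothetical in-memory mirror of three registry.yaml entries after this commit.
registry = [
    {"title": "How to run gpt-oss locally with LM Studio",
     "path": "articles/gpt-oss/run-locally-lmstudio.md",
     "tags": ["gpt-oss", "open-models", "gpt-oss-local"]},
    {"title": "How to run gpt-oss with vLLM",
     "path": "articles/gpt-oss/run-vllm.md",
     "tags": ["open-models", "gpt-oss", "gpt-oss-server"]},
    {"title": "OpenAI Harmony Response Format",
     "path": "articles/openai-harmony.md",
     "tags": ["open-models", "gpt-oss", "harmony",
              "gpt-oss-providers", "gpt-oss-fine-tuning"]},
]


def by_tag(entries, tag):
    """Return the titles of entries carrying the given tag."""
    return [e["title"] for e in entries if tag in e["tags"]]


print(by_tag(registry, "gpt-oss-server"))  # ['How to run gpt-oss with vLLM']
```

Note that an entry can carry several of the new tags at once (openai-harmony.md gains both `gpt-oss-providers` and `gpt-oss-fine-tuning`), so the tags behave as overlapping categories rather than a partition.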
