
Commit de0fece

Merge branch 'main' into main
2 parents: 3282beb + 30fba98

File tree

3 files changed: +30 -17 lines changed


articles/openai-harmony.md

Lines changed: 8 additions & 8 deletions
@@ -6,7 +6,7 @@ The [`gpt-oss` models](https://openai.com/open-models) were trained on the harmo
 
 ### Roles
 
-Every message that the model processes has a role associated with it. The model knows about three types of roles:
+Every message that the model processes has a role associated with it. The model knows about five types of roles:
 
 | Role | Purpose |
 | :---------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -91,14 +91,14 @@ convo = Conversation.from_messages(
 Message.from_role_and_content(Role.USER, "What is the weather in Tokyo?"),
 Message.from_role_and_content(
 Role.ASSISTANT,
-'User asks: "What is the weather in Tokyo?" We need to use get_weather tool.',
+'User asks: "What is the weather in Tokyo?" We need to use get_current_weather tool.',
 ).with_channel("analysis"),
 Message.from_role_and_content(Role.ASSISTANT, '{"location": "Tokyo"}')
 .with_channel("commentary")
-.with_recipient("functions.get_weather")
+.with_recipient("functions.get_current_weather")
 .with_content_type("<|constrain|> json"),
 Message.from_author_and_content(
-Author.new(Role.TOOL, "functions.lookup_weather"),
+Author.new(Role.TOOL, "functions.get_current_weather"),
 '{ "temperature": 20, "sunny": true }',
 ).with_channel("commentary"),
 ]
@@ -376,7 +376,7 @@ If the model decides to call a tool it will define a `recipient` in the header o
 The model might also specify a `<|constrain|>` token to indicate the type of input for the tool call. In this case since it’s being passed in as JSON the `<|constrain|>` is set to `json`.
 
 ```
-<|channel|>analysis<|message|>Need to use function get_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|>
+<|channel|>analysis<|message|>Need to use function get_current_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_current_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|>
 ```
 
 #### Handling tool calls
@@ -392,7 +392,7 @@ A tool message has the following format:
 So in our example above
 
 ```
-<|start|>functions.get_weather to=assistant<|channel|>commentary<|message|>{"sunny": true, "temperature": 20}<|end|>
+<|start|>functions.get_current_weather to=assistant<|channel|>commentary<|message|>{"sunny": true, "temperature": 20}<|end|>
 ```
 
 Once you have gathered the output for the tool calls you can run inference with the complete content:
@@ -432,10 +432,10 @@ locations: string[],
 format?: "celsius" | "fahrenheit", // default: celsius
 }) => any;
 
-} // namespace functions<|end|><|start|>user<|message|>What is the weather like in SF?<|end|><|start|>assistant<|channel|>analysis<|message|>Need to use function get_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|><|start|>functions.get_weather to=assistant<|channel|>commentary<|message|>{"sunny": true, "temperature": 20}<|end|><|start|>assistant
+} // namespace functions<|end|><|start|>user<|message|>What is the weather like in SF?<|end|><|start|>assistant<|channel|>analysis<|message|>Need to use function get_current_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_current_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|><|start|>functions.get_current_weather to=assistant<|channel|>commentary<|message|>{"sunny": true, "temperature": 20}<|end|><|start|>assistant
 ```
 
-As you can see above we are passing not just the function out back into the model for further sampling but also the previous chain-of-thought (“Need to use function get_weather.”) to provide the model with the necessary information to continue its chain-of-thought or provide the final answer.
+As you can see above we are passing not just the function out back into the model for further sampling but also the previous chain-of-thought (“Need to use function get_current_weather.”) to provide the model with the necessary information to continue its chain-of-thought or provide the final answer.
 
 #### Preambles

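The rendered tool-call strings in this diff follow a fixed token layout: `<|start|>assistant<|channel|>{channel} to={recipient} <|constrain|>json<|message|>{args}<|call|>`. As a rough illustration of that layout only (`render_tool_call` is a hand-rolled sketch, not the `openai-harmony` library's renderer), the assistant tool-call message renamed in this commit can be reproduced like this:

```python
import json

def render_tool_call(recipient: str, arguments: dict, channel: str = "commentary") -> str:
    # Illustrative formatter for the harmony tool-call token layout shown
    # in the diff above; the real encoding comes from openai-harmony.
    payload = json.dumps(arguments, separators=(",", ":"))
    return (
        f"<|start|>assistant<|channel|>{channel} to={recipient} "
        f"<|constrain|>json<|message|>{payload}<|call|>"
    )

print(render_tool_call("functions.get_current_weather", {"location": "San Francisco"}))
```

This reproduces the `commentary`-channel portion of the example string in the hunk at line 379.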
examples/How_to_call_functions_with_chat_models.ipynb

Lines changed: 9 additions & 9 deletions
@@ -51,7 +51,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 2,
+"execution_count": null,
 "id": "dab872c5",
 "metadata": {
 "ExecuteTime": {
@@ -66,7 +66,7 @@
 "from tenacity import retry, wait_random_exponential, stop_after_attempt\n",
 "from termcolor import colored \n",
 "\n",
-"GPT_MODEL = \"gpt-4o\"\n",
+"GPT_MODEL = \"gpt-5\"\n",
 "client = OpenAI()"
 ]
 },
@@ -515,7 +515,7 @@
 "source": [
 "### Parallel Function Calling\n",
 "\n",
-"Newer models such as gpt-4o or gpt-3.5-turbo can call multiple functions in one turn."
+"Newer models such as gpt-5, gpt-4.1 or gpt-4o can call multiple functions in one turn."
 ]
 },
@@ -758,7 +758,7 @@
 "source": [
 "##### Steps to invoke a function call using Chat Completions API: \n",
 "\n",
-"**Step 1**: Prompt the model with content that may result in model selecting a tool to use. The description of the tools such as a function names and signature is defined in the 'Tools' list and passed to the model in API call. If selected, the function name and parameters are included in the response.<br>\n",
+"**Step 1**: Prompt the model with content that may result in model selecting a tool to use. The description of the tools such as a function name and signature is defined in the 'Tools' list and passed to the model in API call. If selected, the function name and parameters are included in the response.<br>\n",
 " \n",
 "**Step 2**: Check programmatically if model wanted to call a function. If true, proceed to step 3. <br> \n",
 "**Step 3**: Extract the function name and parameters from response, call the function with parameters. Append the result to messages. <br> \n",
@@ -767,7 +767,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 19,
+"execution_count": null,
 "id": "e8b7cb9cdc7a7616",
 "metadata": {
 "ExecuteTime": {
@@ -792,9 +792,9 @@
 "}]\n",
 "\n",
 "response = client.chat.completions.create(\n",
-" model='gpt-4o', \n",
+" model=GPT_MODEL, \n",
 " messages=messages, \n",
-" tools= tools, \n",
+" tools=tools, \n",
 " tool_choice=\"auto\"\n",
 ")\n",
 "\n",
@@ -807,7 +807,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 20,
+"execution_count": null,
 "id": "351c39def3417776",
 "metadata": {
 "ExecuteTime": {
@@ -847,7 +847,7 @@
 " # Step 4: Invoke the chat completions API with the function response appended to the messages list\n",
 " # Note that messages with role 'tool' must be a response to a preceding message with 'tool_calls'\n",
 " model_response_with_function_call = client.chat.completions.create(\n",
-" model=\"gpt-4o\",\n",
+" model=GPT_MODEL,\n",
 " messages=messages,\n",
 " ) # get a new response from the model where it can see the function response\n",
 " print(model_response_with_function_call.choices[0].message.content)\n",

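Steps 2 and 3 from the notebook's outline (detect a tool call, run the function, append the result as a `tool` message) can be sketched without a live API call. This is a minimal sketch: `execute_tool_calls` and the `SimpleNamespace` response stand-in are hypothetical test doubles, but the `tool_calls` / `function.name` / `function.arguments` shape mirrors the Chat Completions response objects the notebook works with:

```python
import json
from types import SimpleNamespace

def execute_tool_calls(response_message, available_functions):
    # Step 2: check whether the model requested any tool calls.
    # Step 3: run each one and build the 'tool' messages to append;
    # each must reference the tool_call id it answers.
    tool_messages = []
    for call in response_message.tool_calls or []:
        fn = available_functions[call.function.name]
        args = json.loads(call.function.arguments)
        result = fn(**args)
        tool_messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(result),
        })
    return tool_messages

# Hypothetical tool and a faked model response, for illustration only:
def get_current_weather(location, format="celsius"):
    return {"location": location, "temperature": 20, "format": format}

fake_message = SimpleNamespace(tool_calls=[SimpleNamespace(
    id="call_1",
    function=SimpleNamespace(name="get_current_weather",
                             arguments='{"location": "San Francisco"}'),
)])

print(execute_tool_calls(fake_message, {"get_current_weather": get_current_weather}))
```

The returned messages would then be appended to `messages` before the Step 4 follow-up call to `client.chat.completions.create`, as the notebook does.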
registry.yaml

Lines changed: 13 additions & 0 deletions
@@ -12,6 +12,7 @@
 tags:
 - gpt-oss
 - open-models
+- gpt-oss-providers
 
 - title: How to run gpt-oss locally with LM Studio
 path: articles/gpt-oss/run-locally-lmstudio.md
@@ -21,6 +22,8 @@
 tags:
 - gpt-oss
 - open-models
+- gpt-oss-local
+
 - title: GPT-5 Prompt Migration and Improvement Using the New Optimizer
 path: examples/gpt-5/prompt-optimization-cookbook.ipynb
 date: 2025-08-07
@@ -76,6 +79,7 @@
 tags:
 - gpt-oss
 - open-models
+- gpt-oss-server
 
 - title: Using NVIDIA TensorRT-LLM to run gpt-oss-20b
 path: articles/gpt-oss/run-nvidia.ipynb
@@ -85,6 +89,7 @@
 tags:
 - gpt-oss
 - open-models
+- gpt-oss-server
 
 - title: Fine-tuning with gpt-oss and Hugging Face Transformers
 path: articles/gpt-oss/fine-tune-transfomers.ipynb
@@ -96,6 +101,7 @@
 tags:
 - open-models
 - gpt-oss
+- gpt-oss-fine-tuning
 
 - title: How to handle the raw chain of thought in gpt-oss
 path: articles/gpt-oss/handle-raw-cot.md
@@ -105,6 +111,8 @@
 tags:
 - open-models
 - gpt-oss
+- gpt-oss-fine-tuning
+- gpt-oss-providers
 
 - title: How to run gpt-oss with Transformers
 path: articles/gpt-oss/run-transformers.md
@@ -114,6 +122,7 @@
 tags:
 - open-models
 - gpt-oss
+- gpt-oss-server
 
 - title: How to run gpt-oss with vLLM
 path: articles/gpt-oss/run-vllm.md
@@ -123,6 +132,7 @@
 tags:
 - open-models
 - gpt-oss
+- gpt-oss-server
 
 - title: How to run gpt-oss locally with Ollama
 path: articles/gpt-oss/run-locally-ollama.md
@@ -132,6 +142,7 @@
 tags:
 - open-models
 - gpt-oss
+- gpt-oss-local
 
 - title: OpenAI Harmony Response Format
 path: articles/openai-harmony.md
@@ -142,6 +153,8 @@
 - open-models
 - gpt-oss
 - harmony
+- gpt-oss-providers
+- gpt-oss-fine-tuning
 
 - title: Temporal Agents with Knowledge Graphs
 path: examples/partners/temporal_agents_with_knowledge_graphs/temporal_agents_with_knowledge_graphs.ipynb

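The commit adds four new tags across the registry: `gpt-oss-providers`, `gpt-oss-local`, `gpt-oss-server`, and `gpt-oss-fine-tuning`. A small sanity check over already-parsed registry entries (shown here as plain dicts, assuming the YAML has been loaded; `check_gpt_oss_tags` is a hypothetical helper, not part of the repo) could enforce that suffix set:

```python
# Allowed suffixes for the gpt-oss-* tag family introduced in this commit.
ALLOWED_SUFFIXES = {"providers", "local", "server", "fine-tuning"}

def check_gpt_oss_tags(entries):
    """Return (title, tag) pairs whose gpt-oss-* tag uses an unknown suffix."""
    bad = []
    for entry in entries:
        for tag in entry.get("tags", []):
            if tag.startswith("gpt-oss-") and tag[len("gpt-oss-"):] not in ALLOWED_SUFFIXES:
                bad.append((entry["title"], tag))
    return bad

# Two entries mirroring the diff above (post-YAML-parsing shape).
entries = [
    {"title": "How to run gpt-oss with vLLM",
     "tags": ["open-models", "gpt-oss", "gpt-oss-server"]},
    {"title": "OpenAI Harmony Response Format",
     "tags": ["open-models", "gpt-oss", "harmony",
              "gpt-oss-providers", "gpt-oss-fine-tuning"]},
]
print(check_gpt_oss_tags(entries))  # → []
```

The bare `gpt-oss` tag passes untouched because it lacks the trailing hyphen the check keys on.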