@@ -202,6 +193,8 @@ Once it's done generating it will stop with either a `<|return|>` token indicating
The `final` channel will contain the answer to your user’s request. Check out the [reasoning section](#reasoning) for more details on the chain-of-thought.
+**Implementation note:** `<|return|>` is a decode-time stop token only. When you add the assistant’s generated reply to conversation history for the next turn, replace the trailing `<|return|>` with `<|end|>` so that stored messages are fully formed as `<|start|>{header}<|message|>{content}<|end|>`. Prior messages in prompts should therefore end with `<|end|>`. For supervised targets/training examples, ending with `<|return|>` is appropriate; for persisted history, normalize to `<|end|>`.
+
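The normalization described in this note is a one-line string operation. A minimal sketch in Python (a hypothetical helper, not part of any released harmony tooling; it assumes the sampled completion is held as a plain string):

```python
# Hypothetical helper: <|return|> is a decode-time stop token only, so before
# persisting the assistant's reply in conversation history, swap a trailing
# <|return|> for <|end|> to keep stored messages fully formed.
def normalize_for_history(sampled_reply: str) -> str:
    stop_token, end_token = "<|return|>", "<|end|>"
    if sampled_reply.endswith(stop_token):
        return sampled_reply[: -len(stop_token)] + end_token
    return sampled_reply  # nothing to normalize
```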
### System message format
The system message is used to provide general information to the system. This is different to what might be considered the “system prompt” in other prompt formats. For that, check out the [developer message format](#developer-message-format).
@@ -383,7 +376,7 @@ If the model decides to call a tool it will define a `recipient` in the header o
The model might also specify a `<|constrain|>` token to indicate the type of input for the tool call. In this case since it’s being passed in as JSON the `<|constrain|>` is set to `json`.
```
-<|channel|>analysis<|message|>Need to use function get_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|>
+<|channel|>analysis<|message|>Need to use function get_current_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_current_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|>
```
#### Handling tool calls
@@ -399,7 +392,7 @@ A tool message has the following format:
-} // namespace functions<|end|><|start|>user<|message|>What is the weather like in SF?<|end|><|start|>assistant<|channel|>analysis<|message|>Need to use function get_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|><|start|>functions.get_weather to=assistant<|channel|>commentary<|message|>{"sunny": true, "temperature": 20}<|end|><|start|>assistant
+} // namespace functions<|end|><|start|>user<|message|>What is the weather like in SF?<|end|><|start|>assistant<|channel|>analysis<|message|>Need to use function get_current_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_current_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|><|start|>functions.get_current_weather to=assistant<|channel|>commentary<|message|>{"sunny": true, "temperature": 20}<|end|><|start|>assistant
```
-As you can see above we are passing not just the function output back into the model for further sampling but also the previous chain-of-thought (“Need to use function get_weather.”) to provide the model with the necessary information to continue its chain-of-thought or provide the final answer.
+As you can see above we are passing not just the function output back into the model for further sampling but also the previous chain-of-thought (“Need to use function get_current_weather.”) to provide the model with the necessary information to continue its chain-of-thought or provide the final answer.
"Newer models such as gpt-4o or gpt-3.5-turbo can call multiple functions in one turn."
518
+
"Newer models such as gpt-5, gpt-4.1 or gpt-4o can call multiple functions in one turn."
]
},
{
@@ -758,7 +758,7 @@
"source": [
"##### Steps to invoke a function call using Chat Completions API: \n",
"\n",
-"**Step 1**: Prompt the model with content that may result in model selecting a tool to use. The description of the tools such as a function names and signature is defined in the 'Tools' list and passed to the model in API call. If selected, the function name and parameters are included in the response.<br>\n",
+"**Step 1**: Prompt the model with content that may result in model selecting a tool to use. The description of the tools such as a function name and signature is defined in the 'Tools' list and passed to the model in API call. If selected, the function name and parameters are included in the response.<br>\n",
"\n",
"**Step 2**: Check programmatically if model wanted to call a function. If true, proceed to step 3. <br> \n",
"**Step 3**: Extract the function name and parameters from response, call the function with parameters. Append the result to messages. <br> \n",
@@ -767,7 +767,7 @@
},
{
"cell_type": "code",
-"execution_count": 19,
+"execution_count": null,
"id": "e8b7cb9cdc7a7616",
"metadata": {
"ExecuteTime": {
@@ -792,9 +792,9 @@
"}]\n",
"\n",
"response = client.chat.completions.create(\n",
-" model='gpt-4o', \n",
+" model=GPT_MODEL, \n",
" messages=messages, \n",
-" tools=tools, \n",
+" tools=tools, \n",
" tool_choice=\"auto\"\n",
")\n",
"\n",
@@ -807,7 +807,7 @@
},
{
"cell_type": "code",
-"execution_count": 20,
+"execution_count": null,
"id": "351c39def3417776",
"metadata": {
"ExecuteTime": {
@@ -847,7 +847,7 @@
" # Step 4: Invoke the chat completions API with the function response appended to the messages list\n",
" # Note that messages with role 'tool' must be a response to a preceding message with 'tool_calls'\n",
examples/codex/jira-github.ipynb (1 addition & 1 deletion)
@@ -14,7 +14,7 @@
"\n",
"This cookbook provides a practical, step-by-step approach to automating the workflow between Jira and GitHub. By labeling a Jira issue, you trigger an end-to-end process that creates a **GitHub pull request**, keeps both systems updated, and streamlines code review, all with minimal manual effort. The automation is powered by the [`codex-cli`](https://github.com/openai/openai-codex) agent running inside a GitHub Action.\n",