
Commit 35a0b53: "wording changes"
Parent: 4af14a7

File tree: 2 files changed (+13, -15 lines)


examples/gpt-5/prompt-optimization-cookbook/prompt-optimization-cookbook.ipynb

Lines changed: 11 additions & 11 deletions
@@ -13,11 +13,11 @@
     "id": "a3942231",
     "metadata": {},
     "source": [
-     "The GPT-5 Family of models are the smartest models we’ve released to date, representing a step change in the models’ capabilities specializing in agentic task performance, coding, and steerability, making it a great fit for everyone from curious users to advanced researchers. \n",
+     "The GPT-5 Family of models are the smartest models we’ve released to date, representing a step change in the models’ capabilities across the board. GPT-5 is particularly specialized in agentic task performance, coding, and steerability, making it a great fit for everyone from curious users to advanced researchers. \n",
      "\n",
-     "GPT-5 will benefit from all the traditional prompting best practices, and to help get you build the best prompt we are introducing a [Prompting Guide for GPT-5](#https://cookbook.openai.com/examples/gpt-5/gpt-5_prompting_guide) that explains the best ways to construct a prompt for GPT-5 to make the most of its state-of-the-art capabilities. Alongside that, we are we are introducing a [GPT-5 Specific Prompt Optimizer](#https://platform.openai.com/chat/edit?optimize=true) in our Playground to help users get started on **improving existing prompts** and **migrating prompts** for GPT-5 and other OpenAI models.\n",
+     "GPT-5 will benefit from all the traditional prompting best practices, and to help you construct the best prompt we are introducing a [Prompting Guide for GPT-5](#) explaining how to make the most of its state-of-the-art capabilities. Alongside that, we are introducing a [GPT-5 Specific Prompt Optimizer](#https://platform.openai.com/chat/edit?optimize=true) in our Playground to help users get started on **improving existing prompts** and **migrating prompts** for GPT-5 and other OpenAI models.\n",
      "\n",
-     "In this cookbook we will go through how you can get spun up quickly to solve your task with GPT-5. We will share results of significant improvements on evaluations and common tasks and walk you through how you can use the Prompt Optimizer to do the same.\n"
+     "In this cookbook we will go through how you can get spun up quickly to solve your task with GPT-5. We will share results of measurable improvements on common tasks and walk you through how you can use the Prompt Optimizer to do the same.\n"
     ]
    },
    {
@@ -27,13 +27,13 @@
     "source": [
     "## Migrating and Optimizing Prompts\n",
     "\n",
-     "Crafting effective prompts is a critical skill when working with LLMs. The goal of the Prompt Optimizer is to give your prompt the target model best practices and formatting most effective for our models. The Optimizer also removes common prompting failure modes such as:\n",
+     "Crafting effective prompts is a critical skill when working with LLMs. The goal of the Prompt Optimizer is to give your prompt the target model best practices and formatting most effective for our models. The Optimizer also removes common prompting failure modes such as: \n",
     "\n",
-     "- Contradictions in the prompt instructions\n",
-     "- Missing or unclear format specifications\n",
-     "- Inconsistencies between the prompt and few-shot examples\n",
+     "Contradictions in the prompt instructions \n",
+     "\tMissing or unclear format specifications \n",
+     "\tInconsistencies between the prompt and few-shot examples \n",
     "\n",
-     "Along with tuning the prompt for the target model, the Optimizer is cognizant the specific tasks your are trying to accomplish and can apply crucial practices that we see in Agentic Workflows, Coding and Multi-Modality. Let's walk through some before-and-afters for some common examples where prompt optimization shines. \n",
+     "Along with tuning the prompt for the target model, the Optimizer is cognizant of the specific task you are trying to accomplish and can apply crucial practices to boost performance in Agentic Workflows, Coding and Multi-Modality. Let's walk through some before-and-afters to see where prompt optimization shines. \n",
     "\n",
     "> [!NOTE]\n",
     "> Remember that prompting is not a one-size-fits-all experience, so we recommend running thorough experiments and iterating to find the best solution for your problem."
@@ -89,7 +89,7 @@
     "\n",
     "### Coding and Analytics: Streaming Top‑K Frequent Words \n",
     "\n",
-     "We start with a task in the well-known field of Coding and Analytics. We will ask the model to generate a Python script that computes the exact Top‑K most frequent tokens from a large text stream using a specific tokenization spec. Tasks like these are sensitive to poor prompting, as they can push the model toward the wrong algorithms and approaches (approximate sketches vs multi‑pass/disk‑backed exact solutions), dramatically changing accuracy and runtime.\n",
+     "We start with a task in a field where the model has seen significant improvements: Coding and Analytics. We will ask the model to generate a Python script that computes the exact Top‑K most frequent tokens from a large text stream using a specific tokenization spec. Tasks like these are sensitive to poor prompting, as they can push the model toward the wrong algorithms and approaches (approximate sketches vs multi‑pass/disk‑backed exact solutions), dramatically changing accuracy and runtime.\n",
     "\n",
     "For this task, we will evaluate:\n",
     "1. Compilation/Execution success over 30 runs\n",
@@ -106,7 +106,7 @@
     "metadata": {},
     "source": [
     "### Our Baseline Prompt\n",
-     "For our example, let's use a prompt with common mistakes many people make: **adding contradictions to their prompt**, and **providing ambigous or minimal instructions**. Contradictions in instructions often reduce performance and increase latency, especially in reasoning models like GPT-5, and ambigous instructions can cause unwanted behaviours. "
+     "For our example, let's look at a typical starting prompt with some minor **contradictions in the prompt** and **ambiguous or underspecified instructions**. Contradictions in instructions often reduce performance and increase latency, especially in reasoning models like GPT-5, and ambiguous instructions can cause unwanted behaviours. "
     ]
    },
    {
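The hunks above describe the Streaming Top‑K task: compute the exact Top‑K most frequent tokens from a text stream, where poor prompting can push the model toward approximate sketches instead of exact counting. As a rough illustration of what an exact (non-approximate) solution looks like, here is a minimal sketch; the tokenization regex and the alphabetical tie-break are assumptions for the example, not the cookbook's actual spec:

```python
from collections import Counter
import heapq
import re

def top_k_words(stream, k):
    # Exact counting: tally every token seen, then select the k most
    # frequent. Ties are broken alphabetically so output is deterministic.
    counts = Counter()
    for line in stream:
        counts.update(re.findall(r"[a-z0-9]+", line.lower()))
    return heapq.nsmallest(k, counts.items(), key=lambda kv: (-kv[1], kv[0]))

lines = ["the cat sat", "the cat", "the dog"]
print(top_k_words(lines, 2))  # [('the', 3), ('cat', 2)]
```

Because every count is kept in memory, this is exact but memory-bound; the multi‑pass/disk‑backed variants the notebook mentions trade extra I/O for bounded memory on streams too large for a single `Counter`.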
@@ -896,7 +896,7 @@
     "id": "ebd5453b",
     "metadata": {},
     "source": [
-     "## Conculsion\n",
+     "## Conclusion\n",
     "\n",
     "We’re excited for everyone to try **Prompt Optimization for GPT-5** in the OpenAI Playground. GPT-5 brings state-of-the-art intelligence, and a strong prompt helps it reason more reliably, follow constraints, and produce cleaner, higher quality results.\n",
     "\n",

registry.yaml

Lines changed: 2 additions & 4 deletions
@@ -4,7 +4,7 @@
 # should build pages for, and indicates metadata such as tags, creation date and
 # authors for each page.
 
-- title: GPT-5 Prompt Migration and Improvement using the new prompt optimizer
+- title: GPT-5 Prompt Migration and Improvement Using the New Optimizer
   path: examples/gpt-5/prompt-optimization-cookbook/prompt-optimization-cookbook.ipynb
   date: 2025-08-07
   authors:
@@ -39,7 +39,7 @@
     - gpt-5
     - responses
     - reasoning
-
+
 - title: GPT-5 New Params and Tools
   path: examples/gpt-5/gpt-5_new_params_and_tools.ipynb
   date: 2025-08-07
@@ -69,7 +69,6 @@
     - gpt-oss
     - open-models
 
-
 - title: Fine-tuning with gpt-oss and Hugging Face Transformers
   path: articles/gpt-oss/fine-tune-transfomers.ipynb
   date: 2025-08-05
@@ -127,7 +126,6 @@
     - gpt-oss
     - harmony
 
-
 - title: Temporal Agents with Knowledge Graphs
   path: examples/partners/temporal_agents_with_knowledge_graphs/temporal_agents_with_knowledge_graphs.ipynb
   date: 2025-07-22
