
Commit 278e7a6

committed
Merge branch 'main' of https://github.com/openai/openai-cookbook into openai-dbt-trusted-data
2 parents: 2596a49 + b4283fa


61 files changed (+5635 / -675 lines)

AGENTS.md

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
+# Repository Guidelines
+
+## Project Structure & Module Organization
+The cookbook is organized around runnable examples and reference articles for OpenAI APIs. Place notebooks and Python scripts under `examples/<topic>/`, grouping related assets inside topic subfolders (for example, `examples/agents_sdk/`). Narrative guides and long-form docs live in `articles/`, and shared diagrams or screenshots belong in `images/`. Update `registry.yaml` whenever you add content so it appears on cookbook.openai.com, and add new author metadata in `authors.yaml` if you want custom attribution. Keep large datasets outside the repo; instead, document how to fetch them in the notebook.
+
+## Build, Test, and Development Commands
+Use a virtual environment to isolate dependencies:
+- `python -m venv .venv && source .venv/bin/activate`
+- `pip install -r examples/<topic>/requirements.txt` (each sample lists only what it needs)
+- `jupyter lab` or `jupyter notebook` to develop interactively
+- `python .github/scripts/check_notebooks.py` to validate notebook structure before pushing
+
+## Coding Style & Naming Conventions
+Write Python to PEP 8 with four-space indentation, descriptive variable names, and concise docstrings that explain API usage choices. Name new notebooks with lowercase, dash-or-underscore-separated phrases that match their directory—for example `examples/gpt-5/prompt-optimization-cookbook.ipynb`. Keep markdown cells focused and prefer numbered steps for multi-part workflows. Store secrets in environment variables such as `OPENAI_API_KEY`; never hard-code keys inside notebooks.
+
+## Testing Guidelines
+Execute notebooks top-to-bottom after installing dependencies and clear lingering execution counts before committing. For Python modules or utilities, include self-check cells or lightweight `pytest` snippets and show how to run them (for example, `pytest examples/object_oriented_agentic_approach/tests`). When contributions depend on external services, mock responses or gate the cells behind clearly labeled opt-in flags.
+
+## Commit & Pull Request Guidelines
+Use concise, imperative commit messages that describe the change scope (e.g., "Add agent portfolio collaboration demo"). Every PR should provide a summary, motivation, and self-review, and must tick the registry and authors checklist from `.github/pull_request_template.md`. Link issues when applicable and attach screenshots or output snippets for UI-heavy content. Confirm CI notebook validation passes locally before requesting review.
+
+## Metadata & Publication Workflow
+New or relocated content must have an entry in `registry.yaml` with an accurate path, date, and tag set so the static site generator includes it. When collaborating, coordinate author slugs in `authors.yaml` to avoid duplicates, and run `python -m yaml lint registry.yaml` (or your preferred YAML linter) to catch syntax errors before submitting.
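
The metadata workflow described in the new AGENTS.md can be sanity-checked with a few lines of Python before opening a PR. A minimal sketch, assuming PyYAML is available and that each `registry.yaml` entry carries a `path` field (the field name is an assumption, not something this commit defines):

```python
# Rough pre-submit check for registry.yaml; the "path" field name is an assumption.
from pathlib import Path

import yaml  # PyYAML


def check_registry(registry_file: str = "registry.yaml") -> int:
    """Parse the registry (fails loudly on YAML syntax errors) and flag missing files."""
    entries = yaml.safe_load(Path(registry_file).read_text())
    missing = [e for e in entries if not Path(e.get("path", "")).exists()]
    for entry in missing:
        print(f"registry entry points at a missing file: {entry}")
    return len(missing)


if __name__ == "__main__":
    raise SystemExit(check_registry())
```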

authors.yaml

Lines changed: 34 additions & 0 deletions
@@ -2,11 +2,22 @@
 
 # You can optionally customize how your information shows up cookbook.openai.com over here.
 # If your information is not present here, it will be pulled from your GitHub profile.
+
 b-per:
   name: "Benoit Perigaud"
   website: "https://www.linkedin.com/in/benoit-perigaud/"
   avatar: "https://avatars.githubusercontent.com/u/8754100?v=4"
 
+daveleo-openai:
+  name: "Dave Leo"
+  website: "https://www.linkedin.com/in/davidanthonyleo/"
+  avatar: "https://media.licdn.com/dms/image/v2/C5603AQF2Kg-D7XJKNw/profile-displayphoto-shrink_800_800/profile-displayphoto-shrink_800_800/0/1612654752234?e=1761782400&v=beta&t=RkO9jCbJrY6Ox9YRbMA6HAAZhxfYJV1OsZeIT3YatBM"
+
+jonlim-openai:
+  name: "Jonathan Lim"
+  website: "https://www.linkedin.com/in/jonlmr"
+  avatar: "https://avatars.githubusercontent.com/u/189068472?v=4"
+
 WJPBProjects:
   name: "Wulfie Bain"
   website: "https://www.linkedin.com/in/wulfie-bain/"
@@ -37,6 +48,11 @@ rajpathak-openai:
   website: "https://www.linkedin.com/in/rajpathakopenai/"
   avatar: "https://avatars.githubusercontent.com/u/208723614?s=400&u=c852eed3be082f7fbd402b5a45e9b89a0bfed1b8&v=4"
 
+emreokcular:
+  name: "Emre Okcular"
+  website: "https://www.linkedin.com/in/emreokcular/"
+  avatar: "https://avatars.githubusercontent.com/u/26163154?v=4"
+
 chelseahu-openai:
   name: "Chelsea Hu"
   website: "https://www.linkedin.com/in/chelsea-tsaiszuhu/"
@@ -52,6 +68,16 @@ theophile-oai:
   website: "https://www.linkedin.com/in/theophilesautory"
   avatar: "https://avatars.githubusercontent.com/u/206768658?v=4"
 
+bfioca-openai:
+  name: "Brian Fioca"
+  website: "https://www.linkedin.com/in/brian-fioca/"
+  avatar: "https://avatars.githubusercontent.com/u/206814564?v=4"
+
+carter-oai:
+  name: "Carter Mcclellan"
+  website: "https://www.linkedin.com/in/carter-mcclellan/"
+  avatar: "https://avatars.githubusercontent.com/u/219906258?v=4"
+
 robert-tinn:
   name: "Robert Tinn"
   website: "https://www.linkedin.com/in/robert-tinn/"
@@ -471,3 +497,11 @@ heejingithub:
   name: "Heejin Cho"
   website: "https://www.linkedin.com/in/heejc/"
   avatar: "https://avatars.githubusercontent.com/u/169293861"
+
+
+himadri:
+  name: "Himadri Acharya"
+  website: "https://www.linkedin.com/in/himadri-acharya-086ba261/"
+  avatar: "https://avatars.githubusercontent.com/u/14100684?v=4"
+
+
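
AGENTS.md above also asks contributors to coordinate author slugs in `authors.yaml` so the same slug is not added twice. A rough sketch of such a check; it scans top-level keys with a regex because a plain YAML load would silently keep only the last duplicate (the script name is illustrative, not part of this commit):

```python
# check_author_slugs.py - illustrative helper, not part of this commit.
import re
from collections import Counter
from pathlib import Path

SLUG_RE = re.compile(r"^([A-Za-z0-9_-]+):\s*$")  # matches top-level keys such as "b-per:"


def duplicate_slugs(path: str = "authors.yaml") -> list[str]:
    """Return author slugs that appear more than once at the top level."""
    slugs = [m.group(1) for line in Path(path).read_text().splitlines()
             if (m := SLUG_RE.match(line))]
    return [slug for slug, count in Counter(slugs).items() if count > 1]


if __name__ == "__main__":
    dupes = duplicate_slugs()
    print("duplicate author slugs:", ", ".join(dupes) if dupes else "none")
```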

examples/Context_summarization_with_realtime_api.ipynb

Lines changed: 20 additions & 18 deletions
@@ -30,7 +30,7 @@
 "\n",
 "\n",
 "*Notes:*\n",
-"> 1. GPT-4o-Realtime supports a 128k token context window, though in certain use cases, you may notice performance degrade as you stuff more tokens into the context window.\n",
+"> 1. gpt-realtime supports a 32k token context window, though in certain use cases, you may notice performance degrade as you stuff more tokens into the context window.\n",
 "> 2. Token window = all tokens (words and audio tokens) the model currently keeps in memory for the session.x\n",
 "\n",
 "### One‑liner install (run in a fresh cell)"
@@ -48,7 +48,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": 1,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -74,7 +74,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 5,
+"execution_count": 2,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -96,7 +96,7 @@
 "In practice you’ll often see **≈ 10 ×** more tokens for the *same* sentence in audio versus text.\n",
 "\n",
 "\n",
-"* GPT-4o realtime accepts up to **128k tokens** and as the token size increases, instruction adherence can drift.\n",
+"* gpt-realtime accepts up to **32k tokens** and as the token size increases, instruction adherence can drift.\n",
 "* Every user/assistant turn consumes tokens → the window **only grows**.\n",
 "* **Strategy**: Summarise older turns into a single assistant message, keep the last few verbatim turns, and continue.\n",
 "\n",
@@ -128,7 +128,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 6,
+"execution_count": 3,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -159,7 +159,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 7,
+"execution_count": 4,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -196,7 +196,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 8,
+"execution_count": 5,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -248,7 +248,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 9,
+"execution_count": 6,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -297,11 +297,11 @@
 "metadata": {},
 "source": [
 "### 3.3 Detect When to Summarise\n",
-"The Realtime model keeps a **large 128 k‑token window**, but quality can drift long before that limit as you stuff more context into the model.\n",
+"The Realtime model keeps a **large 32 k‑token window**, but quality can drift long before that limit as you stuff more context into the model.\n",
 "\n",
 "Our goal: **auto‑summarise** once the running window nears a safe threshold (default **2 000 tokens** for the notebook), then prune the superseded turns both locally *and* server‑side.\n",
 "\n",
-"We monitor latest_tokens returned in `response.done`. When it exceeds SUMMARY_TRIGGER and we have more than KEEP_LAST_TURNS, we spin up a background summarisation coroutine.\n",
+"We monitor latest_tokens returned in `response.done`. When it exceeds SUMMARY_TRIGGER and we have more than KEEP_LAST_TURNS, we spin up a background summarization coroutine.\n",
 "\n",
 "We compress everything except the last 2 turns into a single French paragraph, then:\n",
 "\n",
@@ -314,7 +314,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 10,
+"execution_count": 7,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -343,7 +343,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 11,
+"execution_count": 8,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -401,7 +401,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 12,
+"execution_count": 9,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -451,7 +451,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 13,
+"execution_count": 10,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -466,14 +466,14 @@
 },
 {
 "cell_type": "code",
-"execution_count": 14,
+"execution_count": 11,
 "metadata": {},
 "outputs": [],
 "source": [
 "# --------------------------------------------------------------------------- #\n",
-"# 🎤 Realtime session #\n",
+"# Realtime session #\n",
 "# --------------------------------------------------------------------------- #\n",
-"async def realtime_session(model=\"gpt-4o-realtime-preview\", voice=\"shimmer\", enable_playback=True):\n",
+"async def realtime_session(model=\"gpt-realtime\", voice=\"shimmer\", enable_playback=True):\n",
 " \"\"\"\n",
 " Main coroutine: connects to the Realtime endpoint, spawns helper tasks,\n",
 " and processes incoming events in a big async‑for loop.\n",
@@ -487,7 +487,7 @@
 " # Open the WebSocket connection to the Realtime API #\n",
 " # ----------------------------------------------------------------------- #\n",
 " url = f\"wss://api.openai.com/v1/realtime?model={model}\"\n",
-" headers = {\"Authorization\": f\"Bearer {openai.api_key}\", \"OpenAI-Beta\": \"realtime=v1\"}\n",
+" headers = {\"Authorization\": f\"Bearer {openai.api_key}\"}\n",
 "\n",
 " async with websockets.connect(url, extra_headers=headers, max_size=1 << 24) as ws:\n",
 " # ------------------------------------------------------------------- #\n",
@@ -503,6 +503,8 @@
 " await ws.send(json.dumps({\n",
 " \"type\": \"session.update\",\n",
 " \"session\": {\n",
+" \"type\": \"realtime\",\n",
+" model: \"gpt-realtime\",\n",
 " \"voice\": voice,\n",
 " \"modalities\": [\"audio\", \"text\"],\n",
 " \"input_audio_format\": \"pcm16\",\n",

examples/Realtime_prompting_guide.ipynb

Lines changed: 1906 additions & 0 deletions
Large diffs are not rendered by default.

examples/Reinforcement_Fine_Tuning.ipynb

Lines changed: 2 additions & 2 deletions
@@ -2171,7 +2171,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "jupyter-env",
+"display_name": "openai",
 "language": "python",
 "name": "python3"
 },
@@ -2185,7 +2185,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.12.9"
+"version": "3.11.8"
 }
 },
 "nbformat": 4,

0 commit comments
