
Commit a05c098

Merge main and update registry

2 parents 1a1f046 + 2d8d01e

26 files changed: +2463 -862 lines

.gitignore

Lines changed: 1 addition & 0 deletions
@@ -145,3 +145,4 @@ examples/multimodal/.local_cache/*
 
 # VS Code files
 .vscode/
+.cursorignore

README.md

Lines changed: 2 additions & 2 deletions
@@ -9,12 +9,12 @@
 
 > ✨ Navigate at [cookbook.openai.com](https://cookbook.openai.com)
 
-Example code and guides for accomplishing common tasks with the [OpenAI API](https://platform.openai.com/docs/introduction). To run these examples, you'll need an OpenAI account and associated API key ([create a free account here](https://beta.openai.com/signup)). Set an environment variable called `OPENAI_API_KEY` with your API key. Alternatively, in most IDEs such as Visual Studio Code, you can create an `.env` file at the root of your repo containing `OPENAI_API_KEY=<your API key>`, which will be picked up by the notebooks.
+Example code and guides for accomplishing common tasks with the [OpenAI API](https://platform.openai.com/docs/introduction). To run these examples, you'll need an OpenAI account and associated API key ([create a free account here](https://platform.openai.com/signup)). Set an environment variable called `OPENAI_API_KEY` with your API key. Alternatively, in most IDEs such as Visual Studio Code, you can create an `.env` file at the root of your repo containing `OPENAI_API_KEY=<your API key>`, which will be picked up by the notebooks.
 
 Most code examples are written in Python, though the concepts can be applied in any language.
 
 For other useful tools, guides and courses, check out these [related resources from around the web](https://cookbook.openai.com/related_resources).
 
 ## License
 
-MIT
+MIT License
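The README paragraph above describes the setup the notebooks expect: an OPENAI_API_KEY environment variable, or a .env file at the repo root. As a minimal sketch of how a notebook might pick the key up (assuming the python-dotenv and openai packages are installed; individual notebooks may load it differently):

import os

from dotenv import load_dotenv  # reads OPENAI_API_KEY=<your API key> from a local .env file, if present
from openai import OpenAI

load_dotenv()  # no-op when the variable is already set in the shell environment
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])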

authors.yaml

Lines changed: 5 additions & 0 deletions
@@ -68,6 +68,11 @@ ibigio:
   website: "https://twitter.com/ilanbigio"
   avatar: "https://pbs.twimg.com/profile_images/1841544725654077440/DR3b8DMr_400x400.jpg"
 
+willhath-openai:
+  name: "Will Hathaway"
+  website: "https://www.willhath.com"
+  avatar: "https://media.licdn.com/dms/image/v2/D4E03AQEHOtMrHtww4Q/profile-displayphoto-shrink_200_200/B4EZRR64p9HgAc-/0/1736541178829?e=2147483647&v=beta&t=w1rX0KhLZaK5qBkVLkJjmYmfNMbsV2Bcn8InFVX9lwI"
+
 jhills20:
   name: "James Hills"
   website: "https://twitter.com/jamesmhills"

examples/Context_summarization_with_realtime_api.ipynb

Lines changed: 724 additions & 0 deletions
Large diffs are not rendered by default.

examples/How_to_call_functions_for_knowledge_retrieval.ipynb

Lines changed: 0 additions & 1 deletion
@@ -54,7 +54,6 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"import os\n",
 "import arxiv\n",
 "import ast\n",
 "import concurrent\n",

examples/How_to_combine_GPT4o_with_RAG_Outfit_Assistant.ipynb

Lines changed: 3 additions & 3 deletions
@@ -331,16 +331,16 @@
 " \"content\": [\n",
 " {\n",
 " \"type\": \"text\",\n",
-" \"text\": \"\"\"Given an image of an item of clothing, analyze the item and generate a JSON output with the following fields: \"items\", \"category\", and \"gender\". \n",
+" \"text\": f\"\"\"Given an image of an item of clothing, analyze the item and generate a JSON output with the following fields: \"items\", \"category\", and \"gender\".\n",
 " Use your understanding of fashion trends, styles, and gender preferences to provide accurate and relevant suggestions for how to complete the outfit.\n",
 " The items field should be a list of items that would go well with the item in the picture. Each item should represent a title of an item of clothing that contains the style, color, and gender of the item.\n",
 " The category needs to be chosen between the types in this list: {subcategories}.\n",
 " You have to choose between the genders in this list: [Men, Women, Boys, Girls, Unisex]\n",
 " Do not include the description of the item in the picture. Do not include the ```json ``` tag in the output.\n",
-" \n",
+"\n",
 " Example Input: An image representing a black leather jacket.\n",
 "\n",
-" Example Output: {\"items\": [\"Fitted White Women's T-shirt\", \"White Canvas Sneakers\", \"Women's Black Skinny Jeans\"], \"category\": \"Jackets\", \"gender\": \"Women\"}\n",
+" Example Output: {{\"items\": [\"Fitted White Women's T-shirt\", \"White Canvas Sneakers\", \"Women's Black Skinny Jeans\"], \"category\": \"Jackets\", \"gender\": \"Women\"}}\n",
 " \"\"\",\n",
 " },\n",
 " {\n",

examples/Speech_transcription_methods.ipynb

Lines changed: 4 additions & 4 deletions
@@ -142,7 +142,7 @@
 "### How it works\n",
 "\n",
 "\n",
-"![STT Not Streaming Transcription flow](./imgs/speech-to-text-not-streaming.png)\n",
+"![STT Not Streaming Transcription flow](../images/speech-to-text-not-streaming.png)\n",
 "\n",
 "#### Benefits\n",
 "\n",
@@ -250,7 +250,7 @@
 "- You need immediate transcription results (partial or final) as they arrive. \n",
 "- Scenarios where partial feedback improves UX, e.g., uploading a long voice memo.\n",
 "\n",
-"![STT Streaming Transcription flow](./imgs/speech-to-text-streaming.png)\n",
+"![STT Streaming Transcription flow](../images/speech-to-text-streaming.png)\n",
 "\n",
 "#### Benefits\n",
 "- **Real-time feel:** Users see transcription updates almost immediately. \n",
@@ -321,7 +321,7 @@
 "source": [
 "### How it works\n",
 "\n",
-"![Realtime Transcription flow](./imgs/realtime_api_transcription.png)\n",
+"![Realtime Transcription flow](../images/realtime_api_transcription.png)\n",
 "\n",
 "#### Benefits\n",
 "- **Ultra-low latency:** Typically 300–800 ms, enabling near-instant transcription. \n",
@@ -496,7 +496,7 @@
 "source": [
 "### How it works\n",
 "\n",
-"![Agents Transcription flow](./imgs/agents_sdk_transcription.png)\n",
+"![Agents Transcription flow](../images/agents_sdk_transcription.png)\n",
 "\n",
 "**Benefits**\n",
 "\n",

examples/Whisper_prompting_guide.ipynb

Lines changed: 1 addition & 1 deletion
@@ -413,7 +413,7 @@
 ],
 "source": [
 "# more natural, sentence-style prompt\n",
-"transcribe(bbq_plans_filepath, prompt=\"\"\"\"Aimee and Shawn ate whisky, doughnuts, omelets at a BBQ.\"\"\")"
+"transcribe(bbq_plans_filepath, prompt=\"\"\"Aimee and Shawn ate whisky, doughnuts, omelets at a BBQ.\"\"\")"
 ]
 },
 {
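The one-character fix above removes a stray fourth quote: with """" the prompt string begins with a literal double-quote character, which subtly changes the prompt Whisper sees. A quick standalone check of the two spellings:

buggy = """"Aimee and Shawn ate whisky, doughnuts, omelets at a BBQ."""
fixed = """Aimee and Shawn ate whisky, doughnuts, omelets at a BBQ."""

print(repr(buggy[:6]))  # '"Aimee'  (note the unintended leading quote mark)
print(repr(fixed[:5]))  # 'Aimee'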

examples/chatgpt/gpt_actions_library/gpt_action_snowflake_direct.ipynb

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@
 "source": [
 "This particular GPT Action provides an overview of how to connect to a Snowflake Data Warehouse. This Action takes a user’s question, scans the relevant tables to gather the data schema, then writes a SQL query to answer the user’s question.\n",
 "\n",
-"Note: This cookbook returns back a [ResultSet SQL statement](https://docs.snowflake.com/en/developer-guide/sql-api/handling-responses#getting-the-data-from-the-results), rather than the full result that is not limited by GPT Actions application/json payload limit. For production and advanced use-case, a middleware is required to return back a CSV file. You can follow instructions in the [GPT Actions - Snowflake Middleware cookbook](../gpt_action_snowflake_middleware) to implement this flow instead."
+"Note: This cookbook returns back a [ResultSet SQL statement](https://docs.snowflake.com/en/developer-guide/sql-api/handling-responses#getting-the-data-from-the-results), rather than the full result that is not limited by GPT Actions application/json payload limit. For production and advanced use-case, a middleware is required to return back a CSV file. You can follow instructions in the [GPT Actions - Snowflake Middleware cookbook](../gpt_actions_library/gpt_action_snowflake_middleware) to implement this flow instead."
 ]
 },
 {
