
Commit 12498f9

Update to use model grader
1 parent 97aaf22 commit 12498f9

1 file changed: +45 -10 lines changed

examples/evaluation/use-cases/EvalsAPI_Audio_Inputs.ipynb

Lines changed: 45 additions & 10 deletions
@@ -6,9 +6,9 @@
    "source": [
     "# Evals API: Audio Inputs\n",
     "\n",
-    "This cookbook demonstrates how to use OpenAI's Evals framework for audio-based tasks. Leveraging the Evals API, we will grade model-generated responses to an audio message and prompt by using **sampling** to generate model responses and **string match grading** to score the model responses against the output audio transcript and reference answer. Note that grading will be on text outputs from the sampled response. Graders that can grade audio input are not currently supported.\n",
+    "This cookbook demonstrates how to use OpenAI's Evals framework for audio-based tasks. Leveraging the Evals API, we will grade model-generated responses to an audio message and prompt by using **sampling** to generate model responses and **model grading** to score the model responses against the output audio and reference answer. Note that grading will be on audio outputs from the sampled response.\n",
     "\n",
-    "Before audio support was added, in order to evaluate audio conversations, they needed to be first transcribed to text. Now you can use the original audio and get samples from the model in audio as well. This will more accurately represent workflows such as a customer support agent where both the user and agent are using audio. For grading, we use the text transcript from the sampled audio so that we can leverage the existing suite of text graders. \n",
+    "Before audio support was added, audio conversations first had to be transcribed to text before they could be evaluated. Now you can use the original audio and get samples from the model in audio as well, which more accurately represents workflows such as a customer support agent, where both the user and the agent communicate in audio. For grading, we will use an audio model to grade the audio response with a model grader. We could alternatively, or in combination, use the text transcript from the sampled audio and leverage the existing suite of text graders.\n",
     "\n",
     "In this example, we will evaluate how well our model can:\n",
     "1. **Generate appropriate responses** to user prompts about an audio message\n",
@@ -29,7 +29,7 @@
    "outputs": [],
    "source": [
     "# Install required packages\n",
-    "%pip install openai datasets pandas soundfile torch torchcodec --quiet"
+    "%pip install openai datasets pandas soundfile torch torchcodec pydub jiwer --quiet"
    ]
   },
   {
@@ -285,9 +285,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "For our testing criteria, we set up our grader config. In this example, it is a simple string_check grader that takes in the official answer and sampled model response (in the `sample` namespace), and then outputs a score between 0 and 1 based if the model response contains the reference answer. The response contains both audio and the text transcript of the audio. We will use the text transcript in the grader. For more info on graders, visit [API Grader docs](https://platform.openai.com/docs/api-reference/graders). \n",
+    "For our testing criteria, we set up our grader config. In this example, we use a score_model grader that takes in the official answer and sampled model response (in the `sample` namespace), and then outputs a score of 0 or 1 based on whether the model response matches the official answer. The response contains both audio and the text transcript of the audio. We will use the audio in the grader. For more info on graders, visit the [API Grader docs](https://platform.openai.com/docs/api-reference/graders).\n",
     "\n",
-    "Getting both the data and the grader right are key for an effective evaluation. While this example uses a simple string check grader, a more powerful model grader could be used instead and you will likely want to iteratively refine the prompts for your graders. "
+    "Getting both the data and the grader right is key to an effective evaluation. You will likely want to iteratively refine the prompts for your graders."
    ]
   },
   {
@@ -296,13 +296,48 @@
    "metadata": {},
    "outputs": [],
    "source": [
+    "grader_config = {\n",
+    "    \"type\": \"score_model\",\n",
+    "    \"name\": \"Reference answer audio model grader\",\n",
+    "    \"model\": \"gpt-4o-audio-preview\",\n",
+    "    \"input\": [\n",
+    "        {\n",
+    "            \"role\": \"user\",\n",
+    "            \"content\": [\n",
+    "                {\n",
+    "                    \"type\": \"input_text\",\n",
+    "                    \"text\": \"Evaluate this audio clip to see if it reaches the same conclusion as the reference answer. Score the answer a 1 if it does, 0 if it does not. Reference answer: {{item.official_answer}}\",\n",
+    "                },\n",
+    "                {\n",
+    "                    \"type\": \"input_audio\",\n",
+    "                    \"input_audio\": {\n",
+    "                        \"data\": \"{{ sample.output_audio.data }}\",\n",
+    "                        \"format\": \"wav\",\n",
+    "                    },\n",
+    "                },\n",
+    "            ],\n",
+    "        },\n",
+    "    ],\n",
+    "    \"range\": [0, 1],\n",
+    "    \"pass_threshold\": 0.9,\n",
+    "}"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Alternatively, we could use a string_check grader that takes in the official answer and sampled model response (in the `sample` namespace), and then outputs a score between 0 and 1 based on whether the model response contains the reference answer. The response contains both audio and the text transcript of the audio; that grader would use the text transcript.\n",
+    "\n",
+    "```python\n",
     "grader_config = {\n",
     "    \"type\": \"string_check\",\n",
     "    \"name\": \"String check grader\",\n",
     "    \"input\": \"{{sample.output_text}}\",\n",
     "    \"reference\": \"{{item.official_answer}}\",\n",
     "    \"operation\": \"ilike\"\n",
-    "}"
+    "}\n",
+    "```"
    ]
   },
   {
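For context on how a grader config like the one added above is typically wired in: below is a minimal sketch of attaching it as a testing criterion when creating the eval with the openai Python SDK. The eval name and the custom item schema are illustrative assumptions, not taken from this diff.

```python
from openai import OpenAI

client = OpenAI()

# Sketch (assumed name/schema): create an eval whose items carry an
# "official_answer" field, and attach the score_model grader defined above.
eval_obj = client.evals.create(
    name="Audio input eval",  # illustrative name, not from the notebook
    data_source_config={
        "type": "custom",
        "item_schema": {
            "type": "object",
            "properties": {"official_answer": {"type": "string"}},
            "required": ["official_answer"],
        },
        # Exposes the `sample` namespace (e.g. {{sample.output_audio.data}})
        # referenced by the grader templates above.
        "include_sample_schema": True,
    },
    testing_criteria=[grader_config],
)
print(eval_obj.id)
```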
@@ -401,8 +436,9 @@
     "        },\n",
     "        \"input_messages\": {\n",
     "            \"type\": \"template\", \n",
-    "            \"template\": sampling_messages}\n",
-    "        }\n",
+    "            \"template\": sampling_messages},\n",
+    "        \"modalities\": [\"audio\", \"text\"],\n",
+    "        },\n",
     "    )"
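Read as plain Python, the data_source block that this hunk edits ends up shaped roughly as below. This is a sketch only: `file_id` and `sampling_messages` come from earlier notebook cells not shown in the diff, and the model name and source type are assumptions for illustration.

```python
file_id = "file-..."       # placeholder: dataset file uploaded in an earlier cell
sampling_messages = [...]  # placeholder: message template from an earlier cell

# Approximate shape of the run's data_source after this change (sketch):
data_source = {
    "type": "completions",
    "model": "gpt-4o-audio-preview",  # an audio-capable sampling model (assumed)
    "source": {"type": "file_id", "id": file_id},
    "input_messages": {
        "type": "template",
        "template": sampling_messages,
    },
    # Newly added by this commit: request audio output (plus its transcript)
    # from sampling, so the audio model grader has audio to score.
    "modalities": ["audio", "text"],
}
```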
@@ -479,10 +515,9 @@
    "source": [
     "## Conclusion\n",
     "\n",
-    "In this cookbook, we covered a workflow for evaluating native audio inputs to a model using the OpenAI Evals APIs. We demonstrated using a simple text grader to grade the text transcript of the audio response.\n",
+    "In this cookbook, we covered a workflow for evaluating native audio inputs to a model using the OpenAI Evals APIs. We demonstrated using a score model grader to grade the audio response.\n",
     "### Next steps\n",
     "- Convert this example to your use case. \n",
-    "- Try using model based graders for additional flexibility in grading.\n",
     "- If you have large audio clips, try using the [uploads API](https://platform.openai.com/docs/api-reference/uploads/create) for support up to 8 GB.\n"
    ]
   }
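For the uploads API mentioned in the next steps: it accepts files up to 8 GB in chunked parts. A minimal sketch of the multipart flow follows; the file path, MIME type, and purpose are assumptions, not from the notebook.

```python
import os
from openai import OpenAI

client = OpenAI()

path = "long_call_recording.wav"  # hypothetical large audio file
size = os.path.getsize(path)

# Start a multipart upload (purpose/mime_type assumed for this example).
upload = client.uploads.create(
    purpose="user_data",
    filename=os.path.basename(path),
    bytes=size,
    mime_type="audio/wav",
)

# Send the file in parts (the API allows up to 64 MB per part).
part_ids = []
with open(path, "rb") as f:
    while chunk := f.read(64 * 1024 * 1024):
        part = client.uploads.parts.create(upload_id=upload.id, data=chunk)
        part_ids.append(part.id)

# Complete the upload; the result carries a ready File object.
completed = client.uploads.complete(upload_id=upload.id, part_ids=part_ids)
print(completed.file.id)  # usable wherever a File id is accepted
```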
