Commit a4bbcd7

added more details to the descriptions
1 parent 7cfb312 commit a4bbcd7


examples/evaluation/use-cases/EvalsAPI_Audio_Inputs.ipynb

Lines changed: 25 additions & 9 deletions
@@ -6,7 +6,9 @@
 "source": [
 "# Evals API: Audio Inputs\n",
 "\n",
-"This cookbook demonstrates how to use OpenAI's Evals framework for audio-based tasks. Leveraging the Evals API, we will grade model-generated responses to an audio message and prompt by using **sampling** to generate model responses and **model grading** (LLM as a Judge) to score the model responses against the output audio transcript, prompt, and reference answer. Note that grading will be on text outputs from the sampled response. Graders that can grade audio input are not currently supported.\n",
+"This cookbook demonstrates how to use OpenAI's Evals framework for audio-based tasks. Leveraging the Evals API, we will grade model-generated responses to an audio message and prompt by using **sampling** to generate model responses and **string match grading** to score the model responses against the output audio transcript and reference answer. Note that grading is done on the text outputs of the sampled response; graders that can grade audio input are not currently supported.\n",
+"\n",
+"Before audio support was added, audio conversations had to be transcribed to text before they could be evaluated. Now you can use the original audio and get samples from the model in audio as well. This more accurately represents workflows such as a customer support agent, where both the user and the agent are using audio. For grading, we use the text transcript of the sampled audio so that we can leverage the existing suite of text graders.\n",
 "\n",
 "In this example, we will evaluate how well our model can:\n",
 "1. **Generate appropriate responses** to user prompts about an audio message\n",
@@ -27,7 +29,7 @@
 "outputs": [],
 "source": [
 "# Install required packages\n",
-"!pip install openai datasets pandas soundfile torch torchcodec --quiet"
+"%pip install openai datasets pandas soundfile torch torchcodec --quiet"
 ]
 },
 {
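A note on this change: `%pip` is the IPython magic that installs packages into the environment of the running kernel, whereas `!pip` shells out to whatever `pip` is first on the PATH, which can target a different interpreter than the one the notebook is using.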
@@ -55,7 +57,7 @@
 "source": [
 "## Dataset Preparation\n",
 "\n",
-"We use the [big_bench_audio](https://huggingface.co/datasets/ArtificialAnalysis/big_bench_audio) dataset that's hosted on Hugging Face. Big Bench Audio is an audio version of a subset of Big Bench Hard questions. The dataset can be used for evaluating the reasoning capabilities of models that support audio input."
+"We use the [big_bench_audio](https://huggingface.co/datasets/ArtificialAnalysis/big_bench_audio) dataset that's hosted on Hugging Face. Big Bench Audio is an audio version of a subset of Big Bench Hard questions. The dataset can be used for evaluating the reasoning capabilities of models that support audio input. Each record contains an audio clip describing a logic problem, a category, and an official answer."
 ]
 },
 {
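For context, loading this dataset looks roughly like the sketch below. This is a minimal example rather than the notebook's exact cell; the field names (`audio`, `category`, `official_answer`) are taken from the snippets elsewhere in this diff.

```python
from datasets import load_dataset

# Load Big Bench Audio from the Hugging Face Hub
dataset = load_dataset("ArtificialAnalysis/big_bench_audio")

# Each record holds an audio clip, a category, and an official answer
example = dataset["train"][0]
print(example["category"], example["official_answer"])
```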
@@ -86,11 +88,9 @@
 "def get_base64(audio_path_or_datauri: str) -> str:\n",
 "    if audio_path_or_datauri.startswith(\"data:\"):\n",
 "        # Already base64, just strip prefix\n",
-"        print(\"Already base64, just strip prefix\")\n",
 "        return audio_path_or_datauri.split(\",\", 1)[1]\n",
 "    else:\n",
 "        # It's a real file path\n",
-"        print(\"It's a real file path\")\n",
 "        with open(audio_path_or_datauri, \"rb\") as f:\n",
 "            return base64.b64encode(f.read()).decode(\"ascii\")\n",
 "\n",
@@ -154,6 +154,7 @@
 "evals_data_source = []\n",
 "audio_base64 = None\n",
 "\n",
+"# Will use the first 3 examples for testing\n",
 "for example in dataset[\"train\"].select(range(3)):\n",
 "    audio_val = example[\"audio\"]\n",
 "    try:\n",
@@ -205,7 +206,7 @@
 "outputs": [],
 "source": [
 "client = OpenAI(\n",
-"    api_key=os.getenv(\"OPENAI_API_KEY\")\n",
+"    api_key=os.getenv(\"OPENAI_API_KEY_DISTILLATION\")\n",
 ")"
 ]
 },
@@ -300,7 +301,7 @@
 "    \"name\": \"String check grader\",\n",
 "    \"input\": \"{{sample.output_text}}\",\n",
 "    \"reference\": \"{{item.official_answer}}\",\n",
-"    \"operation\": \"like\"\n",
+"    \"operation\": \"ilike\"\n",
 "}"
 ]
 },
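The switch from `like` to `ilike` makes the string check case-insensitive, so a sampled transcript of "Valid" still matches the reference answer "valid". Assembled, the grader config from this hunk looks roughly like the sketch below (the `type` field is an assumption based on the Evals API's string-check grader):

```python
string_check_grader = {
    "type": "string_check",
    "name": "String check grader",
    "input": "{{sample.output_text}}",        # transcript of the sampled audio response
    "reference": "{{item.official_answer}}",  # reference answer from the dataset item
    "operation": "ilike",                     # case-insensitive match
}
```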
@@ -349,7 +350,15 @@
 "sampling_messages = [\n",
 "    {\n",
 "        \"role\": \"system\",\n",
-"        \"content\": \"You are a helpful assistant that can answer questions with the audio input. You will be given an audio input and a question. You will need to answer the question based on the audio input.\"\n",
+"        \"content\": \"You are a helpful and obedient assistant that can answer questions with audio input. You will be given an audio input containing a question and instructions on exactly how to answer. For example, if the user asks for a single word response, then you should only reply with a single word answer.\"\n",
+"    },\n",
+"    {\n",
+"        \"role\": \"user\",\n",
+"        \"type\": \"message\",\n",
+"        \"content\": {\n",
+"            \"type\": \"input_text\",\n",
+"            \"text\": \"Answer the following question by replying with a single word answer: 'valid' or 'invalid'.\"\n",
+"        }\n",
 "    },\n",
 "    {\n",
 "        \"role\": \"user\",\n",
@@ -387,6 +396,9 @@
 "            \"id\": file.id\n",
 "        },\n",
 "        \"model\": \"gpt-4o-audio-preview\", # model used to generate the response; check that the model you use supports audio inputs\n",
+"        \"sampling_params\": {\n",
+"            \"temperature\": 0.0,\n",
+"        },\n",
 "        \"input_messages\": {\n",
 "            \"type\": \"template\", \n",
 "            \"template\": sampling_messages}\n",
@@ -467,7 +479,11 @@
 "source": [
 "## Conclusion\n",
 "\n",
-"In this cookbook, we covered a workflow for evaluating native audio inputs to a model using the OpenAI Evals API's. We could additionally add model based graders for additional flexibility in grading in future.\n"
+"In this cookbook, we covered a workflow for evaluating native audio inputs to a model using the OpenAI Evals API. We demonstrated using a simple text grader to grade the text transcript of the audio response.\n",
+"### Next steps\n",
+"- Adapt this example to your own use case.\n",
+"- Try using model-based graders for additional flexibility in grading.\n",
+"- If you have large audio clips, try the [uploads API](https://platform.openai.com/docs/api-reference/uploads/create), which supports files up to 8 GB.\n"
 ]
 }
 ],
