Commit 5401ce9

Merge pull request #6303 from mrbullwinkle/mrb_07_30_2025_code_interpreter
[Azure OpenAI] [Responses API] Code Interpreter support
2 parents 3af24b8 + 97dfb70 commit 5401ce9

File tree

1 file changed (+110 -4)


articles/ai-foundry/openai/how-to/responses.md

Lines changed: 110 additions & 4 deletions
@@ -5,7 +5,7 @@ description: Learn how to use Azure OpenAI's new stateful Responses API.
 author: mrbullwinkle
 ms.author: mbullwin
 manager: nitinme
-ms.date: 07/08/2025
+ms.date: 07/30/2025
 ms.service: azure-ai-openai
 ms.topic: include
 ms.custom:
@@ -65,7 +65,7 @@ Not every model is available in the regions supported by the responses API. Chec
 > - Images can't be uploaded as a file and then referenced as input. Coming soon.
 >
 > There's a known issue with the following:
-> - PDF as an input file [is now supported](), but setting file upload purpose to `user_data` is not currently supported.
+> - PDF as an input file [is now supported](#file-input), but setting file upload purpose to `user_data` is not currently supported.
 > - Performance when background mode is used with streaming. The issue is expected to be resolved soon.
 
 ### Reference documentation
@@ -497,7 +497,6 @@ for event in response:
 
 ```
 
-
 ## Function calling
 
 The responses API supports function calling.
@@ -564,6 +563,113 @@ print(second_response.model_dump_json(indent=2))
 
 ```
 
+## Code Interpreter
+
+The Code Interpreter tool enables models to write and execute Python code in a secure, sandboxed environment. It supports a range of advanced tasks, including:
+
+* Processing files with varied data formats and structures
+* Generating files that include data and visualizations (for example, graphs)
+* Iteratively writing and running code to solve problems; models can debug and retry code until it succeeds
+* Enhancing visual reasoning in supported models (for example, o3 and o4-mini) by enabling image transformations such as cropping, zooming, and rotation
+
+This tool is especially useful for scenarios involving data analysis, mathematical computation, and code generation.
+
+```bash
+curl https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/responses?api-version=preview \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $AZURE_OPENAI_AUTH_TOKEN" \
+  -d '{
+        "model": "gpt-4.1",
+        "tools": [
+            { "type": "code_interpreter", "container": {"type": "auto"} }
+        ],
+        "instructions": "You are a personal math tutor. When asked a math question, write and run code using the python tool to answer the question.",
+        "input": "I need to solve the equation 3x + 11 = 14. Can you help me?"
+      }'
+```
+
+```python
+from openai import AzureOpenAI
+from azure.identity import DefaultAzureCredential, get_bearer_token_provider
+
+token_provider = get_bearer_token_provider(
+    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
+)
+
+client = AzureOpenAI(
+    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
+    azure_ad_token_provider=token_provider,
+    api_version="preview"
+)
+
+instructions = "You are a personal math tutor. When asked a math question, write and run code using the python tool to answer the question."
+
+response = client.responses.create(
+    model="gpt-4.1",
+    tools=[
+        {
+            "type": "code_interpreter",
+            "container": {"type": "auto"}
+        }
+    ],
+    instructions=instructions,
+    input="I need to solve the equation 3x + 11 = 14. Can you help me?",
+)
+
+print(response.output)
+```
+
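For this prompt, the code the model writes and runs inside the container is typically a short solver along these lines (an illustrative sketch, not actual model output):

```python
# Solve 3x + 11 = 14 for x, the way the model's sandboxed
# Python snippet might: isolate x and evaluate.
coefficient = 3
constant = 11
rhs = 14

x = (rhs - constant) / coefficient
print(f"x = {x}")  # x = 1.0
```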
+### Containers
+
+> [!IMPORTANT]
+> Code Interpreter has [additional charges](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) beyond the token-based fees for Azure OpenAI usage. If your Responses API calls Code Interpreter simultaneously in two different threads, two code interpreter sessions are created. Each session is active by default for one hour with an idle timeout of 30 minutes.
+
+The Code Interpreter tool requires a container: a fully sandboxed virtual machine where the model can execute Python code. Containers can include uploaded files or files generated during execution.
+
+To create a container, specify `"container": { "type": "auto", "files": ["file-1", "file-2"] }` in the tool configuration when creating a new Response object. This automatically creates a new container, or reuses an active container from a previous `code_interpreter_call` in the model's context. The `code_interpreter_call` in the API output contains the `container_id` that was generated. This container expires if it isn't used for 20 minutes.
+
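Once a response comes back, the generated `container_id` can be pulled out of the `code_interpreter_call` output item. A minimal sketch; the dictionary shape below is a simplified assumption based on the description above, and real SDK response objects may expose these as attributes instead:

```python
def find_container_id(output_items):
    """Return the container_id from the first code_interpreter_call
    item in a response's output list, or None if there isn't one."""
    for item in output_items:
        if item.get("type") == "code_interpreter_call":
            return item.get("container_id")
    return None

# Example with a mocked-up output list (illustrative values):
output = [
    {"type": "reasoning", "summary": []},
    {"type": "code_interpreter_call", "container_id": "cntr_abc123", "status": "completed"},
    {"type": "message", "content": []},
]
print(find_container_id(output))  # cntr_abc123
```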
+### File inputs and outputs
+
+When running Code Interpreter, the model can create its own files. For example, if you ask it to construct a plot or create a CSV, it creates these files directly in your container. It cites these files in the annotations of its next message.
+
+Any files in the model input are automatically uploaded to the container. You don't have to explicitly upload them to the container yourself.
+
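To collect the files the model cites, you can walk the annotations on the output message. The annotation shape used here (a `container_file_citation` entry carrying `file_id` and `container_id`) is an assumption for illustration; check the actual response object in your SDK version:

```python
def cited_files(message_content):
    """Collect (container_id, file_id) pairs from file-citation
    annotations on a message's content parts."""
    files = []
    for part in message_content:
        for note in part.get("annotations", []):
            if note.get("type") == "container_file_citation":
                files.append((note["container_id"], note["file_id"]))
    return files

# Mocked-up message content (illustrative values):
content = [{
    "type": "output_text",
    "text": "I saved the plot as plot.png.",
    "annotations": [
        {"type": "container_file_citation",
         "container_id": "cntr_abc123",
         "file_id": "cfile_xyz789"}
    ],
}]
print(cited_files(content))  # [('cntr_abc123', 'cfile_xyz789')]
```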
+### Supported files
+
+|File format|MIME type|
+|---|---|
+|`.c`|text/x-c|
+|`.cs`|text/x-csharp|
+|`.cpp`|text/x-c++|
+|`.csv`|text/csv|
+|`.doc`|application/msword|
+|`.docx`|application/vnd.openxmlformats-officedocument.wordprocessingml.document|
+|`.html`|text/html|
+|`.java`|text/x-java|
+|`.json`|application/json|
+|`.md`|text/markdown|
+|`.pdf`|application/pdf|
+|`.php`|text/x-php|
+|`.pptx`|application/vnd.openxmlformats-officedocument.presentationml.presentation|
+|`.py`|text/x-python|
+|`.py`|text/x-script.python|
+|`.rb`|text/x-ruby|
+|`.tex`|text/x-tex|
+|`.txt`|text/plain|
+|`.css`|text/css|
+|`.js`|text/javascript|
+|`.sh`|application/x-sh|
+|`.ts`|application/typescript|
+|`.csv`|application/csv|
+|`.jpeg`|image/jpeg|
+|`.jpg`|image/jpeg|
+|`.gif`|image/gif|
+|`.pkl`|application/octet-stream|
+|`.png`|image/png|
+|`.tar`|application/x-tar|
+|`.xlsx`|application/vnd.openxmlformats-officedocument.spreadsheetml.sheet|
+|`.xml`|application/xml or text/xml|
+|`.zip`|application/zip|
+
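Client-side, it can be handy to validate a file's extension against this table before uploading. A small sketch; the dictionary is a subset transcribed from the table above, so extend it as needed:

```python
import os
from typing import Optional

# Subset of the supported-extension table above.
SUPPORTED_MIME = {
    ".csv": "text/csv",
    ".json": "application/json",
    ".pdf": "application/pdf",
    ".py": "text/x-python",
    ".png": "image/png",
    ".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
}

def mime_for(filename: str) -> Optional[str]:
    """Return the MIME type for a supported file, or None if unsupported."""
    ext = os.path.splitext(filename.lower())[1]
    return SUPPORTED_MIME.get(ext)

print(mime_for("sales_data.CSV"))  # text/csv
print(mime_for("archive.rar"))     # None
```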
 ## List input items
 
 ```python
@@ -1061,7 +1167,7 @@ print(response.output_text)
 
 ## Background tasks
 
-Background mode allows you to run long-running tasks asynchronously using models like o3 and o1-pro. This is especially useful for complex reasoning tasks that may take several minutes to complete, such as those handled by agents like Codex or Deep Research.
+Background mode allows you to run long-running tasks asynchronously using models like o3 and o1-pro. This is especially useful for complex reasoning tasks that can take several minutes to complete, such as those handled by agents like Codex or Deep Research.
 
 By enabling background mode, you can avoid timeouts and maintain reliability during extended operations. When a request is sent with `"background": true`, the task is processed asynchronously, and you can poll for its status over time.
 