
Commit b568543

Merge pull request #2450 from mrbullwinkle/mrb_01_22_2025_reasoning
[Azure OpenAI] Reasoning models update
2 parents 75adbec + aaf6f78 commit b568543

2 files changed: +8 −8 lines changed

articles/ai-services/openai/how-to/gpt-with-vision.md

Lines changed: 6 additions & 7 deletions
@@ -13,17 +13,13 @@ manager: nitinme
 # Use vision-enabled chat models

-Vision-enabled chat models are large multimodal models (LMM) developed by OpenAI that can analyze images and provide textual responses to questions about them. They incorporate both natural language processing and visual understanding. The current vision-enabled models are GPT-4 Turbo with Vision, GPT-4o, and GPT-4o-mini.
+Vision-enabled chat models are large multimodal models (LMM) developed by OpenAI that can analyze images and provide textual responses to questions about them. They incorporate both natural language processing and visual understanding. The current vision-enabled models are [o1](./reasoning.md), GPT-4o, GPT-4o-mini, and GPT-4 Turbo with Vision.

 The vision-enabled models answer general questions about what's present in the images you upload.

 > [!TIP]
 > To use vision-enabled models, you call the Chat Completion API on a supported model that you have deployed. If you're not familiar with the Chat Completion API, see the [Vision-enabled chat how-to guide](/azure/ai-services/openai/how-to/chatgpt?tabs=python&pivots=programming-language-chat-completions).

-## GPT-4 Turbo model upgrade
-
-[!INCLUDE [GPT-4 Turbo](../includes/gpt-4-turbo.md)]
-
 ## Call the Chat Completion APIs

 The following command shows the most basic way to use the GPT-4 Turbo with Vision model with code. If this is your first time using these models programmatically, we recommend starting with our [GPT-4 Turbo with Vision quickstart](../gpt-v-quickstart.md).
@@ -39,8 +35,6 @@ Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployme
 - `Content-Type`: application/json
 - `api-key`: {API_KEY}

-
-
 **Body**:
 The following is a sample request body. The format is the same as the chat completions API for GPT-4, except that the message content can be an array containing text and images (either a valid HTTP or HTTPS URL to an image, or a base-64-encoded image).
@@ -368,6 +362,11 @@ Every response includes a `"finish_reason"` field. It has the following possible
 ```
 -->

+## GPT-4 Turbo model upgrade
+
+[!INCLUDE [GPT-4 Turbo](../includes/gpt-4-turbo.md)]
+
+
 ## Next steps

 * [Learn more about Azure OpenAI](../overview.md).
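
The request format shown above (a POST to the deployment's `chat/completions` endpoint with `Content-Type` and `api-key` headers, and a `messages` array whose content mixes text and image parts) can be exercised with a short script. Below is a minimal sketch, not part of this commit; the resource name, deployment name, API version, and image URL are placeholder assumptions.

```python
# Minimal sketch of the request described in the diff above. Not part of the
# commit: the resource name, deployment name, API version, and image URL are
# placeholder assumptions.
import os
import requests

RESOURCE_NAME = "YOUR_RESOURCE_NAME"      # assumption: your Azure OpenAI resource name
DEPLOYMENT_NAME = "YOUR_DEPLOYMENT_NAME"  # assumption: a vision-enabled deployment (for example, GPT-4o)
API_VERSION = "2024-02-15-preview"        # assumption: any API version that supports chat completions

url = (
    f"https://{RESOURCE_NAME}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT_NAME}/chat/completions?api-version={API_VERSION}"
)

headers = {
    "Content-Type": "application/json",
    "api-key": os.environ["AZURE_OPENAI_API_KEY"],
}

# The message content is an array mixing text and an image URL, as the how-to describes.
body = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this picture:"},
                {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
            ],
        }
    ],
    "max_tokens": 300,
}

response = requests.post(url, headers=headers, json=body, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```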

articles/ai-services/openai/how-to/reasoning.md

Lines changed: 2 additions & 1 deletion
@@ -47,9 +47,10 @@ Once access has been granted, you'll need to create a deployment for each model.
 | **[Structured Outputs](./structured-outputs.md)** || - | - |
 | **[Context Window](../concepts/models.md#o1-and-o1-mini-models-limited-access)** | Input: 200,000 <br> Output: 100,000 | Input: 128,000 <br> Output: 32,768 | Input: 128,000 <br> Output: 65,536 |
 | **[Reasoning effort](#reasoning-effort)** || - | - |
-| System Messages | - | - | - |
+| **[Vision Support](./gpt-with-vision.md)** | | - | - |
 | Functions/Tools || - | - |
 | `max_completion_tokens` ||||
+| System Messages | - | - | - |

 **o1 series** models will only work with the `max_completion_tokens` parameter.