
Commit ae651f1

Merge pull request #7391 from mrbullwinkle/mrb_09_30_2025_freshness_006

[Azure OpenAI] Freshness 006

2 parents: d88424e + 704cfc8

File tree

1 file changed: +5 −30 lines

articles/ai-foundry/openai/faq.yml: 5 additions, 30 deletions
@@ -6,7 +6,7 @@ metadata:
   manager: nitinme
   ms.service: azure-ai-openai
   ms.topic: faq
-  ms.date: 07/02/2025
+  ms.date: 09/30/2025
   ms.author: mbullwin
   author: mrbullwinkle
 title: Azure OpenAI frequently asked questions
@@ -28,14 +28,6 @@ sections:
           Does Azure OpenAI work with the latest Python library released by OpenAI (version>=1.0)?
         answer: |
           Azure OpenAI is supported by the latest release of the [OpenAI Python library (version>=1.0)](https://pypi.org/project/openai/). However, it's important to note migration of your codebase using `openai migrate` is not supported and will not work with code that targets Azure OpenAI.
-      - question: |
-          I can't find GPT-4 Turbo Preview, where is it?
-        answer:
-          GPT-4 Turbo Preview is the `gpt-4` (1106-preview) model. To deploy this model, under **Deployments** select model **gpt-4**. For **Model version** select **1106-preview**. To check which regions this model is available, refer to the [models page](./concepts/models.md).
-      - question: |
-          Does Azure OpenAI support GPT-4?
-        answer: |
-          Azure OpenAI supports the latest GPT-4 models. It supports both GPT-4 and GPT-4-32K.
       - question: |
           How do the capabilities of Azure OpenAI compare to OpenAI?
         answer: |
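As the retained answer above notes, the v1+ OpenAI Python library works with Azure OpenAI by targeting a deployment-scoped endpoint rather than a model name alone. A minimal sketch of the URL shape such requests resolve to; the endpoint, deployment name, and `api-version` values here are hypothetical placeholders, not real resources:

```python
# Hypothetical placeholder values; substitute your own resource details.
ENDPOINT = "https://example-resource.openai.azure.com"
DEPLOYMENT = "my-gpt-deployment"  # your deployment name, not the model name
API_VERSION = "2024-02-01"        # example api-version; check current docs


def chat_completions_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Build the deployment-scoped Azure OpenAI chat completions URL."""
    return (
        f"{endpoint}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )


print(chat_completions_url(ENDPOINT, DEPLOYMENT, API_VERSION))
```

In practice the v1 library's `AzureOpenAI` client assembles this path for you from `azure_endpoint`, the deployment name passed as `model`, and `api_version`; the sketch only shows why `openai migrate` output, which targets OpenAI's own model-scoped endpoints, doesn't line up with Azure OpenAI.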
@@ -46,10 +38,6 @@ sections:
           Does Azure OpenAI support VNETs and Private Endpoints?
         answer: |
           Yes, Azure OpenAI supports VNETs and Private Endpoints. To learn more, consult the [virtual networking guidance](../../ai-services/cognitive-services-virtual-networks.md?context=/azure/ai-foundry/openai/context/context).
-      - question: |
-          Do the GPT-4 models currently support image input?
-        answer: |
-          No, GPT-4 is designed by OpenAI to be multimodal, but currently only text input and output are supported.
       - question: |
           How do I apply for new use cases?
         answer: |
@@ -59,18 +47,13 @@ sections:
         answer: |
           This error typically occurs when you try to send a batch of text to embed in a single API request as an array. Currently Azure OpenAI only supports arrays of embeddings with multiple inputs for the `text-embedding-ada-002` Version 2 model. This model version supports an array consisting of up to 16 inputs per API request. The array can be up to 8,191 tokens in length when using the text-embedding-ada-002 (Version 2) model.
       - question: |
-          Where can I read about better ways to use Azure OpenAI to get the responses I want from the service?
+          When I ask the which model it's running, it tells me it's running a different version. Why does this happen?
         answer: |
-          Check out our [introduction to prompt engineering](./concepts/prompt-engineering.md). While these models are powerful, their behavior is also very sensitive to the prompts they receive from the user. This makes prompt construction an important skill to develop. After you've completed the introduction, check out our article on [system messages](./concepts/advanced-prompt-engineering.md).
-
-      - question: |
-          When I ask GPT-4 which model it's running, it tells me it's running GPT-3. Why does this happen?
-        answer: |
-          Azure OpenAI models (including GPT-4) being unable to correctly identify what model is running is expected behavior.
+          Azure OpenAI models being unable to correctly identify what model is running is expected behavior.

           **Why does this happen?**

-          Ultimately, the model is performing next [token](/semantic-kernel/prompt-engineering/tokens) prediction in response to your question. The model doesn't have any native ability to query what model version is currently being run to answer your question. To answer this question, you can always go to **Azure AI Foundry** > **Management** > **Deployments** > and consult the model name column to confirm what model is currently associated with a given deployment name.
+          Ultimately, the model is performing next token prediction in response to your question. The model doesn't have any native ability to query what model version is currently being run to answer your question. To answer this question, you can always go to **Azure AI Foundry** > **Deployments** or **Models + endpoints** > and consult the model name column to confirm what model is currently associated with a given deployment name.

           The questions, "What model are you running?" or "What is the latest model from OpenAI?" produce similar quality results to asking the model what the weather will be today. It might return the correct result, but purely by chance. On its own, the model has no real-world information other than what was part of its training/training data. In the case of GPT-4, as of August 2023 the underlying training data goes only up to September 2021. GPT-4 wasn't released until March 2023, so barring OpenAI releasing a new version with updated training data, or a new version that is fine-tuned to answer those specific questions, it's expected behavior for GPT-4 to respond that GPT-3 is the latest model release from OpenAI.
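The embeddings error discussed above stems from the per-request array limit: `text-embedding-ada-002` (Version 2) accepts at most 16 inputs per API request. That limit can be respected client-side by chunking the input list before calling the API; a minimal sketch, where the helper name and the default limit are illustrative:

```python
def batch_inputs(texts: list[str], max_per_request: int = 16) -> list[list[str]]:
    """Split inputs into chunks no larger than the per-request array limit.

    Per the FAQ, text-embedding-ada-002 (Version 2) accepts arrays of up to
    16 inputs per API request; each request's array is also capped at 8,191
    tokens, which this sketch does not check.
    """
    return [
        texts[i : i + max_per_request]
        for i in range(0, len(texts), max_per_request)
    ]


chunks = batch_inputs([f"document {n}" for n in range(40)])
print([len(c) for c in chunks])  # → [16, 16, 8]
```

Each chunk would then be sent as the `input` array of a separate embeddings request.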
@@ -98,7 +81,7 @@ sections:

           The frequency that a given piece of information appeared in the training data can also impact the likelihood that the model will respond in a certain way.

-          Asking the latest GPT-4 Turbo Preview model about something that changed more recently like "Who is the prime minister of New Zealand?", is likely to result in the fabricated response `Jacinda Ardern`. However, asking the model "When did `Jacinda Ardern` step down as prime minister?" Tends to yield an accurate response which demonstrates training data knowledge going to at least January of 2023.
+          Asking the model about something that changed more recently like "Who is the prime minister of New Zealand?", is likely to result in the fabricated response `Jacinda Ardern`. However, asking the model "When did `Jacinda Ardern` step down as prime minister?" Tends to yield an accurate response which demonstrates training data knowledge going to at least January of 2023.

           So while it is possible to probe the model with questions to guess its training data knowledge cutoff, the [model's page](./concepts/models.md) is the best place to check a model's knowledge cutoff.

       - question: |
@@ -207,14 +190,6 @@ sections:
           No, quota Tokens-Per-Minute (TPM) allocation isn't related to the max input token limit of a model. Model input token limits are defined in the [models table](./concepts/models.md) and aren't impacted by changes made to TPM.
   - name: GPT-4 Turbo with Vision
     questions:
-      - question: |
-          Can I fine-tune the image capabilities in GPT-4?
-        answer: |
-          No, we don't support fine-tuning the image capabilities of GPT-4 at this time.
-      - question: |
-          Can I use GPT-4 to generate images?
-        answer: |
-          No, you can use `dall-e-3` to generate images and `gpt-4-vision-preview` to understand images.
       - question: |
           What type of files can I upload?
         answer: |
