
Commit 42b3781

Merge pull request #259372 from mrbullwinkle/mrb_11_22_2023_FAQ_update
[Azure OpenAI] FAQ update
2 parents 5031f41 + a53489c commit 42b3781


articles/ai-services/openai/faq.yml

Lines changed: 16 additions & 2 deletions
@@ -7,7 +7,7 @@ metadata:
   manager: nitinme
   ms.service: azure-ai-openai
   ms.topic: faq
-  ms.date: 11/15/2023
+  ms.date: 11/22/2023
   ms.author: mbullwin
   author: mrbullwinkle
   title: Azure OpenAI Service frequently asked questions
@@ -25,7 +25,7 @@ sections:
   - question: |
       Does Azure OpenAI work with the latest Python library released by OpenAI (version>=1.0)?
     answer: |
-      Azure OpenAI is supported by the latest release of the [OpenAI Python library (version>=1.0)](https://pypi.org/project/openai/). However, it is important to note migration of your codebase using `openai migrate` is not supported and will not work with code that targets Azure OpenAI.
+      Azure OpenAI is supported by the latest release of the [OpenAI Python library (version>=1.0)](https://pypi.org/project/openai/). However, it's important to note migration of your codebase using `openai migrate` is not supported and will not work with code that targets Azure OpenAI.
   - question: |
       I can't find GPT-4 Turbo Preview?
     answer:
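
For context on the answer above: with openai>=1.0, one common pattern is to target Azure OpenAI through the library's `AzureOpenAI` client rather than running `openai migrate`. The snippet below is a minimal sketch of that pattern; the environment variable names, the `2023-05-15` API version, and the `gpt-35-turbo` deployment name are illustrative assumptions, not values taken from this change.

```python
# Minimal sketch: openai>=1.0 targeting Azure OpenAI (no `openai migrate` involved).
# Environment variable names, API version, and deployment name are assumptions.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com/
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",
)

response = client.chat.completions.create(
    model="gpt-35-turbo",  # the Azure deployment name, not the underlying model name
    messages=[{"role": "user", "content": "Does openai>=1.0 work with Azure OpenAI?"}],
)
print(response.choices[0].message.content)
```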
@@ -90,6 +90,20 @@ sections:
       If you wanted to help a GPT based model to accurately respond to the question "what model are you running?", you would need to provide that information to the model through techniques like [prompt engineering of the model's system message](/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions), [Retrieval Augmented Generation (RAG)](/azure/machine-learning/concept-retrieval-augmented-generation?view=azureml-api-2) which is the technique used by [Azure OpenAI on your data](/azure/ai-services/openai/concepts/use-your-data) where up-to-date information is injected to the system message at query time, or via [fine-tuning](/azure/ai-services/openai/how-to/fine-tuning?pivots=programming-language-studio) where you could fine-tune specific versions of the model to answer that question in a certain way based on model version.

       To learn more about how GPT models are trained and work we recommend watching [Andrej Karpathy's talk from Build 2023 on the state of GPT](https://www.youtube.com/watch?v=bZQun8Y4L2A).
+  - question: |
+      I asked the model when its knowledge cutoff is and it gave me a different answer than what is on the Azure OpenAI models page. Why does this happen?
+    answer:
+      This is expected behavior. The models aren't able to answer questions about themselves. If you want to know when the knowledge cutoff for the model's training data is, consult the [models page](./concepts/models.md).
+  - question: |
+      I asked the model a question about something that happened recently, before the knowledge cutoff, and it got the answer wrong. Why does this happen?
+    answer: |
+      This is expected behavior. First, there's no guarantee that every recent event was part of the model's training data. And even when information was part of the training data, without additional techniques like Retrieval Augmented Generation (RAG) to help ground the model's responses, there's always a chance of ungrounded responses occurring. Both Azure OpenAI's [use your data feature](./concepts/use-your-data.md) and [Bing Chat](https://www.microsoft.com/edge/features/bing-chat?form=MT00D8) use Azure OpenAI models combined with Retrieval Augmented Generation to help further ground model responses.
+
+      The frequency with which a given piece of information appeared in the training data can also affect the likelihood that the model responds in a certain way.
+
+      Asking the latest GPT-4 Turbo Preview model about something that changed more recently, such as "Who is the prime minister of New Zealand?", is likely to result in the fabricated response `Jacinda Ardern`. However, asking the model "When did `Jacinda Ardern` step down as prime minister?" tends to yield an accurate response, which demonstrates training data knowledge going to at least January of 2023.
+
+      So while it is possible to probe the model with questions to guess its training data knowledge cutoff, the [models page](./concepts/models.md) is the best place to check a model's knowledge cutoff.
   - question: |
       Where do I access pricing information for legacy models which are no longer available for new deployments?
     answer: |
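
The new answers above lean on grounding techniques such as injecting up-to-date information into the system message at query time. As a rough illustration of that idea (not of the Azure OpenAI on your data feature itself), the sketch below hand-rolls the simplest form of the pattern; `lookup_current_facts`, the deployment name, and the API version are assumed placeholders.

```python
# Minimal sketch of grounding via the system message (a bare-bones RAG-style pattern).
# `lookup_current_facts`, the deployment name, and the API version are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",
)

def lookup_current_facts(question: str) -> str:
    # Stand-in for a real retrieval step (search index, database, your own documents).
    return "Up-to-date facts relevant to the question, retrieved at query time."

question = "Who is the prime minister of New Zealand?"
messages = [
    {
        "role": "system",
        "content": "Answer using only the context below.\n"
                   f"Context: {lookup_current_facts(question)}",
    },
    {"role": "user", "content": question},
]

response = client.chat.completions.create(model="gpt-35-turbo", messages=messages)
print(response.choices[0].message.content)
```

Azure OpenAI on your data and Bing Chat automate this retrieval-and-injection step against real data sources; the sketch only shows the shape of the pattern the answers refer to.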
