articles/ai-foundry/openai/faq.yml
4 additions & 4 deletions
@@ -6,7 +6,7 @@ metadata:
   manager: nitinme
   ms.service: azure-ai-openai
   ms.topic: faq
-  ms.date: 03/27/2025
+  ms.date: 07/02/2025
   ms.author: mbullwin
   author: mrbullwinkle
   title: Azure OpenAI frequently asked questions
@@ -76,7 +76,7 @@ sections:

     If you wanted to help a GPT based model to accurately respond to the question "what model are you running?", you would need to provide that information to the model through techniques like [prompt engineering of the model's system message](/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions), [Retrieval Augmented Generation (RAG)](/azure/machine-learning/concept-retrieval-augmented-generation?view=azureml-api-2) which is the technique used by [Azure OpenAI on your data](/azure/ai-services/openai/concepts/use-your-data) where up-to-date information is injected to the system message at query time, or via [fine-tuning](/azure/ai-services/openai/how-to/fine-tuning?pivots=programming-language-studio) where you could fine-tune specific versions of the model to answer that question in a certain way based on model version.

-    To learn more about how GPT models are trained and work we recommend watching [Andrej Karpathy's talk from Build 2023 on the state of GPT](https://www.youtube.com/watch?v=bZQun8Y4L2A).
+    To learn more about how GPT models are trained and work we recommend watching [Andrej Apathy's talk from Build 2023 on the state of GPT](https://www.youtube.com/watch?v=bZQun8Y4L2A).

   - question: |
       How can I get the model to respond in a specific language?
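The answer touched by the hunk above describes injecting up-to-date deployment details into the system message so the model can answer "what model are you running?". The sketch below is not part of the diff; it is a minimal illustration of that approach in which the endpoint, API key, API version, and deployment name (`gpt-4o`) are placeholder assumptions, using the `openai` Python package's `AzureOpenAI` client.

```python
# Illustrative sketch only (not part of the diff): injecting deployment details
# into the system message so the model can answer "what model are you running?".
# Endpoint, API key, API version, and deployment name are placeholder assumptions.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # placeholder; use the version your resource supports
)

# Facts the model cannot know on its own are supplied at query time via the system message.
system_message = (
    "You are a helpful assistant running on the 'gpt-4o' deployment, "
    "model version 2024-08-06. If asked what model you are running, "
    "answer with exactly that information."
)

response = client.chat.completions.create(
    model="gpt-4o",  # your deployment name, which may differ from the model name
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": "What model are you running?"},
    ],
)
print(response.choices[0].message.content)
```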
@@ -112,7 +112,7 @@ sections:
   - question: |
       How do I fix Server error (500): Unexpected special token
     answer: |
-      This is a a known issue. You can minimize the occurrence of these errors by reducing the temperature of your prompts to less than 1 and ensuring you're using a client with retry logic. Reattempting the request often results in a successful response.
+      This is a known issue. You can minimize the occurrence of these errors by reducing the temperature of your prompts to less than 1 and ensuring you're using a client with retry logic. Reattempting the request often results in a successful response.

       If reducing temperature to less than 1 does not reduce the frequency of this error an alternative workaround is set presence/frequency penalties and logit biases to their default values. In some cases, it may help to set `top_p` to a non-default, lower value to encourage the model to avoid sampling tokens with lower probability tokens.

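The answer changed in this hunk recommends a temperature below 1, penalties at their defaults, optionally a lower `top_p`, and a client with retry logic. The sketch below is not part of the diff; it is a hedged illustration of those settings in which the endpoint, API key, API version, deployment name, and parameter values are placeholder assumptions, using the `openai` Python package.

```python
# Illustrative sketch only (not part of the diff): mitigating the intermittent
# "Server error (500): Unexpected special token" response. Endpoint, key,
# API version, and deployment name are placeholder assumptions.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",   # placeholder
    max_retries=3,              # client-side retries; reattempting often succeeds
)

response = client.chat.completions.create(
    model="gpt-4o",             # your deployment name
    messages=[{"role": "user", "content": "Summarize retry best practices."}],
    temperature=0.7,            # keep temperature below 1
    presence_penalty=0,         # leave penalties at their default values
    frequency_penalty=0,
    top_p=0.9,                  # optionally lower top_p to avoid low-probability tokens
)
print(response.choices[0].message.content)
```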
@@ -269,7 +269,7 @@ sections:
     answer: |
       No. Currently Assistants supports only local files uploaded to the Assistants-managed storage. You cannot use your private storage account with Assistants.
   - question: |
-      Does Assistants support customer-managed key encryption (CMK)?
+      Does the Assistants feature support customer-managed key encryption (CMK)?
     answer: |
       Today we support CMK for Threads and Files in Assistants. See the [What's new page](./whats-new.md#customer-managed-key-cmk-support-for-assistants) for available regions for this feature.