articles/ai-services/openai/concepts/assistants.md (1 addition, 1 deletion)
@@ -33,7 +33,7 @@ Assistants API supports persistent automatically managed threads. This means tha
> [!TIP]
> There is no additional [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) or [quota](../quotas-limits.md) for using Assistants unless you use the [code interpreter](../how-to/code-interpreter.md) or [file search](../how-to/file-search.md) tools.
- Assistants API is built on the same capabilities that power OpenAI’s GPT product. Some possible use cases range from AI-powered product recommender, sales analyst app, coding assistant, employee Q&A chatbot, and more. Start building on the no-code Assistants playground on the Azure OpenAI Studio, AI Studio, or start building with the API.
+ Assistants API is built on the same capabilities that power OpenAI’s GPT product. Some possible use cases range from AI-powered product recommender, sales analyst app, coding assistant, employee Q&A chatbot, and more. Start building on the no-code Assistants playground on the Azure AI Studio or start building with the API.
> [!IMPORTANT]
> Retrieving untrusted data using Function calling, Code Interpreter or File Search with file input, and Assistant Threads functionalities could compromise the security of your Assistant, or the application that uses the Assistant. Learn about mitigation approaches [here](https://aka.ms/oai/assistant-rai).
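The Assistants API described in this file can be sketched in a few lines. Everything below — the assistant name, instructions, deployment name, and API version — is an illustrative assumption, not taken from this diff; with Azure OpenAI, `model` takes a deployment name rather than a base model ID:

```python
import json

# Illustrative assistant definition; tool use (code interpreter, file search)
# is billed separately per the tip above.
assistant_spec = {
    "name": "data-helper",
    "instructions": "You are a data analysis assistant.",
    "model": "gpt-4o-deployment",  # placeholder Azure deployment name
    "tools": [{"type": "code_interpreter"}],
}

# With credentials configured, this spec would be passed to the SDK, e.g.:
#   from openai import AzureOpenAI
#   client = AzureOpenAI(azure_endpoint="...", api_key="...", api_version="2024-05-01-preview")
#   assistant = client.beta.assistants.create(**assistant_spec)

print(json.dumps(assistant_spec, indent=2))
```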
articles/ai-services/openai/concepts/content-filter.md (2 additions, 2 deletions)
@@ -842,15 +842,15 @@ Customers must understand that while the feature improves latency, it's a trade-
**Customer Copyright Commitment**: Content that is retroactively flagged as protected material may not be eligible for Customer Copyright Commitment coverage.
- To enable Asynchronous Filter in Azure OpenAI Studio, follow the [Content filter how-to guide](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select **Asynchronous Filter** in the Streaming section.
+ To enable Asynchronous Filter in Azure AI Studio, follow the [Content filter how-to guide](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select **Asynchronous Filter** in the Streaming section.
- | How to enable | Enabled by default, no action needed |Customers approved for modified content filtering can configure it directly in Azure OpenAI Studio (as part of a content filtering configuration, applied at the deployment level) |
+ | How to enable | Enabled by default, no action needed |Customers approved for modified content filtering can configure it directly in Azure AI Studio (as part of a content filtering configuration, applied at the deployment level) |
|Modality and availability |Text; all GPT models |Text; all GPT models |
|Streaming experience |Content is buffered and returned in chunks |Zero latency (no buffering, filters run asynchronously) |
|Content filtering signal |Immediate filtering signal |Delayed filtering signal (in up to ~1,000-character increments) |
articles/ai-services/openai/concepts/gpt-with-vision.md (3 additions, 3 deletions)
@@ -77,13 +77,13 @@ This section describes the limitations of GPT-4 Turbo with Vision.
-**Maximum input image size**: The maximum size for input images is restricted to 20 MB.
-**Low resolution accuracy**: When images are analyzed using the "low resolution" setting, it allows for faster responses and uses fewer input tokens for certain use cases. However, this could impact the accuracy of object and text recognition within the image.
- -**Image chat restriction**: When you upload images in Azure OpenAI Studio or the API, there is a limit of 10 images per chat call.
+ -**Image chat restriction**: When you upload images in Azure AI Studio or the API, there is a limit of 10 images per chat call.
### Video support
-**Low resolution**: Video frames are analyzed using GPT-4 Turbo with Vision's "low resolution" setting, which may affect the accuracy of small object and text recognition in the video.
- -**Video file limits**: Both MP4 and MOV file types are supported. In Azure OpenAI Studio, videos must be less than 3 minutes long. When you use the API there is no such limitation.
- -**Prompt limits**: Video prompts only contain one video and no images. In Azure OpenAI Studio, you can clear the session to try another video or images.
+ -**Video file limits**: Both MP4 and MOV file types are supported. In Azure AI Studio, videos must be less than 3 minutes long. When you use the API there is no such limitation.
+ -**Prompt limits**: Video prompts only contain one video and no images. In Azure AI Studio, you can clear the session to try another video or images.
-**Limited frame selection**: The service selects 20 frames from the entire video, which might not capture all the critical moments or details. Frame selection can be approximately evenly spread through the video or focused by a specific video retrieval query, depending on the prompt.
-**Language support**: The service primarily supports English for grounding with transcripts. Transcripts don't provide accurate information on lyrics in songs.
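The image chat restriction above (10 images per call, with an optional "low resolution" detail setting) can be sketched as a request body. The URLs and field values are placeholders for illustration; the multi-part `content` shape follows the chat completions vision format:

```python
# Sketch of a vision chat request body that respects the 10-image limit.
MAX_IMAGES_PER_CALL = 10  # per the image chat restriction above

image_urls = [f"https://example.com/frame-{i}.png" for i in range(3)]  # placeholders
if len(image_urls) > MAX_IMAGES_PER_CALL:
    raise ValueError("GPT-4 Turbo with Vision accepts at most 10 images per chat call")

body = {
    "messages": [{
        "role": "user",
        "content": [{"type": "text", "text": "Describe these frames."}]
                   + [{"type": "image_url", "image_url": {"url": u, "detail": "low"}}
                      for u in image_urls],
    }],
    "max_tokens": 300,
}
print(len(body["messages"][0]["content"]))  # one text part plus one part per image
```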
articles/ai-services/openai/concepts/models.md (1 addition, 1 deletion)
@@ -511,7 +511,7 @@ These models can only be used with Embedding API requests.
## Assistants (Preview)
- For Assistants you need a combination of a supported model, and a supported region. Certain tools and capabilities require the latest models. The following models are available in the Assistants API, SDK, Azure AI Studio and Azure OpenAI Studio. The following table is for pay-as-you-go. For information on Provisioned Throughput Unit (PTU) availability, see [provisioned throughput](./provisioned-throughput.md). The listed models and regions can be used with both Assistants v1 and v2. You can use [global standard models](#global-standard-model-availability) if they are supported in the regions listed below.
+ For Assistants you need a combination of a supported model, and a supported region. Certain tools and capabilities require the latest models. The following models are available in the Assistants API, SDK, and Azure AI Studio. The following table is for pay-as-you-go. For information on Provisioned Throughput Unit (PTU) availability, see [provisioned throughput](./provisioned-throughput.md). The listed models and regions can be used with both Assistants v1 and v2. You can use [global standard models](#global-standard-model-availability) if they are supported in the regions listed below.
articles/ai-services/openai/faq.yml (10 additions, 10 deletions)
@@ -65,29 +65,29 @@ sections:
answer: |
Check out our [introduction to prompt engineering](./concepts/prompt-engineering.md). While these models are powerful, their behavior is also very sensitive to the prompts they receive from the user. This makes prompt construction an important skill to develop. After you've completed the introduction, check out our article on [system messages](./concepts/advanced-prompt-engineering.md).
- question: |
- My guest account has been given access to an Azure OpenAI resource, but I'm unable to access that resource in the Azure OpenAI Studio. How do I enable access?
+ My guest account has been given access to an Azure OpenAI resource, but I'm unable to access that resource in the Azure AI Studio. How do I enable access?
answer: |
- This is expected behavior when using the default sign-in experience for the [Azure OpenAI Studio](https://oai.azure.com).
+ This is expected behavior when using the default sign-in experience for the [Azure AI Studio](https://ai.azure.com).
- To access Azure OpenAI Studio from a guest account that has been granted access to an Azure OpenAI resource:
+ To access Azure AI Studio from a guest account that has been granted access to an Azure OpenAI resource:
- 1. Open a private browser session and then navigate to [https://oai.azure.com](https://oai.azure.com).
+ 1. Open a private browser session and then navigate to [https://ai.azure.com](https://ai.azure.com).
2. Rather than immediately entering your guest account credentials instead select `Sign-in options`
3. Now select **Sign in to an organization**
4. Enter the domain name of the organization that granted your guest account access to the Azure OpenAI resource.
5. Now sign-in with your guest account credentials.
- You should now be able to access the resource via the Azure OpenAI Studio.
+ You should now be able to access the resource via the Azure AI Studio.
- Alternatively if you're signed into the [Azure portal](https://portal.azure.com) from the Azure OpenAI resource's Overview pane you can select **Go to Azure OpenAI Studio** to automatically sign in with the appropriate organizational context.
+ Alternatively if you're signed into the [Azure portal](https://portal.azure.com) from the Azure OpenAI resource's Overview pane you can select **Go to Azure AI Studio** to automatically sign in with the appropriate organizational context.
- question: |
When I ask GPT-4 which model it's running, it tells me it's running GPT-3. Why does this happen?
answer: |
Azure OpenAI models (including GPT-4) being unable to correctly identify what model is running is expected behavior.
**Why does this happen?**
- Ultimately, the model is performing next [token](/semantic-kernel/prompt-engineering/tokens) prediction in response to your question. The model doesn't have any native ability to query what model version is currently being run to answer your question. To answer this question, you can always go to **Azure OpenAI Studio** > **Management** > **Deployments** > and consult the model name column to confirm what model is currently associated with a given deployment name.
+ Ultimately, the model is performing next [token](/semantic-kernel/prompt-engineering/tokens) prediction in response to your question. The model doesn't have any native ability to query what model version is currently being run to answer your question. To answer this question, you can always go to **Azure AI Studio** > **Management** > **Deployments** > and consult the model name column to confirm what model is currently associated with a given deployment name.
The questions, "What model are you running?" or "What is the latest model from OpenAI?" produce similar quality results to asking the model what the weather will be today. It might return the correct result, but purely by chance. On its own, the model has no real-world information other than what was part of its training/training data. In the case of GPT-4, as of August 2023 the underlying training data goes only up to September 2021. GPT-4 wasn't released until March 2023, so barring OpenAI releasing a new version with updated training data, or a new version that is fine-tuned to answer those specific questions, it's expected behavior for GPT-4 to respond that GPT-3 is the latest model release from OpenAI.
@@ -163,7 +163,7 @@ sections:
answer: |
Consult the Azure OpenAI [model availability guide](./concepts/models.md#model-summary-table-and-region-availability) for region availability.
- question: |
- How do I enable fine-tuning? Create a custom model is greyed out in Azure OpenAI Studio.
+ How do I enable fine-tuning? Create a custom model is greyed out in Azure AI Studio.
answer: |
In order to successfully access fine-tuning, you need Cognitive Services OpenAI Contributor assigned. Even someone with high-level Service Administrator permissions would still need this account explicitly set in order to access fine-tuning. For more information, please review the [role-based access control guidance](/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-contributor).
- question: |
@@ -298,7 +298,7 @@ sections:
- question: |
Will my web app be overwritten when I deploy the app again from the Azure AI Studio?
answer:
- Your app code won't be overwritten when you update your app. The app will be updated to use the Azure OpenAI resource, Azure AI Search index (if you're using Azure OpenAI on your data), and model settings selected in the Azure OpenAI Studio without any change to the appearance or functionality.
+ Your app code won't be overwritten when you update your app. The app will be updated to use the Azure OpenAI resource, Azure AI Search index (if you're using Azure OpenAI on your data), and model settings selected in the Azure AI Studio without any change to the appearance or functionality.
- name: Using your data
questions:
- question: |
@@ -346,7 +346,7 @@ sections:
answer:
You must send queries in the same language of your data. Your data can be in any of the languages supported by [Azure AI Search](/azure/search/search-language-support).
- question: |
- If Semantic Search is enabled for my Azure AI Search resource, will it be automatically applied to Azure OpenAI on your data in the Azure OpenAI Studio?
+ If Semantic Search is enabled for my Azure AI Search resource, will it be automatically applied to Azure OpenAI on your data in the Azure AI Studio?
answer:
When you select "Azure AI Search" as the data source, you can choose to apply semantic search.
If you select "Azure Blob Container" or "Upload files" as the data source, you can create the index as usual. Afterwards you would reingest the data using the "Azure AI Search" option to select the same index and apply Semantic Search. You will then be ready to chat on your data with semantic search applied.
articles/ai-services/openai/how-to/batch.md (1 addition, 1 deletion)
@@ -86,7 +86,7 @@ The following aren't currently supported:
In the Studio UI the deployment type will appear as `Global-Batch`.
- :::image type="content" source="../media/how-to/global-batch/global-batch.png" alt-text="Screenshot that shows the model deployment dialog in Azure OpenAI Studio with Global-Batch deployment type highlighted." lightbox="../media/how-to/global-batch/global-batch.png":::
+ :::image type="content" source="../media/how-to/global-batch/global-batch.png" alt-text="Screenshot that shows the model deployment dialog in Azure AI Studio with Global-Batch deployment type highlighted." lightbox="../media/how-to/global-batch/global-batch.png":::
> [!TIP]
> We recommend enabling **dynamic quota** for all global batch model deployments to help avoid job failures due to insufficient enqueued token quota. Dynamic quota allows your deployment to opportunistically take advantage of more quota when extra capacity is available. When dynamic quota is set to off, your deployment will only be able to process requests up to the enqueued token limit that was defined when you created the deployment.
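A `Global-Batch` deployment consumes a JSONL input file with one request per line. As a minimal sketch (the deployment name and questions are placeholders; the `custom_id`/`method`/`url`/`body` line shape follows the batch input format):

```python
import json

# Each JSONL line is one request; custom_id ties each result back to its request.
tasks = [
    {
        "custom_id": f"task-{i}",
        "method": "POST",
        "url": "/chat/completions",
        "body": {
            "model": "gpt-4o-global-batch",  # placeholder Global-Batch deployment name
            "messages": [{"role": "user", "content": question}],
        },
    }
    for i, question in enumerate(["What is 2+2?", "Name a prime number."])
]

with open("batch_input.jsonl", "w") as f:
    for task in tasks:
        f.write(json.dumps(task) + "\n")

print(len(tasks))  # number of requests in the batch file
```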
articles/ai-services/openai/how-to/completions.md (1 addition, 1 deletion)
@@ -20,7 +20,7 @@ Azure OpenAI Service provides a **completion endpoint** that can be used for a w
> [!IMPORTANT]
> Unless you have a specific use case that requires the completions endpoint, we recommend instead using the [chat completions endpoint](./chatgpt.md) which allows you to take advantage of the latest models like GPT-4o, GPT-4o mini, and GPT-4 Turbo.
- The best way to start exploring completions is through the playground in [Azure OpenAI Studio](https://oai.azure.com). It's a simple text box where you enter a prompt to generate a completion. You can start with a simple prompt like this one:
+ The best way to start exploring completions is through the playground in [Azure AI Studio](https://ai.azure.com). It's a simple text box where you enter a prompt to generate a completion. You can start with a simple prompt like this one:
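The same prompt you would paste into the playground maps to a small request body on the completions endpoint. A sketch, with the resource, deployment, and API version left as placeholders:

```python
# Minimal completions request body; the prompt is the kind of starter prompt
# the completions article suggests.
payload = {
    "prompt": "Write a tagline for an ice cream shop.",
    "max_tokens": 50,
    "temperature": 0.7,
}

# This body would be POSTed to the deployment's completions endpoint, e.g.:
#   https://<resource>.openai.azure.com/openai/deployments/<deployment>/completions?api-version=<version>
print(sorted(payload))
```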
articles/ai-services/openai/how-to/deployment-types.md (1 addition, 1 deletion)
@@ -119,7 +119,7 @@ You can use the following policy to disable access to Azure OpenAI global standa
## Deploy models
- :::image type="content" source="../media/deployment-types/deploy-models-new.png" alt-text="Screenshot that shows the model deployment dialog in Azure OpenAI Studio with three deployment types highlighted." lightbox="../media/deployment-types/deploy-models-new.png":::
+ :::image type="content" source="../media/deployment-types/deploy-models-new.png" alt-text="Screenshot that shows the model deployment dialog in Azure AI Studio with three deployment types highlighted." lightbox="../media/deployment-types/deploy-models-new.png":::
To learn about creating resources and deploying models refer to the [resource creation guide](./create-resource.md).
articles/ai-services/openai/how-to/quota.md (2 additions, 2 deletions)
@@ -44,11 +44,11 @@ The flexibility to distribute TPM globally within a subscription and region has
When you create a model deployment, you have the option to assign Tokens-Per-Minute (TPM) to that deployment. TPM can be modified in increments of 1,000, and will map to the TPM and RPM rate limits enforced on your deployment, as discussed above.
- To create a new deployment from within the Azure AI Studio under **Shared Resources**select **Deployments** > **Deploy model** > **Deploy base model** > **Select Model** > **Confirm**.
+ To create a new deployment from within the Azure AI Studio, select **Deployments** > **Deploy model** > **Deploy base model** > **Select Model** > **Confirm**.
:::image type="content" source="../media/quota/deployment-new.png" alt-text="Screenshot of the deployment UI of Azure AI Studio" lightbox="../media/quota/deployment-new.png":::
- Post deployment you can adjust your TPM allocation by selecting **Edit** under **Shared resources** > **Deployments** in Azure OpenAI Studio. You can also modify this selection within the new quota management experience under **Management** > **Quotas**.
+ Post deployment you can adjust your TPM allocation by selecting and editing your model from the **Deployments** page in Azure AI Studio. You can also modify this setting from the **Management** > **Model quota** page.
> [!IMPORTANT]
> Quotas and limits are subject to change, for the most up-date-information consult our [quotas and limits article](../quotas-limits.md).
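The TPM allocation described above also determines the request rate. A sketch of that mapping, assuming the commonly documented ratio of 6 RPM per 1,000 TPM — confirm the current ratio against the quotas and limits article before relying on it:

```python
def rpm_for_tpm(tpm: int, rpm_per_1k_tpm: int = 6) -> int:
    """Requests-per-minute implied by a TPM allocation.

    Assumes the documented 6-RPM-per-1,000-TPM ratio; TPM is assigned
    in increments of 1,000.
    """
    if tpm % 1_000:
        raise ValueError("TPM is assigned in increments of 1,000")
    return tpm // 1_000 * rpm_per_1k_tpm

print(rpm_for_tpm(30_000))  # 180
```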
articles/ai-services/openai/how-to/work-with-code.md (1 addition, 1 deletion)
@@ -28,7 +28,7 @@ You can use Codex for a variety of tasks including:
## How to use completions models with code
- Here are a few examples of using Codex that can be tested in [Azure OpenAI Studio's](https://oai.azure.com) playground with a deployment of a Codex series model, such as `code-davinci-002`.
+ Here are a few examples of using Codex that can be tested in the [Azure AI Studio](https://ai.azure.com) playground with a deployment of a Codex series model, such as `code-davinci-002`.
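The Codex examples referenced above are comment-to-code prompts. A sketch of one such prompt — the table schema is made up for illustration, not taken from this diff:

```python
# A comment-to-code prompt of the kind used with Codex series models.
prompt = '''"""
Table customers, columns = [CustomerId, FirstName, LastName]
Create a SQL query that returns all customers ordered by LastName
"""'''

# With a code-davinci-002 deployment, this string would be sent to the
# completions endpoint; a stop sequence such as '"""' keeps the model from
# generating past the answer.
stop_sequences = ['"""']
print(prompt.count("customers"))
```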