
Commit 2ceb906

Merge pull request #1052 from eric-urban/eur/ai-studio-scrub
AI services via studio updates
2 parents: 20b5e67 + 7fa2ab4

33 files changed: +64 -65 lines

articles/ai-services/openai/concepts/assistants.md

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ Assistants API supports persistent automatically managed threads. This means tha
 > [!TIP]
 > There is no additional [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) or [quota](../quotas-limits.md) for using Assistants unless you use the [code interpreter](../how-to/code-interpreter.md) or [file search](../how-to/file-search.md) tools.
 
-Assistants API is built on the same capabilities that power OpenAI’s GPT product. Some possible use cases range from AI-powered product recommender, sales analyst app, coding assistant, employee Q&A chatbot, and more. Start building on the no-code Assistants playground on the Azure OpenAI Studio, AI Studio, or start building with the API.
+Assistants API is built on the same capabilities that power OpenAI’s GPT product. Some possible use cases range from AI-powered product recommender, sales analyst app, coding assistant, employee Q&A chatbot, and more. Start building on the no-code Assistants playground on the Azure AI Studio or start building with the API.
 
 > [!IMPORTANT]
 > Retrieving untrusted data using Function calling, Code Interpreter or File Search with file input, and Assistant Threads functionalities could compromise the security of your Assistant, or the application that uses the Assistant. Learn about mitigation approaches [here](https://aka.ms/oai/assistant-rai).
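As a cross-check on the Assistants hunk above, the playground scenario it describes maps to a single REST call. A minimal sketch of the request shape (stdlib only; the resource name, API version, and assistant fields are illustrative placeholders, not values from this commit):

```python
import json

# Placeholder resource and API version -- substitute your own.
ENDPOINT = "https://YOUR-RESOURCE.openai.azure.com"
API_VERSION = "2024-05-01-preview"

def assistant_create_request(deployment: str) -> tuple[str, str]:
    """Build the URL and JSON body for creating an assistant that uses the
    code interpreter tool (one of the two tools that incur extra cost)."""
    url = f"{ENDPOINT}/openai/assistants?api-version={API_VERSION}"
    body = json.dumps({
        "model": deployment,  # your Azure OpenAI deployment name
        "name": "Product recommender",
        "instructions": "Recommend products based on the user's needs.",
        "tools": [{"type": "code_interpreter"}],
    })
    return url, body

url, body = assistant_create_request("gpt-4o")
# POST `body` to `url` with an api-key header to create the assistant.
```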

articles/ai-services/openai/concepts/content-filter.md

Lines changed: 2 additions & 2 deletions
@@ -842,15 +842,15 @@ Customers must understand that while the feature improves latency, it's a trade-
 
 **Customer Copyright Commitment**: Content that is retroactively flagged as protected material may not be eligible for Customer Copyright Commitment coverage.
 
-To enable Asynchronous Filter in Azure OpenAI Studio, follow the [Content filter how-to guide](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select **Asynchronous Filter** in the Streaming section.
+To enable Asynchronous Filter in Azure AI Studio, follow the [Content filter how-to guide](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select **Asynchronous Filter** in the Streaming section.
 
 ### Comparison of content filtering modes
 
 | Compare | Streaming - Default | Streaming - Asynchronous Filter |
 |---|---|---|
 |Status |GA |Public Preview |
 | Eligibility |All customers |Customers approved for modified content filtering |
-| How to enable | Enabled by default, no action needed |Customers approved for modified content filtering can configure it directly in Azure OpenAI Studio (as part of a content filtering configuration, applied at the deployment level) |
+| How to enable | Enabled by default, no action needed |Customers approved for modified content filtering can configure it directly in Azure AI Studio (as part of a content filtering configuration, applied at the deployment level) |
 |Modality and availability |Text; all GPT models |Text; all GPT models |
 |Streaming experience |Content is buffered and returned in chunks |Zero latency (no buffering, filters run asynchronously) |
 |Content filtering signal |Immediate filtering signal |Delayed filtering signal (in up to ~1,000-character increments) |

articles/ai-services/openai/concepts/gpt-with-vision.md

Lines changed: 3 additions & 3 deletions
@@ -77,13 +77,13 @@ This section describes the limitations of GPT-4 Turbo with Vision.
 
 - **Maximum input image size**: The maximum size for input images is restricted to 20 MB.
 - **Low resolution accuracy**: When images are analyzed using the "low resolution" setting, it allows for faster responses and uses fewer input tokens for certain use cases. However, this could impact the accuracy of object and text recognition within the image.
-- **Image chat restriction**: When you upload images in Azure OpenAI Studio or the API, there is a limit of 10 images per chat call.
+- **Image chat restriction**: When you upload images in Azure AI Studio or the API, there is a limit of 10 images per chat call.
 
 ### Video support
 
 - **Low resolution**: Video frames are analyzed using GPT-4 Turbo with Vision's "low resolution" setting, which may affect the accuracy of small object and text recognition in the video.
-- **Video file limits**: Both MP4 and MOV file types are supported. In Azure OpenAI Studio, videos must be less than 3 minutes long. When you use the API there is no such limitation.
-- **Prompt limits**: Video prompts only contain one video and no images. In Azure OpenAI Studio, you can clear the session to try another video or images.
+- **Video file limits**: Both MP4 and MOV file types are supported. In Azure AI Studio, videos must be less than 3 minutes long. When you use the API there is no such limitation.
+- **Prompt limits**: Video prompts only contain one video and no images. In Azure AI Studio, you can clear the session to try another video or images.
 - **Limited frame selection**: The service selects 20 frames from the entire video, which might not capture all the critical moments or details. Frame selection can be approximately evenly spread through the video or focused by a specific video retrieval query, depending on the prompt.
 - **Language support**: The service primarily supports English for grounding with transcripts. Transcripts don't provide accurate information on lyrics in songs.
 

articles/ai-services/openai/concepts/models.md

Lines changed: 1 addition & 1 deletion
@@ -511,7 +511,7 @@ These models can only be used with Embedding API requests.
 
 ## Assistants (Preview)
 
-For Assistants you need a combination of a supported model, and a supported region. Certain tools and capabilities require the latest models. The following models are available in the Assistants API, SDK, Azure AI Studio and Azure OpenAI Studio. The following table is for pay-as-you-go. For information on Provisioned Throughput Unit (PTU) availability, see [provisioned throughput](./provisioned-throughput.md). The listed models and regions can be used with both Assistants v1 and v2. You can use [global standard models](#global-standard-model-availability) if they are supported in the regions listed below.
+For Assistants you need a combination of a supported model, and a supported region. Certain tools and capabilities require the latest models. The following models are available in the Assistants API, SDK, and Azure AI Studio. The following table is for pay-as-you-go. For information on Provisioned Throughput Unit (PTU) availability, see [provisioned throughput](./provisioned-throughput.md). The listed models and regions can be used with both Assistants v1 and v2. You can use [global standard models](#global-standard-model-availability) if they are supported in the regions listed below.
 
 | Region | `gpt-35-turbo (0613)` | `gpt-35-turbo (1106)`| `fine tuned gpt-3.5-turbo-0125` | `gpt-4 (0613)` | `gpt-4 (1106)` | `gpt-4 (0125)` | `gpt-4o (2024-05-13)` | `gpt-4o-mini (2024-07-18)` |
 |-----|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|

articles/ai-services/openai/faq.yml

Lines changed: 10 additions & 10 deletions
@@ -65,29 +65,29 @@ sections:
       answer: |
         Check out our [introduction to prompt engineering](./concepts/prompt-engineering.md). While these models are powerful, their behavior is also very sensitive to the prompts they receive from the user. This makes prompt construction an important skill to develop. After you've completed the introduction, check out our article on [system messages](./concepts/advanced-prompt-engineering.md).
     - question: |
-        My guest account has been given access to an Azure OpenAI resource, but I'm unable to access that resource in the Azure OpenAI Studio. How do I enable access?
+        My guest account has been given access to an Azure OpenAI resource, but I'm unable to access that resource in the Azure AI Studio. How do I enable access?
       answer: |
-        This is expected behavior when using the default sign-in experience for the [Azure OpenAI Studio](https://oai.azure.com).
+        This is expected behavior when using the default sign-in experience for the [Azure AI Studio](https://ai.azure.com).
 
-        To access Azure OpenAI Studio from a guest account that has been granted access to an Azure OpenAI resource:
+        To access Azure AI Studio from a guest account that has been granted access to an Azure OpenAI resource:
 
-        1. Open a private browser session and then navigate to [https://oai.azure.com](https://oai.azure.com).
+        1. Open a private browser session and then navigate to [https://ai.azure.com](https://ai.azure.com).
         2. Rather than immediately entering your guest account credentials instead select `Sign-in options`
         3. Now select **Sign in to an organization**
         4. Enter the domain name of the organization that granted your guest account access to the Azure OpenAI resource.
         5. Now sign-in with your guest account credentials.
 
-        You should now be able to access the resource via the Azure OpenAI Studio.
+        You should now be able to access the resource via the Azure AI Studio.
 
-        Alternatively if you're signed into the [Azure portal](https://portal.azure.com) from the Azure OpenAI resource's Overview pane you can select **Go to Azure OpenAI Studio** to automatically sign in with the appropriate organizational context.
+        Alternatively if you're signed into the [Azure portal](https://portal.azure.com) from the Azure OpenAI resource's Overview pane you can select **Go to Azure AI Studio** to automatically sign in with the appropriate organizational context.
     - question: |
         When I ask GPT-4 which model it's running, it tells me it's running GPT-3. Why does this happen?
       answer: |
         Azure OpenAI models (including GPT-4) being unable to correctly identify what model is running is expected behavior.
 
         **Why does this happen?**
 
-        Ultimately, the model is performing next [token](/semantic-kernel/prompt-engineering/tokens) prediction in response to your question. The model doesn't have any native ability to query what model version is currently being run to answer your question. To answer this question, you can always go to **Azure OpenAI Studio** > **Management** > **Deployments** > and consult the model name column to confirm what model is currently associated with a given deployment name.
+        Ultimately, the model is performing next [token](/semantic-kernel/prompt-engineering/tokens) prediction in response to your question. The model doesn't have any native ability to query what model version is currently being run to answer your question. To answer this question, you can always go to **Azure AI Studio** > **Management** > **Deployments** > and consult the model name column to confirm what model is currently associated with a given deployment name.
 
         The questions, "What model are you running?" or "What is the latest model from OpenAI?" produce similar quality results to asking the model what the weather will be today. It might return the correct result, but purely by chance. On its own, the model has no real-world information other than what was part of its training/training data. In the case of GPT-4, as of August 2023 the underlying training data goes only up to September 2021. GPT-4 wasn't released until March 2023, so barring OpenAI releasing a new version with updated training data, or a new version that is fine-tuned to answer those specific questions, it's expected behavior for GPT-4 to respond that GPT-3 is the latest model release from OpenAI.
 
@@ -163,7 +163,7 @@ sections:
       answer: |
         Consult the Azure OpenAI [model availability guide](./concepts/models.md#model-summary-table-and-region-availability) for region availability.
     - question: |
-        How do I enable fine-tuning? Create a custom model is greyed out in Azure OpenAI Studio.
+        How do I enable fine-tuning? Create a custom model is greyed out in Azure AI Studio.
       answer: |
         In order to successfully access fine-tuning, you need Cognitive Services OpenAI Contributor assigned. Even someone with high-level Service Administrator permissions would still need this account explicitly set in order to access fine-tuning. For more information, please review the [role-based access control guidance](/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-contributor).
     - question: |
@@ -298,7 +298,7 @@ sections:
     - question: |
         Will my web app be overwritten when I deploy the app again from the Azure AI Studio?
       answer:
-        Your app code won't be overwritten when you update your app. The app will be updated to use the Azure OpenAI resource, Azure AI Search index (if you're using Azure OpenAI on your data), and model settings selected in the Azure OpenAI Studio without any change to the appearance or functionality.
+        Your app code won't be overwritten when you update your app. The app will be updated to use the Azure OpenAI resource, Azure AI Search index (if you're using Azure OpenAI on your data), and model settings selected in the Azure AI Studio without any change to the appearance or functionality.
   - name: Using your data
     questions:
       - question: |
@@ -346,7 +346,7 @@ sections:
       answer:
         You must send queries in the same language of your data. Your data can be in any of the languages supported by [Azure AI Search](/azure/search/search-language-support).
     - question: |
-        If Semantic Search is enabled for my Azure AI Search resource, will it be automatically applied to Azure OpenAI on your data in the Azure OpenAI Studio?
+        If Semantic Search is enabled for my Azure AI Search resource, will it be automatically applied to Azure OpenAI on your data in the Azure AI Studio?
       answer:
         When you select "Azure AI Search" as the data source, you can choose to apply semantic search.
         If you select "Azure Blob Container" or "Upload files" as the data source, you can create the index as usual. Afterwards you would reingest the data using the "Azure AI Search" option to select the same index and apply Semantic Search. You will then be ready to chat on your data with semantic search applied.

articles/ai-services/openai/how-to/batch.md

Lines changed: 1 addition & 1 deletion
@@ -86,7 +86,7 @@ The following aren't currently supported:
 
 In the Studio UI the deployment type will appear as `Global-Batch`.
 
-:::image type="content" source="../media/how-to/global-batch/global-batch.png" alt-text="Screenshot that shows the model deployment dialog in Azure OpenAI Studio with Global-Batch deployment type highlighted." lightbox="../media/how-to/global-batch/global-batch.png":::
+:::image type="content" source="../media/how-to/global-batch/global-batch.png" alt-text="Screenshot that shows the model deployment dialog in Azure AI Studio with Global-Batch deployment type highlighted." lightbox="../media/how-to/global-batch/global-batch.png":::
 
 > [!TIP]
 > We recommend enabling **dynamic quota** for all global batch model deployments to help avoid job failures due to insufficient enqueued token quota. Dynamic quota allows your deployment to opportunistically take advantage of more quota when extra capacity is available. When dynamic quota is set to off, your deployment will only be able to process requests up to the enqueued token limit that was defined when you created the deployment.
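For the `Global-Batch` deployment type shown above, a batch job is submitted as a JSONL file in which each line is one self-contained request. A sketch of building such a line (stdlib only; the `custom_id` scheme and deployment name are illustrative, and the line shape follows the batch input format):

```python
import json

def batch_line(custom_id: str, deployment: str, user_prompt: str) -> str:
    """One JSONL line for a batch job: a chat completions request
    addressed to a Global-Batch deployment."""
    return json.dumps({
        "custom_id": custom_id,   # your key for matching results later
        "method": "POST",
        "url": "/chat/completions",
        "body": {
            "model": deployment,  # a Global-Batch deployment name
            "messages": [{"role": "user", "content": user_prompt}],
        },
    })

lines = [batch_line(f"task-{i}", "gpt-4o-batch", p)
         for i, p in enumerate(["Summarize A", "Summarize B"])]
jsonl = "\n".join(lines)  # upload this file, then create the batch job
```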

articles/ai-services/openai/how-to/completions.md

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ Azure OpenAI Service provides a **completion endpoint** that can be used for a w
 > [!IMPORTANT]
 > Unless you have a specific use case that requires the completions endpoint, we recommend instead using the [chat completions endpoint](./chatgpt.md) which allows you to take advantage of the latest models like GPT-4o, GPT-4o mini, and GPT-4 Turbo.
 
-The best way to start exploring completions is through the playground in [Azure OpenAI Studio](https://oai.azure.com). It's a simple text box where you enter a prompt to generate a completion. You can start with a simple prompt like this one:
+The best way to start exploring completions is through the playground in [Azure AI Studio](https://ai.azure.com). It's a simple text box where you enter a prompt to generate a completion. You can start with a simple prompt like this one:
 
 ```console
 write a tagline for an ice cream shop
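The playground prompt above can equally be sent straight to the completions endpoint. A sketch of the request shape (stdlib only; the resource name, deployment name, API version, and `max_tokens` value are placeholders):

```python
import json

ENDPOINT = "https://YOUR-RESOURCE.openai.azure.com"  # placeholder resource
API_VERSION = "2024-02-01"                           # placeholder version

def completion_request(deployment: str, prompt: str) -> tuple[str, str]:
    """URL and JSON body for a completions call against a deployment."""
    url = (f"{ENDPOINT}/openai/deployments/{deployment}"
           f"/completions?api-version={API_VERSION}")
    body = json.dumps({"prompt": prompt, "max_tokens": 50})
    return url, body

url, body = completion_request("my-deployment",
                               "write a tagline for an ice cream shop")
# POST `body` to `url` with an api-key header to generate the completion.
```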

articles/ai-services/openai/how-to/deployment-types.md

Lines changed: 1 addition & 1 deletion
@@ -119,7 +119,7 @@ You can use the following policy to disable access to Azure OpenAI global standa
 
 ## Deploy models
 
-:::image type="content" source="../media/deployment-types/deploy-models-new.png" alt-text="Screenshot that shows the model deployment dialog in Azure OpenAI Studio with three deployment types highlighted." lightbox="../media/deployment-types/deploy-models-new.png":::
+:::image type="content" source="../media/deployment-types/deploy-models-new.png" alt-text="Screenshot that shows the model deployment dialog in Azure AI Studio with three deployment types highlighted." lightbox="../media/deployment-types/deploy-models-new.png":::
 
 To learn about creating resources and deploying models refer to the [resource creation guide](./create-resource.md).
 

articles/ai-services/openai/how-to/quota.md

Lines changed: 2 additions & 2 deletions
@@ -44,11 +44,11 @@ The flexibility to distribute TPM globally within a subscription and region has
 
 When you create a model deployment, you have the option to assign Tokens-Per-Minute (TPM) to that deployment. TPM can be modified in increments of 1,000, and will map to the TPM and RPM rate limits enforced on your deployment, as discussed above.
 
-To create a new deployment from within the Azure AI Studio under **Shared Resources** select **Deployments** > **Deploy model** > **Deploy base model** > **Select Model** > **Confirm**.
+To create a new deployment from within the Azure AI Studio select **Deployments** > **Deploy model** > **Deploy base model** > **Select Model** > **Confirm**.
 
 :::image type="content" source="../media/quota/deployment-new.png" alt-text="Screenshot of the deployment UI of Azure AI Studio" lightbox="../media/quota/deployment-new.png":::
 
-Post deployment you can adjust your TPM allocation by selecting **Edit** under **Shared resources** > **Deployments** in Azure OpenAI Studio. You can also modify this selection within the new quota management experience under **Management** > **Quotas**.
+Post deployment you can adjust your TPM allocation by selecting and editing your model from the **Deployments** page in Azure AI Studio. You can also modify this setting from the **Management** > **Model quota** page.
 
 > [!IMPORTANT]
 > Quotas and limits are subject to change, for the most up-date-information consult our [quotas and limits article](../quotas-limits.md).
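The TPM allocation described in the hunk above can be sanity-checked numerically. This sketch assumes the commonly documented ratio of 6 RPM per 1,000 TPM; confirm the current ratio and increments in the quotas and limits article before relying on it:

```python
def rate_limits(tpm: int) -> dict:
    """Derive deployment rate limits from a TPM allocation.

    TPM is assigned in increments of 1,000; RPM is assumed here to follow
    a 6-RPM-per-1,000-TPM ratio (verify against current documentation)."""
    if tpm % 1_000 != 0:
        raise ValueError("TPM must be allocated in increments of 1,000")
    return {
        "tokens_per_minute": tpm,
        "requests_per_minute": tpm // 1_000 * 6,
    }

limits = rate_limits(30_000)  # e.g. a 30K-TPM deployment
```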

articles/ai-services/openai/how-to/work-with-code.md

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ You can use Codex for a variety of tasks including:
 
 ## How to use completions models with code
 
-Here are a few examples of using Codex that can be tested in [Azure OpenAI Studio's](https://oai.azure.com) playground with a deployment of a Codex series model, such as `code-davinci-002`.
+Here are a few examples of using Codex that can be tested in the [Azure AI Studio](https://ai.azure.com) playground with a deployment of a Codex series model, such as `code-davinci-002`.
 
 ### Saying "Hello" (Python)
 
