
Commit 81377f7

Merge pull request #1277 from MicrosoftDocs/main
11/5/2024 AM Publish
2 parents 17b8239 + 326faea commit 81377f7

5 files changed (+14, −6 lines)

articles/ai-services/document-intelligence/quickstarts/includes/csharp-sdk.md

Lines changed: 1 addition & 1 deletion
@@ -106,7 +106,7 @@ In this quickstart, use the following features to analyze and extract data and v
:::image type="content" source="../../media/quickstarts/select-doc-intel-nuget-package.png" alt-text="Screenshot of select NuGet prerelease package window in Visual Studio.":::

- 1. Select the **Browse** tab and type *Azure.AI.FormRecognizer*.
+ 1. Select the **Browse** tab and type *Azure.AI.DocumentIntelligence*.

1. Select the **`Include prerelease`** checkbox.

articles/ai-services/openai/concepts/provisioned-throughput.md

Lines changed: 4 additions & 3 deletions
@@ -41,13 +41,14 @@ An Azure OpenAI Deployment is a unit of management for a specific OpenAI Model.
## How much throughput per PTU you get for each model

The amount of throughput (tokens per minute, or TPM) a deployment gets per PTU is a function of the input and output tokens in the minute. Generating output tokens requires more processing than input tokens, so the more output tokens generated, the lower your overall TPM. The service dynamically balances the input and output costs, so users don't have to set specific input and output limits. This approach means your deployment is resilient to fluctuations in the workload shape.

- To help with simplifying the sizing effort, the following table outlines the TPM per PTU for the `gpt-4o` and `gpt-4o-mini` models
+ To help with simplifying the sizing effort, the following table outlines the TPM per PTU for the `gpt-4o` and `gpt-4o-mini` models. The table also shows the Service Level Agreement (SLA) latency target values per model. For more information about the SLA for Azure OpenAI Service, see the [Service Level Agreements (SLA) for Online Services page](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1).

- | | **gpt-4o**, **2024-05-13** & **gpt-4o**, **2024-08-06** | **gpt-4o-mini**, **2024-07-18** |
+ | | **gpt-4o**, **2024-05-13** & **gpt-4o**, **2024-08-06** | **gpt-4o-mini**, **2024-07-18** |
| --- | --- | --- |
| Deployable Increments | 50 | 25 |
| Input TPM per PTU | 2,500 | 37,000 |
- | Output TPM per PTU | 833 | 12,333 |
+ | Output TPM per PTU | 833 | 12,333 |
+ | Latency Target Value | 25 Tokens Per Second* | 33 Tokens Per Second* |
For a full list, see the [AOAI Studio calculator](https://oai.azure.com/portal/calculator).
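As a rough illustration of how the per-PTU figures in the table translate into sizing, the sketch below estimates a PTU count for a hypothetical workload. The `estimate_ptus` helper and its bottleneck-plus-rounding logic are illustrative assumptions, not an official Azure OpenAI API; because the service dynamically balances input and output costs, real throughput will differ, and the AOAI Studio calculator remains the authoritative sizing tool.

```python
# Illustrative sketch only: size a gpt-4o deployment from the TPM-per-PTU
# figures in the table above. Not an official Azure OpenAI API.
import math

INPUT_TPM_PER_PTU = 2_500    # gpt-4o input TPM per PTU
OUTPUT_TPM_PER_PTU = 833     # gpt-4o output TPM per PTU
DEPLOYABLE_INCREMENT = 50    # gpt-4o PTUs are purchased in increments of 50

def estimate_ptus(input_tpm: float, output_tpm: float) -> int:
    """Rough estimate: size for whichever token stream is the bottleneck,
    then round up to the next deployable increment."""
    raw = max(input_tpm / INPUT_TPM_PER_PTU, output_tpm / OUTPUT_TPM_PER_PTU)
    return math.ceil(raw / DEPLOYABLE_INCREMENT) * DEPLOYABLE_INCREMENT

# Hypothetical workload: 120,000 input TPM and 25,000 output TPM.
print(estimate_ptus(120_000, 25_000))  # → 50
```

For `gpt-4o-mini`, you would swap in 37,000 and 12,333 TPM per PTU and a deployable increment of 25.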

articles/ai-studio/includes/create-hub.md

Lines changed: 9 additions & 2 deletions
@@ -10,17 +10,24 @@ ms.date: 5/21/2024
ms.custom: include, build-2024
---

+ > [!NOTE]
+ > A hub in Azure AI Studio is a one-stop shop where you manage everything your AI project needs, such as security and resources, so you can develop and test faster. To learn more about how hubs can help you, see the [Hubs and projects overview](/azure/ai-studio/concepts/ai-resources) article.

To create a hub in [Azure AI Studio](https://ai.azure.com), follow these steps:

1. Go to the **Home** page in [Azure AI Studio](https://ai.azure.com) and sign in with your Azure account.
- 1. Select **All hubs** from the left pane and then select **+ New hub**.
+ 1. Select **All resources** on the left pane. If you don't see this option, select **All resources & projects** in the top bar. Then select **+ New hub**.

:::image type="content" source="../media/how-to/hubs/hub-new.png" alt-text="Screenshot of the button to create a new hub." lightbox="../media/how-to/hubs/hub-new.png":::

- 1. In the **Create a new hub** dialog, enter a name for your hub (such as *contoso-hub*) and select **Next**. Leave the default **Connect Azure AI Services** option selected. A new AI services connection is created for the hub.
+ 1. In the **Create a new hub** dialog, enter a name for your hub (such as *contoso-hub*). If you don't have a resource group, a new **Resource group** is created and linked to the **Subscription** you provide. Leave the default **Connect Azure AI Services** option selected.
+ 1. Select **Next**. If you didn't reuse an existing resource group, a new resource group (*rg-contoso*) is created. An Azure AI services resource (*ai-contoso-hub*) is also created for the hub.

:::image type="content" source="../media/how-to/hubs/hub-new-connect-services.png" alt-text="Screenshot of the dialog to connect services while creating a new hub." lightbox="../media/how-to/hubs/hub-new-connect-services.png":::

+ > [!NOTE]
+ > If you don't see (new) before the **Resource group** and **Connect Azure AI Services** entries, an existing resource is being reused. For the purposes of this tutorial, create separate entities via **Create new resource group** and **Create new AI Services**. This lets you avoid unexpected charges by deleting the entities after the tutorial.

1. Review the information and select **Create**.

:::image type="content" source="../media/how-to/hubs/hub-new-review-create.png" alt-text="Screenshot of the dialog to review the settings for the new hub." lightbox="../media/how-to/hubs/hub-new-review-create.png":::
2 image files changed (binary, not shown): −159 KB and −527 KB
