Commit dee5586

Merge branch 'main' into release-preview-llama3

2 parents 3b2cd61 + 92987d9

285 files changed (+5175, -1195 lines changed)

articles/advisor/advisor-how-to-calculate-total-cost-savings.md

Lines changed: 16 additions & 3 deletions
@@ -1,11 +1,15 @@
  ---
- title: Export cost savings in Azure Advisor
+ title: Calculate cost savings in Azure Advisor
  ms.topic: article
  ms.date: 02/06/2024
  description: Export cost savings in Azure Advisor and calculate the aggregated potential yearly savings by using the cost savings amount for each recommendation.
  ---

- # Export cost savings
+ # Calculate cost savings
+
+ This article provides guidance on how to calculate total cost savings in Azure Advisor.
+
+ ## Export cost savings for recommendations

  To calculate aggregated potential yearly savings, follow these steps:

@@ -21,5 +25,14 @@ The Advisor **Overview** page opens.
  [![Screenshot of the Azure Advisor cost recommendations page that shows download option.](./media/advisor-how-to-calculate-total-cost-savings.png)](./media/advisor-how-to-calculate-total-cost-savings.png#lightbox)

  > [!NOTE]
- > Recommendations show savings individually, and may overlap with the savings shown in other recommendations, for example – you can only benefit from savings plans for compute or reservations for virtual machines, but not from both.
+ > Different types of cost savings recommendations are generated by using overlapping datasets (for example, VM rightsizing/shutdown, VM reservation, and savings plan recommendations all consider on-demand VM usage). As a result, resource changes (for example, VM shutdowns) or reservation and savings plan purchases affect your on-demand usage, and with it the resulting recommendations and associated savings forecasts.
+
+ ## Understand cost savings
+
+ Azure Advisor provides recommendations for resizing or shutting down underutilized resources, purchasing compute reserved instances, and purchasing savings plans for compute.
+
+ These recommendations contain one or more calls to action and the forecasted savings from following them. Follow the recommendations in a specific order: rightsizing/shutdown first, then reservation purchases, and finally the savings plan purchase. This sequence allows each step to positively affect the subsequent ones.
+
+ For example, rightsizing or shutting down resources reduces on-demand costs immediately. This change in your usage pattern essentially invalidates your existing reservation and savings plan recommendations, because they were based on your pre-rightsizing usage and costs. Updated reservation and savings plan recommendations (and their forecasted savings) should appear within three days.

+ The forecasted savings from reservations and savings plans are based on actual rates and usage, while the forecasted savings from rightsizing/shutdown are based on retail rates. Actual savings might vary depending on usage patterns and rates. Assuming there are no material changes to your usage patterns, your actual savings from reservations and savings plans should be in line with the forecasts. Savings from rightsizing/shutdown vary based on your actual rates. This is important if you intend to track cost savings forecasts from Azure Advisor.
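
The aggregation this article describes can also be scripted against the exported file. Below is a minimal sketch, assuming the Advisor CSV export contains a per-recommendation savings column; the column name `Annual Savings Amount` is an assumption — check the headers in your actual download.

```python
import csv

# Minimal sketch: sum the per-recommendation savings in an Advisor CSV export.
# The column name below is an assumption -- verify it against your download.
SAVINGS_COLUMN = "Annual Savings Amount"

def total_yearly_savings(path: str) -> float:
    total = 0.0
    with open(path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            raw = row.get(SAVINGS_COLUMN, "").replace(",", "").strip()
            if raw:
                try:
                    total += float(raw)
                except ValueError:
                    pass  # skip rows without a numeric savings amount
    return total

print(f"Aggregated potential yearly savings: {total_yearly_savings('advisor-export.csv'):.2f}")
```

Per the note above, a straight sum can overstate what's achievable, because some recommendations (for example, savings plans versus reservations) overlap and can't all be realized together.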

articles/advisor/toc.yml

Lines changed: 1 addition & 1 deletion
@@ -74,7 +74,7 @@
    href: advisor-azure-resource-graph.md
  - name: Consume Advisor score
    href: azure-advisor-score.md
- - name: Export cost savings
+ - name: Calculate total cost savings
    href: advisor-how-to-calculate-total-cost-savings.md
  - name: Reference
    items:

articles/ai-services/document-intelligence/concept-accuracy-confidence.md

Lines changed: 7 additions & 6 deletions
@@ -8,7 +8,7 @@ ms.service: azure-ai-document-intelligence
  ms.custom:
    - ignite-2023
  ms.topic: conceptual
- ms.date: 02/29/2024
+ ms.date: 04/16/2023
  ms.author: lajanuar
  ---

@@ -53,10 +53,11 @@ Field confidence indicates an estimated probability between 0 and 1 that the pre
  ## Interpret accuracy and confidence scores for custom models

  When interpreting the confidence score from a custom model, you should consider all the confidence scores returned from the model. Let's start with a list of all the confidence scores.
- 1. **Document type confidence score**: The document type confidence is an indicator of closely the analyzed document resembleds documents in the training dataset. When the document type confidence is low, this is indicative of template or structural variations in the analyzed document. To improve the document type confidence, label a document with that specific variation and add it to your training dataset. Once the model is re-trained, it should be better equipped to handl that class of variations.
- 2. **Field level confidence**: Each labled field extracted has an associated confidence score. This score reflects the model's confidence on the position of the value extracted. While evaluating the confidence you should also look at the underlying extraction confidence to generate a comprehensive confidence for the extracted result. Evaluate the OCR results for text extraction or selection marks depending on the field type to generate a composite confidence score for the field.
- 3. **Word confidence score** Each word extracted within the document has an associated confidence score. The score represents the confidence of the transcription. The pages array contains an array of words, each word has an associated span and confidence. Spans from the custom field extracted values will match the spans of the extracted words.
- 4. **Selection mark confidence score**: The pages array also contains an array of selection marks, each selection mark has a confidence score representing the confidence of the seletion mark and selection state detection. When a labeled field is a selection mark, the custom field selection confidence combined with the selection mark confidence is an accurate representation of the overall confidence that the field was extracted correctly.
+
+ 1. **Document type confidence score**: The document type confidence is an indicator of how closely the analyzed document resembles documents in the training dataset. When the document type confidence is low, it's indicative of template or structural variations in the analyzed document. To improve the document type confidence, label a document with that specific variation and add it to your training dataset. Once the model is retrained, it should be better equipped to handle that class of variations.
+ 2. **Field level confidence**: Each labeled field extracted has an associated confidence score. This score reflects the model's confidence on the position of the value extracted. While evaluating confidence scores, you should also look at the underlying extraction confidence to generate a comprehensive confidence for the extracted result. Evaluate the `OCR` results for text extraction or selection marks depending on the field type to generate a composite confidence score for the field.
+ 3. **Word confidence score**: Each word extracted within the document has an associated confidence score. The score represents the confidence of the transcription. The pages array contains an array of words and each word has an associated span and confidence score. Spans from the custom field extracted values match the spans of the extracted words.
+ 4. **Selection mark confidence score**: The pages array also contains an array of selection marks. Each selection mark has a confidence score representing the confidence of the selection mark and selection state detection. When a labeled field has a selection mark, the custom field selection combined with the selection mark confidence is an accurate representation of overall confidence accuracy.

  The following table demonstrates how to interpret both the accuracy and confidence scores to measure your custom model's performance.

@@ -69,7 +70,7 @@ The following table demonstrates how to interpret both the accuracy and confiden

  ## Table, row, and cell confidence

- With the addition of table, row and cell confidence with the ```2024-02-29-preview``` API, here are some common questions that should help with interpreting the table, row and cell scores:
+ With the addition of table, row, and cell confidence in the ```2024-02-29-preview``` API, here are some common questions that should help with interpreting the table, row, and cell scores:

  **Q:** Is it possible to see a high confidence score for cells, but a low confidence score for the row?<br>
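
The composite-confidence guidance in the list above can be made concrete. Here's a minimal sketch, assuming an analysis result already parsed into plain dictionaries; the span and confidence shapes mirror the description above (fields and words both carry spans and confidence scores), but treat the exact key names as assumptions rather than the SDK's actual object model.

```python
# Minimal sketch: scale a field's confidence by the weakest overlapping
# word-transcription confidence, per the guidance above.
# Key names ("spans", "span", "confidence") are assumptions for illustration.

def spans_overlap(a: dict, b: dict) -> bool:
    """True if two {offset, length} spans overlap."""
    return a["offset"] < b["offset"] + b["length"] and b["offset"] < a["offset"] + a["length"]

def composite_confidence(field: dict, words: list[dict]) -> float:
    overlapping = [
        w["confidence"]
        for w in words
        for span in field["spans"]
        if spans_overlap(span, w["span"])
    ]
    word_floor = min(overlapping) if overlapping else 1.0
    return field["confidence"] * word_floor

field = {"confidence": 0.92, "spans": [{"offset": 10, "length": 8}]}
words = [
    {"span": {"offset": 10, "length": 4}, "confidence": 0.99},
    {"span": {"offset": 15, "length": 3}, "confidence": 0.85},
]
print(composite_confidence(field, words))  # 0.92 * 0.85 = 0.782
```

Taking the minimum overlapping word confidence is a conservative choice; multiplying all overlapping word confidences is a stricter alternative. Either way, a field's usable confidence is bounded by the transcription confidence of the words it was read from.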

articles/ai-services/immersive-reader/overview.md

Lines changed: 4 additions & 0 deletions
@@ -69,6 +69,10 @@ With Immersive Reader, you can break words into syllables to improve readability

  Immersive Reader is a standalone web application. When it's invoked, the Immersive Reader client library displays on top of your existing web application in an `iframe`. When your web application calls the Immersive Reader service, you specify the content to show the reader. The Immersive Reader client library handles the creation and styling of the `iframe` and communication with the Immersive Reader backend service. The Immersive Reader service processes the content for parts of speech, text to speech, translation, and more.

+ ## Data privacy for Immersive Reader
+
+ Immersive Reader doesn't store any customer data.
+
  ## Next step

  The Immersive Reader client library is available in C#, JavaScript, Java (Android), Kotlin (Android), and Swift (iOS). Get started with:

articles/ai-services/openai/how-to/content-filters.md

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ description: Learn how to use content filters (preview) with Azure OpenAI Servic
66
manager: nitinme
77
ms.service: azure-ai-openai
88
ms.topic: how-to
9-
ms.date: 03/29/2024
9+
ms.date: 04/16/2024
1010
author: mrbullwinkle
1111
ms.author: mbullwin
1212
recommendations: false
@@ -15,7 +15,7 @@ recommendations: false
1515
# How to configure content filters with Azure OpenAI Service
1616

1717
> [!NOTE]
18-
> All customers have the ability to modify the content filters to be stricter (for example, to filter content at lower severity levels than the default). Approval is required for turning the content filters partially or fully off. Managed customers only may apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR).
18+
> All customers have the ability to modify the content filters and configure the severity thresholds (low, medium, high). Approval is required for turning the content filters partially or fully off. Managed customers only may apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR).
1919
2020
The content filtering system integrated into Azure OpenAI Service runs alongside the core models and uses an ensemble of multi-class classification models to detect four categories of harmful content (violence, hate, sexual, and self-harm) at four severity levels respectively (safe, low, medium, and high), and optional binary classifiers for detecting jailbreak risk, existing text, and code in public repositories. The default content filtering configuration is set to filter at the medium severity threshold for all four content harms categories for both prompts and completions. That means that content that is detected at severity level medium or high is filtered, while content detected at severity level low or safe is not filtered by the content filters. Learn more about content categories, severity levels, and the behavior of the content filtering system [here](../concepts/content-filter.md). Jailbreak risk detection and protected text and code models are optional and off by default. For jailbreak and protected material text and code models, the configurability feature allows all customers to turn the models on and off. The models are by default off and can be turned on per your scenario. Some models are required to be on for certain scenarios to retain coverage under the [Customer Copyright Commitment](/legal/cognitive-services/openai/customer-copyright-commitment?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
2121
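
To see the four categories and severity levels described above in practice, you can inspect the annotations that the service attaches to a response. A minimal REST sketch follows; the endpoint shape and `api-version` are placeholders, and while the `content_filter_results` fields shown match the documented behavior, verify them against your API version.

```python
import os
import requests

# Minimal sketch: call an Azure OpenAI chat deployment and print the
# per-category content filter annotations on the completion.
# The deployment name and api-version are placeholders/assumptions.
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]  # e.g. https://myresource.openai.azure.com
deployment = "my-deployment"                    # hypothetical deployment name

resp = requests.post(
    f"{endpoint}/openai/deployments/{deployment}/chat/completions",
    params={"api-version": "2024-02-01"},
    headers={"api-key": os.environ["AZURE_OPENAI_API_KEY"]},
    json={"messages": [{"role": "user", "content": "Hello"}]},
    timeout=30,
)
resp.raise_for_status()
choice = resp.json()["choices"][0]

# Each harm category carries a severity (safe/low/medium/high) and a
# "filtered" flag; with the default configuration, medium and high are filtered.
for category, result in choice.get("content_filter_results", {}).items():
    print(f"{category}: severity={result.get('severity')}, filtered={result.get('filtered')}")
```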

articles/ai-services/speech-service/includes/language-support/pronunciation-assessment.md

Lines changed: 4 additions & 1 deletion
@@ -12,12 +12,14 @@ ms.author: eur
  |Arabic (Saudi Arabia)|`ar-SA` |
  |Chinese (Cantonese, Traditional)|`zh-HK`<sup>1</sup>|
  |Chinese (Mandarin, Simplified)|`zh-CN`|
- |Dutch (Netherlands)|`nl-NL`<sup>1</sup>|
+ |Chinese (Taiwanese Mandarin, Traditional)|`zh-TW`<sup>1</sup>|
+ |Dutch (Netherlands)|`nl-NL`|
  |English (Australia)|`en-AU`|
  |English (Canada)|`en-CA` |
  |English (India)|`en-IN` |
  |English (United Kingdom)|`en-GB`|
  |English (United States)|`en-US`|
+ |Finnish (Finland)|`fi-FI`<sup>1</sup>|
  |French (Canada)|`fr-CA`|
  |French (France)|`fr-FR`|
  |German (Germany)|`de-DE`|
@@ -27,6 +29,7 @@ ms.author: eur
  |Korean (Korea)|`ko-KR`|
  |Malay (Malaysia)|`ms-MY`|
  |Norwegian Bokmål (Norway)|`nb-NO`|
+ |Polish (Poland)|`pl-PL`<sup>1</sup>|
  |Portuguese (Brazil)|`pt-BR`|
  |Portuguese (Portugal)|`pt-PT`<sup>1</sup>|
  |Russian (Russia)|`ru-RU`|

articles/ai-services/speech-service/language-support.md

Lines changed: 1 addition & 1 deletion
@@ -118,7 +118,7 @@ With the cross-lingual feature, you can transfer your custom neural voice model

  # [Pronunciation assessment](#tab/pronunciation-assessment)

- The table in this section summarizes the 27 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). Latest update extends support from English to 26 more languages and quality enhancements to existing features, including accuracy, fluency and miscue assessment. You should specify the language that you're learning or practicing improving pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario.
+ The table in this section summarizes the 30 locales supported for pronunciation assessment, and each language is available in all [Speech to text regions](regions.md#speech-service). The latest update extends support from English to 29 more languages and brings quality enhancements to existing features, including accuracy, fluency, and miscue assessment. You should specify the language that you're learning or practicing pronunciation for. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario.

  [!INCLUDE [Language support include](includes/language-support/pronunciation-assessment.md)]
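
The locale guidance above ("set the locale accordingly") looks like the following with the Speech SDK for Python. This is a minimal sketch: the key, region, and audio file are placeholders, and the reference text is only an example.

```python
import azure.cognitiveservices.speech as speechsdk

# Minimal sketch: run pronunciation assessment against a specific locale.
# Key, region, and audio path are placeholders.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")

pron_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="Good morning.",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
)

# A learner of British English assesses against en-GB rather than the en-US
# default; to compare accent models (es-ES vs. es-MX), run once per locale.
recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, language="en-GB", audio_config=audio_config
)
pron_config.apply_to(recognizer)

result = recognizer.recognize_once()
assessment = speechsdk.PronunciationAssessmentResult(result)
print("Accuracy:", assessment.accuracy_score, "Fluency:", assessment.fluency_score)
```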

articles/ai-services/speech-service/text-to-speech-avatar/custom-avatar-endpoint.md

Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@
+ ---
+ title: Deploy your custom text to speech avatar model as an endpoint - Speech service
+ titleSuffix: Azure AI services
+ description: Learn how to deploy your custom text to speech avatar model as an endpoint.
+ author: sally-baolian
+ manager: nitinme
+ ms.service: azure-ai-speech
+ ms.topic: how-to
+ ms.date: 4/15/2024
+ ms.author: v-baolianzou
+ ---
+
+ # Deploy your custom text to speech avatar model as an endpoint
+
+ You must deploy the custom avatar to an endpoint before you can use it. Once your custom text to speech avatar model is successfully trained through our manual process, we notify you. You can then deploy it to a custom avatar endpoint. You can create up to 10 custom avatar endpoints for each standard (S0) Speech resource.
+
+ After you deploy your custom avatar, it's available to use in Speech Studio or through the API:
+
+ - The avatar appears in the avatar list of text to speech avatar on [Speech Studio](https://speech.microsoft.com/portal/talkingavatar).
+ - The avatar appears in the avatar list of live chat avatar on [Speech Studio](https://speech.microsoft.com/portal/livechat).
+ - You can call the avatar from the API by specifying the avatar model name.
+
+ ## Add a deployment endpoint
+
+ To create a custom avatar endpoint, follow these steps:
+
+ 1. Sign in to [Speech Studio](https://speech.microsoft.com/portal).
+ 1. Navigate to **Custom Avatar** > Your project name > **Train model**.
+ 1. All available models are listed on the **Train model** page. Select a model link to view more information, such as the created date and a preview image of the custom avatar.
+ 1. Select the model that you'd like to deploy, then select the **Deploy model** button above the list.
+ 1. Confirm the deployment to create your endpoint.
+
+ Once your model is successfully deployed as an endpoint, you can select the endpoint link on the **Deploy model** page. There, you'll find a link to the text to speech avatar portal on Speech Studio, where you can try your custom avatar and create videos with it by using text input.
+
+ ## Remove a deployment endpoint
+
+ To remove a deployment endpoint, follow these steps:
+
+ 1. Sign in to [Speech Studio](https://speech.microsoft.com/portal).
+ 1. Navigate to **Custom Avatar** > Your project name > **Train model**.
+ 1. All available models are listed on the **Train model** page. Select a model link to view more information, such as the created date and a preview image of the custom avatar.
+ 1. Select a model on the **Train model** page. If it's in the "Succeeded" state, the model is currently hosted. You can select the **Delete** button and confirm the deletion to remove the hosting.
+
+ ## Use your custom neural voice
+
+ If you're also creating a custom neural voice for the actor, the avatar can be highly realistic. For more information, see [What is custom text to speech avatar](./what-is-custom-text-to-speech-avatar.md).
+
+ [Custom neural voice](../custom-neural-voice.md) and [custom text to speech avatar](what-is-custom-text-to-speech-avatar.md) are separate features. You can use them independently or together.
+
+ If you've built a custom neural voice (CNV) and would like to use it together with the custom avatar, pay attention to the following points:
+
+ - Ensure that the CNV endpoint is created in the same Speech resource as the custom avatar endpoint. You can see the CNV voice option in the voices list of the [avatar content generation page](https://speech.microsoft.com/portal/talkingavatar) and [live chat voice settings](https://speech.microsoft.com/portal/livechat).
+ - If you're using the batch synthesis for avatar API, add the "customVoices" property to associate the deployment ID of the CNV model with the voice name in the request. For more information, see [Text to speech properties](batch-synthesis-avatar-properties.md#text-to-speech-properties).
+ - If you're using the real-time synthesis for avatar API, refer to our sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser/avatar) to set the custom neural voice.
+ - If your custom neural voice endpoint is in a different Speech resource from the custom avatar endpoint, refer to [Train your professional voice model](../professional-voice-train-voice.md#copy-your-voice-model-to-another-project) to copy the CNV model to the same Speech resource as the custom avatar endpoint.
+
+ ## Next steps
+
+ - Learn more about custom text to speech avatar in the [overview](what-is-custom-text-to-speech-avatar.md).
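
For the `customVoices` association called out in the bullets above, the request body pairs the CNV voice name with its deployment ID. The following is a hypothetical sketch of a batch avatar synthesis request; the property name `customVoices` comes from the text above, but the URL shape, `api-version`, and the other body properties are assumptions — see the batch synthesis properties article for the authoritative schema.

```python
import os
import uuid

import requests

# Hypothetical sketch: associate a custom neural voice (CNV) deployment with a
# batch avatar synthesis request via the "customVoices" property.
# URL shape, api-version, and most property names are assumptions.
region = os.environ["SPEECH_REGION"]
key = os.environ["SPEECH_KEY"]
synthesis_id = str(uuid.uuid4())

body = {
    "inputKind": "PlainText",
    "inputs": [{"content": "Hello, I'm your custom avatar."}],
    # Map the CNV voice name to its deployment (endpoint) ID.
    "customVoices": {"MyCustomVoiceName": "YOUR-CNV-DEPLOYMENT-ID"},
    "synthesisConfig": {"voice": "MyCustomVoiceName"},
    "avatarConfig": {"talkingAvatarCharacter": "my-custom-avatar"},
}

resp = requests.put(
    f"https://{region}.api.cognitive.microsoft.com/avatar/batchsyntheses/{synthesis_id}",
    params={"api-version": "2024-04-15-preview"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("status"))
```

Remember the constraint from the first bullet: the CNV deployment must live in the same Speech resource as the custom avatar endpoint for this association to work.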

articles/ai-services/speech-service/toc.yml

Lines changed: 3 additions & 0 deletions
@@ -224,6 +224,9 @@ items:
  - name: How to record video samples
    href: text-to-speech-avatar/custom-avatar-record-video-samples.md
    displayName: avatar
+ - name: Deploy your custom text to speech avatar model as an endpoint
+   href: text-to-speech-avatar/custom-avatar-endpoint.md
+   displayName: avatar
  - name: Audio Content Creation
    href: how-to-audio-content-creation.md
    displayName: acc

articles/ai-services/translator/containers/configuration.md

Lines changed: 2 additions & 2 deletions
@@ -74,7 +74,7 @@ If you need to configure an HTTP proxy for making outbound requests, use these t

  | Name | Data type | Description |
  |--|--|--|
- |HTTPS_PROXY|string|The proxy to use, for example, `https://proxy:8888`<br>`<proxy-url>`|
+ |HTTPS_PROXY|string|The proxy URL, for example, `https://proxy:8888`|

  ```bash
  docker run --rm -it -p 5000:5000 \
@@ -84,7 +84,7 @@ docker run --rm -it -p 5000:5000 \
  Eula=accept \
  Billing=<endpoint> \
  ApiKey=<api-key> \
- HTTPS_PROXY=<proxy-url> \
+ HTTPS_PROXY=<proxy-url>
  ```

  ## Logging settings
