Commit 46fff80

Author: yelevin
Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into yelevin/usx-cxe-open-issues
2 parents: 659e91a + f9bd03b

File tree: 167 files changed (+4000 −685 lines)


articles/ai-services/document-intelligence/concept-accuracy-confidence.md
7 additions, 6 deletions

@@ -8,7 +8,7 @@ ms.service: azure-ai-document-intelligence
 ms.custom:
   - ignite-2023
 ms.topic: conceptual
-ms.date: 02/29/2024
+ms.date: 04/16/2023
 ms.author: lajanuar
 ---

@@ -53,10 +53,11 @@ Field confidence indicates an estimated probability between 0 and 1 that the pre
 ## Interpret accuracy and confidence scores for custom models

 When interpreting the confidence score from a custom model, you should consider all the confidence scores returned from the model. Let's start with a list of all the confidence scores.
-1. **Document type confidence score**: The document type confidence is an indicator of closely the analyzed document resembleds documents in the training dataset. When the document type confidence is low, this is indicative of template or structural variations in the analyzed document. To improve the document type confidence, label a document with that specific variation and add it to your training dataset. Once the model is re-trained, it should be better equipped to handl that class of variations.
-2. **Field level confidence**: Each labled field extracted has an associated confidence score. This score reflects the model's confidence on the position of the value extracted. While evaluating the confidence you should also look at the underlying extraction confidence to generate a comprehensive confidence for the extracted result. Evaluate the OCR results for text extraction or selection marks depending on the field type to generate a composite confidence score for the field.
-3. **Word confidence score** Each word extracted within the document has an associated confidence score. The score represents the confidence of the transcription. The pages array contains an array of words, each word has an associated span and confidence. Spans from the custom field extracted values will match the spans of the extracted words.
-4. **Selection mark confidence score**: The pages array also contains an array of selection marks, each selection mark has a confidence score representing the confidence of the seletion mark and selection state detection. When a labeled field is a selection mark, the custom field selection confidence combined with the selection mark confidence is an accurate representation of the overall confidence that the field was extracted correctly.
+
+1. **Document type confidence score**: The document type confidence is an indicator of closely the analyzed document resembles documents in the training dataset. When the document type confidence is low, it's indicative of template or structural variations in the analyzed document. To improve the document type confidence, label a document with that specific variation and add it to your training dataset. Once the model is retrained, it should be better equipped to handle that class of variations.
+2. **Field level confidence**: Each labeled field extracted has an associated confidence score. This score reflects the model's confidence on the position of the value extracted. While evaluating confidence scores, you should also look at the underlying extraction confidence to generate a comprehensive confidence for the extracted result. Evaluate the `OCR` results for text extraction or selection marks depending on the field type to generate a composite confidence score for the field.
+3. **Word confidence score** Each word extracted within the document has an associated confidence score. The score represents the confidence of the transcription. The pages array contains an array of words and each word has an associated span and confidence score. Spans from the custom field extracted values match the spans of the extracted words.
+4. **Selection mark confidence score**: The pages array also contains an array of selection marks. Each selection mark has a confidence score representing the confidence of the selection mark and selection state detection. When a labeled field has a selection mark, the custom field selection combined with the selection mark confidence is an accurate representation of overall confidence accuracy.

 The following table demonstrates how to interpret both the accuracy and confidence scores to measure your custom model's performance.

@@ -69,7 +70,7 @@ The following table demonstrates how to interpret both the accuracy and confiden
 ## Table, row, and cell confidence

-With the addition of table, row and cell confidence with the ```2024-02-29-preview``` API, here are some common questions that should help with interpreting the table, row and cell scores:
+With the addition of table, row and cell confidence with the ```2024-02-29-preview``` API, here are some common questions that should help with interpreting the table, row, and cell scores:

 **Q:** Is it possible to see a high confidence score for cells, but a low confidence score for the row?<br>
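The composite field confidence that the list items above describe (a field's own confidence weighed against the underlying word transcription confidence over the same spans) can be sketched in plain JavaScript. The object shapes and the min/product combination rule here are illustrative assumptions for the sketch, not the Document Intelligence service's exact formula:

```javascript
// Sketch: combine a field's confidence with the confidence of the words
// that fall inside its spans, approximating a composite confidence.
// NOTE: the object shapes and the min/product rule are assumptions of this
// sketch, not the service's documented scoring formula.
function compositeFieldConfidence(field, words) {
  // Keep only words whose span is contained in one of the field's spans.
  const overlapping = words.filter((w) =>
    field.spans.some(
      (s) =>
        w.span.offset >= s.offset &&
        w.span.offset + w.span.length <= s.offset + s.length
    )
  );
  if (overlapping.length === 0) return field.confidence;
  // Treat the weakest word transcription as the limiting factor.
  const minWordConfidence = Math.min(...overlapping.map((w) => w.confidence));
  return field.confidence * minWordConfidence;
}
```

For a field at 0.9 confidence whose weakest overlapping word is at 0.8, this yields 0.72, illustrating why a field score should be read together with the word scores rather than alone.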

articles/ai-services/document-intelligence/how-to-guides/includes/v4-0/javascript-sdk.md
15 additions, 8 deletions

@@ -5,7 +5,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-document-intelligence
 ms.topic: include
-ms.date: 03/28/2024
+ms.date: 04/16/2024
 ms.author: lajanuar
 ms.custom:
   - devx-track-csharp

@@ -106,7 +106,8 @@ Open the `index.js` file in Visual Studio Code or your favorite IDE and select o
 ## Use the Read model

 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");

 //use your `key` and `endpoint` environment variables
 const key = process.env['DI_KEY'];

@@ -202,7 +203,8 @@ Visit the Azure samples repository on GitHub and view the [`read` model output](
 ## Use the Layout model

 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");

 //use your `key` and `endpoint` environment variables
 const key = process.env['DI_KEY'];

@@ -272,7 +274,8 @@ Visit the Azure samples repository on GitHub and view the [layout model output](
 ## Use the General document model

 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");

 //use your `key` and `endpoint` environment variables
 const key = process.env['DI_KEY'];

@@ -318,7 +321,8 @@ Visit the Azure samples repository on GitHub and view the [general document mode
 ## Use the W-2 tax model

 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");

 //use your `key` and `endpoint` environment variables
 const key = process.env['DI_KEY'];

@@ -397,7 +401,8 @@ Visit the Azure samples repository on GitHub and view the [W-2 tax model output]
 ## Use the Invoice model

 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");

 //use your `key` and `endpoint` environment variables
 const key = process.env['DI_KEY'];

@@ -459,7 +464,8 @@ Visit the Azure samples repository on GitHub and view the [invoice model output]
 ## Use the Receipt model

 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");

 //use your `key` and `endpoint` environment variables
 const key = process.env['DI_KEY'];

@@ -518,7 +524,8 @@ Visit the Azure samples repository on GitHub and view the [receipt model output]
 ## Use the ID document model

 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");

 //use your `key` and `endpoint` environment variables
 const key = process.env['DI_KEY'];

articles/ai-services/document-intelligence/quickstarts/includes/javascript-sdk.md
6 additions, 4 deletions

@@ -73,7 +73,7 @@ In this quickstart, use the following features to analyze and extract data and v
 4. Install the `ai-document-intelligence` client library and `azure/identity` npm packages:

    ```console
-   npm i @azure-rest/[email protected]
+   npm i @azure-rest/[email protected] @azure/identity
    ```

@@ -146,10 +146,11 @@ Extract text, selection marks, text styles, table structures, and bounding regio
 :::moniker range="doc-intel-4.0.0"

 ```javascript
-const { AzureKeyCredential, DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");

 // set `<your-key>` and `<your-endpoint>` variables with the values from the Azure portal.
-const key = "<your-key>";
+const key = "<your-key";
 const endpoint = "<your-endpoint>";

 // sample document

@@ -311,7 +312,8 @@ In this example, we analyze an invoice using the **prebuilt-invoice** model.
 ```javascript
-const { AzureKeyCredential, DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");

 // set `<your-key>` and `<your-endpoint>` variables with the values from the Azure portal.
 const key = "<your-key>";

articles/ai-services/immersive-reader/overview.md
4 additions, 0 deletions

@@ -69,6 +69,10 @@ With Immersive Reader, you can break words into syllables to improve readability
 Immersive Reader is a standalone web application. When it's invoked, the Immersive Reader client library displays on top of your existing web application in an `iframe`. When your web application calls the Immersive Reader service, you specify the content to show the reader. The Immersive Reader client library handles the creation and styling of the `iframe` and communication with the Immersive Reader backend service. The Immersive Reader service processes the content for parts of speech, text to speech, translation, and more.

+## Data privacy for Immersive reader
+
+Immersive reader doesn't store any customer data.
+
 ## Next step

 The Immersive Reader client library is available in C#, JavaScript, Java (Android), Kotlin (Android), and Swift (iOS). Get started with:

articles/ai-services/openai/how-to/content-filters.md
2 additions, 2 deletions

@@ -6,7 +6,7 @@ description: Learn how to use content filters (preview) with Azure OpenAI Servic
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 03/29/2024
+ms.date: 04/16/2024
 author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false

@@ -15,7 +15,7 @@ recommendations: false
 # How to configure content filters with Azure OpenAI Service

 > [!NOTE]
-> All customers have the ability to modify the content filters to be stricter (for example, to filter content at lower severity levels than the default). Approval is required for turning the content filters partially or fully off. Managed customers only may apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR).
+> All customers have the ability to modify the content filters and configure the severity thresholds (low, medium, high). Approval is required for turning the content filters partially or fully off. Managed customers only may apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR).

 The content filtering system integrated into Azure OpenAI Service runs alongside the core models and uses an ensemble of multi-class classification models to detect four categories of harmful content (violence, hate, sexual, and self-harm) at four severity levels respectively (safe, low, medium, and high), and optional binary classifiers for detecting jailbreak risk, existing text, and code in public repositories. The default content filtering configuration is set to filter at the medium severity threshold for all four content harms categories for both prompts and completions. That means that content that is detected at severity level medium or high is filtered, while content detected at severity level low or safe is not filtered by the content filters. Learn more about content categories, severity levels, and the behavior of the content filtering system [here](../concepts/content-filter.md). Jailbreak risk detection and protected text and code models are optional and off by default. For jailbreak and protected material text and code models, the configurability feature allows all customers to turn the models on and off. The models are by default off and can be turned on per your scenario. Some models are required to be on for certain scenarios to retain coverage under the [Customer Copyright Commitment](/legal/cognitive-services/openai/customer-copyright-commitment?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
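The default behavior described in the paragraph above (content at medium or high severity is filtered; low or safe passes) can be sketched as a small severity check. The result shape loosely mirrors the `content_filter_results` object Azure OpenAI returns per category, but treat the exact shape and this threshold logic as assumptions of the sketch, not the documented contract:

```javascript
// Sketch: would a result be filtered under a given severity threshold?
// Severity ordering follows the article: safe < low < medium < high.
// NOTE: the category/result shape here loosely mirrors Azure OpenAI's
// content_filter_results and is an assumption of this sketch.
const SEVERITY_ORDER = ["safe", "low", "medium", "high"];

function isFiltered(contentFilterResults, threshold = "medium") {
  const cutoff = SEVERITY_ORDER.indexOf(threshold);
  return Object.values(contentFilterResults).some(
    (category) => SEVERITY_ORDER.indexOf(category.severity) >= cutoff
  );
}
```

With the default "medium" threshold, a response flagged medium in any of the four categories is filtered, while one flagged only low or safe passes through.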

articles/ai-services/speech-service/text-to-speech-avatar/custom-avatar-endpoint.md (new file)
59 additions, 0 deletions

@@ -0,0 +1,59 @@
+---
+title: Deploy your custom text to speech avatar model as an endpoint - Speech service
+titleSuffix: Azure AI services
+description: Learn about how to deploy your custom text to speech avatar model as an endpoint.
+author: sally-baolian
+manager: nitinme
+ms.service: azure-ai-speech
+ms.topic: how-to
+ms.date: 4/15/2024
+ms.author: v-baolianzou
+---
+
+# Deploy your custom text to speech avatar model as an endpoint
+
+You must deploy the custom avatar to an endpoint before you can use it. Once your custom text to speech avatar model is successfully trained through our manual process, we will notify you. Then you can deploy it to a custom avatar endpoint. You can create up to 10 custom avatar endpoints for each standard (S0) Speech resource.
+
+After you deploy your custom avatar, it's available to use in Speech Studio or through API:
+
+- The avatar appears in the avatar list of text to speech avatar on [Speech Studio](https://speech.microsoft.com/portal/talkingavatar).
+- The avatar appears in the avatar list of live chat avatar on [Speech Studio](https://speech.microsoft.com/portal/livechat).
+- You can call the avatar from the API by specifying the avatar model name.
+
+## Add a deployment endpoint
+
+To create a custom avatar endpoint, follow these steps:
+
+1. Sign in to [Speech Studio](https://speech.microsoft.com/portal).
+1. Navigate to **Custom Avatar** > Your project name > **Train model**.
+1. All available models are listed on the **Train model** page. Select a model link to view more information, such as the created date and a preview image of the custom avatar.
+1. Select a model that you would like to deploy, then select the **Deploy model** button above the list.
+1. Confirm the deployment to create your endpoint.
+
+Once your model is successfully deployed as an endpoint, you can select the endpoint link on the **Deploy model** page. There, you'll find a link to the text to speech avatar portal on Speech Studio, where you can try and create videos with your custom avatar using text input.
+
+## Remove a deployment endpoint
+
+To remove a deployment endpoint, follow these steps:
+
+1. Sign in to [Speech Studio](https://speech.microsoft.com/portal).
+1. Navigate to **Custom Avatar** > Your project name > **Train model**.
+1. All available models are listed on the **Train model** page. Select a model link to view more information, such as the created date and a preview image of the custom avatar.
+1. Select a model on the **Train model** page. If it's in "Succeeded" status, it means it's in hosting status. You can select the **Delete** button and confirm the deletion to remove the hosting.
+
+## Use your custom neural voice
+
+If you're also creating a custom neural voice for the actor, the avatar can be highly realistic. For more information, see [What is custom text to speech avatar](./what-is-custom-text-to-speech-avatar.md).
+
+[Custom neural voice](../custom-neural-voice.md) and [custom text to speech avatar](what-is-custom-text-to-speech-avatar.md) are separate features. You can use them independently or together.
+
+If you've built a custom neural voice (CNV) and would like to use it together with the custom avatar, pay attention to the following points:
+
+- Ensure that the CNV endpoint is created in the same Speech resource as the custom avatar endpoint. You can see the CNV voice option in the voices list of the [avatar content generation page](https://speech.microsoft.com/portal/talkingavatar) and [live chat voice settings](https://speech.microsoft.com/portal/livechat).
+- If you're using the batch synthesis for avatar API, add the "customVoices" property to associate the deployment ID of the CNV model with the voice name in the request. For more information, refer to the [Text to speech properties](batch-synthesis-avatar-properties.md#text-to-speech-properties).
+- If you're using real-time synthesis for avatar API, refer to our sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser/avatar) to set the custom neural voice.
+- If your custom neural voice endpoint is in a different Speech resource from the custom avatar endpoint, refer to [Train your professional voice model](../professional-voice-train-voice.md#copy-your-voice-model-to-another-project) to copy the CNV model to the same Speech resource as the custom avatar endpoint.
+
+## Next steps
+
+- Learn more about custom text to speech avatar in the [overview](what-is-custom-text-to-speech-avatar.md).
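The "customVoices" association described in the bullet list above (mapping a CNV voice name to its deployment ID in a batch synthesis request) can be sketched as a request-body builder. The "customVoices" property name comes from the article; the surrounding field names, the voice name, and the deployment ID are placeholder assumptions of this sketch, not the documented request schema:

```javascript
// Sketch: build a batch-synthesis request body that associates a custom
// neural voice name with its deployment ID via "customVoices".
// NOTE: "customVoices" is named in the article; all other fields, the voice
// name, and the deployment ID are hypothetical placeholders.
function buildAvatarSynthesisBody(voiceName, deploymentId, text) {
  return {
    inputKind: "PlainText",            // assumed input kind
    inputs: [{ content: text }],       // assumed input shape
    customVoices: { [voiceName]: deploymentId },
    synthesisConfig: { voice: voiceName }, // assumed config shape
  };
}
```

Check the linked batch-synthesis properties article for the authoritative field names before using a body like this against the API.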

articles/ai-services/speech-service/toc.yml
3 additions, 0 deletions

@@ -224,6 +224,9 @@ items:
   - name: How to record video samples
     href: text-to-speech-avatar/custom-avatar-record-video-samples.md
     displayName: avatar
+  - name: Deploy your custom text to speech avatar model as an endpoint
+    href: text-to-speech-avatar/custom-avatar-endpoint.md
+    displayName: avatar
   - name: Audio Content Creation
     href: how-to-audio-content-creation.md
     displayName: acc

articles/ai-services/translator/containers/configuration.md
2 additions, 2 deletions

@@ -74,7 +74,7 @@ If you need to configure an HTTP proxy for making outbound requests, use these t
 | Name | Data type | Description |
 |--|--|--|
-|HTTPS_PROXY|string|The proxy to use, for example, `https://proxy:8888`<br>`<proxy-url>`|
+|HTTPS_PROXY|string|The proxy URL, for example, `https://proxy:8888`|

 ```bash
 docker run --rm -it -p 5000:5000 \

@@ -84,7 +84,7 @@ docker run --rm -it -p 5000:5000 \
 Eula=accept \
 Billing=<endpoint> \
 ApiKey=<api-key> \
-HTTPS_PROXY=<proxy-url> \
+HTTPS_PROXY=<proxy-url>
 ```

 ## Logging settings
