
Commit 499b874

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into ml-upgrade

2 parents: b956469 + 799256e

171 files changed: +1810 −1535 lines


.openpublishing.redirection.json

Lines changed: 0 additions & 5 deletions

```diff
@@ -10640,11 +10640,6 @@
   "redirect_url": "/azure/orbital/overview",
   "redirect_document_id": false
 },
-{
-  "source_path_from_root": "/articles/load-balancer/cross-region-overview.md",
-  "redirect_url": "/azure/reliability/reliability-load-balancer",
-  "redirect_document_id": false
-},
 {
   "source_path_from_root": "/articles/load-balancer/load-balancer-standard-availability-zones.md",
   "redirect_url": "/azure/reliability/reliability-load-balancer",
```

articles/ai-services/document-intelligence/concept-read.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -31,7 +31,7 @@ ms.author: lajanuar
 
 > [!NOTE]
 >
-> For extracting text from external images like labels, street signs, and posters, use the [Azure AI Vision v4.0 preview Read](../../ai-services/Computer-vision/concept-ocr.md) feature optimized for general, non-document images with a performance-enhanced synchronous API that makes it easier to embed OCR in your user experience scenarios.
+> For extracting text from external images like labels, street signs, and posters, use the [Azure AI Image Analysis v4.0 Read](../../ai-services/Computer-vision/concept-ocr.md) feature optimized for general, non-document images with a performance-enhanced synchronous API that makes it easier to embed OCR in your user experience scenarios.
 >
 
 Document Intelligence Read Optical Character Recognition (OCR) model runs at a higher resolution than Azure AI Vision Read and extracts print and handwritten text from PDF documents and scanned images. It also includes support for extracting text from Microsoft Word, Excel, PowerPoint, and HTML documents. It detects paragraphs, text lines, words, locations, and languages. The Read model is the underlying OCR engine for other Document Intelligence prebuilt models like Layout, General Document, Invoice, Receipt, Identity (ID) document, Health insurance card, W2 in addition to custom models.
```

articles/ai-services/document-intelligence/faq.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -426,7 +426,7 @@ sections:
   - question: |
       How can I move my trained models from one environment (like beta) to another (like production)?
     answer: |
-      The Copy API enables this scenario by allowing you to copy custom models from one Document Intelligence account or into others, which can exist in any supported geographical region. Follow [this document](disaster-recovery.md) for detailed instructions. The copy operation is limited to copying models within the specific cloud environment the model was trained in. For instance, copying models from the public cloud to the Azure Government clod isn't supported.
+      The Copy API enables this scenario by allowing you to copy custom models from one Document Intelligence account or into others, which can exist in any supported geographical region. Follow [this document](disaster-recovery.md) for detailed instructions. The copy operation is limited to copying models within the specific cloud environment the model was trained in. For instance, copying models from the public cloud to the Azure Government cloud isn't supported.
 
   - question: |
       Why was I charged for Layout when running custom training?
```

articles/ai-services/language-service/native-document-support/use-native-documents.md

Lines changed: 9 additions & 9 deletions

```diff
@@ -28,15 +28,15 @@ ms.author: lajanuar
 
 Azure AI Language is a cloud-based service that applies Natural Language Processing (NLP) features to text-based data. The native document support capability enables you to send API requests asynchronously, using an HTTP POST request body to send your data and HTTP GET request query string to retrieve the processed data.
 
-A native document refers to the file format used to create the original document such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing prior to using Azure AI Language resource capabilities. Currently, native document support is available for the following capabilities:
+A native document refers to the file format used to create the original document such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing before using Azure AI Language resource capabilities. Currently, native document support is available for the following capabilities:
 
 * [Personally Identifiable Information (PII)](../personally-identifiable-information/overview.md). The PII detection feature can identify, categorize, and redact sensitive information in unstructured text. The `PiiEntityRecognition` API supports native document processing.
 
 * [Document summarization](../summarization/overview.md). Document summarization uses natural language processing to generate extractive (salient sentence extraction) or abstractive (contextual word extraction) summaries for documents. Both `AbstractiveSummarization` and `ExtractiveSummarization` APIs support native document processing.
 
 ## Supported document formats
 
-Applications use native file formats to create, save, or open native documents. Currently **PII** and **Document summarization** capabilities supports the following native document formats:
+Applications use native file formats to create, save, or open native documents. Currently **PII** and **Document summarization** capabilities supports the following native document formats:
 
 |File type|File extension|Description|
 |---------|--------------|-----------|
@@ -69,7 +69,7 @@ A native document refers to the file format used to create the original document
 
 > [!NOTE]
 > The cURL package is pre-installed on most Windows 10 and Windows 11 and most macOS and Linux distributions. You can check the package version with the following commands:
-> Windows: `curl.exe -V`.
+> Windows: `curl.exe -V`
 > macOS `curl -V`
 > Linux: `curl --version`
 
@@ -78,7 +78,7 @@ A native document refers to the file format used to create the original document
 * [Windows](https://curl.haxx.se/windows/).
 * [Mac or Linux](https://learn2torials.com/thread/how-to-install-curl-on-mac-or-linux-(ubuntu)-or-windows).
 
-* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
 
 * An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure Blob Storage account for your source and target files:
 
@@ -128,7 +128,7 @@ Your Language resource needs granted access to your storage account before it ca
 
 * [**Shared access signature (SAS) tokens**](shared-access-signatures.md). User delegation SAS tokens are secured with Microsoft Entra credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
 
-* [**Managed identity role-based access control (RBAC)**](managed-identities.md). Managed identities for Azure resources are service principals that create a Microsoft Entra identity and specific permissions for Azure managed resources
+* [**Managed identity role-based access control (RBAC)**](managed-identities.md). Managed identities for Azure resources are service principals that create a Microsoft Entra identity and specific permissions for Azure managed resources.
 
 For this project, we authenticate access to the `source location` and `target location` URLs with Shared Access Signature (SAS) tokens appended as query strings. Each token is assigned to a specific blob (file).
 
@@ -177,7 +177,7 @@ For this quickstart, you need a **source document** uploaded to your **source co
       "language": "en-US",
       "id": "Output-excel-file",
       "source": {
-        "location": "{your-source-container-with-SAS-URL}"
+        "location": "{your-source-blob-with-SAS-URL}"
       },
       "target": {
         "location": "{your-target-container-with-SAS-URL}"
@@ -189,8 +189,8 @@ For this quickstart, you need a **source document** uploaded to your **source co
     {
       "kind": "PiiEntityRecognition",
      "parameters":{
-        "excludePiiCategoriesredac" : ["PersonType", "Category2", "Category3"],
-        "redactionPolicy": "UseEntityTypeName"
+        "excludePiiCategories" : ["PersonType", "Category2", "Category3"],
+        "redactionPolicy": "UseRedactionCharacterWithRefId"
       }
     }
   ]
@@ -344,7 +344,7 @@ For this project, you need a **source document** uploaded to your **source conta
   "documents":[
     {
      "source":{
-        "location":"{your-source-container-SAS-URL}"
+        "location":"{your-source-blob-SAS-URL}"
      },
      "targets":
      {
```

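The hunks above correct the PII request body: the parameter key becomes `excludePiiCategories` (previously the garbled `excludePiiCategoriesredac`), the redaction policy becomes `UseRedactionCharacterWithRefId`, and the source location now points at a blob (a single file) rather than a container. As a minimal sketch, the corrected payload can be assembled like this; the `analysisInput`/`tasks` wrapper keys follow the Language analyze-documents job pattern and are assumptions here, and the SAS URLs are placeholders, not real values:

```python
import json

def build_pii_request(source_blob_sas_url: str, target_container_sas_url: str) -> dict:
    """Sketch of the corrected native-document PII request body from the diff above."""
    return {
        "analysisInput": {
            "documents": [
                {
                    "language": "en-US",
                    "id": "Output-excel-file",
                    # Corrected: a blob (file) SAS URL, not a container SAS URL
                    "source": {"location": source_blob_sas_url},
                    "target": {"location": target_container_sas_url},
                }
            ]
        },
        "tasks": [
            {
                "kind": "PiiEntityRecognition",
                "parameters": {
                    # Corrected key name per the diff (was "excludePiiCategoriesredac")
                    "excludePiiCategories": ["PersonType", "Category2", "Category3"],
                    # Corrected value per the diff (was "UseEntityTypeName")
                    "redactionPolicy": "UseRedactionCharacterWithRefId",
                },
            }
        ],
    }

body = build_pii_request("{your-source-blob-with-SAS-URL}",
                         "{your-target-container-with-SAS-URL}")
print(json.dumps(body, indent=2))
```

This is only a shape check for the corrected fields; consult the linked quickstart for the full request, headers, and endpoint.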
articles/ai-services/openai/reference.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -61,7 +61,7 @@ POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
 - `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
 - `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
 - `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
-- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-12-15-preview/inference.json)
+- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
 
 **Request body**
 
```

articles/ai-services/speech-service/how-to-pronunciation-assessment.md

Lines changed: 6 additions & 6 deletions

```diff
@@ -621,15 +621,15 @@ The following table summarizes which features that locales support. For more spe
 
 | Phoneme alphabet | IPA | SAPI |
 |:-----------------|:--------|:-----|
-| Phoneme name | `en-US` | `en-US`, `en-GB`, `zh-CN` |
-| Syllable group | `en-US` | `en-US`, `en-GB` |
-| Spoken phoneme | `en-US` | `en-US`, `en-GB` |
+| Phoneme name | `en-US` | `en-US`, `zh-CN` |
+| Syllable group | `en-US` | `en-US`|
+| Spoken phoneme | `en-US` | `en-US` |
 
 ### Syllable groups
 
 Pronunciation assessment can provide syllable-level assessment results. A word is typically pronounced syllable by syllable rather than phoneme by phoneme. Grouping in syllables is more legible and aligned with speaking habits.
 
-Pronunciation assessment supports syllable groups only in `en-US` with IPA and in both `en-US` and `en-GB` with SAPI.
+Pronunciation assessment supports syllable groups only in `en-US` with IPA and with SAPI.
 
 The following table compares example phonemes with the corresponding syllables.
 
@@ -644,7 +644,7 @@ To request syllable-level results along with phonemes, set the granularity [conf
 
 ### Phoneme alphabet format
 
-Pronunciation assessment supports phoneme name in `en-US` with IPA and in `en-US`, `en-GB` and `zh-CN` with SAPI.
+Pronunciation assessment supports phoneme name in `en-US` with IPA and in `en-US` and `zh-CN` with SAPI.
 
 For locales that support phoneme name, the phoneme name is provided together with the score. Phoneme names help identify which phonemes were pronounced accurately or inaccurately. For other locales, you can only get the phoneme score.
 
@@ -722,7 +722,7 @@ pronunciationAssessmentConfig?.phonemeAlphabet = "IPA"
 
 With spoken phonemes, you can get confidence scores that indicate how likely the spoken phonemes matched the expected phonemes.
 
-Pronunciation assessment supports spoken phonemes in `en-US` with IPA and in both `en-US` and `en-GB` with SAPI.
+Pronunciation assessment supports spoken phonemes in `en-US` with IPA and with SAPI.
 
 For example, to obtain the complete spoken sound for the word `Hello`, you can concatenate the first spoken phoneme for each expected phoneme with the highest confidence score. In the following assessment result, when you speak the word `hello`, the expected IPA phonemes are `h ɛ l oʊ`. However, the actual spoken phonemes are `h ə l oʊ`. You have five possible candidates for each expected phoneme in this example. The assessment result shows that the most likely spoken phoneme was `ə` instead of the expected phoneme `ɛ`. The expected phoneme `ɛ` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
 
```
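The "For example" paragraph in the last hunk describes concatenating, per expected phoneme, the spoken-phoneme candidate with the highest confidence score. A minimal sketch of that selection logic, using a simplified result shape (the structure and the non-`ɛ` confidence values for `h`, `l`, and `oʊ` are illustrative, not actual Speech SDK output):

```python
# Each expected phoneme carries candidate spoken phonemes with confidence scores
# (simplified shape; real pronunciation assessment results are richer).
expected = [
    {"phoneme": "h",  "spoken": [{"phoneme": "h", "confidence": 90},
                                 {"phoneme": "ɦ", "confidence": 5}]},
    {"phoneme": "ɛ",  "spoken": [{"phoneme": "ə", "confidence": 52},   # most likely
                                 {"phoneme": "ɛ", "confidence": 47},   # the expected phoneme
                                 {"phoneme": "ʌ", "confidence": 17},
                                 {"phoneme": "æ", "confidence": 2}]},
    {"phoneme": "l",  "spoken": [{"phoneme": "l", "confidence": 95}]},
    {"phoneme": "oʊ", "spoken": [{"phoneme": "oʊ", "confidence": 88}]},
]

def most_likely_pronunciation(expected_phonemes):
    """For each expected phoneme, keep the spoken candidate with the highest
    confidence, then concatenate to get the complete spoken sound."""
    return " ".join(
        max(p["spoken"], key=lambda c: c["confidence"])["phoneme"]
        for p in expected_phonemes
    )

# ə (52) beats the expected ɛ (47), matching the hello example in the text
print(most_likely_pronunciation(expected))
```

The confidence scores 52, 47, 17, and 2 for the expected phoneme `ɛ` come directly from the paragraph above; the rest of the fixture is made up for the sketch.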
articles/ai-services/speech-service/personal-voice-how-to-use.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -47,6 +47,8 @@ Here's example SSML in a request for text to speech with the voice name and the
 You can use the SSML via the [Speech SDK](./get-started-text-to-speech.md), [REST API](rest-text-to-speech.md), or [batch synthesis API](batch-synthesis.md).
 
 * **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text to speech.
+  * When you use Speech SDK, don't set Endpoint Id, just like prebuild voice.
+  * When you use REST API, please use prebuilt neural voices endpoint.
 
 * **Asynchronous synthesis of long audio**: Use the [batch synthesis API](batch-synthesis.md) (Preview) to asynchronously synthesize text to speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or Speech to text REST API, responses aren't returned in real-time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
```
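The new bullets say a personal voice is requested like a prebuilt voice: no endpoint ID with the Speech SDK, and the prebuilt neural voices endpoint with the REST API; the voice and speaker are selected entirely in the SSML. As an illustration, a sketch that assembles such SSML; the `DragonLatestNeural` voice name and the `mstts:ttsembedding` element are assumptions drawn from the personal voice docs, not part of this diff, and the speaker profile ID is a placeholder:

```python
def build_personal_voice_ssml(speaker_profile_id: str, text: str,
                              voice_name: str = "DragonLatestNeural") -> str:
    """Sketch: SSML carrying the voice name and speaker profile ID for a
    personal-voice synthesis request (element names are assumptions)."""
    return (
        "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' "
        "xmlns:mstts='http://www.w3.org/2001/mstts' xml:lang='en-US'>"
        f"<voice name='{voice_name}'>"
        # The speaker profile ID identifies the personal voice; no endpoint ID is set.
        f"<mstts:ttsembedding speakerProfileId='{speaker_profile_id}'>"
        f"{text}"
        "</mstts:ttsembedding></voice></speak>"
    )

ssml = build_personal_voice_ssml("{your-speaker-profile-id}",
                                 "Hello from my personal voice.")
print(ssml)
```

Per the new bullets, this SSML would then be sent through a standard synthesizer (Speech SDK without an endpoint ID, or the prebuilt neural voices REST endpoint); see the linked SDK and REST API articles for the request mechanics.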
