articles/ai-services/document-intelligence/how-to-guides/disaster-recovery.md (+1/-1)
@@ -33,7 +33,7 @@ ms.author: lajanuar
::: moniker range=">= doc-intel-2.1.0"
-When you create a Document Intelligence resource in the Azure portal, you specify a region. From then on, your resource and all of its operations stay associated with that particular Azure server region. It's rare, but not impossible, to encounter a network issue that hits an entire region. If your solution needs to always be available, then you should design it to either fail-over into another region or split the workload between two or more regions. Both approaches require at least two Document Intelligence resources in different regions and the ability to sync custom models and classifiers across regions.
+When you create a Document Intelligence resource in the Azure portal, you specify a region. From then on, your resource and all of its operations stay associated with that particular Azure region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution must always be available, design it to either fail over to another region or split the workload between two or more regions. Both approaches require at least two Document Intelligence resources in different regions and the ability to sync custom models and classifiers across regions.
The Copy API enables this scenario by allowing you to copy custom models and classifiers from one Document Intelligence account into others, which can exist in any supported geographical region. This guide shows you how to use the Copy REST API with cURL for custom models. You can also use an HTTP request service to issue the requests.
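To sketch the flow that guide covers: the copy is a two-step REST exchange. You first ask the target resource to authorize the copy, then hand that authorization to the source resource, which pushes the model across. The cURL sketch below assumes the v4.0 REST routes and the `2024-11-30` api-version; the endpoints, keys, and model IDs are placeholders, and older Document Intelligence versions use different paths, so verify against the Copy API reference for your version.

```bash
# Step 1: ask the TARGET resource to authorize the copy.
# The response body (saved to copy-auth.json) carries the target resource ID,
# an access token, and the new model's ID and location.
curl -s -X POST "$TARGET_ENDPOINT/documentintelligence/documentModels:authorizeCopy?api-version=2024-11-30" \
  -H "Ocp-Apim-Subscription-Key: $TARGET_KEY" \
  -H "Content-Type: application/json" \
  -d '{"modelId": "my-copied-model", "description": "cross-region backup"}' \
  -o copy-auth.json

# Step 2: tell the SOURCE resource to copy the model, passing the
# authorization through unchanged. Poll the Operation-Location header
# in the response until the copy operation reports success.
curl -s -i -X POST "$SOURCE_ENDPOINT/documentintelligence/documentModels/my-model:copyTo?api-version=2024-11-30" \
  -H "Ocp-Apim-Subscription-Key: $SOURCE_KEY" \
  -H "Content-Type: application/json" \
  -d @copy-auth.json
```

Running both steps after each training run keeps the secondary region's models in sync, which is the prerequisite for the failover designs described above.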
articles/ai-services/speech-service/includes/release-notes/release-notes-sdk.md (+19/-10)
@@ -8,6 +8,25 @@ ms.author: eur
> [!IMPORTANT]
> Content assessment (preview) via the Speech SDK is being retired in July 2025. Instead, you can use Azure OpenAI models to get content assessment results as described in the [content assessment documentation](../../how-to-pronunciation-assessment.md#content-assessment).
+
+### Speech SDK 1.45: 2025-July release
+
+#### New features
+
+* Added support for setting the phrase list grammar weight. (Currently only affects embedded scenarios.)
+* Added more specific file opening error codes.
+* Updated Unicode path support so that SDK Windows DLLs can be located under non-ASCII paths.
+* Updated descriptions of segmentation strategy properties to align with the service logic.
+* [C#, Java] Added support for authentication using ApiKeyCredential.
+
+#### Bug fixes
+
+* Fixed the Microsoft Audio Stack (MAS) initialization error about microphone geometry in certain regions.
+* Fixed profanity settings not working in speech translation (https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/2856).
+* Fixed a crash in intent recognition pattern matching with the Japanese language.
+* Fixed custom domain resolution not working with Node.js v22 or newer.
articles/ai-services/speech-service/includes/release-notes/release-notes-stt.md (+9/-3)
@@ -7,21 +7,27 @@ ms.author: eur
ms.custom: references_regions
---
+
+### July 2025 release
+
+#### Improved speech to text models
+
+The English models (all `en-*` models except `en-IN`) were updated to incorporate a new voice activity detector (VAD), which helps reduce latency by 100 ms or more. It can affect accuracy and silence segmentation both positively and negatively, with the overall aim of reducing latency. Further language expansion is coming in the next few months.
+
### June 2025 release
#### Improved pronunciation assessment model
-We've rolled out significant upgrades to the pronunciation assessment models for `ta-IN` and `ms-MY`. You'll see a noticeable jump in Pearson Correlation Coefficients (PCC), which means more precise and dependable evaluations.
+We rolled out significant upgrades to the pronunciation assessment models for `ta-IN` and `ms-MY`. You see a noticeable jump in Pearson correlation coefficients (PCC), which means more precise and dependable evaluations.
These updated models are ready to use through the API and the Azure AI Foundry playground, just like before.
#### Improved speech to text models
-Accuracy of speech to text models in [fast transcription](../../fast-transcription-create.md) for `de-DE`, `en-US`, `en-GB`, `es-ES`, `es-MX`, `fr-FR`, `it-IT`, `ja-JP`, `ko-KR`, `pt-BR`, and `zh-CN` locales are improved by 10%-25% percent respectively, particularly with improved readaibility and recognition on entities.
+Accuracy of speech to text models in [fast transcription](../../fast-transcription-create.md) for the `de-DE`, `en-US`, `en-GB`, `es-ES`, `es-MX`, `fr-FR`, `it-IT`, `ja-JP`, `ko-KR`, `pt-BR`, and `zh-CN` locales improved by 10 to 25 percent, particularly through better readability and entity recognition.
### May 2025 release
#### Improved speech to text models
-Accuracy of speech to text models for `ta-IN`, `te-IN`, `en-IN`, and `hu-HU` locales are improved by 5-10 percent respectively. We also approximate a 20x reduction in ghost words for the `ta-IN` and `te-IN` models.
+Accuracy of speech to text models for the `ta-IN`, `te-IN`, `en-IN`, and `hu-HU` locales improved by 5 to 10 percent. We also estimate roughly a 20x reduction in ghost words for the `ta-IN` and `te-IN` models.
#### Fast transcription API - Multi-lingual speech transcription
articles/search/chat-completion-skill-example-usage.md (+1/-1)
@@ -117,7 +117,7 @@ This section supplements the [skill reference](cognitive-search-defining-skillse
Once the basic framework of your skillset is created and Azure AI services is configured, you can focus on each individual image skill, defining inputs and source context, and mapping outputs to fields in either an index or knowledge store.
> [!NOTE]
-> For an example skillset that combines image processing with downstream natural language processing, see [REST Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md). It shows how to feed skill imaging output into entity recognition and key phrase extraction.
+> For an example skillset that combines image processing with downstream natural language processing, see [REST Tutorial: Use REST and AI to generate searchable content from Azure blobs](tutorial-skillset.md). It shows how to feed image skill output into entity recognition and key phrase extraction.
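As a hedged illustration of the pattern that note describes, the sketch below defines a two-skill skillset in which OCR text extracted from images feeds an entity recognition skill. The search service name, skillset name, and `2024-07-01` api-version are placeholder assumptions; the linked tutorial remains the authoritative walkthrough.

```bash
# Minimal skillset sketch: the OCR skill's "text" output becomes the
# entity recognition skill's input. Service name, key, and names are placeholders.
curl -s -X PUT "https://<your-service>.search.windows.net/skillsets/image-demo?api-version=2024-07-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $SEARCH_ADMIN_KEY" \
  -d '{
    "name": "image-demo",
    "skills": [
      {
        "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
        "context": "/document/normalized_images/*",
        "inputs":  [ { "name": "image", "source": "/document/normalized_images/*" } ],
        "outputs": [ { "name": "text", "targetName": "text" } ]
      },
      {
        "@odata.type": "#Microsoft.Skills.Text.V3.EntityRecognitionSkill",
        "context": "/document/normalized_images/*",
        "inputs":  [ { "name": "text", "source": "/document/normalized_images/*/text" } ],
        "outputs": [ { "name": "organizations", "targetName": "organizations" } ]
      }
    ]
  }'
```

A key phrase extraction skill can be chained the same way, reading from `/document/normalized_images/*/text` and writing its own output field.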
articles/search/cognitive-search-concept-image-scenarios.md (+1/-1)
@@ -166,7 +166,7 @@ This section supplements the [skill reference](cognitive-search-predefined-skill
Once the basic framework of your skillset is created and Azure AI services is configured, you can focus on each individual image skill, defining inputs and source context, and mapping outputs to fields in either an index or knowledge store.
> [!NOTE]
-> For an example skillset that combines image processing with downstream natural language processing, see [REST Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md). It shows how to feed skill imaging output into entity recognition and key phrase extraction.
+> For an example skillset that combines image processing with downstream natural language processing, see [REST Tutorial: Use REST and AI to generate searchable content from Azure blobs](tutorial-skillset.md). It shows how to feed image skill output into entity recognition and key phrase extraction.