
Commit 78b8379

Merge pull request #2343 from MicrosoftDocs/main
1/16/2025 11:00 AM IST Publish
2 parents f3626a8 + 9d5e704 commit 78b8379

22 files changed: +82 −87 lines

articles/ai-services/agents/faq.yml

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@ metadata:
 manager: nitinme
 ms.service: azure-ai-agent-service
 ms.topic: faq
-ms.date: 12/20/2024
+ms.date: 01/15/2025
 ms.author: aahi
 author: aahill
 title: Azure AI Agent Service frequently asked questions
@@ -49,7 +49,7 @@ sections:
 * If you've enabled the Code Interpreter tool - for example your agent calls Code Interpreter simultaneously in two different threads, this would create two Code Interpreter sessions, each of which would be charged. Each session is active by default for one hour, which means that you would only pay this fee once if your user keeps giving instructions to Code Interpreter in the same thread for up to one hour.
 * File search is billed based on the vector storage used.

-For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/).
+For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/ai-foundry/).
 - question: |
 Is there any additional pricing or quota for using AI Agent Service?
 answer: |

articles/ai-services/agents/how-to/use-your-own-resources.md

Lines changed: 3 additions & 2 deletions
@@ -6,7 +6,7 @@ services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-agent-service
 ms.topic: how-to
-ms.date: 12/11/2024
+ms.date: 01/15/2025
 author: fosteramanda
 ms.author: fosteramanda
 ms.custom: azure-ai-agents
@@ -17,7 +17,8 @@ ms.custom: azure-ai-agents
 Use this article if you want to use the Azure Agent Service with resources you already have.

 > [!NOTE]
-> If you use an existing AI Services / Azure OpenAI Service resource, no model will be deployed. You can deploy a model to the resource after the agent setup is complete.
+> * If you use an existing AI Services / Azure OpenAI Service resource, no model will be deployed. You can deploy a model to the resource after the agent setup is complete.
+> * Make sure your Azure OpenAI resource and Azure AI Foundry project are in the same region.

 ## Choose basic or standard agent setup
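The note added in this change says no model is deployed automatically when you bring an existing AI Services / Azure OpenAI resource, and that you can deploy one after agent setup completes. As a rough, hedged sketch of that follow-up step (not part of the changed article), the snippet below uses the azure-mgmt-cognitiveservices management package; the subscription ID, resource group, resource name, deployment name, model version, and SKU values are all placeholder assumptions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
from azure.mgmt.cognitiveservices.models import (
    Deployment,
    DeploymentModel,
    DeploymentProperties,
    Sku,
)

# Management-plane client for the subscription that holds the existing resource.
client = CognitiveServicesManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",  # placeholder
)

# Deploy a model to the existing AI Services / Azure OpenAI resource
# after agent setup is complete (names and versions are illustrative).
poller = client.deployments.begin_create_or_update(
    resource_group_name="<resource-group>",       # placeholder
    account_name="<existing-openai-resource>",    # placeholder
    deployment_name="gpt-4o",
    deployment=Deployment(
        sku=Sku(name="Standard", capacity=1),
        properties=DeploymentProperties(
            model=DeploymentModel(format="OpenAI", name="gpt-4o", version="2024-08-06"),
        ),
    ),
)
print("Deployed:", poller.result().name)
```

Per the note's other bullet, the resource should be in the same region as your Azure AI Foundry project.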

articles/ai-services/document-intelligence/concept/add-on-capabilities.md

Lines changed: 6 additions & 6 deletions
@@ -6,7 +6,7 @@ author: jaep3347
 manager: nitinme
 ms.service: azure-ai-document-intelligence
 ms.topic: conceptual
-ms.date: 11/19/2024
+ms.date: 01/15/2025
 ms.author: lajanuar
 monikerRange: '>=doc-intel-3.1.0'
 ---
@@ -64,14 +64,14 @@ Document Intelligence supports more sophisticated and modular analysis capabilities

 |Add-on Capability| Add-On/Free|**2024-11-30 (GA)**|[`2023-07-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[`2022-08-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)|[v2.1 (GA)](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)|
 |----------------|-----------|---|--|---|---|
-|Font property extraction|Add-On| ✔️| ✔️| n/a| n/a|
-|Formula extraction|Add-On| ✔️| ✔️| n/a| n/a|
-|High resolution extraction|Add-On| ✔️| ✔️| n/a| n/a|
 |Barcode extraction|Free| ✔️| ✔️| n/a| n/a|
 |Language detection|Free| ✔️| ✔️| n/a| n/a|
 |Key value pairs|Free| ✔️|n/a|n/a| n/a|
-|Query fields|Add-On*| ✔️|n/a|n/a| n/a|
-|Searhable pdf|Add-On**| ✔️|n/a|n/a| n/a|
+|Searchable PDF|Free| ✔️|n/a|n/a| n/a|
+|Font property extraction|**Add-On**| ✔️| ✔️| n/a| n/a|
+|Formula extraction|**Add-On**| ✔️| ✔️| n/a| n/a|
+|High resolution extraction|**Add-On**| ✔️| ✔️| n/a| n/a|
+|Query fields|**Add-On**| ✔️|n/a|n/a| n/a|

 ✱ Add-On - Query fields are priced differently than the other add-on features. See [pricing](https://azure.microsoft.com/pricing/details/ai-document-intelligence/) for details. </br>
 ** Add-On - Searchable pdf is available only with Read model as an add-on feature.
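For context on the reordered table above, a brief hedged sketch of how add-on capabilities are requested at analysis time may help; it is not part of the changed article, it assumes the azure-ai-documentintelligence Python package, and the endpoint, key, and document URL are placeholders. Which features are accepted still depends on the API version, as the table shows.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.ai.documentintelligence.models import AnalyzeDocumentRequest, DocumentAnalysisFeature

client = DocumentIntelligenceClient(
    endpoint="https://<resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<key>"),                       # placeholder
)

# Request two add-on capabilities from the table alongside the layout model:
# high resolution extraction and formula extraction.
poller = client.begin_analyze_document(
    "prebuilt-layout",
    AnalyzeDocumentRequest(url_source="https://example.com/sample.pdf"),  # placeholder document
    features=[
        DocumentAnalysisFeature.OCR_HIGH_RESOLUTION,
        DocumentAnalysisFeature.FORMULAS,
    ],
)
result = poller.result()
print(len(result.pages), "pages analyzed")
```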

articles/ai-services/openai/includes/model-matrix/global-batch.md

Lines changed: 14 additions & 14 deletions
@@ -11,24 +11,24 @@ ms.date: 01/15/2025

 | **Region** | **gpt-4o**, **2024-05-13** | **gpt-4o**, **2024-08-06** | **gpt-4o**, **2024-11-20** | **gpt-4o-mini**, **2024-07-18** | **gpt-4**, **0613** | **gpt-4**, **turbo-2024-04-09** | **gpt-35-turbo**, **0613** | **gpt-35-turbo**, **1106** | **gpt-35-turbo**, **0125** |
 |:-------------------|:--------------------------:|:--------------------------:|:--------------------------:|:-------------------------------:|:-------------------:|:-------------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|
-| australiaeast ||| - |||||||
-| brazilsouth ||| - |||||||
-| canadaeast ||| - |||||||
+| australiaeast ||| |||||||
+| brazilsouth ||| |||||||
+| canadaeast ||| |||||||
 | eastus ||||||||||
 | eastus2 ||||||||||
 | francecentral ||||||||||
-| germanywestcentral ||| - |||||||
-| japaneast ||| - |||||||
-| koreacentral ||| - |||||||
-| northcentralus ||| - |||||||
-| norwayeast ||| - |||||||
-| polandcentral ||| - |||||||
-| southafricanorth ||| - |||||||
+| germanywestcentral ||| |||||||
+| japaneast ||| |||||||
+| koreacentral ||| |||||||
+| northcentralus ||| |||||||
+| norwayeast ||| |||||||
+| polandcentral ||| |||||||
+| southafricanorth ||| |||||||
 | southcentralus ||||||||||
-| southindia ||| - |||||||
+| southindia ||| |||||||
 | swedencentral ||||||||||
-| switzerlandnorth ||| - |||||||
-| uksouth ||| - |||||||
+| switzerlandnorth ||| |||||||
+| uksouth ||| |||||||
 | westeurope ||||||||||
 | westus ||||||||||
-| westus3 ||||||||||
+| westus3 ||||||||||

articles/ai-services/speech-service/how-to-recognize-speech.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ author: eric-urban
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 10/17/2024
+ms.date: 1/16/2025
 ms.author: eur
 ms.devlang: cpp
 ms.custom: devx-track-extended-java, devx-track-go, devx-track-js, devx-track-python

articles/ai-services/speech-service/includes/how-to/recognize-speech/cpp.md

Lines changed: 1 addition & 1 deletion
@@ -233,6 +233,6 @@ speechConfig->SetProperty(PropertyId::Speech_SegmentationStrategy, "Semantic");

 Some of the limitations of semantic segmentation are as follows:
 - You need the Speech SDK version 1.41 or later to use semantic segmentation.
-- Semantic segmentation is only intended for use in [continuous recognition](#continuous-recognition). This includes scenarios such as transcription and captioning. It shouldn't be used in the single recognition and dictation mode.
+- Semantic segmentation is only intended for use in [continuous recognition](#continuous-recognition). This includes scenarios such as dictation and captioning. It shouldn't be used in the single recognition mode or interactive scenarios.
 - Semantic segmentation isn't available for all languages and locales. Currently, semantic segmentation is only available for English (en) locales such as en-US, en-GB, en-IN, and en-AU.
 - Semantic segmentation doesn't yet support confidence scores and NBest lists. As such, we don't recommend semantic segmentation if you're using confidence scores or NBest lists.

articles/ai-services/speech-service/includes/how-to/recognize-speech/csharp.md

Lines changed: 1 addition & 1 deletion
@@ -347,6 +347,6 @@ speechConfig.SetProperty(PropertyId.Speech_SegmentationStrategy, "Semantic");

 Some of the limitations of semantic segmentation are as follows:
 - You need the Speech SDK version 1.41 or later to use semantic segmentation.
-- Semantic segmentation is only intended for use in [continuous recognition](#use-continuous-recognition). This includes scenarios such as transcription and captioning. It shouldn't be used in the single recognition and dictation mode.
+- Semantic segmentation is only intended for use in [continuous recognition](#use-continuous-recognition). This includes scenarios such as dictation and captioning. It shouldn't be used in the single recognition mode or interactive scenarios.
 - Semantic segmentation isn't available for all languages and locales. Currently, semantic segmentation is only available for English (en) locales such as en-US, en-GB, en-IN, and en-AU.
 - Semantic segmentation doesn't yet support confidence scores and NBest lists. As such, we don't recommend semantic segmentation if you're using confidence scores or NBest lists.

articles/ai-services/speech-service/includes/how-to/recognize-speech/java.md

Lines changed: 1 addition & 1 deletion
@@ -250,6 +250,6 @@ speechConfig.SetProperty(PropertyId.Speech_SegmentationStrategy, "Semantic");

 Some of the limitations of semantic segmentation are as follows:
 - You need the Speech SDK version 1.41 or later to use semantic segmentation.
-- Semantic segmentation is only intended for use in [continuous recognition](#use-continuous-recognition). This includes scenarios such as transcription and captioning. It shouldn't be used in the single recognition and dictation mode.
+- Semantic segmentation is only intended for use in [continuous recognition](#use-continuous-recognition). This includes scenarios such as dictation and captioning. It shouldn't be used in the single recognition mode or interactive scenarios.
 - Semantic segmentation isn't available for all languages and locales. Currently, semantic segmentation is only available for English (en) locales such as en-US, en-GB, en-IN, and en-AU.
 - Semantic segmentation doesn't yet support confidence scores and NBest lists. As such, we don't recommend semantic segmentation if you're using confidence scores or NBest lists.

articles/ai-services/speech-service/includes/how-to/recognize-speech/python.md

Lines changed: 1 addition & 1 deletion
@@ -198,6 +198,6 @@ speech_config.set_property(speechsdk.PropertyId.Speech_SegmentationStrategy, "Semantic")

 Some of the limitations of semantic segmentation are as follows:
 - You need the Speech SDK version 1.41 or later to use semantic segmentation.
-- Semantic segmentation is only intended for use in [continuous recognition](#use-continuous-recognition). This includes scenarios such as transcription and captioning. It shouldn't be used in the single recognition and dictation mode.
+- Semantic segmentation is only intended for use in [continuous recognition](#use-continuous-recognition). This includes scenarios such as dictation and captioning. It shouldn't be used in the single recognition mode or interactive scenarios.
 - Semantic segmentation isn't available for all languages and locales. Currently, semantic segmentation is only available for English (en) locales such as en-US, en-GB, en-IN, and en-AU.
 - Semantic segmentation doesn't yet support confidence scores and NBest lists. As such, we don't recommend semantic segmentation if you're using confidence scores or NBest lists.
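The same corrected limitation applies across the C++, C#, Java, and Python includes above, so a single hedged Python sketch is enough: it pairs the semantic segmentation property with continuous recognition rather than single-shot recognition. The key, region, and audio file name are placeholders, and Speech SDK 1.41 or later is assumed.

```python
import time
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")  # placeholders
speech_config.speech_recognition_language = "en-US"  # semantic segmentation is English (en) locales only
speech_config.set_property(speechsdk.PropertyId.Speech_SegmentationStrategy, "Semantic")

audio_config = speechsdk.audio.AudioConfig(filename="meeting.wav")  # placeholder audio file
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

# Continuous recognition: final results arrive through the recognized event,
# not from a single recognize_once() call.
recognizer.recognized.connect(lambda evt: print(evt.result.text))

done = False

def stop(evt):
    global done
    done = True

recognizer.session_stopped.connect(stop)
recognizer.canceled.connect(stop)

recognizer.start_continuous_recognition()
while not done:
    time.sleep(0.5)
recognizer.stop_continuous_recognition()
```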

articles/ai-studio/reference/reference-model-inference-api.md

Lines changed: 0 additions & 1 deletion
@@ -73,7 +73,6 @@ The API indicates how developers can consume predictions for the following modalities

 * [Get info](reference-model-inference-info.md): Returns the information about the model deployed under the endpoint.
 * [Text embeddings](reference-model-inference-embeddings.md): Creates an embedding vector representing the input text.
-* [Text completions](reference-model-inference-completions.md): Creates a completion for the provided prompt and parameters.
 * [Chat completions](reference-model-inference-chat-completions.md): Creates a model response for the given chat conversation.
 * [Image embeddings](reference-model-inference-images-embeddings.md): Creates an embedding vector representing the input text and image.
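As a rough companion to the operations listed in that diff (not part of the reference page), the sketch below calls the chat completions operation through the azure-ai-inference Python package; the endpoint URL and key are placeholder assumptions.

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Client for an endpoint that exposes the Azure AI Model Inference API.
client = ChatCompletionsClient(
    endpoint="https://<your-endpoint>/models",      # placeholder
    credential=AzureKeyCredential("<your-key>"),    # placeholder
)

# Chat completions: create a model response for a chat conversation.
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="In one sentence, what is a text embedding?"),
    ]
)

print(response.choices[0].message.content)
```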
