Commit 6863762

Merge pull request #6863 from MicrosoftDocs/main
Auto Publish – main to live - 2025-08-30 05:01 UTC
2 parents d642c49 + 04e2868 commit 6863762

File tree: 4 files changed, +42 −40 lines


articles/ai-foundry/concepts/ai-red-teaming-agent.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -4,7 +4,7 @@ titleSuffix: Azure AI Foundry
 description: This article provides conceptual overview of the AI Red Teaming Agent.
 ms.service: azure-ai-foundry
 ms.topic: how-to
-ms.date: 04/04/2025
+ms.date: 08/29/2025
 ms.reviewer: minthigpen
 ms.author: lagayhar
 author: lgayhardt
```

articles/ai-foundry/how-to/develop/trace-application.md

Lines changed: 23 additions & 26 deletions

```diff
@@ -4,15 +4,15 @@ titleSuffix: Azure AI Foundry
 description: Learn how to trace applications that use OpenAI SDK in Azure AI Foundry
 author: lgayhardt
 ms.author: lagayhar
-ms.reviewer: amibp
-ms.date: 05/19/2025
+ms.reviewer: ychen
+ms.date: 08/29/2025
 ms.service: azure-ai-foundry
 ms.topic: how-to
 ---
 
 # Trace AI applications using OpenAI SDK
 
-Tracing provides deep visibility into execution of your application by capturing detailed telemetry at each execution step. Such helps diagnose issues and enhance performance by identifying problems such as inaccurate tool calls, misleading prompts, high latency, low-quality evaluation scores, and more.
+Tracing provides deep visibility into execution of your application by capturing detailed telemetry at each execution step. This helps diagnose issues and enhance performance by identifying problems such as inaccurate tool calls, misleading prompts, high latency, low-quality evaluation scores, and more.
 
 This article explains how to implement tracing for AI applications using **OpenAI SDK** with OpenTelemetry in Azure AI Foundry.
```

```diff
@@ -24,47 +24,45 @@ You need the following to complete this tutorial:
 * An AI application that uses **OpenAI SDK** to make calls to models hosted in Azure AI Foundry.
 
-
 ## Enable tracing in your project
 
-Azure AI Foundry stores traces in Azure Application Insight resources using OpenTelemetry. By default, new Azure AI Foundry resources don't provision these resources. You can connect projects to an existing Azure Application Insights resource or create a new one from within the project. You do such configuration once per each Azure AI Foundry resource.
+Azure AI Foundry stores traces in Azure Application Insights resources using OpenTelemetry. By default, new Azure AI Foundry resources don't provision these resources. You can connect projects to an existing Azure Application Insights resource or create a new one from within the project. You do this configuration once for each Azure AI Foundry resource.
 
 The following steps show how to configure your resource:
 
 1. Go to [Azure AI Foundry portal](https://ai.azure.com) and navigate to your project.
 
-2. On the side navigation bar, select **Tracing**.
+1. On the side navigation bar, select **Tracing**.
 
-3. If an Azure Application Insights resource isn't associated with your Azure AI Foundry resource, associate one.
+1. If an Azure Application Insights resource isn't associated with your Azure AI Foundry resource, associate one. If you already have an Application Insights resource associated, you won't see the enable page below and you can skip this step.
 
    :::image type="content" source="../../media/how-to/develop/trace-application/configure-app-insight.png" alt-text="A screenshot showing how to configure Azure Application Insights to the Azure AI Foundry resource." lightbox="../../media/how-to/develop/trace-application/configure-app-insight.png":::
 
-4. To reuse an existing Azure Application Insights, use the drop-down **Application Insights resource name** to locate the resource and select **Connect**.
+1. To reuse an existing Azure Application Insights, use the drop-down **Application Insights resource name** to locate the resource and select **Connect**.
 
-   > [!TIP]
-   > To connect to an existing Azure Application Insights, you need at least contributor access to the Azure AI Foundry resource (or Hub).
+   > [!TIP]
+   > To connect to an existing Azure Application Insights, you need at least contributor access to the Azure AI Foundry resource (or Hub).
 
-5. To connect to a new Azure Application Insights resource, select the option **Create new**.
+1. To connect to a new Azure Application Insights resource, select the option **Create new**.
 
-    1. Use the configuration wizard to configure the new resource's name.
+   1. Use the configuration wizard to configure the new resource's name.
 
-    2. By default, the new resource is created in the same resource group where the Azure AI Foundry resource was created. Use the **Advance settings** option to configure a different resource group or subscription.
+   1. By default, the new resource is created in the same resource group where the Azure AI Foundry resource was created. Use the **Advance settings** option to configure a different resource group or subscription.
 
-      > [!TIP]
-      > To create a new Azure Application Insight resource, you also need contributor role to the resource group you selected (or the default one).
+      > [!TIP]
+      > To create a new Azure Application Insights resource, you also need contributor role to the resource group you selected (or the default one).
 
-   3. Select **Create** to create the resource and connect it to the Azure AI Foundry resource.
+   1. Select **Create** to create the resource and connect it to the Azure AI Foundry resource.
 
-4. Once the connection is configured, you are ready to use tracing in any project within the resource.
+1. Once the connection is configured, you're ready to use tracing in any project within the resource.
 
-5. Go to the landing page of your project and copy the project's endpoint URI. You need it later in the tutorial.
+1. Go to the landing page of your project and copy the project's endpoint URI. You need it later.
 
    :::image type="content" source="../../media/how-to/projects/fdp-project-overview.png" alt-text="A screenshot showing how to copy the project endpoint URI." lightbox="../../media/how-to/projects/fdp-project-overview.png":::
 
 > [!IMPORTANT]
 > Using a project's endpoint requires configuring Microsoft Entra ID in your application. If you don't have Entra ID configured, use the Azure Application Insights connection string as indicated in step 3 of the tutorial.
 
-
 ## Instrument the OpenAI SDK
 
 When developing with the OpenAI SDK, you can instrument your code so traces are sent to Azure AI Foundry. Follow these steps to instrument your code:
```
````diff
@@ -129,15 +127,15 @@ When developing with the OpenAI SDK, you can instrument your code so traces are
 
    :::image type="content" source="../../media/how-to/develop/trace-application/tracing-display-simple.png" alt-text="A screenshot showing how a simple chat completion request is displayed in the trace." lightbox="../../media/how-to/develop/trace-application/tracing-display-simple.png":::
 
-1. It may be useful to capture sections of your code that mixes business logic with models when developing complex applications. OpenTelemetry uses the concept of spans to capture sections you're interested in. To start generating your own spans, get an instance of the current **tracer** object.
+1. It might be useful to capture sections of your code that mix business logic with models when developing complex applications. OpenTelemetry uses the concept of spans to capture sections you're interested in. To start generating your own spans, get an instance of the current **tracer** object.
 
    ```python
    from opentelemetry import trace
 
    tracer = trace.get_tracer(__name__)
    ```
 
-1. Then, use decorators in your method to capture specific scenarios in your code that you are interested in. Such decorators generate spans automatically. The following code example instruments a method called `assess_claims_with_context` with iterates over a list of claims and verify if the claim is supported by the context using an LLM. All the calls made in this method are captured within the same span:
+1. Then, use decorators in your method to capture specific scenarios in your code that you're interested in. These decorators generate spans automatically. The following code example instruments a method called `assess_claims_with_context` that iterates over a list of claims and verifies if the claim is supported by the context using an LLM. All the calls made in this method are captured within the same span:
 
    ```python
    def build_prompt_with_context(claim: str, context: str) -> str:
````
````diff
@@ -170,7 +168,7 @@ When developing with the OpenAI SDK, you can instrument your code so traces are
 
    :::image type="content" source="../../media/how-to/develop/trace-application/tracing-display-decorator.png" alt-text="A screenshot showing how a method using a decorator is displayed in the trace." lightbox="../../media/how-to/develop/trace-application/tracing-display-decorator.png":::
 
-1. You may also want to add extra information to the current span. OpenTelemetry uses the concept of **attributes** for that. Use the `trace` object to access them and include extra information. See how the `assess_claims_with_context` method has been modified to include an attribute:
+1. You might also want to add extra information to the current span. OpenTelemetry uses the concept of **attributes** for that. Use the `trace` object to access them and include extra information. See how the `assess_claims_with_context` method has been modified to include an attribute:
 
    ```python
    @tracer.start_as_current_span("assess_claims_with_context")
````
````diff
@@ -188,12 +186,11 @@ When developing with the OpenAI SDK, you can instrument your code so traces are
         responses.append(response.choices[0].message.content.strip('., '))
 
     return responses
-    ```
-
+    ```
 
 ## Trace to console
 
-It may be useful to also trace your application and send the traces to the local execution console. Such approach may be beneficial when running unit tests or integration tests in your application using an automated CI/CD pipeline. Traces can be sent to the console and captured by your CI/CD tool to further analysis.
+It might be useful to also trace your application and send the traces to the local execution console. This approach might be beneficial when running unit tests or integration tests in your application using an automated CI/CD pipeline. Traces can be sent to the console and captured by your CI/CD tool for further analysis.
 
 Configure tracing as follows:
````

````diff
@@ -271,6 +268,6 @@ Configure tracing as follows:
     }
     ```
 
-## Next steps
+## Related content
 
 * [Trace agents using Azure AI Foundry SDK](trace-agents-sdk.md)
````

articles/ai-foundry/how-to/view-ai-red-teaming-results.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -6,7 +6,7 @@ ms.service: azure-ai-foundry
 ms.custom:
   - references_regions
 ms.topic: how-to
-ms.date: 06/03/2025
+ms.date: 08/29/2025
 ms.reviewer: minthigpen
 ms.author: lagayhar
 author: lgayhardt
```

articles/ai-services/document-intelligence/prebuilt/batch-analysis.md

Lines changed: 17 additions & 12 deletions

```diff
@@ -5,7 +5,7 @@ description: Learn about the Document Intelligence Batch analysis API
 author: laujan
 ms.service: azure-ai-document-intelligence
 ms.topic: conceptual
-ms.date: 02/25/2025
+ms.date: 08/28/2025
 ms.author: lajanuar
 monikerRange: '>=doc-intel-4.0.0'
 ---
```
```diff
@@ -53,7 +53,7 @@ Review [Managed identities for Document Intelligence](../authentication/managed-
 
 Review [**Create SAS tokens**](../authentication/create-sas-tokens.md) to learn more about generating SAS tokens and how they work.
 
-## Calling the batch analysis API
+## Call the batch analysis API
 
 ### 1. Specify the input files
```
````diff
@@ -62,16 +62,19 @@ The batch API supports two options for specifying the files to be processed.
 * If you want to process all the files in a container or a folder, and the number of files is less than the 10000 limit, use the ```azureBlobSource``` object in your request.
 
   ```bash
-  POST /documentModels/{modelId}:analyzeBatch
+  POST {endpoint}/documentintelligence/documentModels/{modelId}:analyzeBatch?api-version=2024-11-30
 
   {
     "azureBlobSource": {
-      "containerUrl": "https://myStorageAccount.blob.core.windows.net/myContainer?mySasToken",
-      ...
-    },
-    ...
+      "containerUrl": "https://myStorageAccount.blob.core.windows.net/myContainer?mySasToken"
+    },
+    "resultContainerUrl": "https://myStorageAccount.blob.core.windows.net/myOutputContainer?mySasToken",
+    "resultPrefix": "trainingDocsResult/"
   }
 
   ```
 
 * If you don't want to process all the files in a container or folder, but rather specific files in that container or folder, use the ```azureBlobFileListSource``` object. This operation requires a File List JSONL file which lists the files to be processed. Store the JSONL file in the root folder of the container. Here's an example JSONL file with two files listed:
````
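The hunk context ends before the example file list is shown. For illustration only (the file names below are hypothetical), a `JSONL` file list holds one JSON object per line naming each file to process:

```json
{"file": "inputDocs/invoice-1.pdf"}
{"file": "inputDocs/invoice-2.pdf"}
```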
````diff
@@ -88,7 +91,7 @@ Use a file list `JSONL` file with the following conditions:
 * When you want more control over which files get processed in each batch request;
 
 ```bash
-POST /documentModels/{modelId}:analyzeBatch
+POST {endpoint}/documentintelligence/documentModels/{modelId}:analyzeBatch?api-version=2024-11-30
 
 {
   "azureBlobFileListSource": {
````
````diff
@@ -119,13 +122,14 @@ Remember to replace the following sample container URL values with real values f
 This example shows a POST request with `azureBlobSource` input
 
 ```bash
-POST /documentModels/{modelId}:analyzeBatch
+POST {endpoint}/documentintelligence/documentModels/{modelId}:analyzeBatch?api-version=2024-11-30
 
 {
   "azureBlobSource": {
     "containerUrl": "https://myStorageAccount.blob.core.windows.net/myContainer?mySasToken",
     "prefix": "inputDocs/"
   },
   "resultContainerUrl": "https://myStorageAccount.blob.core.windows.net/myOutputContainer?mySasToken",
   "resultPrefix": "batchResults/",
   "overwriteExisting": true
````
````diff
@@ -137,13 +141,14 @@ This example shows a POST request with `azureBlobFileListSource` and a file list
 
 ```bash
-POST /documentModels/{modelId}:analyzeBatch
+POST {endpoint}/documentintelligence/documentModels/{modelId}:analyzeBatch?api-version=2024-11-30
 
 {
   "azureBlobFileListSource": {
     "containerUrl": "https://myStorageAccount.blob.core.windows.net/myContainer?mySasToken",
     "fileList": "myFileList.jsonl"
   },
   "resultContainerUrl": "https://myStorageAccount.blob.core.windows.net/myOutputContainer?mySasToken",
   "resultPrefix": "batchResults/",
   "overwriteExisting": true
````
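The request shapes above can be assembled programmatically. This is a sketch only: it reuses the doc's placeholder container and SAS values, assumes a hypothetical resource endpoint and model ID, and stops short of sending the request:

```python
import json

# All values are placeholders; substitute your resource endpoint,
# model ID, and SAS-authorized container URLs before sending.
endpoint = "https://myEndpoint.cognitiveservices.azure.com"
model_id = "prebuilt-layout"

url = (
    f"{endpoint}/documentintelligence/documentModels/"
    f"{model_id}:analyzeBatch?api-version=2024-11-30"
)
body = {
    "azureBlobFileListSource": {
        "containerUrl": "https://myStorageAccount.blob.core.windows.net/myContainer?mySasToken",
        "fileList": "myFileList.jsonl",
    },
    "resultContainerUrl": "https://myStorageAccount.blob.core.windows.net/myOutputContainer?mySasToken",
    "resultPrefix": "batchResults/",
    "overwriteExisting": True,
}
payload = json.dumps(body)  # POST this body to url with any HTTP client
```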
````diff
@@ -155,7 +160,7 @@ Here's an example **successful** response
 
 ```bash
 202 Accepted
-Operation-Location: /documentModels/{modelId}/analyzeBatchResults/{resultId}
+Operation-Location: /documentintelligence/documentModels/{modelId}/analyzeBatchResults/{resultId}?api-version=2024-11-30
 ```
 
 ### 4. Retrieve API results
````
````diff
@@ -164,7 +169,7 @@ Use the `GET` operation to retrieve batch analysis results after the POST operat
 
 ```bash
-GET /documentModels/{modelId}/analyzeBatchResults/{resultId}
+GET {endpoint}/documentintelligence/documentModels/{modelId}/analyzeBatchResults/{resultId}?api-version=2024-11-30
 200 OK
 
 {
````
