titleSuffix: Azure AI Foundry
description: Learn how to trace applications that use OpenAI SDK in Azure AI Foundry
author: lgayhardt
ms.author: lagayhar
ms.reviewer: ychen
ms.date: 08/29/2025
ms.service: azure-ai-foundry
ms.topic: how-to
---
# Trace AI applications using OpenAI SDK
Tracing provides deep visibility into execution of your application by capturing detailed telemetry at each execution step. This helps diagnose issues and enhance performance by identifying problems such as inaccurate tool calls, misleading prompts, high latency, low-quality evaluation scores, and more.
This article explains how to implement tracing for AI applications using **OpenAI SDK** with OpenTelemetry in Azure AI Foundry.
You need the following to complete this tutorial:
* An AI application that uses **OpenAI SDK** to make calls to models hosted in Azure AI Foundry.
## Enable tracing in your project
Azure AI Foundry stores traces in Azure Application Insights resources using OpenTelemetry. By default, new Azure AI Foundry resources don't provision these resources. You can connect projects to an existing Azure Application Insights resource or create a new one from within the project. You do this configuration once for each Azure AI Foundry resource.
The following steps show how to configure your resource:
1. Go to [Azure AI Foundry portal](https://ai.azure.com) and navigate to your project.
1. On the side navigation bar, select **Tracing**.
1. If an Azure Application Insights resource isn't associated with your Azure AI Foundry resource, associate one. If you already have an Application Insights resource associated, you won't see the enable page below and you can skip this step.
:::image type="content" source="../../media/how-to/develop/trace-application/configure-app-insight.png" alt-text="A screenshot showing how to configure Azure Application Insights to the Azure AI Foundry resource." lightbox="../../media/how-to/develop/trace-application/configure-app-insight.png":::
1. To reuse an existing Azure Application Insights resource, use the **Application Insights resource name** drop-down to locate the resource and select **Connect**.
> [!TIP]
> To connect to an existing Azure Application Insights resource, you need at least Contributor access to the Azure AI Foundry resource (or Hub).
1. To connect to a new Azure Application Insights resource, select the option **Create new**.
1. Use the configuration wizard to configure the new resource's name.
1. By default, the new resource is created in the same resource group where the Azure AI Foundry resource was created. Use the **Advanced settings** option to configure a different resource group or subscription.
> [!TIP]
> To create a new Azure Application Insights resource, you also need the Contributor role on the resource group you selected (or the default one).
1. Select **Create** to create the resource and connect it to the Azure AI Foundry resource.
1. Once the connection is configured, you're ready to use tracing in any project within the resource.
1. Go to the landing page of your project and copy the project's endpoint URI. You need it later.
:::image type="content" source="../../media/how-to/projects/fdp-project-overview.png" alt-text="A screenshot showing how to copy the project endpoint URI." lightbox="../../media/how-to/projects/fdp-project-overview.png":::
> [!IMPORTANT]
> Using a project's endpoint requires configuring Microsoft Entra ID in your application. If you don't have Entra ID configured, use the Azure Application Insights connection string as indicated in step 3 of the tutorial.
## Instrument the OpenAI SDK
When developing with the OpenAI SDK, you can instrument your code so traces are sent to Azure AI Foundry. Follow these steps to instrument your code:
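Before the steps below, your environment needs the OpenTelemetry instrumentation for the OpenAI SDK. A minimal setup might look like the following; the package name and the environment variable are assumptions based on the current OpenTelemetry Python ecosystem, so verify them against the versions your project uses:

```shell
# Install the OpenTelemetry instrumentation for the OpenAI SDK
# and the Azure Monitor OpenTelemetry distro (assumed package names).
pip install opentelemetry-instrumentation-openai-v2 azure-monitor-opentelemetry

# Opt in to capturing prompt and completion content in traces
# (assumed variable name; content capture is off by default).
export OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
```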
:::image type="content" source="../../media/how-to/develop/trace-application/tracing-display-simple.png" alt-text="A screenshot showing how a simple chat completion request is displayed in the trace." lightbox="../../media/how-to/develop/trace-application/tracing-display-simple.png":::
1. It might be useful to capture sections of your code that mix business logic with model calls when developing complex applications. OpenTelemetry uses the concept of spans to capture sections you're interested in. To start generating your own spans, get an instance of the current **tracer** object.
    ```python
    from opentelemetry import trace

    tracer = trace.get_tracer(__name__)
    ```
1. Then, use decorators in your method to capture specific scenarios in your code that you're interested in. These decorators generate spans automatically. The following code example instruments a method called `assess_claims_with_context` that iterates over a list of claims and verifies if the claim is supported by the context using an LLM. All the calls made in this method are captured within the same span:
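    The article's full example isn't reproduced here; the following is a minimal sketch of such a decorated method, where `check_claim_against_context` is a hypothetical stand-in for your LLM call:

    ```python
    from opentelemetry import trace

    tracer = trace.get_tracer(__name__)

    def check_claim_against_context(claim: str, context: str) -> bool:
        # Hypothetical placeholder: replace with a call to your model that
        # judges whether the claim is supported by the context.
        return claim in context

    @tracer.start_as_current_span("assess_claims_with_context")
    def assess_claims_with_context(claims: list[str], context: str) -> list[bool]:
        # Every model call made inside this method is captured within the same span.
        return [check_claim_against_context(claim, context) for claim in claims]
    ```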
:::image type="content" source="../../media/how-to/develop/trace-application/tracing-display-decorator.png" alt-text="A screenshot showing how a method using a decorator is displayed in the trace." lightbox="../../media/how-to/develop/trace-application/tracing-display-decorator.png":::
1. You might also want to add extra information to the current span. OpenTelemetry uses the concept of **attributes** for that. Use the `trace` object to access them and include extra information. See how the `assess_claims_with_context` method has been modified to include an attribute:
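    The modified method isn't shown in full here; a minimal sketch of attaching an attribute to the current span might look like this (the attribute name `claims.count` and the placeholder logic are illustrative choices, not conventions from the article):

    ```python
    from opentelemetry import trace

    tracer = trace.get_tracer(__name__)

    @tracer.start_as_current_span("assess_claims_with_context")
    def assess_claims_with_context(claims, context):
        # Attach extra information to the span the decorator created.
        current_span = trace.get_current_span()
        current_span.set_attribute("claims.count", len(claims))
        # ... evaluate each claim with your model as before ...
        return [claim in context for claim in claims]  # placeholder logic
    ```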
It might be useful to also trace your application and send the traces to the local execution console. This approach might be beneficial when running unit tests or integration tests in your application using an automated CI/CD pipeline. Traces can be sent to the console and captured by your CI/CD tool for further analysis.
Configure tracing as follows:
## Related content
* [Trace agents using Azure AI Foundry SDK](trace-agents-sdk.md)
Chat models are language models that are optimized for conversational interfaces. The models behave differently than the older GPT-3 models. Previous models were text-in and text-out, which means they accepted a prompt string and returned a completion to append to the prompt. However, the latest models are conversation-in and message-out. The models expect input formatted in a specific chat-like transcript format. They return a completion that represents a model-written message in the chat. This format was designed specifically for multi-turn conversations, but it can also work well for nonchat scenarios.
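The chat-like transcript format can be sketched as a list of role-tagged messages; this is a generic illustration of the message shape, not a specific API call:

```python
# A conversation is a list of messages, each with a role and content.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "And its population?"},
]

# The model responds with one more assistant-authored message to append.
roles = [m["role"] for m in messages]
```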
This article walks you through getting started with chat completions models. To get the best results, use the techniques described here. Don't try to interact with the models the same way you did with the older model series because the models are often verbose and provide less useful responses.