articles/ai-foundry/how-to/develop/trace-agents-sdk.md
45 additions & 44 deletions
@@ -4,8 +4,8 @@ titleSuffix: Azure AI Foundry
description: View trace results for AI agents using Azure AI Foundry SDK and OpenTelemetry. Learn to see execution traces, debug performance, and monitor AI agent behavior step-by-step.
author: lgayhardt
ms.author: lagayhar
- ms.reviewer: amibp
- ms.date: 09/15/2025
+ ms.reviewer: ychen
+ ms.date: 09/18/2025
ms.service: azure-ai-foundry
ms.topic: how-to
ai-usage: ai-assisted
@@ -25,47 +25,16 @@ In this article, you learn how to:
- Retrieve traces for past threads.
- Plan optimization next steps.

- Determining the reasoning behind your agent's executions is important for troubleshooting and debugging. However, it can be difficult for complex agents for a number of reasons:
- * There could be a high number of steps involved in generating a response, making it hard to keep track of all of them.
- * The sequence of steps might vary based on user input.
- * The inputs/outputs at each stage might be long and deserve more detailed inspection.
- * Each step of an agent's runtime might also involve nesting. For example, an agent might invoke a tool, which uses another process, which then invokes another tool. If you notice strange or incorrect output from a top-level agent run, it might be difficult to determine exactly where in the execution the issue was introduced.
+ Determining the reasoning behind your agent's executions is important for troubleshooting and debugging. However, it can be difficult for complex agents for many reasons:

- Trace results solve this by allowing you to view the inputs and outputs of each primitive involved in a particular agent run, displayed in the order they were invoked, making it easy to understand and debug your AI agent's behavior.
-
- ## View trace results in the Azure AI Foundry Agents playground
-
- The Agents playground in the Azure AI Foundry portal lets you view trace results for threads and runs that your agents produce. To see trace results, select **Thread logs** in an active thread. You can also optionally select **Metrics** to enable automatic evaluations of the model's performance across several dimensions of **AI quality** and **Risk and safety**.
-
- > [!NOTE]
- > Evaluation results are available for 24 hours before expiring. To get evaluation results, select your desired metrics and chat with your agent.
- > * Evaluations are not available in the following regions.
- > * `australiaeast`
- > * `japaneast`
- > * `southindia`
- > * `uksouth`
+ - There could be a high number of steps involved in generating a response, making it hard to keep track of all of them.
+ - The sequence of steps might vary based on user input.
+ - The inputs/outputs at each stage might be long and deserve more detailed inspection.
+ - Each step of an agent's runtime might also involve nesting. For example, an agent might invoke a tool, which uses another process, which then invokes another tool. If you notice strange or incorrect output from a top-level agent run, it might be difficult to determine exactly where in the execution the issue was introduced.

- :::image type="content" source="../../media/trace/trace-agent-playground.png" alt-text="A screenshot of the agent playground in the Azure AI Foundry portal." lightbox="../../media/trace/trace-agent-playground.png":::
-
- After selecting **Thread logs**, review:
- - Thread details
- - Run information
- - Ordered run steps and tool calls
- - Inputs / outputs between user and agent
- - Linked evaluation metrics (if enabled)
-
- :::image type="content" source="../../agents/media/thread-trace.png" alt-text="A screenshot of a trace." lightbox="../../agents/media/thread-trace.png":::
-
- > [!TIP]
- > If you want to view trace results from a previous thread, select **My threads** in the **Agents** screen. Choose a thread, and then select **Try in playground**.
- > :::image type="content" source="../../agents/media/thread-highlight.png" alt-text="A screenshot of the threads screen." lightbox="../../agents/media/thread-highlight.png":::
- > You will be able to see the **Thread logs** button at the top of the screen to view the trace results.
-
-
- > [!NOTE]
- > Observability features such as Risk and Safety Evaluation are billed based on consumption as listed in the [Azure pricing page](https://azure.microsoft.com/pricing/details/ai-foundry/).
+ Trace results solve this by allowing you to view the inputs and outputs of each primitive involved in a particular agent run, displayed in the order they were invoked, making it easy to understand and debug your AI agent's behavior.

- ## Trace agents using the Azure AI Foundry SDK
+ ## Trace key concepts overview

Here's a brief overview of key concepts before getting started:
@@ -83,7 +52,8 @@ Here's a brief overview of key concepts before getting started:
- Correlate evaluation run IDs for quality + performance analysis.
- Redact sensitive content; avoid storing secrets in attributes.

- ## Setup
+
+ ## Setup tracing in Azure AI Foundry SDK

For chat completions or building agents with Azure AI Foundry, install:
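The actual package list and instrumentation code sit outside the lines shown in this diff. As a rough sketch of what setting up tracing with the Python SDK and then instrumenting your own code (the later "Instrument tracing in your code" step) can look like, where the package names and the `telemetry.get_connection_string()` helper are assumptions rather than details confirmed by this change:

```python
# Hypothetical sketch only -- package and helper names are assumptions,
# not something this diff confirms.
# Assumed packages: azure-ai-projects, azure-identity, azure-monitor-opentelemetry
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

# Connect to the Azure AI Foundry project (placeholder endpoint).
project = AIProjectClient(
    endpoint="https://<your-resource>.services.ai.azure.com/api/projects/<your-project>",
    credential=DefaultAzureCredential(),
)

# Route OpenTelemetry data to the Application Insights resource attached to the
# project; the exact helper for the connection string can differ by SDK version.
configure_azure_monitor(connection_string=project.telemetry.get_connection_string())

# Wrap your own agent logic in a span so it appears alongside SDK-emitted spans.
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("order-status-agent-run") as span:
    span.set_attribute("app.user_intent", "order-status")  # keep secrets out of attributes
    # ... create the agent, run the thread, and inspect the result here ...
```

With exporting configured this way, spans emitted by the SDK and the custom span above should land in the same trace and appear under the portal's **Tracing** page mentioned at the end of this article.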
@@ -232,9 +202,7 @@ For detailed instructions and advanced usage, refer to the [OpenTelemetry docume
## Attach user feedback to traces

- To attach user feedback to traces and visualize it in the Azure AI Foundry portal, you can instrument your application to enable tracing and log user feedback using OpenTelemetry's semantic conventions.
-
-
+ To attach user feedback to traces and visualize it in the Azure AI Foundry portal, you can instrument your application to enable tracing and log user feedback using OpenTelemetry's semantic conventions.

By correlating feedback traces with their respective chat request traces using the response ID or thread ID, you can view and manage these traces in Azure AI Foundry portal. OpenTelemetry's specification allows for standardized and enriched trace data, which can be analyzed in Azure AI Foundry portal for performance optimization and user experience insights. This approach helps you use the full power of OpenTelemetry for enhanced observability in your applications.
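As a minimal sketch of that correlation using the plain OpenTelemetry Python API, where the span name, event name, and attribute keys such as `gen_ai.response.id` are illustrative assumptions rather than conventions confirmed by this diff:

```python
# Hypothetical sketch -- span/event/attribute names are illustrative, not an
# official semantic convention confirmed by this article.
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def record_user_feedback(response_id: str, thread_id: str, rating: int, comment: str = "") -> None:
    """Emit a small span carrying user feedback plus the IDs needed to
    correlate it with the chat request that produced the response."""
    with tracer.start_as_current_span("user_feedback") as span:
        # Correlation keys: the join back to the request trace is by response or thread ID.
        span.set_attribute("gen_ai.response.id", response_id)
        span.set_attribute("gen_ai.thread.id", thread_id)
        # The feedback payload itself; keep it free of secrets and raw PII.
        span.add_event(
            "user_feedback",
            attributes={"feedback.rating": rating, "feedback.comment": comment},
        )

# Example usage after a run completes (IDs are placeholders):
record_user_feedback("resp_123", "thread_abc", rating=1, comment="Helpful answer")
```

Because the feedback span carries the same response and thread IDs as the chat request trace, the portal-side correlation described above has something to join on.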
@@ ... @@ Once necessary packages are installed, you can easily begin to [Instrument tracing in your code](#instrument-tracing-in-your-code).
+ ## View trace results in the Azure AI Foundry Agents playground
+
+ The Agents playground in the Azure AI Foundry portal lets you view trace results for threads and runs that your agents produce. To see trace results, select **Thread logs** in an active thread. You can also optionally select **Metrics** to enable automatic evaluations of the model's performance across several dimensions of **AI quality** and **Risk and safety**.
+
+ > [!NOTE]
+ > Evaluation results are available for 24 hours before expiring. To get evaluation results, select your desired metrics and chat with your agent.
+ > - Evaluations aren't available in the following regions.
+ > - `australiaeast`
+ > - `japaneast`
+ > - `southindia`
+ > - `uksouth`
+
+ :::image type="content" source="../../media/trace/trace-agent-playground.png" alt-text="A screenshot of the agent playground in the Azure AI Foundry portal." lightbox="../../media/trace/trace-agent-playground.png":::
+
+ After selecting **Thread logs**, review:
+
+ - Thread details
+ - Run information
+ - Ordered run steps and tool calls
+ - Inputs / outputs between user and agent
+ - Linked evaluation metrics (if enabled)
+
+ :::image type="content" source="../../agents/media/thread-trace.png" alt-text="A screenshot of a trace." lightbox="../../agents/media/thread-trace.png":::
+
+ > [!TIP]
+ > If you want to view trace results from a previous thread, select **My threads** in the **Agents** screen. Choose a thread, and then select **Try in playground**.
+ > :::image type="content" source="../../agents/media/thread-highlight.png" alt-text="A screenshot of the threads screen." lightbox="../../agents/media/thread-highlight.png":::
+ > You'll be able to see the **Thread logs** button at the top of the screen to view the trace results.
+
+
+ > [!NOTE]
+ > Observability features such as Risk and Safety Evaluation are billed based on consumption as listed in the [Azure pricing page](https://azure.microsoft.com/pricing/details/ai-foundry/).
+

## View traces in Azure AI Foundry portal

In your project, go to `Tracing` to filter your traces as you see fit.