`articles/ai-foundry/how-to/develop/trace-agents-sdk.md`
titleSuffix: Azure AI Foundry
description: View trace results for AI agents using Azure AI Foundry SDK and OpenTelemetry. Learn to see execution traces, debug performance, and monitor AI agent behavior step-by-step.
author: lgayhardt
ms.author: lagayhar
ms.reviewer: ychen
ms.date: 09/18/2025
ms.service: azure-ai-foundry
ms.topic: how-to
ai-usage: ai-assisted
ms.custom: references_regions
---

# View trace results for AI agents in Azure AI Foundry (preview)

In this article, you learn how to:

- Retrieve traces for past threads.
- Plan optimization next steps.

Determining the reasoning behind your agent's executions is important for troubleshooting and debugging. However, it can be difficult for complex agents for many reasons:

- There could be a high number of steps involved in generating a response, making it hard to keep track of all of them.
- The sequence of steps might vary based on user input.
- The inputs/outputs at each stage might be long and deserve more detailed inspection.
- Each step of an agent's runtime might also involve nesting. For example, an agent might invoke a tool, which uses another process, which then invokes another tool. If you notice strange or incorrect output from a top-level agent run, it might be difficult to determine exactly where in the execution the issue was introduced.

Trace results solve this by allowing you to view the inputs and outputs of each primitive involved in a particular agent run, displayed in the order they were invoked, making it easy to understand and debug your AI agent's behavior.

## Trace key concepts overview

Here's a brief overview of key concepts before getting started:

- Correlate evaluation run IDs for quality + performance analysis.
- Redact sensitive content; avoid storing secrets in attributes (see the sketch after this list).
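
To make the last two practices concrete, here's a minimal sketch that uses the OpenTelemetry API directly. The attribute names (`gen_ai.evaluation.run_id`, `app.user_query_redacted`) and the `redact` helper are illustrative assumptions, not conventions defined by this article.

```python
import re

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def redact(text: str) -> str:
    """Hypothetical helper: mask email addresses before text is attached to a span."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "<redacted-email>", text)

def answer_question(user_query: str, evaluation_run_id: str) -> str:
    with tracer.start_as_current_span("agent.answer_question") as span:
        # Correlate this span with an evaluation run (attribute name is an assumption).
        span.set_attribute("gen_ai.evaluation.run_id", evaluation_run_id)
        # Attach only redacted content; never store secrets or raw PII in attributes.
        span.set_attribute("app.user_query_redacted", redact(user_query))
        return "agent response goes here"  # placeholder for the real agent call
```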

## Set up tracing in the Azure AI Foundry SDK

For chat completions or building agents with Azure AI Foundry, install the relevant Azure AI Foundry SDK and OpenTelemetry packages.
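
As an illustrative sketch only (the specific package names and the Application Insights bootstrap below are assumptions, not this article's prescribed list), a minimal configuration that sends traces to the Application Insights resource connected to your project might look like this:

```python
# Assumed prerequisites: pip install azure-ai-projects azure-identity azure-monitor-opentelemetry
import os

from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.monitor.opentelemetry import configure_azure_monitor

# Connect to your Foundry project (the endpoint environment variable is a placeholder).
project = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

# Route OpenTelemetry spans to the Application Insights resource attached to the project.
# If your SDK version doesn't expose this helper, paste the connection string directly.
connection_string = project.telemetry.get_connection_string()
configure_azure_monitor(connection_string=connection_string)
```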

For detailed instructions and advanced usage, refer to the OpenTelemetry documentation.

## Attach user feedback to traces

To attach user feedback to traces and visualize it in the Azure AI Foundry portal, you can instrument your application to enable tracing and log user feedback using OpenTelemetry's semantic conventions.

By correlating feedback traces with their respective chat request traces using the response ID or thread ID, you can view and manage these traces in the Azure AI Foundry portal. OpenTelemetry's specification allows for standardized and enriched trace data, which can be analyzed in the Azure AI Foundry portal for performance optimization and user experience insights. This approach helps you use the full power of OpenTelemetry for enhanced observability in your applications.
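
For example, feedback can be recorded on its own span and tied back to the original request through the response and thread IDs. In this sketch the span name, the `gen_ai.*` attribute names, and the feedback fields are assumptions chosen to illustrate the approach, not a convention this article defines:

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def record_user_feedback(response_id: str, thread_id: str, score: int, comment: str) -> None:
    """Log user feedback as its own span, correlated to the chat request by IDs."""
    with tracer.start_as_current_span("gen_ai.user_feedback") as span:
        # IDs used to correlate this feedback with the original chat request trace.
        span.set_attribute("gen_ai.response.id", response_id)
        span.set_attribute("gen_ai.thread.id", thread_id)
        # Assumed feedback fields: a thumbs-up/down style score and a free-text comment.
        span.set_attribute("user_feedback.score", score)
        span.set_attribute("user_feedback.comment", comment)

# Example: the user rated the assistant's last reply positively.
record_user_feedback(response_id="resp_123", thread_id="thread_456", score=1, comment="Helpful answer")
```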

Once the necessary packages are installed, you can begin to [instrument tracing in your code](#instrument-tracing-in-your-code).
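
Instrumentation can be as simple as wrapping your agent call in a custom span so that the run, its input, and its output appear in the trace tree. The span and attribute names below, and the `run_my_agent` helper, are hypothetical placeholders:

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def run_my_agent(question: str) -> str:
    # Placeholder for your real agent or chat-completion call.
    return f"(agent answer to: {question})"

def traced_agent_run(question: str) -> str:
    with tracer.start_as_current_span("agent.run") as span:
        span.set_attribute("agent.input", question)
        answer = run_my_agent(question)
        span.set_attribute("agent.output", answer)
        return answer

print(traced_agent_run("What are my traces telling me?"))
```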

## View trace results in the Azure AI Foundry Agents playground

The Agents playground in the Azure AI Foundry portal lets you view trace results for threads and runs that your agents produce. To see trace results, select **Thread logs** in an active thread. You can also optionally select **Metrics** to enable automatic evaluations of the model's performance across several dimensions of **AI quality** and **Risk and safety**.

> [!NOTE]
> Evaluation results are available for 24 hours before expiring. To get evaluation results, select your desired metrics and chat with your agent.
> Evaluations aren't available in the following regions:
>
> - `australiaeast`
> - `japaneast`
> - `southindia`
> - `uksouth`

:::image type="content" source="../../media/trace/trace-agent-playground.png" alt-text="A screenshot of the agent playground in the Azure AI Foundry portal." lightbox="../../media/trace/trace-agent-playground.png":::

After selecting **Thread logs**, review:

- Thread details
- Run information
- Ordered run steps and tool calls
- Inputs and outputs between user and agent
- Linked evaluation metrics (if enabled)

:::image type="content" source="../../agents/media/thread-trace.png" alt-text="A screenshot of a trace." lightbox="../../agents/media/thread-trace.png":::

> [!TIP]
> If you want to view trace results from a previous thread, select **My threads** in the **Agents** screen. Choose a thread, and then select **Try in playground**.
>
> :::image type="content" source="../../agents/media/thread-highlight.png" alt-text="A screenshot of the threads screen." lightbox="../../agents/media/thread-highlight.png":::
>
> The **Thread logs** button then appears at the top of the screen so you can view the trace results.

> [!NOTE]
> Observability features such as Risk and Safety Evaluation are billed based on consumption as listed on the [Azure pricing page](https://azure.microsoft.com/pricing/details/ai-foundry/).

## View traces in Azure AI Foundry portal

In your project, go to `Tracing` to filter your traces as you see fit.