
Commit 8392214

Merge pull request #7177 from MicrosoftDocs/main
Auto Publish – main to live - 2025-09-20 05:02 UTC
2 parents d073531 + e46476f commit 8392214


articles/ai-foundry/how-to/develop/trace-agents-sdk.md

Lines changed: 46 additions & 44 deletions
@@ -4,11 +4,12 @@ titleSuffix: Azure AI Foundry
description: View trace results for AI agents using Azure AI Foundry SDK and OpenTelemetry. Learn to see execution traces, debug performance, and monitor AI agent behavior step-by-step.
author: lgayhardt
ms.author: lagayhar
-ms.reviewer: amibp
-ms.date: 09/15/2025
+ms.reviewer: ychen
+ms.date: 09/18/2025
ms.service: azure-ai-foundry
ms.topic: how-to
ai-usage: ai-assisted
+ms.custom: references_regions
---

# View trace results for AI agents in Azure AI Foundry (preview)
@@ -25,47 +26,16 @@ In this article, you learn how to:
- Retrieve traces for past threads.
- Plan optimization next steps.

-Determining the reasoning behind your agent's executions is important for troubleshooting and debugging. However, it can be difficult for complex agents for a number of reasons:
-* There could be a high number of steps involved in generating a response, making it hard to keep track of all of them.
-* The sequence of steps might vary based on user input.
-* The inputs/outputs at each stage might be long and deserve more detailed inspection.
-* Each step of an agent's runtime might also involve nesting. For example, an agent might invoke a tool, which uses another process, which then invokes another tool. If you notice strange or incorrect output from a top-level agent run, it might be difficult to determine exactly where in the execution the issue was introduced.
+Determining the reasoning behind your agent's executions is important for troubleshooting and debugging. However, it can be difficult for complex agents for many reasons:

-Trace results solve this by allowing you to view the inputs and outputs of each primitive involved in a particular agent run, displayed in the order they were invoked, making it easy to understand and debug your AI agent's behavior.
-
-## View trace results in the Azure AI Foundry Agents playground
-
-The Agents playground in the Azure AI Foundry portal lets you view trace results for threads and runs that your agents produce. To see trace results, select **Thread logs** in an active thread. You can also optionally select **Metrics** to enable automatic evaluations of the model's performance across several dimensions of **AI quality** and **Risk and safety**.
-
-> [!NOTE]
-> Evaluation results are available for 24 hours before expiring. To get evaluation results, select your desired metrics and chat with your agent.
-> * Evaluations are not available in the following regions.
-> * `australiaeast`
-> * `japaneast`
-> * `southindia`
-> * `uksouth`
+- There could be a high number of steps involved in generating a response, making it hard to keep track of all of them.
+- The sequence of steps might vary based on user input.
+- The inputs/outputs at each stage might be long and deserve more detailed inspection.
+- Each step of an agent's runtime might also involve nesting. For example, an agent might invoke a tool, which uses another process, which then invokes another tool. If you notice strange or incorrect output from a top-level agent run, it might be difficult to determine exactly where in the execution the issue was introduced.

-:::image type="content" source="../../media/trace/trace-agent-playground.png" alt-text="A screenshot of the agent playground in the Azure AI Foundry portal." lightbox="../../media/trace/trace-agent-playground.png":::
-
-After selecting **Thread logs**, review:
-- Thread details
-- Run information
-- Ordered run steps and tool calls
-- Inputs / outputs between user and agent
-- Linked evaluation metrics (if enabled)
-
-:::image type="content" source="../../agents/media/thread-trace.png" alt-text="A screenshot of a trace." lightbox="../../agents/media/thread-trace.png":::
-
-> [!TIP]
-> If you want to view trace results from a previous thread, select **My threads** in the **Agents** screen. Choose a thread, and then select **Try in playground**.
-> :::image type="content" source="../../agents/media/thread-highlight.png" alt-text="A screenshot of the threads screen." lightbox="../../agents/media/thread-highlight.png":::
-> You will be able to see the **Thread logs** button at the top of the screen to view the trace results.
-
-
-> [!NOTE]
-> Observability features such as Risk and Safety Evaluation are billed based on consumption as listed in the [Azure pricing page](https://azure.microsoft.com/pricing/details/ai-foundry/).
+Trace results solve this by allowing you to view the inputs and outputs of each primitive involved in a particular agent run, displayed in the order they were invoked, making it easy to understand and debug your AI agent's behavior.

-## Trace agents using the Azure AI Foundry SDK
+## Trace key concepts overview

Here's a brief overview of key concepts before getting started:

@@ -83,7 +53,8 @@ Here's a brief overview of key concepts before getting started:
- Correlate evaluation run IDs for quality + performance analysis.
- Redact sensitive content; avoid storing secrets in attributes.

-## Setup
+
+## Setup tracing in Azure AI Foundry SDK

For chat completions or building agents with Azure AI Foundry, install:

@@ -232,9 +203,7 @@ For detailed instructions and advanced usage, refer to the [OpenTelemetry docume

## Attach user feedback to traces

-To attach user feedback to traces and visualize it in the Azure AI Foundry portal, you can instrument your application to enable tracing and log user feedback using OpenTelemetry's semantic conventions.
-
-
+To attach user feedback to traces and visualize it in the Azure AI Foundry portal, you can instrument your application to enable tracing and log user feedback using OpenTelemetry's semantic conventions.

By correlating feedback traces with their respective chat request traces using the response ID or thread ID, you can view and manage these traces in Azure AI Foundry portal. OpenTelemetry's specification allows for standardized and enriched trace data, which can be analyzed in Azure AI Foundry portal for performance optimization and user experience insights. This approach helps you use the full power of OpenTelemetry for enhanced observability in your applications.

@@ -273,6 +242,39 @@ pip install opentelemetry-instrumentation-langchain

Once necessary packages are installed, you can easily begin to [Instrument tracing in your code](#instrument-tracing-in-your-code).

+## View trace results in the Azure AI Foundry Agents playground
+
+The Agents playground in the Azure AI Foundry portal lets you view trace results for threads and runs that your agents produce. To see trace results, select **Thread logs** in an active thread. You can also optionally select **Metrics** to enable automatic evaluations of the model's performance across several dimensions of **AI quality** and **Risk and safety**.
+
+> [!NOTE]
+> Evaluation results are available for 24 hours before expiring. To get evaluation results, select your desired metrics and chat with your agent.
+> - Evaluations aren't available in the following regions.
+> - `australiaeast`
+> - `japaneast`
+> - `southindia`
+> - `uksouth`
+
+:::image type="content" source="../../media/trace/trace-agent-playground.png" alt-text="A screenshot of the agent playground in the Azure AI Foundry portal." lightbox="../../media/trace/trace-agent-playground.png":::
+
+After selecting **Thread logs**, review:
+
+- Thread details
+- Run information
+- Ordered run steps and tool calls
+- Inputs / outputs between user and agent
+- Linked evaluation metrics (if enabled)
+
+:::image type="content" source="../../agents/media/thread-trace.png" alt-text="A screenshot of a trace." lightbox="../../agents/media/thread-trace.png":::
+
+> [!TIP]
+> If you want to view trace results from a previous thread, select **My threads** in the **Agents** screen. Choose a thread, and then select **Try in playground**.
+> :::image type="content" source="../../agents/media/thread-highlight.png" alt-text="A screenshot of the threads screen." lightbox="../../agents/media/thread-highlight.png":::
+> You'll be able to see the **Thread logs** button at the top of the screen to view the trace results.
+
+
+> [!NOTE]
+> Observability features such as Risk and Safety Evaluation are billed based on consumption as listed in the [Azure pricing page](https://azure.microsoft.com/pricing/details/ai-foundry/).
+
## View traces in Azure AI Foundry portal

In your project, go to `Tracing` to filter your traces as you see fit.
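For the `pip install opentelemetry-instrumentation-langchain` line in this hunk's context, enabling the instrumentation typically looks something like the sketch below; the `LangchainInstrumentor` class name is an assumption based on the `opentelemetry-instrumentation-*` naming convention and is not shown in the article:

```python
# Hedged sketch: turn on automatic LangChain instrumentation once the package
# from the hunk context above is installed (class name is an assumption).
from opentelemetry.instrumentation.langchain import LangchainInstrumentor

LangchainInstrumentor().instrument()  # call once at startup, before building chains

# From this point, LangChain chain/LLM invocations are recorded as spans and
# exported by whatever tracer provider and exporter the application configured.
```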
