Commit 3732e7d
Updates from Yan
1 parent f81dd2a commit 3732e7d

File tree: 1 file changed, +31 -32 lines

articles/ai-foundry/how-to/develop/trace-agents-sdk.md

Lines changed: 31 additions & 32 deletions
@@ -2,7 +2,7 @@
 title: Trace and Observe AI Agents in Azure AI Foundry
 titleSuffix: Azure AI Foundry
 description: Trace and Observe AI Agents in Azure AI Foundry using OpenTelemetry. Learn to see execution traces, debug performance, and monitor AI agent behavior step-by-step.
-author: lgayhardt, ychen
+author: yanchen-ms
 ms.author: lagayhar
 ms.reviewer: ychen
 ms.date: 09/29/2025
@@ -19,10 +19,10 @@ ms.custom: references_regions
 In this article, you learn how to:

 - Trace key concepts
-- Trace and observe AI agents in Azure AI Foundry
+- Trace and observe AI agents in AI Foundry
 - Interpret spans (steps, tool calls, nested operations).
 - View agent threads in the Agents playground.
-- View traces in the Azure AI Foundry portal and Azure Monitor
+- View traces in the AI Foundry portal and Azure Monitor

 Determining the reasoning behind your agent's executions is important for troubleshooting and debugging. However, it can be difficult for complex agents for many reasons:

@@ -41,7 +41,7 @@ Here's a brief overview of key concepts before getting started:
 |---------------------|-----------------------------------------------------------------|
 | Traces | Traces capture the journey of a request or workflow through your application by recording events and state changes (function calls, values, system events). See [OpenTelemetry Traces](https://opentelemetry.io/docs/concepts/signals/traces/). |
 | Spans | Spans are the building blocks of traces, representing single operations within a trace. Each span captures start and end times, attributes, and can be nested to show hierarchical relationships, allowing you to see the full call stack and sequence of operations. |
-| Attributes | Attributes are key-value pairs attached to traces and spans, providing contextual metadata such as function parameters, return values, or custom annotations. These enrich trace data, making it more informative and useful for analysis. |
+| Attributes | Attributes are key-value pairs attached to traces and spans, providing contextual metadata such as function parameters, return values, or custom annotations. These enrich trace data making it more informative and useful for analysis. |
 | Semantic conventions| OpenTelemetry defines semantic conventions to standardize names and formats for trace data attributes, making it easier to interpret and analyze across tools and platforms. To learn more, see [OpenTelemetry's Semantic Conventions](https://opentelemetry.io/docs/specs/semconv/gen-ai/). |
 | Trace exporters | Trace exporters send trace data to backend systems for storage and analysis. Azure AI supports exporting traces to Azure Monitor and other OpenTelemetry-compatible platforms, enabling integration with various observability tools. |
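The trace/span/attribute model in the table above can be illustrated with a toy sketch. This is plain Python for illustration only, not the OpenTelemetry API; the span and attribute names (`invoke_agent`, `execute_tool`, `tool.call.arguments`) follow the GenAI semantic conventions the table mentions.

```python
from dataclasses import dataclass, field

# Toy model of the concepts in the table above -- not the OpenTelemetry API.
# A trace is a tree of spans; each span carries key-value attributes.
@dataclass
class Span:
    name: str
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def start_child(self, name, attributes=None):
        child = Span(name, attributes or {})
        self.children.append(child)
        return child

# One agent invocation with a nested tool-call span; the nesting is what
# lets a trace viewer show the full call hierarchy.
root = Span("invoke_agent", {"gen_ai.agent.name": "example-agent"})
tool = root.start_child("execute_tool", {"tool.call.arguments": '{"city": "Seattle"}'})
```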

@@ -58,23 +58,19 @@ These advancements have been integrated into Azure AI Foundry, Microsoft Agent F

 | New Span/Trace/Attributes | Name | Purpose |
 |---------------------|-----------------------------------------------------------------|-----------------------------------------------------------------|
-|New span|execute_task|Captures task planning and event propagation, providing insights into how tasks are decomposed and distributed.|
-|New child spans under "invoke_agent"|agent_to_agent_interaction|Traces communication between agents|
-|New child spans under "invoke_agent"|agent.state.management|Effective context, short or long term memory management|
-|New child spans under "invoke_agent"|agent_planning|Logs the agent's internal planning steps|
-|New child spans under "invoke_agent"|agent orchestration|Capture agent-to-agent orchestration|
-|New attributes in invoke_agent span|tool_definitions|Describes the tool's purpose or configuration|
-|New attributes in invoke_agent span|llm_spans|Records model call spans|
-|New attributes in "execute_tool" span|tool.call.arguments|Logs the arguments passed during tool invocation|
-|New attributes in "execute_tool" span|tool.call.results|Records the results returned by the tool|
-|New event|Evaluation - attributes (name, error.type, label)|Enables structured evaluation of agent performance and decision-making|
-
-More details can be found in the following pull requests merged into OpenTelemetry:
-* Add tool definition plus tool-related attributes in invoke-agent, inference, and execute-tool spans
-* Capture evaluation results for GenAI applications
+|New span|execute_task |Captures task planning and event propagation, providing insights into how tasks are decomposed and distributed.|
+|New child spans under "invoke_agent" |agent_to_agent_interaction|Traces communication between agents|
+|New child spans under "invoke_agent" |agent.state.management|Effective context, short or long term memory management|
+|New child spans under "invoke_agent" |agent_planning|Logs the agent's internal planning steps|
+|New child spans under "invoke_agent" |agent orchestration|Capture agent-to-agent orchestration|
+|New attributes in invoke_agent span |tool_definitions|Describes the tool's purpose or configuration|
+|New attributes in invoke_agent span |llm_spans|Records model call spans|
+|New attributes in "execute_tool" span |tool.call.arguments|Logs the arguments passed during tool invocation|
+|New attributes in "execute_tool" span |tool.call.results|Records the results returned by the tool|
+|New event|Evaluation - attributes (name, error.type, label) |Enables structured evaluation of agent performance and decision-making|


-## Set up tracing in Azure AI Foundry SDK
+## Setup tracing in Azure AI Foundry SDK

 For chat completions or building agents with Azure AI Foundry, install:

@@ -96,13 +92,13 @@ To view traces in Azure AI Foundry, you need to connect an Application Insights

 ## Instrument tracing in your code

-To trace the content of chat messages, set the `AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED` environment variable to true (case insensitive). Keep in mind that this might contain personal data. To learn more, see [Azure Core Tracing OpenTelemetry client library for Python](/python/api/overview/azure/core-tracing-opentelemetry-readme).
+To trace the content of chat messages, set the `AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED` environment variable to true (case insensitive). Keep in mind this might contain personal data. To learn more, see [Azure Core Tracing OpenTelemetry client library for Python](/python/api/overview/azure/core-tracing-opentelemetry-readme).

 ```python
 import os
 os.environ["AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED"] = "true" # False by default
 ```
-Let's begin instrumenting our agent with OpenTelemetry tracing, by starting off with authenticating and connecting to your Azure AI Project using the `AIProjectClient`.
+Let's begin instrumenting our agent with OpenTelemetry tracing by starting with authenticating and connecting to your Azure AI Project using the `AIProjectClient`.

 ```python
 from azure.ai.projects import AIProjectClient
@@ -145,7 +141,7 @@ After running your agent, you can begin to [view traces in Azure AI Foundry Port

 ### Log traces locally

-To connect to [Aspire Dashboard](https://aspiredashboard.com/#start) or another OpenTelemetry-compatible backend, install the OpenTelemetry Protocol (OTLP) exporter. This enables you to print traces to the console or use a local viewer such as Aspire Dashboard.
+To connect to [Aspire Dashboard](https://aspiredashboard.com/#start) or another OpenTelemetry compatible backend, install the OpenTelemetry Protocol (OTLP) exporter. This enables you to print traces to the console or use a local viewer such as Aspire Dashboard.

 ```bash
 pip install azure-core-tracing-opentelemetry opentelemetry-exporter-otlp opentelemetry-sdk
@@ -194,7 +190,7 @@ with tracer.start_as_current_span("example-tracing"):

 ### Alternative: AI Toolkit for VS Code

-AI Toolkit gives you a simple way to trace locally in VS Code. It uses a local OTLP-compatible collector, making it great for development and debugging.
+AI Toolkit gives you a simple way to trace locally in VS Code. It uses a local OTLP-compatible collector, making it ideal for development and debugging.

 The toolkit supports AI frameworks like Azure AI Foundry Agents Service, OpenAI, Anthropic, and LangChain through OpenTelemetry. You can see traces instantly in VS Code without needing cloud access.

@@ -250,7 +246,7 @@ The user feedback event body has the following structure:

 To identify your service via a unique ID in Application Insights, you can use the service name OpenTelemetry property in your trace data. This is useful if you're logging data from multiple applications to the same Application Insights resource, and you want to differentiate between them.

-For example, let's say you have two applications: **App-1** and **App-2**, with tracing configured to log data to the same Application Insights resource. Perhaps you'd like to set up **App-1** to be evaluated continuously by **Relevance** and **App-2** to be evaluated continuously by **Relevance**. You can use the service name to filter by `Application` when monitoring your application in Azure AI Foundry Portal.
+For example, let's say you have two applications: **App-1** and **App-2**, with tracing configured to log data to the same Application Insights resource. Perhaps you'd like to set up **App-1** to be evaluated continuously by **Relevance** and **App-2** to be evaluated continuously by **Relevance**. You can use the service name to filter by `Application` when monitoring your application in AI Foundry Portal.

 To set up the service name property, you can do so directly in your application code by following the steps, see [Using multiple tracer providers with different Resource](https://opentelemetry.io/docs/languages/python/cookbook/#using-multiple-tracer-providers-with-different-resource). Alternatively, you can set the environment variable `OTEL_SERVICE_NAME` before deploying your app. To learn more about working with the service name, see [OTEL Environment Variables](https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#general-sdk-configuration) and [Service Resource Semantic Conventions](https://opentelemetry.io/docs/specs/semconv/resource/#service).
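The `OTEL_SERVICE_NAME` route described above can be sketched as follows. `App-1` is a placeholder name taken from the example; the variable must be set before any tracer provider is created for the SDK to pick it up.

```python
import os

# Set the OpenTelemetry service name before any tracer provider is created.
# "App-1" is a placeholder; use a name that identifies your application.
os.environ["OTEL_SERVICE_NAME"] = "App-1"

# Application Insights surfaces this value as `cloud_roleName`, which is
# what you filter on to separate traces from different applications.
service_name = os.environ["OTEL_SERVICE_NAME"]
```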

@@ -261,10 +257,13 @@ To query trace data for a given service name, query for the `cloud_roleName` pro
 ```

 ## Integrations
-Azure AI Foundry makes it easy to log traces with minimal changes by using our tracing integrations with Microsoft Agent Framework, Semantic Kernel, LangChain, LangGraph and OpenAI Agent SDK.
+Azure AI Foundry makes it easy to log traces with minimal changes by using our tracing integrations with Microsoft Agent Framework, Semantic Kernel, LangChain, LangGraph, and OpenAI Agent SDK.

-### Agents built on Microsoft Agent Framework and Semantic Kernel
-Azure AI Foundry has native integrations with Microsoft Agent Framework and Semantic Kernel. Agents built on these two frameworks get out-of-the-box tracing and evaluations support in Foundry Observability.
+### Tracing agents built on Microsoft Agent Framework and Semantic Kernel
+Azure AI Foundry has native integrations with Microsoft Agent Framework and Semantic Kernel. Agents built on these two frameworks get out-of-the-box tracing in Azure AI Foundry Observability.
+
+- Learn more about tracing in [Semantic Kernel](semantic-kernel/concepts/enterprise-readiness/observability)
+- Learn more about tracing in [Microsoft Agent Framework](/agent-framework/user-guide/workflows/observability)

 ### Enable tracing for Agents built on LangChain & LangGraph

@@ -301,7 +300,7 @@ pip install \
 - `AZURE_OPENAI_ENDPOINT`: Your Azure OpenAI endpoint URL.
 - `AZURE_OPENAI_CHAT_DEPLOYMENT`: The chat model deployment name.
 - `AZURE_OPENAI_VERSION`: API version, for example `2024-08-01-preview`.
-- Azure credentials are resolved via `DefaultAzureCredential` (supports env vars, managed identity, VS Code sign-in, etc.).
+- Azure credentials are resolved via `DefaultAzureCredential` (supports environment variables, managed identity, VS Code sign-in, etc.).

 You can store these in a `.env` file for local development.
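A minimal fail-fast check for the variables listed above can look like this. The values shown are placeholders for illustration, not real endpoints or deployment names; in practice they come from your Azure OpenAI resource via a `.env` file or your shell.

```python
import os

# Placeholder values for illustration only; real values come from your
# Azure OpenAI resource (for example via a .env file).
os.environ.setdefault("AZURE_OPENAI_ENDPOINT", "https://<your-resource>.openai.azure.com/")
os.environ.setdefault("AZURE_OPENAI_CHAT_DEPLOYMENT", "<your-deployment>")
os.environ.setdefault("AZURE_OPENAI_VERSION", "2024-08-01-preview")

# Fail fast if anything the LangChain/LangGraph sample needs is unset.
required = ("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_CHAT_DEPLOYMENT", "AZURE_OPENAI_VERSION")
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```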

@@ -641,7 +640,7 @@ Attach `callbacks=[azure_tracer]` to your chains, tools, or agents to ensure Lan

 ### Enable tracing for Agents built on OpenAI Agents SDK

-Use this snippet to configure OpenTelemetry tracing for the OpenAI Agents SDK and instrument the framework. It exports to Azure Monitor if `APPLICATION_INSIGHTS_CONNECTION_STRING` is set; otherwise it falls back to console.
+Use this snippet to configure OpenTelemetry tracing for the OpenAI Agents SDK and instrument the framework. It exports to Azure Monitor if `APPLICATION_INSIGHTS_CONNECTION_STRING` is set; otherwise, it falls back to the console.

 ```python
 import os
@@ -698,7 +697,7 @@ After selecting **Thread logs**, review:
 - Thread details
 - Run information
 - Ordered run steps and tool calls
-- Inputs / outputs between user and agent
+- Inputs and outputs between user and agent
 - Linked evaluation metrics (if enabled)

 :::image type="content" source="../../agents/media/thread-trace.png" alt-text="A screenshot of a trace." lightbox="../../agents/media/thread-trace.png":::
@@ -722,11 +721,11 @@ By selecting a trace, you can step through each span and identify issues while o

 If you logged traces using the previous code snippet, then you're all set to view your traces in Azure Monitor Application Insights. You can open Application Insights from **Manage data source** and use the **End-to-end transaction details view** to further investigate.

-For more information on how to send Azure AI Inference traces to Azure Monitor and create an Azure Monitor resource, see [Azure Monitor OpenTelemetry documentation](/azure/azure-monitor/app/opentelemetry-enable).
+For more information on how to send Azure AI Inference traces to Azure Monitor and create Azure Monitor resource, see [Azure Monitor OpenTelemetry documentation](/azure/azure-monitor/app/opentelemetry-enable).

 ## Related content

 - [Python samples](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-inference/samples/sample_chat_completions_with_tracing.py) containing fully runnable Python code for tracing using synchronous and asynchronous clients.
 - [Python samples for tracing agents with console tracing and Azure Monitor](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-agents/samples/agents_telemetry)
 - [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples/v1-beta/typescript/src) containing fully runnable JavaScript code for tracing using synchronous and asynchronous clients.
-- [C# Samples](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Inference_1.0.0-beta.2/sdk/ai/Azure.AI.Inference/samples/Sample8_ChatCompletionsWithOpenTelemetry.md) containing fully runnable C# code for doing inference using synchronous and asynchronous methods.
+- [C# Samples](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Inference_1.0.0-beta.2/sdk/ai/Azure.AI.Inference/samples/Sample8_ChatCompletionsWithOpenTelemetry.md) containing fully runnable C# code for doing inference using synchronous and asynchronous methods.
