---
title: How to trace your AI application
titleSuffix: Azure AI Foundry
description: This article provides instructions on how to trace your application with the Azure AI Inference SDK.
author: lgayhardt
ms.author: lagayhar
manager: scottpolly
ms.reviewer: amibp
ms.date: 05/19/2025
ms.service: azure-ai-foundry
ms.topic: how-to
ms.custom:
- build-2024
- ignite-2024
- build-aifnd
- build-2025
---

# Tracing your AI application (preview)

[!INCLUDE [feature-preview](../../includes/feature-preview.md)]

Tracing provides deep visibility into the execution of your application by capturing detailed telemetry at each execution step. It helps you diagnose issues and improve performance by surfacing problems such as inaccurate tool calls, misleading prompts, high latency, and low-quality evaluation scores.

This article walks you through instrumenting tracing in your AI applications using OpenTelemetry and Azure Monitor for enhanced observability and debugging.

Here's a brief overview of key concepts before getting started:

| Key concepts | Description |
|---|---|
| Traces | Traces capture the journey of a request or workflow through your application by recording events and state changes during execution, such as function calls, variable values, and system events. To learn more, see [OpenTelemetry Traces](https://opentelemetry.io/docs/concepts/signals/traces/). |
| Spans | Spans are the building blocks of traces, representing single operations within a trace. Each span captures start and end times and attributes, and spans can be nested to show hierarchical relationships, letting you see the full call stack and sequence of operations. |
| Attributes | Attributes are key-value pairs attached to traces and spans that provide contextual metadata such as function parameters, return values, or custom annotations. They enrich trace data, making it more informative and useful for analysis. |
| Semantic conventions | OpenTelemetry defines semantic conventions to standardize names and formats for trace data attributes, making the data easier to interpret and analyze across tools and platforms. To learn more, see [OpenTelemetry's Semantic Conventions](https://opentelemetry.io/docs/specs/semconv/gen-ai/). |
| Trace exporters | Trace exporters send trace data to backend systems for storage and analysis. Azure AI supports exporting traces to Azure Monitor and other OpenTelemetry-compatible platforms, enabling integration with a range of observability tools. |
## Setup

For chat completions or building agents with Azure AI Foundry, install:

```bash
pip install azure-ai-projects azure-identity
```

To instrument tracing, install the following instrumentation libraries:

```bash
pip install azure-monitor-opentelemetry opentelemetry-sdk
```

To view traces in Azure AI Foundry, connect an Application Insights resource to your Azure AI Foundry project:

1. Navigate to **Tracing** in the left navigation pane of the Azure AI Foundry portal.
2. Create a new Application Insights resource if you don't already have one.
3. Connect the resource to your AI Foundry project.
## Instrument tracing in your code

To trace the content of chat messages, set the `AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED` environment variable to `true` (case insensitive). Keep in mind that recorded content might contain personal data. To learn more, see [Azure Core Tracing OpenTelemetry client library for Python](/python/api/overview/azure/core-tracing-opentelemetry-readme).

```python
import os

os.environ["AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED"] = "true"  # False by default
```

Begin instrumenting your agent with OpenTelemetry tracing by authenticating and connecting to your Azure AI project using the `AIProjectClient`.

```python
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project_client = AIProjectClient(
    credential=DefaultAzureCredential(),
    endpoint=os.environ["PROJECT_ENDPOINT"],
)
```
Next, retrieve the connection string from the Application Insights resource connected to your project, and configure the Azure Monitor exporter to send telemetry to Azure Monitor.

```python
from azure.monitor.opentelemetry import configure_azure_monitor

connection_string = project_client.telemetry.get_connection_string()

if not connection_string:
    print("Application Insights is not enabled. Enable it by going to Tracing in your Azure AI Foundry project.")
    exit()

configure_azure_monitor(connection_string=connection_string)  # Enable telemetry collection
```
Now, wrap the code that creates and runs your agent and user message in a span, so you can see detailed steps for troubleshooting or monitoring.

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("example-tracing"):
    agent = project_client.agents.create_agent(
        model=os.environ["MODEL_DEPLOYMENT_NAME"],
        name="my-assistant",
        instructions="You are a helpful assistant",
    )
    thread = project_client.agents.create_thread()
    message = project_client.agents.create_message(
        thread_id=thread.id, role="user", content="Tell me a joke"
    )
    run = project_client.agents.create_run(thread_id=thread.id, agent_id=agent.id)
```

After running your agent, you can begin to [view traces in Azure AI Foundry portal](#view-traces-in-azure-ai-foundry-portal).
### Log traces locally

To connect to [Aspire Dashboard](https://aspiredashboard.com/#start) or another OpenTelemetry-compatible backend, install the OpenTelemetry Protocol (OTLP) exporter. This lets you print traces to the console or use a local viewer such as Aspire Dashboard.

```bash
pip install azure-core-tracing-opentelemetry opentelemetry-exporter-otlp opentelemetry-sdk
```

Next, configure tracing for your application.

```python
from azure.core.settings import settings

settings.tracing_implementation = "opentelemetry"

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Set up tracing to the console
span_exporter = ConsoleSpanExporter()
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(span_exporter))
trace.set_tracer_provider(tracer_provider)
```

Use `enable_telemetry` to begin collecting telemetry.

```python
import sys

from azure.ai.projects import enable_telemetry

enable_telemetry(destination=sys.stdout)

# To log to an OTLP endpoint instead, change the destination:
# enable_telemetry(destination="http://localhost:4317")
```
```python
# Start tracing
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("example-tracing"):
    agent = project_client.agents.create_agent(
        model=os.environ["MODEL_DEPLOYMENT_NAME"],
        name="my-assistant",
        instructions="You are a helpful assistant",
    )
    thread = project_client.agents.create_thread()
    message = project_client.agents.create_message(
        thread_id=thread.id, role="user", content="Tell me a joke"
    )
    run = project_client.agents.create_run(thread_id=thread.id, agent_id=agent.id)
```
## Trace custom functions

To trace your custom functions, use the OpenTelemetry SDK to instrument your code.

1. **Set up a tracer provider**: Initialize a tracer provider to manage and create spans.
2. **Create spans**: Wrap the code you want to trace in spans. Each span represents a unit of work and can be nested to form a trace tree.
3. **Add attributes**: Enrich spans with attributes to provide more context for the trace data.
4. **Configure an exporter**: Send the trace data to a backend for analysis and visualization.

Here's an example of tracing a custom function:

```python
from opentelemetry import trace

# Initialize a tracer for this module
tracer = trace.get_tracer(__name__)

def custom_function():
    with tracer.start_as_current_span("custom_function") as span:
        span.set_attribute("custom_attribute", "value")
        # Your function logic here
        print("Executing custom function")

custom_function()
```

For detailed instructions and advanced usage, refer to the [OpenTelemetry documentation](https://opentelemetry.io/docs/).
## Attach user feedback to traces

To attach user feedback to traces and visualize it in the Azure AI Foundry portal, instrument your application to enable tracing and log user feedback using OpenTelemetry's semantic conventions.

By correlating feedback traces with their respective chat request traces using the response ID or thread ID, you can view and manage these traces in the Azure AI Foundry portal. OpenTelemetry's specification allows for standardized and enriched trace data, which can be analyzed in the Azure AI Foundry portal for performance optimization and user experience insights. This approach helps you use the full power of OpenTelemetry for enhanced observability in your applications.

To log user feedback, follow this format:

The user feedback evaluation event can be captured if and only if the user provided a reaction to the GenAI model response. It SHOULD, when possible, be parented to the GenAI span describing that response.

The user feedback event body has the following structure:

| Body Field | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| `comment` | string | Additional details about the user feedback | `"I did not like it"` | `Opt-in` |
## Using service name in trace data

To identify your service via a unique ID in Application Insights, you can use the service name OpenTelemetry property in your trace data. This is useful if you're logging data from multiple applications to the same Application Insights resource and want to differentiate between them.

For example, suppose you have two applications, **App-1** and **App-2**, with tracing configured to log data to the same Application Insights resource, and each is set up to be evaluated continuously by **Relevance**. You can use the service name to filter by `Application` when monitoring your application in the AI Foundry portal.

To set the service name property directly in your application code, see [Using multiple tracer providers with different Resource](https://opentelemetry.io/docs/languages/python/cookbook/#using-multiple-tracer-providers-with-different-resource). Alternatively, you can set the environment variable `OTEL_SERVICE_NAME` before deploying your app. To learn more about working with the service name, see [OTEL Environment Variables](https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#general-sdk-configuration) and [Service Resource Semantic Conventions](https://opentelemetry.io/docs/specs/semconv/resource/#service).

To query trace data for a given service name, filter on the `cloud_RoleName` property.

```sql
| where cloud_RoleName == "service_name"
```
## Enable tracing for Langchain

You can enable tracing for Langchain that follows OpenTelemetry standards per [opentelemetry-instrumentation-langchain](https://pypi.org/project/opentelemetry-instrumentation-langchain/). To enable tracing for Langchain, install the package `opentelemetry-instrumentation-langchain` using your package manager, such as pip:

```bash
pip install opentelemetry-instrumentation-langchain
```

Once the necessary packages are installed, you can begin to [instrument tracing in your code](#instrument-tracing-in-your-code).
## View traces in Azure AI Foundry portal

In your project, go to **Tracing** to filter your traces as you see fit.

By selecting a trace, you can step through each span and identify issues while observing how your application is responding. This can help you debug and pinpoint issues in your application.

## View traces in Azure Monitor

If you logged traces using the previous code snippet, you're all set to view them in Azure Monitor Application Insights. You can open Application Insights from **Manage data source** and use the **End-to-end transaction details view** to investigate further.

For more information on how to send Azure AI Inference traces to Azure Monitor and create an Azure Monitor resource, see the [Azure Monitor OpenTelemetry documentation](/azure/azure-monitor/app/opentelemetry-enable).
## Related content

- [Python samples](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-inference/samples/sample_chat_completions_with_tracing.py) containing fully runnable Python code for tracing using synchronous and asynchronous clients.
- [Sample agents with console tracing](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_async_with_console_tracing.py)
- [Sample agents with Azure Monitor](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/agents_telemetry/sample_agents_basics_with_azure_monitor_tracing.py)
- [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples/v1-beta/typescript/src) containing fully runnable JavaScript code for tracing using synchronous and asynchronous clients.
- [C# samples](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Inference_1.0.0-beta.2/sdk/ai/Azure.AI.Inference/samples/Sample8_ChatCompletionsWithOpenTelemetry.md) containing fully runnable C# code for doing inference using synchronous and asynchronous methods.