Tracing provides deep visibility into the execution of your application by capturing detailed telemetry at each step. This visibility helps you diagnose issues and improve performance by identifying problems such as inaccurate tool calls, misleading prompts, high latency, and low-quality evaluation scores.
This article explains how to implement tracing for AI applications using the **OpenAI SDK** with OpenTelemetry in Azure AI Foundry.
## Prerequisites
You need the following to complete this tutorial:
## Enable tracing in your project
Azure AI Foundry stores traces in an Azure Application Insights resource using OpenTelemetry. By default, new Azure AI Foundry resources don't provision one. You can connect projects to an existing Azure Application Insights resource or create a new one from within the project. You perform this configuration once per Azure AI Foundry resource.
The following steps show how to configure your resource:
1. Go to [Azure AI Foundry portal](https://ai.azure.com) and navigate to your project.
3. Select **Create** to create the resource and connect it to the Azure AI Foundry resource.
4. Once the connection is configured, you are ready to use tracing in any project within the resource.
5. Go to the landing page of your project and copy the project's endpoint URI. You need it later in the tutorial.
## Instrument the OpenAI SDK
When developing with the OpenAI SDK, you can instrument your code so traces are sent to Azure AI Foundry. Follow these steps:
1. Install `azure-ai-projects`, `azure-monitor-opentelemetry`, and `opentelemetry-instrumentation-openai-v2` in your environment. The following example uses `pip`:
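A minimal installation command for the packages named in this step, assuming a standard `pip` setup, might look like:

```shell
pip install azure-ai-projects azure-monitor-opentelemetry opentelemetry-instrumentation-openai-v2
```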
1. Instrument the OpenAI SDK with `OpenAIInstrumentor`:

```python
from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

OpenAIInstrumentor().instrument()
```
1. Get the connection string to the Azure Application Insights resource associated with your project:
```python
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project_client = AIProjectClient(
    endpoint="<your-project-endpoint>",  # the project endpoint URI you copied earlier
    credential=DefaultAzureCredential(),
)

# Retrieve the Application Insights connection string for the project
connection_string = project_client.telemetry.get_connection_string()
```
>
> :::image type="content" source="../../media/how-to/develop/trace-application/tracing-copy-connection-string.png" alt-text="A screenshot showing how to copy the connection string to the underlying Azure Application Insights resource from a project." lightbox="../../media/how-to/develop/trace-application/tracing-copy-connection-string.png":::
1. Configure OpenTelemetry to send traces to the Azure Application Insights resource:
```python
from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor(connection_string=connection_string)
```
1. By default, OpenTelemetry doesn't capture inputs and outputs. Use the environment variable `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true` to capture them. Ensure this environment variable is set in the environment where your code runs.
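For example, in a Bash shell you could set the variable before launching your application (a sketch; the variable name comes from this step):

```shell
export OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
```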
1. Use the OpenAI SDK as usual:
```python
client = project_client.get_azure_openai_client()

response = client.chat.completions.create(
    model="gpt-4o",  # use the name of your model deployment
    messages=[
        {"role": "user", "content": "Write a short poem about telemetry."},
    ],
)

print(response.choices[0].message.content)
```
:::image type="content" source="../../media/how-to/develop/trace-application/tracing-display-simple.png" alt-text="A screenshot showing how a simple chat completion request is displayed in the trace." lightbox="../../media/how-to/develop/trace-application/tracing-display-simple.png":::
1. When developing complex applications, it can be useful to capture sections of your code that mix business logic with model calls. OpenTelemetry uses the concept of spans to capture the sections you're interested in. To start emitting your own spans, get an instance of the current **tracer** object:
```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)
```
1. Then, use decorators in your method to capture specific scenarios in your code that you are interested in. Such decorators generate spans automatically. The following code example instruments a method called `assess_claims_with_context`, which iterates over a list of claims and verifies whether each claim is supported by the context using an LLM. All the calls made in this method are captured within the same span:
:::image type="content" source="../../media/how-to/develop/trace-application/tracing-display-decorator.png" alt-text="A screenshot showing how a method using a decorator is displayed in the trace." lightbox="../../media/how-to/develop/trace-application/tracing-display-decorator.png":::
1. You may also want to add extra information to the current span. OpenTelemetry uses the concept of **attributes** for that. Use the `trace` object to access the current span and attach extra information. See how the `assess_claims_with_context` method can be modified to include an attribute: