articles/ai-foundry/how-to/develop/trace-application.md (67 additions, 52 deletions)

To view traces in Azure AI Foundry, you need to connect an Application Insights resource to your project.

To trace the content of chat messages, set the `AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED` environment variable to true (case insensitive). Keep in mind this might contain personal data. To learn more, see [Azure Core Tracing OpenTelemetry client library for Python](/python/api/overview/azure/core-tracing-opentelemetry-readme).

```python
import os

os.environ["AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED"] = "true"  # False by default
```

Let's begin instrumenting our agent with OpenTelemetry tracing, starting by authenticating and connecting to your Azure AI project using the `AIProjectClient`.
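
The connection itself isn't shown in this excerpt; here's a minimal sketch, assuming your project endpoint is stored in a `PROJECT_ENDPOINT` environment variable (older `azure-ai-projects` versions connect with `AIProjectClient.from_connection_string` instead):

```python
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

# Assumption: PROJECT_ENDPOINT holds the endpoint of your Azure AI Foundry project.
project_client = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)
```
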
Next, retrieve the connection string from the Application Insights resource connected to your project and set up the OTLP exporters to send telemetry into Azure Monitor.

Start collecting telemetry and send it to your project's connected Application Insights resource.
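
The following sketch shows that wiring; it assumes the project client exposes the connection string of the connected Application Insights resource through `project_client.telemetry.get_connection_string()` (check your `azure-ai-projects` version if the method differs):

```python
from azure.monitor.opentelemetry import configure_azure_monitor

# Assumption: the telemetry operations on the project client return the connection
# string of the Application Insights resource connected to your project.
connection_string = project_client.telemetry.get_connection_string()
if not connection_string:
    raise RuntimeError(
        "No Application Insights resource is connected to this project. "
        "Connect one from the Tracing page in the Azure AI Foundry portal."
    )

# Route OpenTelemetry traces, metrics, and logs to Azure Monitor.
configure_azure_monitor(connection_string=connection_string)
```
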
Now, trace the code where you create and run your agent and user message in your Azure AI project, so you can see detailed steps for troubleshooting or monitoring.

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("example-tracing"):
    agent = project_client.agents.create_agent(
        model=os.environ["MODEL_DEPLOYMENT_NAME"],
        name="my-assistant",
        instructions="You are a helpful assistant"
    )
    thread = project_client.agents.create_thread()
    message = project_client.agents.create_message(
        thread_id=thread.id, role="user", content="Tell me a joke"
    )
    run = project_client.agents.create_run(thread_id=thread.id, agent_id=agent.id)
```

After running your agent, you can [view traces in the Azure AI Foundry portal](#view-traces-in-azure-ai-foundry-portal).

### Log traces locally

To connect to [Aspire Dashboard](https://aspiredashboard.com/#start) or another OpenTelemetry compatible backend, install the OpenTelemetry Protocol (OTLP) exporter. This enables you to print traces to the console or use a local viewer such as Aspire Dashboard.
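
The package installation and exporter wiring are not shown in this excerpt; the sketch below is one way to set it up, assuming Aspire Dashboard (or another collector) accepts OTLP over gRPC at `http://localhost:4317`:

```python
# pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-grpc
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()

# Print spans to the console for quick inspection.
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))

# Also send spans to a local OTLP endpoint such as Aspire Dashboard.
# Assumption: the dashboard listens for OTLP/gRPC at localhost:4317.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)

trace.set_tracer_provider(provider)
```

With a provider registered, the same agent code from earlier produces spans locally:
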
```python
with tracer.start_as_current_span("example-tracing"):
    agent = project_client.agents.create_agent(
        model=os.environ["MODEL_DEPLOYMENT_NAME"],
        name="my-assistant",
        instructions="You are a helpful assistant"
    )
    thread = project_client.agents.create_thread()
    message = project_client.agents.create_message(
        thread_id=thread.id, role="user", content="Tell me a joke"
    )
    run = project_client.agents.create_run(thread_id=thread.id, agent_id=agent.id)
```

## Trace custom functions
To trace your custom functions, use the OpenTelemetry SDK to instrument your code.
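
For instance, here's a minimal sketch of wrapping one of your own functions in a span; the function name, attribute, and event are illustrative rather than taken from the article:

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Illustrative example: wrap any of your own functions in a span so its
# execution shows up in the trace alongside the agent spans.
def custom_function():
    with tracer.start_as_current_span("custom_function") as span:
        span.set_attribute("example.attribute", "value")
        result = 2 + 2  # your business logic goes here
        span.add_event("custom_function completed", {"result": result})
        return result

custom_function()
```
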
For detailed instructions and advanced usage, refer to the [OpenTelemetry documentation](https://opentelemetry.io/docs/).
## Attach user feedback to traces
To attach user feedback to traces and visualize it in the Azure AI Foundry portal, you can instrument your application to enable tracing and log user feedback using OpenTelemetry's semantic conventions.
By correlating feedback traces with their respective chat request traces using the response ID or thread ID, you can view and manage these traces in Azure AI Foundry portal. OpenTelemetry's specification allows for standardized and enriched trace data, which can be analyzed in Azure AI Foundry portal for performance optimization and user experience insights. This approach helps you use the full power of OpenTelemetry for enhanced observability in your applications.

The user feedback evaluation event can be captured if and only if the user provided a reaction to the GenAI model response. It SHOULD, when possible, be parented to the GenAI span describing such response.

The event name MUST be `gen_ai.evaluation.user_feedback`. The event carries the following attributes:

| Attribute | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` | `Required` |
| `gen_ai.evaluation.score` | double | Quantified score calculated based on the user reaction in the [-1.0, 1.0] range, with 0 representing a neutral reaction. | `0.42` | `Recommended` |

The user feedback event body has the following structure:

| Body Field | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| `comment` | string | Additional details about the user feedback | `"I did not like it"` | `Opt-in` |
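
As an illustration (not taken from the article), here's one way you could emit such an event with the OpenTelemetry API, attaching it to the span that produced the model response. The attribute and event names follow the convention above; the helper function and its arguments are hypothetical, and the `comment` body field is recorded as an attribute only for simplicity:

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Hypothetical helper: call this when the user reacts to a model response.
def record_user_feedback(response_id: str, score: float, comment: str = ""):
    # Parent the feedback event to the span describing the GenAI response when
    # possible; here we attach it to the current span for simplicity.
    span = trace.get_current_span()
    span.add_event(
        "gen_ai.evaluation.user_feedback",
        attributes={
            "gen_ai.response.id": response_id,
            "gen_ai.evaluation.score": score,
            "comment": comment,  # opt-in body field, shown as an attribute here
        },
    )

record_user_feedback("chatcmpl-123", 1.0, "Great answer!")
```
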
## Using service name in trace data
To identify your service via a unique ID in Application Insights, you can use the service name OpenTelemetry property in your trace data. This is useful if you're logging data from multiple applications to the same Application Insights resource, and you want to differentiate between them.
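
The setup isn't shown here, but one common way to do this is the standard `OTEL_SERVICE_NAME` environment variable (or the `service.name` resource attribute), which the OpenTelemetry SDK picks up when the tracer provider is created; the service name `my-agent-app` below is just an example:

```python
import os

# Set before configuring exporters (for example, before calling
# configure_azure_monitor) so the SDK's default Resource records it as
# service.name, which surfaces as cloud_RoleName in Application Insights.
os.environ["OTEL_SERVICE_NAME"] = "my-agent-app"
```
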

To query trace data for a given service name, query for the `cloud_RoleName` property:

```kusto
| where cloud_RoleName == "service_name"
```

## Enable tracing for Langchain
You can enable tracing for Langchain that follows OpenTelemetry standards as per [opentelemetry-instrumentation-langchain](https://pypi.org/project/opentelemetry-instrumentation-langchain/). To enable tracing for Langchain, install the package `opentelemetry-instrumentation-langchain` using your package manager, like pip:
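
```bash
pip install opentelemetry-instrumentation-langchain
```

After installing, a rough sketch of enabling the instrumentation in code (the `LangchainInstrumentor` entry point is assumed from the package, not from this article):

```python
from opentelemetry.instrumentation.langchain import LangchainInstrumentor

# Emit OpenTelemetry spans for Langchain chains, LLM calls, and tools.
LangchainInstrumentor().instrument()
```
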
Once necessary packages are installed, you can easily begin to [Instrument tracing in your code](#instrument-tracing-in-your-code).
## View traces in Azure AI Foundry portal
In your project, go to `Tracing` to filter your traces as you see fit.
By selecting a trace, you can step through each span and identify issues while observing how your application is responding. This can help you debug and pinpoint issues in your application.
## View traces in Azure Monitor
If you logged traces using the previous code snippet, you're all set to view your traces in Azure Monitor Application Insights. You can open Application Insights from **Manage data source** and use the **End-to-end transaction details view** to investigate further.
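
For example, a query like the following (a sketch; it assumes your spans land in the `dependencies` table and that you set the service name shown earlier) lists recent spans from your application:

```kusto
dependencies
| where timestamp > ago(1h)
| where cloud_RoleName == "my-agent-app"
| order by timestamp desc
```
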
articles/ai-foundry/tutorials/screen-reader.md (1 addition, 1 deletion)

Azure AI Foundry has two different project types - see [What is Azure AI Foundry?](../what-is-azure-ai-foundry.md#project-types). The type appears in the **Type** column in the **All resources** view. In the recent resources picker, the type is in a second line under the project name.
- Listen for either **(AI Foundry)** or **Foundry project** for a [!INCLUDE [fdp-project-name](../includes/fdp-project-name.md)].
- Listen for **(Hub)** for a [!INCLUDE [hub-project-name](../includes/hub-project-name.md)].

The left pane is organized around your goals. Generally, as you develop with Azure AI Foundry, you move through these stages:

* **Define and explore**. In this stage, you define your project goals, and then explore and test models and services against your use case to find the ones that enable you to achieve your goals.
* **Build and customize**. In this stage, you're actively building solutions and applications with the models, tools, and capabilities you selected. You can also customize models to perform better for your use case by fine-tuning, grounding in your data, and more. Building and customizing might be something you choose to do in the Azure AI Foundry portal, or through code and the Azure AI Foundry SDKs. Either way, a project provides you with everything you need.
* Once you're actively developing in your project, the **Overview** page shows the things you want easy access to, like your endpoints and keys.
* **Observe and improve**. In this stage, you're looking for where you can improve your application's performance. You might choose to use tools like tracing to debug your application or compare evaluations to hone in on how you want your application to behave. You can also integrate with safety & security systems so you can be confident when you take your application to production.

If you're an admin, or leading a development team, and need to manage the team's resources, project access, quota, and more, you can do that in the Management Center.

Azure AI Foundry is available in most regions where Azure AI services are available.

You can [explore Azure AI Foundry portal (including the model catalog)](./how-to/model-catalog-overview.md) without signing in.
But for full functionality, you need an [Azure account](https://azure.microsoft.com/pricing/purchase-options/azure-account).
## Related content
- [Quickstart: Get started with Azure AI Foundry](quickstarts/get-started-code.md)
- [Create a project](./how-to/create-projects.md)
- [Get started with an AI template](how-to/develop/ai-template-get-started.md)