In this article, you'll learn how to trace your application with the Azure AI Foundry SDK using your choice of Python, JavaScript, or C#. The SDK provides support for tracing with OpenTelemetry.
## Prerequisites
- An [Azure Subscription](https://azure.microsoft.com/).
- An Azure AI project, see [Create a project in Azure AI Foundry portal](../create-projects.md).
- An AI model supporting the [Azure AI model inference API](https://aka.ms/azureai/modelinference) deployed through Azure AI Foundry.
- If using Python, you need Python 3.8 or later installed, including pip.
- If using JavaScript, the supported environments are LTS versions of Node.js.
## Tracing using Azure AI Foundry project library
# [Python](#tab/python)
The best way to get started using the Azure AI Foundry SDK is by using a project. AI projects connect together different data, assets, and services you need to build AI applications. The AI project client allows you to easily access these project components from your code by using a single connection string. First, follow the steps to [create an AI Project](../create-projects.md) if you don't have one already.
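For example, here's a minimal sketch of creating a project client from a connection string (the placeholder value and the use of `DefaultAzureCredential` are illustrative assumptions):

```python
# A minimal sketch, assuming the azure-ai-projects and azure-identity packages are installed.
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str="<your-project-connection-string>",  # placeholder: copy from your project's overview page
)
```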
To enable tracing, first ensure your project has an attached Application Insights resource. Go to the **Tracing** page of your project in Azure AI Foundry portal and follow the instructions to create or attach Application Insights. Once enabled, you can get the Application Insights connection string and observe the full execution path through Azure Monitor.
Make sure to install the following packages via pip:
```bash
pip install opentelemetry-sdk
pip install azure-core-tracing-opentelemetry
pip install azure-monitor-opentelemetry
```
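With these packages installed, a hedged sketch of routing traces to the project's Application Insights resource might look like the following; the `telemetry.get_connection_string` call reflects the preview Azure AI Projects SDK and may change:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.monitor.opentelemetry import configure_azure_monitor

project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str="<your-project-connection-string>",  # placeholder
)

# Connection string of the Application Insights resource attached to the project.
app_insights_connection_string = project_client.telemetry.get_connection_string()

# Send OpenTelemetry traces to Azure Monitor / Application Insights.
configure_azure_monitor(connection_string=app_insights_connection_string)
```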
Refer to the following samples to get started with tracing using the Azure AI Projects SDK:
- [Python Sample with console tracing for Azure AI Inference Client](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-projects/samples/inference/sample_chat_completions_with_azure_ai_inference_client_and_console_tracing.py) containing fully runnable Python code for tracing using synchronous and asynchronous clients.
- [Python Sample with Azure Monitor for Azure AI Inference Client](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-inference/samples/sample_chat_completions_with_tracing.py) containing fully runnable Python code for tracing using synchronous and asynchronous clients.
- [Python Sample with console tracing for Azure OpenAI](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-projects/samples/inference/sample_chat_completions_with_azure_openai_client_and_console_tracing.py) containing fully runnable Python code for tracing using synchronous and asynchronous clients.
- [Python Sample with Azure Monitor for Azure OpenAI](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-projects/samples/inference/sample_chat_completions_with_azure_openai_client_and_azure_monitor_tracing.py) containing fully runnable Python code for tracing using synchronous and asynchronous clients.
# [JavaScript](#tab/javascript)
Tracing isn't yet integrated into the Azure AI Projects SDK for JS. For instructions on how to instrument and log traces from the Azure AI Inference package, see [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples/v1-beta/typescript/src).
# [C#](#tab/csharp)
Tracing isn't yet integrated into the Azure AI Projects SDK for C#. For instructions on how to instrument and log traces from the Azure AI Inference package, see [azure-sdk-for-dotnet](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/ai/Azure.AI.Inference/samples/Sample8_ChatCompletionsWithOpenTelemetry.md).
---
## Enable Tracing using Azure AI Inference Library
### Installation
# [Python](#tab/python)
Install the package `azure-ai-inference` using your package manager, like pip:
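```bash
pip install azure-ai-inference
```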
Install the Azure Core OpenTelemetry Tracing plugin, OpenTelemetry, and the OTLP exporter for sending telemetry to your observability backend. To install the necessary packages for Python, use the following pip commands:
```bash
pip install opentelemetry-sdk
pip install opentelemetry-exporter-otlp
```
---
To learn more, see the [Inference SDK reference](../../../ai-foundry/model-inference/reference/reference-model-inference-api.md).
### Configuration
You need to add the following configuration settings as per your use case:
- To capture prompt and completion contents, set the `AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED` environment variable to true (case insensitive). By default, prompts, completions, function names, parameters, or outputs aren't recorded.
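For example, in a bash shell this setting looks like:

```bash
export AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED=true
```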
To learn more, see [Azure Core Tracing OpenTelemetry client library for Python](/python/api/overview/azure/core-tracing-opentelemetry-readme).
It's also possible to uninstrument the Azure AI Inference API by using the uninstrument call. After this call, traces will no longer be emitted by the Azure AI Inference API until instrument is called again. The following is a minimal sketch using the `AIInferenceInstrumentor` class from the `azure-ai-inference` package:
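```python
from azure.ai.inference.tracing import AIInferenceInstrumentor

# Stop emitting traces from the Azure AI Inference API; call instrument() to re-enable.
AIInferenceInstrumentor().uninstrument()
```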
To configure OpenTelemetry and enable Azure AI Inference tracing, follow these steps:
1. **Install OpenTelemetry Packages**: Install the following dependencies for HTTP tracing and metrics instrumentation as well as console and [OTLP](https://opentelemetry.io/docs/specs/otel/protocol/) exporters:
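    A plausible set of packages for this step (a sketch; `opentelemetry-instrumentation-requests` is an assumption for `requests`-based HTTP calls and isn't named in this article):

    ```bash
    pip install opentelemetry-sdk opentelemetry-instrumentation-requests opentelemetry-exporter-otlp
    ```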
To trace your own custom functions, you can leverage OpenTelemetry. You'll need to instrument your code with the OpenTelemetry SDK, which involves setting up a tracer provider and creating spans around the code you want to trace. Each span represents a unit of work and can be nested to form a trace tree. You can add attributes to spans to enrich the trace data with additional context. Once instrumented, configure an exporter to send the trace data to a backend for analysis and visualization. For detailed instructions and advanced usage, refer to the [OpenTelemetry documentation](https://opentelemetry.io/docs/). This will help you monitor the performance of your custom functions and gain insights into their execution.
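As an illustrative sketch (function and attribute names here are placeholders, not part of any Azure SDK), instrumenting a custom function could look like this:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Set up a tracer provider that prints spans to the console.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def my_function(query: str) -> str:
    # Each span represents a unit of work; spans nest to form a trace tree.
    with tracer.start_as_current_span("my_function") as span:
        # Attributes enrich the trace data with additional context.
        span.set_attribute("app.query_length", len(query))
        return query.upper()

print(my_function("hello"))
```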
### Using service name in trace data
To identify your service via a unique ID in Application Insights, you can use the service name OpenTelemetry property in your trace data. This is particularly useful if you're logging data from multiple applications to the same Application Insights resource, and you want to differentiate between them. For example, let's say you have two applications: **App-1** and **App-2**, with tracing configured to log data to the same Application Insights resource. Perhaps you'd like to set up **App-1** to be evaluated continuously by **Relevance** and **App-2** to be evaluated continuously by **Groundedness**. You can use the service name to differentiate between the applications in your Online Evaluation configurations.
To set up the service name property, you can do so directly in your application code by following the steps described in [Using multiple tracer providers with different Resource](https://opentelemetry.io/docs/languages/python/cookbook/#using-multiple-tracer-providers-with-different-resource). Alternatively, you can set the environment variable `OTEL_SERVICE_NAME` prior to deploying your app. To learn more about working with the service name, see [OTEL Environment Variables](https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#general-sdk-configuration) and [Service Resource Semantic Conventions](https://opentelemetry.io/docs/specs/semconv/resource/#service).
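For instance, a minimal sketch of setting the service name in code via an OpenTelemetry `Resource` (using the example application name **App-1** from above):

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

# Attach a service name so Application Insights records it as cloud_RoleName.
resource = Resource.create({"service.name": "App-1"})
trace.set_tracer_provider(TracerProvider(resource=resource))
```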
To query trace data for a given service name, query for the `cloud_RoleName` property. If you're leveraging Online Evaluation, add the following line to the KQL query you use within your Online Evaluation set-up:
```sql
| where cloud_RoleName == "service_name"
```
## Enable Tracing for Langchain
# [Python](#tab/python)
You can enable tracing for Langchain that follows OpenTelemetry standards as per [opentelemetry-instrumentation-langchain](https://pypi.org/project/opentelemetry-instrumentation-langchain/). To enable tracing for Langchain, follow these steps:
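As a hedged sketch of those steps (assuming the package exposes a `LangchainInstrumentor` entry point, as its documentation describes):

```python
# First install the instrumentation package:
#   pip install opentelemetry-instrumentation-langchain
from opentelemetry.instrumentation.langchain import LangchainInstrumentor

# Instrument Langchain so its chains and LLM calls emit OpenTelemetry spans.
LangchainInstrumentor().instrument()
```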
---

## Attach user feedback to traces

To attach user feedback to traces and visualize them in Azure AI Foundry portal using OpenTelemetry's semantic conventions, you can instrument your application to enable tracing and log user feedback. By correlating feedback traces with their respective chat request traces using the response ID, you can view and manage these traces in Azure AI Foundry portal. OpenTelemetry's specification allows for standardized and enriched trace data, which can be analyzed in Azure AI Foundry portal for performance optimization and user experience insights. This approach helps you use the full power of OpenTelemetry for enhanced observability in your applications.
| Attribute | Type | Description | Examples | Requirement level |
|---|---|---|---|---|
| `comment` | string | Additional details about the user feedback | `"I did not like it"` | `Opt-in` |
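A hedged sketch of the correlation idea follows; the span name `user_feedback` and the `gen_ai.response.id` attribute are assumptions for illustration, while `comment` comes from the table above:

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def record_user_feedback(response_id: str, comment: str) -> None:
    # Correlate this feedback with the original chat request via its response ID.
    with tracer.start_as_current_span("user_feedback") as span:
        span.set_attribute("gen_ai.response.id", response_id)  # assumed attribute name
        span.set_attribute("comment", comment)  # opt-in attribute from the table above

record_user_feedback("chatcmpl-1234", "I did not like it")
```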
## Related content
- [Python samples](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-inference/samples/sample_chat_completions_with_tracing.py) containing fully runnable Python code for tracing using synchronous and asynchronous clients.
- [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples/v1-beta/typescript/src) containing fully runnable JavaScript code for tracing using synchronous and asynchronous clients.
- [C# Samples](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Inference_1.0.0-beta.2/sdk/ai/Azure.AI.Inference/samples/Sample8_ChatCompletionsWithOpenTelemetry.md) containing fully runnable C# code for doing inference using synchronous and asynchronous methods.