Models deployed to Azure AI Foundry support the Azure AI model inference API, which is standard across all the models. Because the API is the same for every model, you can chain multiple LLM operations and pick the most suitable model for each step based on its capabilities.
In the following example, we create two model clients: one acts as a producer and the other as a verifier. To make the distinction clear, we use a multi-model endpoint like the [Azure AI model inference service](../../ai-services/model-inference.md) and pass the `model_name` parameter to select a `Mistral-Large` model for the producer and a `Mistral-Small` model for the verifier, reflecting the fact that **producing content is more complex than verifying it**.
```python
import os

from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# The multi-model endpoint serves both models; only model_name differs.
producer = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model_name="Mistral-Large",
)

verifier = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model_name="Mistral-Small",
)
```
The previous chain returns the output of the `verifier` step only. Since we want to access the intermediate result generated by the `producer`, in LangChain you need to use a `RunnablePassthrough` object to also output that intermediate step. The following code shows how to do it:
```python
from langchain_core.runnables import RunnablePassthrough, RunnableParallel
```
If you are using the Azure OpenAI service, or the Azure AI model inference service with OpenAI models, with the `langchain-azure-ai` package, you may need to use the `api_version` parameter to select a specific API version. The following example shows how to connect to an Azure OpenAI model deployment in the Azure OpenAI service:
```python
import os

from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# The deployment-specific endpoint and the API version together select
# the exact model deployment to talk to.
llm = AzureAIChatCompletionsModel(
    endpoint="https://<resource>.openai.azure.com/openai/deployments/<deployment-name>",
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    api_version="2024-05-01-preview",
)
```
If you need to debug your application and understand the requests sent to the models in Azure AI Foundry, you can use the debug capabilities of the integration as follows:
First, configure logging to the level you are interested in:
```python
import sys
import logging

# Acquire the logger for this client library. Use 'azure' to affect both
# 'azure.core' and 'azure.ai.inference' libraries.
logger = logging.getLogger("azure")

# Set the desired logging level. logging.INFO or logging.DEBUG are good options.
logger.setLevel(logging.DEBUG)

# Direct logging output to stdout:
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)
```
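As a quick, locally runnable check of why a single handler on the `azure` logger is enough: child loggers such as `azure.core` propagate their records up to the parent. The buffer and message below are illustrative, not Azure SDK output:

```python
import io
import logging

# Same pattern as above, but capture output in a buffer so the
# behavior can be verified without calling Azure.
buffer = io.StringIO()
logger = logging.getLogger("azure")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(stream=buffer))

# A record logged by a child logger ("azure.core") propagates up to
# the handler registered on the parent "azure" logger.
logging.getLogger("azure.core").debug("request sent")

captured = buffer.getvalue().strip()
```

This propagation is why configuring the `azure` logger covers both `azure.core` and `azure.ai.inference` at once.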
You can use the tracing capabilities in Azure AI Foundry by creating a tracer. Logs are stored in Azure Application Insights and can be queried at any time using Azure Monitor or the Azure AI Foundry portal. Each AI Hub has an Azure Application Insights instance associated with it.
### Get your instrumentation connection string
You can configure your application to send telemetry to Azure Application Insights either by:

1. Using the connection string to Azure Application Insights directly:

    1. Go to [Azure AI Foundry portal](https://ai.azure.com) and select **Tracing**.

    2. Select **Manage data source**. In this screen you can see the instance that is associated with the project.

    3. Copy the value at **Connection string** and set it to the following variable:
The following code creates a tracer connected to the Azure Application Insights behind a project in Azure AI Foundry. Notice that the parameter `enable_content_recording` is set to `True`. This enables the capture of the inputs and outputs of the entire application, as well as the intermediate steps. This is helpful when debugging and building applications, but you may want to disable it in production environments. The parameter defaults to the environment variable `AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED`:
```python
from langchain_azure_ai.callbacks.tracers import AzureAIInferenceTracer

# The variable name below is assumed; it should hold the connection
# string copied from the portal in the previous section.
tracer = AzureAIInferenceTracer(
    connection_string=application_insights_connection_string,
    enable_content_recording=True,
)

# Attach the tracer as a callback so invocations of the chain are traced.
chain = chain.with_config({"callbacks": [tracer]})

chain.invoke({"topic": "living in a foreign country"})
```
### View traces

To see traces:

1. Go to [Azure AI Foundry portal](https://ai.azure.com).

2. Navigate to the **Tracing** section.

3. Identify the trace you have created. It may take a couple of seconds for the trace to show.

:::image type="content" source="../../media/how-to/develop-langchain/langchain-portal-tracing-example.png" alt-text="A screenshot showing the trace of a chain." lightbox="../../media/how-to/develop-langchain/langchain-portal-tracing-example.png":::

Learn more about [how to visualize and manage traces](visualize-traces.md).
## Next steps

* [Develop applications with LlamaIndex](llama-index.md)
* [Visualize and manage traces in Azure AI Foundry](visualize-traces.md)
* [Use the Azure AI model inference service](../../ai-services/model-inference.md)
* [Reference: Azure AI model inference API](../../reference/reference-model-inference-api.md)