
Commit 4a55643

Merge pull request #7529 from nagkumar91/assistant/new-branch
docs: refresh LangChain tutorial and tracing sample
2 parents 1b3f232 + 02deabf commit 4a55643

File tree

1 file changed: +44 −44 lines changed

articles/ai-foundry/how-to/develop/langchain.md

Lines changed: 44 additions & 44 deletions
@@ -1,7 +1,7 @@
 ---
-title: Develop application with LangChain and Azure AI Foundry
+title: Develop applications with LangChain and Azure AI Foundry
 titleSuffix: Azure AI Foundry
-description: This article explains how to use LangChain with models deployed in Azure AI Foundry portal to build advance intelligent applications.
+description: Learn how to use LangChain with models deployed in Azure AI Foundry to build advanced, intelligent applications.
 ms.service: azure-ai-foundry
 ms.custom:
   - ignite-2024
@@ -15,31 +15,31 @@ author: sdgilley
 
 # Develop applications with LangChain and Azure AI Foundry
 
-LangChain is a development ecosystem that makes as easy possible for developers to build applications that reason. The ecosystem is composed by multiple components. Most of the them can be used by themselves, allowing you to pick and choose whichever components you like best.
+LangChain is a developer ecosystem that makes it easier to build reasoning applications. It includes multiple components, and most of them can be used independently, allowing you to pick and choose the pieces you need.
 
 Models deployed to [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs) can be used with LangChain in two ways:
 
-- **Using the Azure AI Model Inference API:** All models deployed to Azure AI Foundry support the [Model Inference API](../../../ai-foundry/model-inference/reference/reference-model-inference-api.md), which offers a common set of functionalities that can be used for most of the models in the catalog. The benefit of this API is that, since it's the same for all the models, changing from one to another is as simple as changing the model deployment being use. No further changes are required in the code. When working with LangChain, install the extensions `langchain-azure-ai`.
+- **Use the Azure AI Model Inference API:** All models deployed in Azure AI Foundry support the [Model Inference API](../../../ai-foundry/model-inference/reference/reference-model-inference-api.md), which offers a common set of capabilities across most models in the catalog. Because the API is consistent, switching models is as simple as changing the deployment; no code changes are required. With LangChain, install the `langchain-azure-ai` integration.
 
-- **Using the model's provider specific API:** Some models, like OpenAI, Cohere, or Mistral, offer their own set of APIs and extensions for LangChain. Those extensions might include specific functionalities that the model support and hence are suitable if you want to exploit them. When working with LangChain, install the extension specific for the model you want to use, like `langchain-openai` or `langchain-cohere`.
+- **Use the model provider’s API:** Some models, such as OpenAI, Cohere, or Mistral, offer their own APIs and LangChain extensions. These extensions might include model-specific capabilities and are suitable if you need to use them. Install the extension for your chosen model, such as `langchain-openai` or `langchain-cohere`.
 
-In this tutorial, you learn how to use the packages `langchain-azure-ai` to build applications with LangChain.
+This tutorial shows how to use the `langchain-azure-ai` package with LangChain.
 
 ## Prerequisites
 
 To run this tutorial, you need:
 
 * [!INCLUDE [azure-subscription](../../includes/azure-subscription.md)]
 
-* A model deployment supporting the [Model Inference API](https://aka.ms/azureai/modelinference) deployed. In this example, we use a `Mistral-Large-2411` deployment in the [Foundry Models](../../../ai-foundry/model-inference/overview.md).
+* A model deployment that supports the [Model Inference API](https://aka.ms/azureai/modelinference). This article uses a `Mistral-Large-2411` deployment available in the [Azure AI Foundry model catalog](../../../ai-foundry/model-inference/overview.md).
 * Python 3.9 or later installed, including pip.
-* LangChain installed. You can do it with:
+* LangChain installed. You can install it with:
 
 ```bash
 pip install langchain
 ```
 
-* In this example, we're working with the Model Inference API, hence we install the following packages:
+* Install the Azure AI Foundry integration:
 
 ```bash
 pip install -U langchain-azure-ai
@@ -49,7 +49,7 @@ To run this tutorial, you need:
 
 [!INCLUDE [set-endpoint](../../includes/set-endpoint.md)]
 
-Once configured, create a client to connect with the chat model by using the `init_chat_model`. For Azure OpenAI models, configure the client as indicated at [Using Azure OpenAI models](#using-azure-openai-models).
+After configuration, create a client to connect to the chat model using `init_chat_model`. For Azure OpenAI models, see [Use Azure OpenAI models](#use-azure-openai-models).
 
 ```python
 from langchain.chat_models import init_chat_model
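For reference, a minimal sketch of such a client, assuming the `AzureAIChatCompletionsModel` class from `langchain-azure-ai` shown later in this article and the environment variables from the prerequisites (the deployment name is illustrative):

```python
import os

from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# Build a chat client against the Azure AI Foundry endpoint configured
# in the environment; the deployment name below is an assumption.
model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="Mistral-Large-2411",
)
```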
@@ -82,7 +82,7 @@ model = AzureAIChatCompletionsModel(
 > [!NOTE]
 > When using Microsoft Entra ID, make sure that the endpoint was deployed with that authentication method and that you have the required permissions to invoke it.
 
-If you're planning to use asynchronous calling, it's a best practice to use the asynchronous version for the credentials:
+If you plan to use asynchronous calls, use the asynchronous version of the credentials:
 
 ```python
 from azure.identity.aio import (
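A hedged sketch of the asynchronous variant, assuming Microsoft Entra ID authentication through `DefaultAzureCredential` from `azure.identity.aio`:

```python
import os

from azure.identity.aio import DefaultAzureCredential
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# The async credential is exercised when calling the model through
# ainvoke/astream; the deployment name is an assumption.
model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=DefaultAzureCredential(),
    model="Mistral-Large-2411",
)
```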
@@ -97,7 +97,7 @@ model = AzureAIChatCompletionsModel(
 )
 ```
 
-If your endpoint is serving one model, like with the serverless API deployment, you don't have to indicate `model` parameter:
+If your endpoint serves a single model (for example, serverless API deployments), omit the `model` parameter:
 
 ```python
 import os
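For the single-model case, a sketch under the same environment-variable assumptions; the `model` parameter is simply left out:

```python
import os

from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# Serverless API deployments expose exactly one model, so no
# `model` parameter is required.
model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
)
```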
@@ -109,13 +109,13 @@ model = AzureAIChatCompletionsModel(
 )
 ```
 
-## Use chat completions models
+## Use chat completion models
 
-Let's first use the model directly. `ChatModels` are instances of LangChain `Runnable`, which means they expose a standard interface for interacting with them. To call the model, we can pass in a list of messages to the `invoke` method.
+Use the model directly. `ChatModels` are instances of the LangChain `Runnable` interface, which provides a standard way to interact with them. To call the model, pass a list of messages to the `invoke` method.
 
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=human_message)]
 
-You can also compose operations as needed in **chains**. Let's now use a prompt template to translate sentences:
+Compose operations as needed in chains. Use a prompt template to translate sentences:
 
 ```python
 from langchain_core.output_parsers import StrOutputParser
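The message-passing example lives in a notebook include; a minimal sketch of the pattern it describes, using the standard `langchain_core` message types:

```python
from langchain_core.messages import HumanMessage, SystemMessage

# `invoke` accepts a list of messages and returns an AIMessage.
messages = [
    SystemMessage(content="Translate the following from English into Italian."),
    HumanMessage(content="hi!"),
]
response = model.invoke(messages)
print(response.content)
```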
@@ -127,43 +127,43 @@ prompt_template = ChatPromptTemplate.from_messages(
 )
 ```
 
-As you can see from the prompt template, this chain has a `language` and `text` input. Now, let's create an output parser:
+This chain takes `language` and `text` inputs. Now, create an output parser:
 
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_output_parser)]
 
-We can now combine the template, model, and the output parser from above using the pipe (`|`) operator:
+Combine the template, model, and output parser using the pipe (`|`) operator:
 
 ```python
 chain = prompt_template | model | parser
 ```
 
-To invoke the chain, identify the inputs required and provide values using the `invoke` method:
+Invoke the chain by providing `language` and `text` values using the `invoke` method:
 
 ```python
 chain.invoke({"language": "italian", "text": "hi"})
 ```
 
-### Chaining multiple LLMs together
+### Chain multiple LLMs together
 
-Models deployed to Azure AI Foundry support the Model Inference API, which is standard across all the models. Chain multiple LLM operations based on the capabilities of each model so you can optimize for the right model based on capabilities.
+Because models in Azure AI Foundry expose a common Model Inference API, you can chain multiple LLM operations and choose the model best suited to each step.
 
-In the following example, we create two model clients. One is a producer and another one is a verifier. To make the distinction clear, we're using a multi-model endpoint like the [Foundry Models API](../../model-inference/overview.md) and hence we're passing the parameter `model` to use a `Mistral-Large` and a `Mistral-Small` model, quoting the fact that **producing content is more complex than verifying it**.
+In the following example, we create two model clients: one producer and one verifier. To make the distinction clear, we use a multi-model endpoint such as the [Model Inference API](../../model-inference/overview.md) and pass the `model` parameter to use `Mistral-Large` for generation and `Mistral-Small` for verification. Producing content generally requires a larger model, while verification can use a smaller one.
 
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_producer_verifier)]
 
 
 > [!TIP]
-> Explore the model card of each of the models to understand the best use cases for each model.
+> Review the model card for each model to understand the best use cases.
 
 The following example generates a poem written by an urban poet:
 
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=generate_poem)]
 
-Now let's chain the pieces:
+Chain the pieces:
 
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_chain)]
 
-The previous chain returns the output of the step `verifier` only. Since we want to access the intermediate result generated by the `producer`, in LangChain you need to use a `RunnablePassthrough` object to also output that intermediate step.
+The previous chain returns only the output of the `verifier` step. To access the intermediate result generated by the `producer`, use a `RunnablePassthrough` to output that intermediate step.
 
 ```python
 from langchain_core.runnables import RunnablePassthrough, RunnableParallel
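A sketch of the producer/verifier pair the example describes, assuming a multi-model endpoint; both deployment names are illustrative:

```python
import os

from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# A larger model generates content; a smaller model verifies it.
producer = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="Mistral-Large-2411",  # assumed deployment name
)
verifier = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="Mistral-small",  # assumed deployment name
)
```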
@@ -174,41 +174,41 @@ verify_poem = verifier_template | verifier | parser
 chain = generate_poem | RunnableParallel(poem=RunnablePassthrough(), verification=RunnablePassthrough() | verify_poem)
 ```
 
-To invoke the chain, identify the inputs required and provide values using the `invoke` method:
+Invoke the chain using the `invoke` method:
 
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=invoke_chain)]
 
 
-## Use embeddings models
+## Use embedding models
 
-In the same way, you create an LLM client, you can connect to an embeddings model. In the following example, we're setting the environment variable to now point to an embeddings model:
+Create an embeddings client similarly. Set the environment variables to point to an embeddings model:
 
 ```bash
 export AZURE_INFERENCE_ENDPOINT="<your-model-endpoint-goes-here>"
 export AZURE_INFERENCE_CREDENTIAL="<your-key-goes-here>"
 ```
 
-Then create the client:
+Create the client:
 
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=create_embed_model_client)]
 
-The following example shows a simple example using a vector store in memory:
+Use an in-memory vector store:
 
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=create_vector_store)]
 
 
-Let's add some documents:
+Add documents:
 
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=add_documents)]
 
 
-Let's search by similarity:
+Search by similarity:
 
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=search_similarity)]
 
-## Using Azure OpenAI models
+## Use Azure OpenAI models
 
-If you're using Azure OpenAI models with `langchain-azure-ai` package, use the following URL:
+When using Azure OpenAI models with the `langchain-azure-ai` package, use the following endpoint format:
 
 ```python
 from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
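The embedding examples above also live in notebook includes; a minimal end-to-end sketch of that flow, assuming an `AzureAIEmbeddingsModel` class in `langchain_azure_ai.embeddings` and an illustrative embeddings deployment name:

```python
import os

from langchain_azure_ai.embeddings import AzureAIEmbeddingsModel
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore

embed_model = AzureAIEmbeddingsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="text-embedding-3-small",  # assumed deployment name
)

# In-memory store: convenient for demos, not meant for production.
vector_store = InMemoryVectorStore(embedding=embed_model)
vector_store.add_documents(
    [
        Document(page_content="LangChain composes LLM calls into chains."),
        Document(page_content="Azure AI Foundry hosts model deployments."),
    ]
)

results = vector_store.similarity_search("How do I chain LLM calls?", k=1)
print(results[0].page_content)
```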
@@ -222,14 +222,14 @@ llm = AzureAIChatCompletionsModel(
 
 ## Debugging and troubleshooting
 
-If you need to debug your application and understand the requests sent to the models in Azure AI Foundry, you can use the debug capabilities of the integration as follows:
+If you need to debug your application and understand the requests sent to models in Azure AI Foundry, use the integration’s debug capabilities:
 
-First, configure logging to the level you are interested in:
+First, configure logging to the desired level:
 
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=configure_logging)]
 
 
-To see the payloads of the requests, when instantiating the client, pass the argument `logging_enable`=`True` to the `client_kwargs`:
+To see request payloads, pass `logging_enable=True` in `client_kwargs` when instantiating the client:
 
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_client_with_logging)]
 
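A sketch of that debug setup, assuming the standard Azure SDK `logging_enable` flag is forwarded through `client_kwargs`:

```python
import logging
import os
import sys

from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# Surface the Azure SDK HTTP logs at DEBUG level on stdout.
logger = logging.getLogger("azure")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(stream=sys.stdout))

model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="Mistral-Large-2411",  # assumed deployment name
    client_kwargs={"logging_enable": True},  # include request/response payloads
)
```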
@@ -238,7 +238,7 @@ Use the client as usual in your code.
 
 ## Tracing
 
-You can use the tracing capabilities in Azure AI Foundry by creating a tracer. Logs are stored in Azure Application Insights and can be queried at any time using Azure Monitor or Azure AI Foundry portal. Each AI Hub has an Azure Application Insights associated with it.
+Use tracing in Azure AI Foundry by creating a tracer. Logs are stored in Azure Application Insights and can be queried at any time using Azure Monitor or the Azure AI Foundry portal. Each AI hub has an associated Azure Application Insights instance.
 
 ### Get your instrumentation connection string
 
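One way to retrieve that connection string programmatically, sketched under the assumption that the `azure-ai-projects` package and its `telemetry.get_connection_string()` helper are available:

```python
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

# The project connection string is shown on the project's overview page.
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str="<your-project-connection-string>",
)
application_insights_connection_string = project_client.telemetry.get_connection_string()
```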
@@ -282,27 +282,27 @@ You can configure your application to send telemetry to Azure Application Insigh
 
 ### Configure tracing for Azure AI Foundry
 
-The following code creates a tracer connected to the Azure Application Insights behind a project in Azure AI Foundry. Notice that the parameter `enable_content_recording` is set to `True`. This enables the capture of the inputs and outputs of the entire application as well as the intermediate steps. Such is helpful when debugging and building applications, but you might want to disable it on production environments. It defaults to the environment variable `AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED`:
+The following code creates a tracer connected to the Azure Application Insights behind an Azure AI Foundry project. The `enable_content_recording` parameter is set to `True`, which captures inputs and outputs across the application, including intermediate steps. This is helpful when debugging and building applications, but you might want to disable it in production environments. You can also control this via the `AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED` environment variable:
 
 ```python
-from langchain_azure_ai.callbacks.tracers import AzureAIInferenceTracer
+from langchain_azure_ai.callbacks.tracers import AzureAIOpenTelemetryTracer
 
-tracer = AzureAIInferenceTracer(
+azure_tracer = AzureAIOpenTelemetryTracer(
     connection_string=application_insights_connection_string,
     enable_content_recording=True,
 )
 ```
 
-To configure tracing with your chain, indicate the value config in the `invoke` operation as a callback:
+Pass the tracer via `config` in the `invoke` operation:
 
 ```python
-chain.invoke({"topic": "living in a foreign country"}, config={"callbacks": [tracer]})
+chain.invoke({"topic": "living in a foreign country"}, config={"callbacks": [azure_tracer]})
 ```
 
 To configure the chain itself for tracing, use the `.with_config()` method:
 
 ```python
-chain = chain.with_config({"callbacks": [tracer]})
+chain = chain.with_config({"callbacks": [azure_tracer]})
 ```
 
 Then use the `invoke()` method as usual:
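Putting the tracing pieces together, a short end-to-end sketch (the `chain` and connection string come from the earlier steps):

```python
from langchain_azure_ai.callbacks.tracers import AzureAIOpenTelemetryTracer

azure_tracer = AzureAIOpenTelemetryTracer(
    connection_string=application_insights_connection_string,
    enable_content_recording=True,  # consider disabling in production
)

# Attach the tracer once; every subsequent invocation is traced.
traced_chain = chain.with_config({"callbacks": [azure_tracer]})
traced_chain.invoke({"topic": "living in a foreign country"})
```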
@@ -319,7 +319,7 @@ To see traces:
 
 2. Navigate to **Tracing** section.
 
-3. Identify the trace you have created. It may take a couple of seconds for the trace to show.
+3. Identify the trace you created. It may take a few seconds to appear.
 
 :::image type="content" source="../../media/how-to/develop-langchain/langchain-portal-tracing-example.png" alt-text="A screenshot showing the trace of a chain." lightbox="../../media/how-to/develop-langchain/langchain-portal-tracing-example.png":::
 