Commit 92d206b

Merge pull request #5696 from sdgilley/sdg-freshness
freshness update - articles/ai-foundry/how-to/develop/langchain.md
2 parents 90a1ac8 + 8909e68 commit 92d206b

File tree

2 files changed: +25 -139 lines changed

articles/ai-foundry/how-to/develop/langchain.md

Lines changed: 25 additions & 139 deletions
@@ -7,7 +7,7 @@ ms.service: azure-ai-foundry
 ms.custom:
   - ignite-2024
 ms.topic: how-to
-ms.date: 03/11/2025
+ms.date: 06/24/2025
 ms.reviewer: fasantia
 ms.author: sgilley
 author: sdgilley
@@ -30,12 +30,13 @@ In this tutorial, you learn how to use the packages `langchain-azure-ai` to buil
 To run this tutorial, you need:

 * An [Azure subscription](https://azure.microsoft.com).
-* A model deployment supporting the [Model Inference API](https://aka.ms/azureai/modelinference) deployed. In this example, we use a `Mistral-Large-2407` deployment in the [Foundry Models](../../../ai-foundry/model-inference/overview.md).
+
+* A model deployment supporting the [Model Inference API](https://aka.ms/azureai/modelinference) deployed. In this example, we use a `Mistral-medium-2505` deployment in the [Foundry Models](../../../ai-foundry/model-inference/overview.md).
 * Python 3.9 or later installed, including pip.
 * LangChain installed. You can do it with:

 ```bash
-pip install langchain-core
+pip install langchain
 ```

 * In this example, we're working with the Model Inference API, hence we install the following packages:
@@ -63,7 +64,7 @@ To use LLMs deployed in Azure AI Foundry portal, you need the endpoint and crede
 > [!TIP]
 > If your model was deployed with Microsoft Entra ID support, you don't need a key.

-In this scenario, we placed both the endpoint URL and key in the following environment variables:
+In this scenario, set the endpoint URL and key as environment variables. (If the endpoint you copied includes additional text after `/models`, remove it so the URL ends at `/models` as shown below.)

 ```bash
 export AZURE_INFERENCE_ENDPOINT="https://<resource>.services.ai.azure.com/models"
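The added sentence above asks you to trim anything that follows `/models` from a copied endpoint. As a minimal illustration of that cleanup (a hypothetical helper, not part of the article):

```python
# Hypothetical helper illustrating the endpoint cleanup the updated text
# describes: keep everything up to and including "/models", drop the rest.
def normalize_endpoint(url: str) -> str:
    base, sep, _tail = url.partition("/models")
    return base + sep

print(normalize_endpoint(
    "https://myresource.services.ai.azure.com/models/chat/completions"
))
# -> https://myresource.services.ai.azure.com/models
```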
@@ -75,21 +76,13 @@ Once configured, create a client to connect with the chat model by using the `in
 ```python
 from langchain.chat_models import init_chat_model

-llm = init_chat_model(model="mistral-large-2407", model_provider="azure_ai")
+llm = init_chat_model(model="mistral-medium-2505", model_provider="azure_ai")
 ```

 You can also use the class `AzureAIChatCompletionsModel` directly.

-```python
-import os
-from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_client)]

-model = AzureAIChatCompletionsModel(
-    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
-    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model="mistral-large-2407",
-)
-```

 > [!CAUTION]
 > **Breaking change:** Parameter `model_name` was renamed `model` in version `0.1.3`.
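The `create_client` notebook cell isn't rendered in this diff. A minimal sketch of direct client construction, mirroring the removed block with the deployment name this commit uses elsewhere:

```python
import os
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# Mirrors the removed block above; the deployment name follows the rest of
# this commit (mistral-medium-2505) and may differ in the actual notebook.
model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="mistral-medium-2505",
)
```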
@@ -104,7 +97,7 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 model = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=DefaultAzureCredential(),
-    model="mistral-large-2407",
+    model="mistral-medium-2505",
 )
 ```

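The hunk above starts mid-snippet, so the credential import isn't shown. For completeness, a sketch of the full snippet, assuming `DefaultAzureCredential` comes from the standard `azure-identity` package:

```python
import os
from azure.identity import DefaultAzureCredential  # assumed import; not shown in the hunk
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=DefaultAzureCredential(),
    model="mistral-medium-2505",
)
```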
@@ -122,7 +115,7 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 model = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=DefaultAzureCredentialAsync(),
-    model="mistral-large-2407",
+    model="mistral-medium-2505",
 )
 ```

@@ -142,21 +135,13 @@ model = AzureAIChatCompletionsModel(

 Let's first use the model directly. `ChatModels` are instances of LangChain `Runnable`, which means they expose a standard interface for interacting with them. To call the model, we can pass in a list of messages to the `invoke` method.

-```python
-from langchain_core.messages import HumanMessage, SystemMessage
-
-messages = [
-    SystemMessage(content="Translate the following from English into Italian"),
-    HumanMessage(content="hi!"),
-]
-
-model.invoke(messages)
-```
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=human_message)]

 You can also compose operations as needed in **chains**. Let's now use a prompt template to translate sentences:

 ```python
 from langchain_core.output_parsers import StrOutputParser
+from langchain_core.prompts import ChatPromptTemplate

 system_template = "Translate the following into {language}:"
 prompt_template = ChatPromptTemplate.from_messages(
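The `human_message` cell referenced in the preceding hunk replaces the removed block; the equivalent call, reconstructed from those removed lines (assuming the `model` client created earlier):

```python
from langchain_core.messages import HumanMessage, SystemMessage

# Reconstructed from the removed lines above.
messages = [
    SystemMessage(content="Translate the following from English into Italian"),
    HumanMessage(content="hi!"),
]

model.invoke(messages)
```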
@@ -166,10 +151,7 @@ prompt_template = ChatPromptTemplate.from_messages(

 As you can see from the prompt template, this chain has a `language` and `text` input. Now, let's create an output parser:

-```python
-from langchain_core.prompts import ChatPromptTemplate
-parser = StrOutputParser()
-```
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_output_parser)]

 We can now combine the template, model, and the output parser from above using the pipe (`|`) operator:

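The combining step itself falls outside this hunk, so the exact line isn't shown. A sketch based on the removed parser and standard LangChain composition (the `chain = ...` line is an assumption):

```python
from langchain_core.output_parsers import StrOutputParser

# Parser from the removed block above.
parser = StrOutputParser()

# Assumed composition; the actual line sits outside this hunk.
chain = prompt_template | model | parser
```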
@@ -183,65 +165,27 @@ To invoke the chain, identify the inputs required and provide values using the `
 chain.invoke({"language": "italian", "text": "hi"})
 ```

-```output
-'ciao'
-```
-
 ### Chaining multiple LLMs together

 Models deployed to Azure AI Foundry support the Model Inference API, which is standard across all the models. Chain multiple LLM operations based on the capabilities of each model so you can optimize for the right model based on capabilities.

-In the following example, we create two model clients. One is a producer and another one is a verifier. To make the distinction clear, we're using a multi-model endpoint like the [Model Inference API](../../model-inference/overview.md) and hence we're passing the parameter `model` to use a `Mistral-Large` and a `Mistral-Small` model, quoting the fact that **producing content is more complex than verifying it**.
-
-```python
-from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
+In the following example, we create two model clients. One is a producer and another one is a verifier. To make the distinction clear, we're using a multi-model endpoint like the [Foundry Models API](../../model-inference/overview.md) and hence we're passing the parameter `model` to use a `Mistral-Medium` and a `Mistral-Small` model, quoting the fact that **producing content is more complex than verifying it**.

-producer = AzureAIChatCompletionsModel(
-    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
-    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model="mistral-large-2407",
-)
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_producer_verifier)]

-verifier = AzureAIChatCompletionsModel(
-    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
-    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model="mistral-small",
-)
-```

 > [!TIP]
 > Explore the model card of each of the models to understand the best use cases for each model.

 The following example generates a poem written by an urban poet:

-```python
-from langchain_core.prompts import PromptTemplate
-
-producer_template = PromptTemplate(
-    template="You are an urban poet, your job is to come up \
-verses based on a given topic.\n\
-Here is the topic you have been asked to generate a verse on:\n\
-{topic}",
-    input_variables=["topic"],
-)
-
-verifier_template = PromptTemplate(
-    template="You are a verifier of poems, you are tasked\
-to inspect the verses of poem. If they consist of violence and abusive language\
-report it. Your response should be only one word either True or False.\n \
-Here is the lyrics submitted to you:\n\
-{input}",
-    input_variables=["input"],
-)
-```
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=generate_poem)]

 Now let's chain the pieces:

-```python
-chain = producer_template | producer | parser | verifier_template | verifier | parser
-```
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_chain)]

-The previous chain returns the output of the step `verifier` only. Since we want to access the intermediate result generated by the `producer`, in LangChain you need to use a `RunnablePassthrough` object to also output that intermediate step. The following code shows how to do it:
+The previous chain returns the output of the step `verifier` only. Since we want to access the intermediate result generated by the `producer`, in LangChain you need to use a `RunnablePassthrough` object to also output that intermediate step.

 ```python
 from langchain_core.runnables import RunnablePassthrough, RunnableParallel
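A sketch of the `create_producer_verifier` cell, reconstructed from the removed blocks in the hunk above; the producer's deployment name follows the updated paragraph's `Mistral-Medium` (the exact string is an assumption):

```python
import os
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# Producer handles the harder generation task; verifier checks the output.
producer = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="mistral-medium-2505",  # assumed; the removed block used mistral-large-2407
)

verifier = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="mistral-small",
)
```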
@@ -254,16 +198,8 @@ chain = generate_poem | RunnableParallel(poem=RunnablePassthrough(), verificatio

 To invoke the chain, identify the inputs required and provide values using the `invoke` method:

-```python
-chain.invoke({"topic": "living in a foreign country"})
-```
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=invoke_chain)]

-```output
-{
-    "peom": "...",
-    "verification: "false"
-}
-```

 ## Use embeddings models

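The `invoke_chain` cell mirrors the removed invocation. With the `RunnableParallel` wiring shown in this hunk's header, the call returns both the intermediate poem and the verifier's verdict:

```python
# Mirrors the removed invocation above.
result = chain.invoke({"topic": "living in a foreign country"})

# Per the RunnableParallel(poem=..., verification=...) wiring in the hunk
# header, the result is a dict along the lines of:
# {"poem": "...", "verification": "False"}
print(result)
```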
@@ -276,43 +212,21 @@ export AZURE_INFERENCE_CREDENTIAL="<your-key-goes-here>"

 Then create the client:

-```python
-from langchain_azure_ai.embeddings import AzureAIEmbeddingsModel
-
-embed_model = AzureAIEmbeddingsModel(
-    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
-    credential=os.environ['AZURE_INFERENCE_CREDENTIAL'],
-    model="text-embedding-3-large",
-)
-```
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=create_embed_model_client)]

 The following example shows a simple example using a vector store in memory:

-```python
-from langchain_core.vectorstores import InMemoryVectorStore
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=create_vector_store)]

-vector_store = InMemoryVectorStore(embed_model)
-```

 Let's add some documents:

-```python
-from langchain_core.documents import Document
-
-document_1 = Document(id="1", page_content="foo", metadata={"baz": "bar"})
-document_2 = Document(id="2", page_content="thud", metadata={"bar": "baz"})
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=add_documents)]

-documents = [document_1, document_2]
-vector_store.add_documents(documents=documents)
-```

 Let's search by similarity:

-```python
-results = vector_store.similarity_search(query="thud",k=1)
-for doc in results:
-    print(f"* {doc.page_content} [{doc.metadata}]")
-```
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=search_similarity)]

 ## Using Azure OpenAI models

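Since the four embeddings cells aren't rendered in this diff, here is a consolidated sketch assembled from the removed blocks (endpoint variables as set earlier; the `text-embedding-3-large` deployment name comes from the removed code):

```python
import os
from langchain_azure_ai.embeddings import AzureAIEmbeddingsModel
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore

# Embeddings client (from the removed create-client block).
embed_model = AzureAIEmbeddingsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="text-embedding-3-large",
)

# In-memory vector store backed by the embeddings client.
vector_store = InMemoryVectorStore(embed_model)

# Add a couple of documents, then search by similarity.
documents = [
    Document(id="1", page_content="foo", metadata={"baz": "bar"}),
    Document(id="2", page_content="thud", metadata={"bar": "baz"}),
]
vector_store.add_documents(documents=documents)

results = vector_store.similarity_search(query="thud", k=1)
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")
```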
@@ -334,41 +248,13 @@ If you need to debug your application and understand the requests sent to the mo

 First, configure logging to the level you are interested in:

-```python
-import sys
-import logging
-
-# Acquire the logger for this client library. Use 'azure' to affect both
-# 'azure.core` and `azure.ai.inference' libraries.
-logger = logging.getLogger("azure")
-
-# Set the desired logging level. logging.INFO or logging.DEBUG are good options.
-logger.setLevel(logging.DEBUG)
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=configure_logging)]

-# Direct logging output to stdout:
-handler = logging.StreamHandler(stream=sys.stdout)
-# Or direct logging output to a file:
-# handler = logging.FileHandler(filename="sample.log")
-logger.addHandler(handler)
-
-# Optional: change the default logging format. Here we add a timestamp.
-formatter = logging.Formatter("%(asctime)s:%(levelname)s:%(name)s:%(message)s")
-handler.setFormatter(formatter)
-```

 To see the payloads of the requests, when instantiating the client, pass the argument `logging_enable`=`True` to the `client_kwargs`:

-```python
-import os
-from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_client_with_logging)]

-model = AzureAIChatCompletionsModel(
-    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
-    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model="mistral-large-2407",
-    client_kwargs={"logging_enable": True},
-)
-```

 Use the client as usual in your code.

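A sketch of the `create_client_with_logging` cell, reconstructed from the removed block (the deployment name is updated to the one this commit uses elsewhere, which is an assumption):

```python
import os
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# Enables wire-level request logging in the underlying Azure client.
model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="mistral-medium-2505",  # assumed; the removed block used mistral-large-2407
    client_kwargs={"logging_enable": True},
)
```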
@@ -396,7 +282,7 @@ You can configure your application to send telemetry to Azure Application Insigh
 application_insights_connection_string = "instrumentation...."
 ```

-2. Using the Azure AI Foundry SDK and the project connection string.
+2. Using the Azure AI Foundry SDK and the project connection string (**[!INCLUDE [hub-project-name](../../includes/hub-project-name.md)]s only**).

 1. Ensure you have the package `azure-ai-projects` installed in your environment.

(Second changed file: -13.6 KB; preview not rendered.)
