
Commit 7a50a25

freshness update

1 parent dbb78ef commit 7a50a25

File tree

2 files changed: 14 additions, 10 deletions


articles/ai-foundry/how-to/develop/langchain.md

Lines changed: 14 additions & 10 deletions
@@ -7,7 +7,7 @@ ms.service: azure-ai-foundry
 ms.custom:
   - ignite-2024
 ms.topic: how-to
-ms.date: 03/11/2025
+ms.date: 06/24/2025
 ms.reviewer: fasantia
 ms.author: sgilley
 author: sdgilley
@@ -30,12 +30,12 @@ In this tutorial, you learn how to use the packages `langchain-azure-ai` to buil
 To run this tutorial, you need:

 * An [Azure subscription](https://azure.microsoft.com).
-* A model deployment supporting the [Foundry Models API](https://aka.ms/azureai/modelinference) deployed. In this example, we use a `Mistral-Large-2407` deployment in the [Foundry Models](../../../ai-foundry/model-inference/overview.md).
+* A model deployment supporting the [Foundry Models API](https://aka.ms/azureai/modelinference) deployed. In this example, we use a `mistral-medium-2505` deployment in the [Foundry Models](../../../ai-foundry/model-inference/overview.md).
 * Python 3.9 or later installed, including pip.
 * LangChain installed. You can do it with:

 ```bash
-pip install langchain-core
+pip install langchain
 ```

 * In this example, we're working with the Foundry Models API, hence we install the following packages:
@@ -63,7 +63,10 @@ To use LLMs deployed in Azure AI Foundry portal, you need the endpoint and crede
 > [!TIP]
 > If your model was deployed with Microsoft Entra ID support, you don't need a key.

-In this scenario, we placed both the endpoint URL and key in the following environment variables:
+In this scenario, we placed both the endpoint URL and key in the following environment variables.
+
+> [!TIP]
+> The endpoint you copied might have extra text after `/models`. Delete that text and stop at `/models`, as shown here.

 ```bash
 export AZURE_INFERENCE_ENDPOINT="https://<resource>.services.ai.azure.com/models"
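The tip about trimming the copied endpoint can be sketched in plain Python. This is a stdlib-only illustration with a hypothetical `normalize_endpoint` helper (not part of any SDK): it keeps everything up to and including the first `/models` segment and drops whatever the portal appended after it.

```python
def normalize_endpoint(url: str) -> str:
    """Trim a copied Azure AI endpoint so it stops at /models.

    Hypothetical helper for illustration only: the portal's copy button
    may include extra path segments (e.g. /chat/completions) after
    /models, which the client does not expect.
    """
    marker = "/models"
    idx = url.find(marker)
    if idx == -1:
        raise ValueError("endpoint does not contain /models")
    return url[: idx + len(marker)]


# A copied endpoint with trailing route segments gets trimmed back:
print(normalize_endpoint(
    "https://myresource.services.ai.azure.com/models/chat/completions"
))
```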
@@ -75,7 +78,7 @@ Once configured, create a client to connect with the chat model by using the `in
 ```python
 from langchain.chat_models import init_chat_model

-llm = init_chat_model(model="mistral-large-2407", model_provider="azure_ai")
+llm = init_chat_model(model="mistral-medium-2505", model_provider="azure_ai")
 ```

 You can also use the class `AzureAIChatCompletionsModel` directly.
@@ -87,7 +90,7 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 model = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model="mistral-large-2407",
+    model="mistral-medium-2505",
 )
 ```
@@ -104,7 +107,7 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 model = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=DefaultAzureCredential(),
-    model="mistral-large-2407",
+    model="mistral-medium-2505",
 )
 ```
@@ -122,7 +125,7 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 model = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=DefaultAzureCredentialAsync(),
-    model="mistral-large-2407",
+    model="mistral-medium-2505",
 )
 ```
@@ -157,6 +160,7 @@ You can also compose operations as needed in **chains**. Let's now use a prompt
 ```python
 from langchain_core.output_parsers import StrOutputParser
+from langchain_core.prompts import ChatPromptTemplate

 system_template = "Translate the following into {language}:"
 prompt_template = ChatPromptTemplate.from_messages(
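The chain in this hunk pipes the prompt template into the model and an output parser with LangChain's `|` operator. The toy below is a stdlib-only sketch of that composition pattern (a hypothetical `Runnable` class, not the real `langchain_core` implementation), with a fake model standing in for the Azure deployment so it runs offline.

```python
class Runnable:
    """Toy illustration of LangChain-style composition.

    Each step wraps a callable; `|` chains two steps so the output of
    the left one feeds the right one. Hypothetical, for intuition only.
    """

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose left-to-right: other runs on this step's output.
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)


# Stand-ins for the prompt template, chat model, and StrOutputParser:
prompt = Runnable(lambda d: f"Translate the following into {d['language']}: {d['text']}")
fake_llm = Runnable(str.upper)      # a real chain would call the deployed model here
parser = Runnable(lambda s: s)      # StrOutputParser just extracts the text

chain = prompt | fake_llm | parser
print(chain.invoke({"language": "French", "text": "hello"}))
# → TRANSLATE THE FOLLOWING INTO FRENCH: HELLO
```

In real code the same shape is `prompt_template | model | StrOutputParser()`, invoked with the template variables as a dict.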
@@ -199,7 +203,7 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 producer = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model="mistral-large-2407",
+    model="mistral-medium-2505",
 )

 verifier = AzureAIChatCompletionsModel(
@@ -365,7 +369,7 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 model = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model="mistral-large-2407",
+    model="mistral-medium-2505",
     client_kwargs={"logging_enable": True},
 )
 ```
(Second changed file: binary content, -13.6 KB; diff not shown.)
0 commit comments
