Commit 39ff02e

switch model

committed
1 parent ed4baa6 commit 39ff02e

File tree

2 files changed: +5 -5 lines changed


articles/ai-foundry/how-to/develop/langchain.md

Lines changed: 5 additions & 5 deletions
@@ -31,7 +31,7 @@ To run this tutorial, you need:
 
 * An [Azure subscription](https://azure.microsoft.com).
 
-* A model deployment that supports the [Model Inference API](https://aka.ms/azureai/modelinference). In this example, we use a `Mistral-medium-2505` deployment in [Foundry Models](../../../ai-foundry/model-inference/overview.md).
+* A model deployment that supports the [Model Inference API](https://aka.ms/azureai/modelinference). In this example, we use a `Mistral-Large-2411` deployment in [Foundry Models](../../../ai-foundry/model-inference/overview.md).
 * Python 3.9 or later installed, including pip.
 * LangChain installed. You can do it with:
 
@@ -76,7 +76,7 @@ Once configured, create a client to connect with the chat model by using the `init_chat_model` function.
 ```python
 from langchain.chat_models import init_chat_model
 
-llm = init_chat_model(model="mistral-medium-2505", model_provider="azure_ai")
+llm = init_chat_model(model="Mistral-Large-2411", model_provider="azure_ai")
 ```
 
 You can also use the class `AzureAIChatCompletionsModel` directly.
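As a quick check on the renamed deployment, here's a minimal sketch of the updated client in use; it assumes `langchain` and `langchain-azure-ai` are installed and that the environment variables the article configures earlier (`AZURE_INFERENCE_ENDPOINT`, `AZURE_INFERENCE_CREDENTIAL`) are set:

```python
from langchain.chat_models import init_chat_model

# Assumes AZURE_INFERENCE_ENDPOINT and AZURE_INFERENCE_CREDENTIAL are set,
# as configured earlier in the article.
llm = init_chat_model(model="Mistral-Large-2411", model_provider="azure_ai")

# Send a simple system/user exchange and print the reply.
response = llm.invoke([
    ("system", "You are a helpful assistant."),
    ("user", "Say hello in Italian."),
])
print(response.content)
```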
@@ -97,7 +97,7 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 model = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=DefaultAzureCredential(),
-    model="mistral-medium-2505",
+    model="Mistral-Large-2411",
 )
 ```
 
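The `os` and `DefaultAzureCredential` imports aren't shown in this hunk; a self-contained sketch, with the imports inferred from the identifiers the diff uses, would be:

```python
import os

from azure.identity import DefaultAzureCredential
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# Imports inferred from the identifiers in the hunk; the diff only shows
# the constructor call itself.
model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=DefaultAzureCredential(),
    model="Mistral-Large-2411",
)

# The client behaves like any other LangChain chat model.
print(model.invoke("Say hello in Italian.").content)
```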
@@ -115,7 +115,7 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 model = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=DefaultAzureCredentialAsync(),
-    model="mistral-medium-2505",
+    model="Mistral-Large-2411",
 )
 ```
 
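For the asynchronous variant, the aliased import below is an assumption inferred from the `DefaultAzureCredentialAsync` name in the hunk; with an async credential, calls go through LangChain's `ainvoke`:

```python
import asyncio
import os

# Assumed alias: the async credential lives in azure.identity.aio.
from azure.identity.aio import DefaultAzureCredential as DefaultAzureCredentialAsync
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=DefaultAzureCredentialAsync(),
    model="Mistral-Large-2411",
)

async def main() -> None:
    # With an async credential, use the asynchronous `ainvoke` API.
    response = await model.ainvoke("Say hello in Italian.")
    print(response.content)

asyncio.run(main())
```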
@@ -169,7 +169,7 @@ chain.invoke({"language": "italian", "text": "hi"})
 
 Models deployed to Azure AI Foundry support the Model Inference API, which is standard across all the models. Chain multiple LLM operations so you can route each step to the model whose capabilities best fit it.
 
-In the following example, we create two model clients: one is a producer and the other is a verifier. To make the distinction clear, we use a multi-model endpoint like the [Foundry Models API](../../model-inference/overview.md) and pass the `model` parameter to select a `Mistral-Medium` and a `Mistral-Small` model, reflecting the fact that **producing content is more complex than verifying it**.
+In the following example, we create two model clients: one is a producer and the other is a verifier. To make the distinction clear, we use a multi-model endpoint like the [Foundry Models API](../../model-inference/overview.md) and pass the `model` parameter to select a `Mistral-Large` and a `Mistral-Small` model, reflecting the fact that **producing content is more complex than verifying it**.
 
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_producer_verifier)]
 
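The notebook cell itself isn't rendered in this view. A hypothetical reconstruction of the producer/verifier pattern it refers to, with the prompts and the `Mistral-small` deployment name chosen purely for illustration:

```python
import os

from azure.identity import DefaultAzureCredential
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# The larger model drafts content; the smaller one only has to verify it.
producer = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=DefaultAzureCredential(),
    model="Mistral-Large-2411",
)
verifier = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=DefaultAzureCredential(),
    model="Mistral-small",  # hypothetical deployment name
)

draft = ChatPromptTemplate.from_template("Write a short poem about {topic}.")
review = ChatPromptTemplate.from_template(
    "Check the following poem for grammatical mistakes and report them:\n\n{poem}"
)

# Pipe the producer's draft into the verifier's review prompt.
chain = (
    draft
    | producer
    | StrOutputParser()
    | (lambda poem: {"poem": poem})
    | review
    | verifier
    | StrOutputParser()
)

print(chain.invoke({"topic": "the sea"}))
```

Both clients share one Foundry Models endpoint; only the `model` parameter routes the request, which is what makes the producer/verifier split cheap to express.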
Binary file changed: -58.8 KB
0 commit comments
