Commit b72c048

Update langchain.md
1 parent 6f2283b commit b72c048

File tree

1 file changed (+22, -11 lines)

articles/ai-foundry/how-to/develop/langchain.md

Lines changed: 22 additions & 11 deletions
````diff
@@ -51,8 +51,11 @@ To use LLMs deployed in Azure AI Foundry portal, you need the endpoint and crede
 [!INCLUDE [tip-left-pane](../../includes/tip-left-pane.md)]
 
 1. Go to the [Azure AI Foundry](https://ai.azure.com/).
+
 1. Open the project where the model is deployed, if it isn't already open.
+
 1. Go to **Models + endpoints** and select the model you deployed as indicated in the prerequisites.
+
 1. Copy the endpoint URL and the key.
 
 :::image type="content" source="../../media/how-to/inference/serverless-endpoint-url-keys.png" alt-text="Screenshot of the option to copy endpoint URI and keys from an endpoint." lightbox="../../media/how-to/inference/serverless-endpoint-url-keys.png":::
````
````diff
@@ -63,11 +66,19 @@ To use LLMs deployed in Azure AI Foundry portal, you need the endpoint and crede
 In this scenario, we placed both the endpoint URL and key in the following environment variables:
 
 ```bash
-export AZURE_INFERENCE_ENDPOINT="<your-model-endpoint-goes-here>"
+export AZURE_INFERENCE_ENDPOINT="https://<resource>.services.ai.azure.com/models"
 export AZURE_INFERENCE_CREDENTIAL="<your-key-goes-here>"
 ```
 
-Once configured, create a client to connect to the endpoint. In this case, we're working with a chat completions model hence we import the class `AzureAIChatCompletionsModel`.
+Once configured, create a client to connect with the chat model by using `init_chat_model`. For Azure OpenAI models, configure the client as indicated at [Using Azure OpenAI models](#using-azure-openai-models).
+
+```python
+from langchain.chat_models import init_chat_model
+
+llm = init_chat_model(model="mistral-large-2407", model_provider="azure_ai")
+```
+
+You can also use the class `AzureAIChatCompletionsModel` directly.
 
 ```python
 import os
````
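The updated placeholder pins the endpoint to the Foundry Models route (`https://<resource>.services.ai.azure.com/models`). As an aside, a minimal stdlib sketch (with a hypothetical helper name) that checks a configured endpoint against that shape before creating a client:

```python
import os
import re

def looks_like_foundry_models_endpoint(url: str) -> bool:
    # Hypothetical helper: the snippet above sets AZURE_INFERENCE_ENDPOINT to a URL
    # of the form https://<resource>.services.ai.azure.com/models; check that shape.
    return re.fullmatch(r"https://[^/]+\.services\.ai\.azure\.com/models", url) is not None

endpoint = os.environ.get("AZURE_INFERENCE_ENDPOINT", "")
if endpoint and not looks_like_foundry_models_endpoint(endpoint):
    print("AZURE_INFERENCE_ENDPOINT does not look like a Foundry Models endpoint")
```

A check like this catches a common mistake: pasting the resource URL without the trailing `/models` segment.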
````diff
@@ -80,8 +91,8 @@ model = AzureAIChatCompletionsModel(
 )
 ```
 
-> [!TIP]
-> For Azure OpenAI models, configure the client as indicated at [Using Azure OpenAI models](#using-azure-openai-models).
+> [!CAUTION]
+> **Breaking change:** Parameter `model_name` was renamed to `model` in version `0.1.3`.
+
 
 You can use the following code to create the client if your endpoint supports Microsoft Entra ID:
 
````
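Because the rename is a breaking change, code that must run against both package versions needs to pick the keyword at runtime. A hedged sketch (hypothetical helper, assuming the `0.1.3` cutoff stated in the caution above):

```python
def model_kwarg_for(package_version: str) -> str:
    # Hypothetical helper: per the caution above, langchain-azure-ai renamed
    # `model_name` to `model` in 0.1.3; pick the keyword for a given version.
    # (Plain-release versions only; pre-release suffixes are not handled.)
    parts = tuple(int(p) for p in package_version.split(".")[:3])
    return "model" if parts >= (0, 1, 3) else "model_name"

# The resulting dict can be splatted into AzureAIChatCompletionsModel(**kwargs, ...).
kwargs = {model_kwarg_for("0.1.3"): "mistral-large-2407"}
```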
````diff
@@ -93,7 +104,7 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 model = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=DefaultAzureCredential(),
-    model_name="mistral-large-2407",
+    model="mistral-large-2407",
 )
 ```
 
````
````diff
@@ -111,7 +122,7 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 model = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=DefaultAzureCredentialAsync(),
-    model_name="mistral-large-2407",
+    model="mistral-large-2407",
 )
 ```
 
````
````diff
@@ -188,13 +199,13 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 producer = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model_name="mistral-large-2407",
+    model="mistral-large-2407",
 )
 
 verifier = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model_name="mistral-small",
+    model="mistral-small",
 )
 ```
 
````
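The producer/verifier pair above is typically chained so that one model drafts a response and the other reviews it. A stand-in sketch of that pattern with plain callables, no network calls and hypothetical function names:

```python
def produce(prompt: str) -> str:
    # Stand-in for producer.invoke(...): draft an answer with the larger model.
    return f"draft({prompt})"

def verify(draft: str) -> str:
    # Stand-in for verifier.invoke(...): review the draft with the smaller model.
    return f"verified({draft})"

# Compose the two stages, as a chain would.
result = verify(produce("Write a limerick about parrots"))
```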
````diff
@@ -271,7 +282,7 @@ from langchain_azure_ai.embeddings import AzureAIEmbeddingsModel
 embed_model = AzureAIEmbeddingsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=os.environ['AZURE_INFERENCE_CREDENTIAL'],
-    model_name="text-embedding-3-large",
+    model="text-embedding-3-large",
 )
 ```
 
````
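Vectors returned by an embeddings model such as the one above are commonly compared with cosine similarity; a self-contained stdlib sketch of that metric:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot(a, b) / (|a| * |b|); 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

In practice a vector store computes this for you; the formula is shown here only to make the comparison explicit.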
````diff
@@ -328,7 +339,7 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 llm = AzureAIChatCompletionsModel(
     endpoint="https://<resource>.services.ai.azure.com/models",
     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model_name="<model-name>",
+    model="<model-name>",
     api_version="2024-05-01-preview",
 )
 ```
````
````diff
@@ -370,7 +381,7 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 model = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model_name="mistral-large-2407",
+    model="mistral-large-2407",
     client_kwargs={"logging_enable": True},
 )
 ```
````
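`client_kwargs={"logging_enable": True}` turns on HTTP tracing in the underlying azure-core client, but to actually see the output you typically also attach a handler to the `azure` logger. A sketch, assuming azure-core's standard integration with Python's stdlib `logging`:

```python
import logging
import sys

# azure-core emits its traces through loggers under the "azure" namespace;
# without a handler and DEBUG level, logging_enable=True produces no visible output.
azure_logger = logging.getLogger("azure")
azure_logger.setLevel(logging.DEBUG)
azure_logger.addHandler(logging.StreamHandler(stream=sys.stdout))
```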

0 commit comments

Comments
 (0)