Commit 790be2f

Merge pull request #2508 from santiagxf/santiagxf-patch-1
Update langchain.md
2 parents: 1cc808f + 4b37d48

File tree

1 file changed: +14 -18 lines changed

articles/ai-studio/how-to/develop/langchain.md

Lines changed: 14 additions & 18 deletions
````diff
@@ -30,11 +30,7 @@ In this tutorial, you learn how to use the packages `langchain-azure-ai` to buil
 To run this tutorial, you need:
 
 * An [Azure subscription](https://azure.microsoft.com).
-* An Azure AI project as explained at [Create a project in Azure AI Foundry portal](../create-projects.md).
-* A model supporting the [Azure AI model inference API](https://aka.ms/azureai/modelinference) deployed. In this example, we use a `Mistral-Large` deployment, but use any model of your preference.
-
-* You can follow the instructions at [Deploy models as serverless APIs](../deploy-models-serverless.md).
+* A model deployment supporting the [Azure AI model inference API](https://aka.ms/azureai/modelinference) deployed. In this example, we use a `Mistral-Large-2407` deployment in the [Azure AI model inference](../../../ai-foundry/model-inference/overview.md).
 * Python 3.9 or later installed, including pip.
 * LangChain installed. You can do it with:
 
````
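The samples throughout this diff read the endpoint and key from two environment variables, `AZURE_INFERENCE_ENDPOINT` and `AZURE_INFERENCE_CREDENTIAL`. As a reading aid, a minimal sketch of setting them from Python; the endpoint shape and key are placeholder values, not taken from this commit:

```python
import os

# Placeholder values; substitute your own deployment's endpoint URL and key.
os.environ["AZURE_INFERENCE_ENDPOINT"] = "https://<your-resource>.services.ai.azure.com/models"
os.environ["AZURE_INFERENCE_CREDENTIAL"] = "<your-api-key>"
```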

````diff
@@ -78,25 +74,13 @@ from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 model = AzureAIChatCompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
+    model="mistral-large-2407",
 )
 ```
 
 > [!TIP]
 > For Azure OpenAI models, configure the client as indicated at [Using Azure OpenAI models](#using-azure-openai-models).
 
-If your endpoint is serving more than one model, like with the [Azure AI model inference service](../../ai-services/model-inference.md) or [GitHub Models](https://github.com/marketplace/models), you have to indicate `model_name` parameter:
-
-```python
-import os
-from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
-
-model = AzureAIChatCompletionsModel(
-    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
-    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model_name="mistral-large-2407",
-)
-```
-
 You can use the following code to create the client if your endpoint supports Microsoft Entra ID:
 
 ```python
````
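The hunk's trailing context stops at the opening fence of the Microsoft Entra ID sample, so that code isn't visible here. For orientation, a minimal sketch of token-based client creation, assuming the `azure-identity` package and that the constructor accepts a credential object in place of a key string:

```python
import os
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# DefaultAzureCredential resolves a token through the standard chain:
# environment variables, managed identity, Azure CLI login, and so on.
model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=DefaultAzureCredential(),
    model="mistral-large-2407",
)
```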
````diff
@@ -129,6 +113,18 @@ model = AzureAIChatCompletionsModel(
 )
 ```
 
+If your endpoint is serving one model, like with the Serverless API Endpoints, you don't have to indicate `model_name` parameter:
+
+```python
+import os
+from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
+
+model = AzureAIChatCompletionsModel(
+    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
+    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
+)
+```
+
 ## Use chat completions models
 
 Let's first use the model directly. `ChatModels` are instances of LangChain `Runnable`, which means they expose a standard interface for interacting with them. To simply call the model, we can pass in a list of messages to the `invoke` method.
````