articles/ai-foundry/how-to/develop/langchain.md (9 additions, 9 deletions)
@@ -7,7 +7,7 @@ ms.service: azure-ai-foundry
 ms.custom:
   - ignite-2024
 ms.topic: how-to
-ms.date: 11/04/2024
+ms.date: 03/11/2025
 ms.reviewer: fasantia
 ms.author: sgilley
 author: sdgilley
@@ -21,7 +21,7 @@ Models deployed to [Azure AI Foundry](https://ai.azure.com) can be used with Lan

 - **Using the Azure AI model inference API:** All models deployed to Azure AI Foundry support the [Azure AI model inference API](../../../ai-foundry/model-inference/reference/reference-model-inference-api.md), which offers a common set of functionalities that can be used for most of the models in the catalog. The benefit of this API is that, since it's the same for all the models, changing from one to another is as simple as changing the model deployment being used. No further changes are required in the code. When working with LangChain, install the extensions `langchain-azure-ai`.

-- **Using the model's provider-specific API:** Some models, like OpenAI, Cohere, or Mistral, offer their own set of APIs and extensions for LangChain. Those extensions may include specific functionalities that the model supports and hence are suitable if you want to exploit them. When working with LangChain, install the extension specific to the model you want to use, like `langchain-openai` or `langchain-cohere`.
+- **Using the model's provider-specific API:** Some models, like OpenAI, Cohere, or Mistral, offer their own set of APIs and extensions for LangChain. Those extensions might include specific functionalities that the model supports and hence are suitable if you want to exploit them. When working with LangChain, install the extension specific to the model you want to use, like `langchain-openai` or `langchain-cohere`.

 In this tutorial, you learn how to use the packages `langchain-azure-ai` to build applications with LangChain.

@@ -38,7 +38,7 @@ To run this tutorial, you need:
 pip install langchain-core
 ```

-* In this example, we are working with the Azure AI model inference API, hence we install the following packages:
+* In this example, we're working with the Azure AI model inference API, hence we install the following packages:
@@ -68,4 +68,4 @@
-Once configured, create a client to connect to the endpoint. In this case, we are working with a chat completions model, hence we import the class `AzureAIChatCompletionsModel`.
+Once configured, create a client to connect to the endpoint. In this case, we're working with a chat completions model, hence we import the class `AzureAIChatCompletionsModel`.

 ```python
 import os
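For reference, the client construction this hunk leads into follows this pattern (a minimal sketch; the environment variable names follow the article's convention, and `mistral-large-2407` stands in for whatever deployment name you use):

```python
import os

from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# Read the endpoint and key configured in the previous step.
model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model_name="mistral-large-2407",  # example deployment name
)
```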
@@ -98,7 +98,7 @@ model = AzureAIChatCompletionsModel(
 > [!NOTE]
 > When using Microsoft Entra ID, make sure that the endpoint was deployed with that authentication method and that you have the required permissions to invoke it.

-If you are planning to use asynchronous calling, it's a best practice to use the asynchronous version for the credentials:
+If you're planning to use asynchronous calling, it's a best practice to use the asynchronous version for the credentials:

 ```python
 from azure.identity.aio import (
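A sketch of the asynchronous variant described here, assuming the endpoint is deployed with Microsoft Entra ID authentication:

```python
import os

from azure.identity.aio import DefaultAzureCredential
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# An async credential is resolved when the model is called through the
# async APIs (ainvoke, astream, abatch).
model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=DefaultAzureCredential(),
    model_name="mistral-large-2407",  # example deployment name
)
```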
@@ -127,7 +127,7 @@ model = AzureAIChatCompletionsModel(

 ## Use chat completions models

-Let's first use the model directly. `ChatModels` are instances of LangChain `Runnable`, which means they expose a standard interface for interacting with them. To simply call the model, we can pass in a list of messages to the `invoke` method.
+Let's first use the model directly. `ChatModels` are instances of LangChain `Runnable`, which means they expose a standard interface for interacting with them. To call the model, we can pass in a list of messages to the `invoke` method.

 ```python
 from langchain_core.messages import HumanMessage, SystemMessage
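The collapsed lines between this hunk and the next contain the message list and the call itself; pieced together from the visible context, the block is approximately:

```python
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="Translate the following from English into Italian"),
    HumanMessage(content="hi!"),
]

# Runnable interface: a plain list of messages goes straight to invoke.
model.invoke(messages)
```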
@@ -140,7 +140,7 @@ messages = [
 model.invoke(messages)
 ```

-You can also compose operations as needed in what's called **chains**. Let's now use a prompt template to translate sentences:
+You can also compose operations as needed in **chains**. Let's now use a prompt template to translate sentences:

 ```python
 from langchain_core.output_parsers import StrOutputParser
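The chain built in the collapsed lines composes a prompt template, the model, and an output parser with the pipe operator; in outline (the template text is illustrative):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# prompt -> model -> parser, composed into one runnable chain.
prompt_template = ChatPromptTemplate.from_messages(
    [("system", "Translate the following into {language}:"), ("user", "{text}")]
)
chain = prompt_template | model | StrOutputParser()
chain.invoke({"language": "italian", "text": "hi"})
```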
@@ -179,6 +179,6 @@
 Models deployed to Azure AI Foundry support the Azure AI model inference API, which is standard across all the models. Chain multiple LLM operations based on the capabilities of each model so you can optimize for the right model based on capabilities.

-In the following example, we create two model clients, one is a producer and another one is a verifier. To make the distinction clear, we are using a multi-model endpoint like the [Azure AI model inference service](../../model-inference/overview.md) and hence we are passing the parameter `model_name` to use a `Mistral-Large` and a `Mistral-Small` model, noting the fact that **producing content is more complex than verifying it**.
+In the following example, we create two model clients. One is a producer and another one is a verifier. To make the distinction clear, we're using a multi-model endpoint like the [Azure AI model inference service](../../model-inference/overview.md) and hence we're passing the parameter `model_name` to use a `Mistral-Large` and a `Mistral-Small` model, noting the fact that **producing content is more complex than verifying it**.

 ```python
 from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
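The two clients differ only in the `model_name` they target; a sketch, with deployment names as examples:

```python
import os

from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# The larger model produces content; the smaller one only verifies it.
producer = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model_name="mistral-large-2407",
)
verifier = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model_name="mistral-small",
)
```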
@@ -254,7 +254,7 @@ chain.invoke({"topic": "living in a foreign country"})

 ## Use embeddings models

-In the same way you create an LLM client, you can connect to an embeddings model. In the following example, we are setting the environment variable to now point to an embeddings model:
+In the same way you create an LLM client, you can connect to an embeddings model. In the following example, we're setting the environment variable to now point to an embeddings model:
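A sketch of the embeddings client, assuming the `langchain-azure-ai` package exposes an `AzureAIEmbeddingsModel` class mirroring the chat client (the deployment name is an example):

```python
import os

from langchain_azure_ai.embeddings import AzureAIEmbeddingsModel

embed_model = AzureAIEmbeddingsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model_name="text-embedding-3-large",  # example embeddings deployment
)
```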
articles/ai-foundry/how-to/develop/llama-index.md (8 additions, 8 deletions)
@@ -7,7 +7,7 @@ ms.service: azure-ai-foundry
 ms.custom:
   - ignite-2024
 ms.topic: how-to
-ms.date: 11/04/2024
+ms.date: 03/11/2025
 ms.reviewer: fasantia
 ms.author: sgilley
 author: sdgilley
@@ -21,9 +21,9 @@ Models deployed to [Azure AI Foundry](https://ai.azure.com) can be used with Lla

 - **Using the Azure AI model inference API:** All models deployed to Azure AI Foundry support the [Azure AI model inference API](../../../ai-foundry/model-inference/reference/reference-model-inference-api.md), which offers a common set of functionalities that can be used for most of the models in the catalog. The benefit of this API is that, since it's the same for all the models, changing from one to another is as simple as changing the model deployment being used. No further changes are required in the code. When working with LlamaIndex, install the extensions `llama-index-llms-azure-inference` and `llama-index-embeddings-azure-inference`.

-- **Using the model's provider-specific API:** Some models, like OpenAI, Cohere, or Mistral, offer their own set of APIs and extensions for LlamaIndex. Those extensions may include specific functionalities that the model supports and hence are suitable if you want to exploit them. When working with `llama-index`, install the extension specific to the model you want to use, like `llama-index-llms-openai` or `llama-index-llms-cohere`.
+- **Using the model's provider-specific API:** Some models, like OpenAI, Cohere, or Mistral, offer their own set of APIs and extensions for LlamaIndex. Those extensions might include specific functionalities that the model supports and hence are suitable if you want to exploit them. When working with `llama-index`, install the extension specific to the model you want to use, like `llama-index-llms-openai` or `llama-index-llms-cohere`.

-In this example, we are working with the **Azure AI model inference API**.
+In this example, we're working with the **Azure AI model inference API**.

 ## Prerequisites

@@ -42,7 +42,7 @@ To run this tutorial, you need:
 pip install llama-index
 ```

-* In this example, we are working with the Azure AI model inference API, hence we install the following packages:
+* In this example, we're working with the Azure AI model inference API, hence we install the following packages:
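The packages in question are `llama-index-llms-azure-inference` and `llama-index-embeddings-azure-inference`, named earlier in the article. The client that the collapsed lines go on to create follows this pattern (a minimal sketch using the article's environment variable convention):

```python
import os

from llama_index.llms.azure_inference import AzureAICompletionsModel

llm = AzureAICompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
)
```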
@@ -118,3 +118,3 @@
 > When using Microsoft Entra ID, make sure that the endpoint was deployed with that authentication method and that you have the required permissions to invoke it.

-If you are planning to use asynchronous calling, it's a best practice to use the asynchronous version for the credentials:
+If you're planning to use asynchronous calling, it's a best practice to use the asynchronous version for the credentials:
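A sketch of the asynchronous variant, assuming Microsoft Entra ID authentication on the endpoint:

```python
import os

from azure.identity.aio import DefaultAzureCredential
from llama_index.llms.azure_inference import AzureAICompletionsModel

# The async credential is used when the client is driven through
# LlamaIndex's async entry points.
llm = AzureAICompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=DefaultAzureCredential(),
)
```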
@@ -134,6 +134,6 @@
 ### Azure OpenAI models and Azure AI model inference service

-If you are using Azure OpenAI service or [Azure AI model inference service](../../model-inference/overview.md), ensure you have at least version `0.2.4` of the LlamaIndex integration. Use the `api_version` parameter in case you need to select a specific `api_version`.
+If you're using Azure OpenAI service or [Azure AI model inference service](../../model-inference/overview.md), ensure you have at least version `0.2.4` of the LlamaIndex integration. Use the `api_version` parameter in case you need to select a specific `api_version`.

 For the [Azure AI model inference service](../../model-inference/overview.md), you need to pass the `model_name` parameter:

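In outline, with an example deployment name and a hypothetical API version string:

```python
import os

from llama_index.llms.azure_inference import AzureAICompletionsModel

llm = AzureAICompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model_name="mistral-large-2407",  # selects one model behind a multi-model endpoint
    # api_version="2024-05-01-preview",  # optional; version string is an example
)
```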
@@ -216,7 +216,7 @@ The `complete` method is still available for model of type `chat-completions`. O

 ## Use embeddings models

-In the same way you create an LLM client, you can connect to an embeddings model. In the following example, we are setting the environment variable to now point to an embeddings model:
+In the same way you create an LLM client, you can connect to an embeddings model. In the following example, we're setting the environment variable to now point to an embeddings model:
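A sketch of the embeddings client, assuming the embeddings package mirrors the LLM client:

```python
import os

from llama_index.embeddings.azure_inference import AzureAIEmbeddingsModel

embed_model = AzureAIEmbeddingsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
)
```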
@@ -263,4 +263,4 @@
-However, there are scenarios where you want to use a general model for most of the operations but a specific one for a given task. In those cases, it's useful to set the LLM or embedding model you are using for each LlamaIndex construct. In the following example, we set a specific model:
+However, there are scenarios where you want to use a general model for most of the operations but a specific one for a given task. In those cases, it's useful to set the LLM or embedding model you're using for each LlamaIndex construct. In the following example, we set a specific model:

 ```python
 from llama_index.core.evaluation import RelevancyEvaluator
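The global versus per-construct split looks roughly like this (`RelevancyEvaluator` is the construct visible in the context line; the rest is a sketch):

```python
import os

from llama_index.core import Settings
from llama_index.core.evaluation import RelevancyEvaluator
from llama_index.llms.azure_inference import AzureAICompletionsModel

llm = AzureAICompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
)

# Global default: picked up by any LlamaIndex construct that doesn't
# name a model explicitly.
Settings.llm = llm

# Per-construct override: this evaluator uses a specific LLM regardless
# of the global default.
evaluator = RelevancyEvaluator(llm=llm)
```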