
Commit f3d044f

Merge pull request #3493 from sdgilley/sdg-freshness
freshness pass - langchain.md & llama-index.md
2 parents 96a73c0 + 72d43ac commit f3d044f

File tree

2 files changed: +17 lines, -17 lines

articles/ai-foundry/how-to/develop/langchain.md

Lines changed: 9 additions & 9 deletions
@@ -7,7 +7,7 @@ ms.service: azure-ai-foundry
 ms.custom:
 - ignite-2024
 ms.topic: how-to
-ms.date: 11/04/2024
+ms.date: 03/11/2025
 ms.reviewer: fasantia
 ms.author: sgilley
 author: sdgilley
@@ -21,7 +21,7 @@ Models deployed to [Azure AI Foundry](https://ai.azure.com) can be used with Lan
 
 - **Using the Azure AI model inference API:** All models deployed to Azure AI Foundry support the [Azure AI model inference API](../../../ai-foundry/model-inference/reference/reference-model-inference-api.md), which offers a common set of functionalities that can be used for most of the models in the catalog. The benefit of this API is that, since it's the same for all the models, changing from one to another is as simple as changing the model deployment being use. No further changes are required in the code. When working with LangChain, install the extensions `langchain-azure-ai`.
 
-- **Using the model's provider specific API:** Some models, like OpenAI, Cohere, or Mistral, offer their own set of APIs and extensions for LangChain. Those extensions may include specific functionalities that the model support and hence are suitable if you want to exploit them. When working with LangChain, install the extension specific for the model you want to use, like `langchain-openai` or `langchain-cohere`.
+- **Using the model's provider specific API:** Some models, like OpenAI, Cohere, or Mistral, offer their own set of APIs and extensions for LangChain. Those extensions might include specific functionalities that the model support and hence are suitable if you want to exploit them. When working with LangChain, install the extension specific for the model you want to use, like `langchain-openai` or `langchain-cohere`.
 
 In this tutorial, you learn how to use the packages `langchain-azure-ai` to build applications with LangChain.
 
@@ -38,7 +38,7 @@ To run this tutorial, you need:
 pip install langchain-core
 ```
 
-* In this example, we are working with the Azure AI model inference API, hence we install the following packages:
+* In this example, we're working with the Azure AI model inference API, hence we install the following packages:
 
 ```bash
 pip install -U langchain-azure-ai
@@ -65,7 +65,7 @@ export AZURE_INFERENCE_ENDPOINT="<your-model-endpoint-goes-here>"
 export AZURE_INFERENCE_CREDENTIAL="<your-key-goes-here>"
 ```
 
-Once configured, create a client to connect to the endpoint. In this case, we are working with a chat completions model hence we import the class `AzureAIChatCompletionsModel`.
+Once configured, create a client to connect to the endpoint. In this case, we're working with a chat completions model hence we import the class `AzureAIChatCompletionsModel`.
 
 ```python
 import os
@@ -98,7 +98,7 @@ model = AzureAIChatCompletionsModel(
 > [!NOTE]
 > When using Microsoft Entra ID, make sure that the endpoint was deployed with that authentication method and that you have the required permissions to invoke it.
 
-If you are planning to use asynchronous calling, it's a best practice to use the asynchronous version for the credentials:
+If you're planning to use asynchronous calling, it's a best practice to use the asynchronous version for the credentials:
 
 ```python
 from azure.identity.aio import (
@@ -127,7 +127,7 @@ model = AzureAIChatCompletionsModel(
 
 ## Use chat completions models
 
-Let's first use the model directly. `ChatModels` are instances of LangChain `Runnable`, which means they expose a standard interface for interacting with them. To simply call the model, we can pass in a list of messages to the `invoke` method.
+Let's first use the model directly. `ChatModels` are instances of LangChain `Runnable`, which means they expose a standard interface for interacting with them. To call the model, we can pass in a list of messages to the `invoke` method.
 
 ```python
 from langchain_core.messages import HumanMessage, SystemMessage
@@ -140,7 +140,7 @@ messages = [
 model.invoke(messages)
 ```
 
-You can also compose operations as needed in what's called **chains**. Let's now use a prompt template to translate sentences:
+You can also compose operations as needed in **chains**. Let's now use a prompt template to translate sentences:
 
 ```python
 from langchain_core.output_parsers import StrOutputParser
@@ -178,7 +178,7 @@ chain.invoke({"language": "italian", "text": "hi"})
 
 Models deployed to Azure AI Foundry support the Azure AI model inference API, which is standard across all the models. Chain multiple LLM operations based on the capabilities of each model so you can optimize for the right model based on capabilities.
 
-In the following example, we create two model clients, one is a producer and another one is a verifier. To make the distinction clear, we are using a multi-model endpoint like the [Azure AI model inference service](../../model-inference/overview.md) and hence we are passing the parameter `model_name` to use a `Mistral-Large` and a `Mistral-Small` model, quoting the fact that **producing content is more complex than verifying it**.
+In the following example, we create two model clients. One is a producer and another one is a verifier. To make the distinction clear, we're using a multi-model endpoint like the [Azure AI model inference service](../../model-inference/overview.md) and hence we're passing the parameter `model_name` to use a `Mistral-Large` and a `Mistral-Small` model, quoting the fact that **producing content is more complex than verifying it**.
 
 ```python
 from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
@@ -254,7 +254,7 @@ chain.invoke({"topic": "living in a foreign country"})
 
 ## Use embeddings models
 
-In the same way, you create an LLM client, you can connect to an embeddings model. In the following example, we are setting the environment variable to now point to an embeddings model:
+In the same way, you create an LLM client, you can connect to an embeddings model. In the following example, we're setting the environment variable to now point to an embeddings model:
 
 ```bash
 export AZURE_INFERENCE_ENDPOINT="<your-model-endpoint-goes-here>"

articles/ai-foundry/how-to/develop/llama-index.md

Lines changed: 8 additions & 8 deletions
@@ -7,7 +7,7 @@ ms.service: azure-ai-foundry
 ms.custom:
 - ignite-2024
 ms.topic: how-to
-ms.date: 11/04/2024
+ms.date: 03/11/2025
 ms.reviewer: fasantia
 ms.author: sgilley
 author: sdgilley
@@ -21,9 +21,9 @@ Models deployed to [Azure AI Foundry](https://ai.azure.com) can be used with Lla
 
 - **Using the Azure AI model inference API:** All models deployed to Azure AI Foundry support the [Azure AI model inference API](../../../ai-foundry/model-inference/reference/reference-model-inference-api.md), which offers a common set of functionalities that can be used for most of the models in the catalog. The benefit of this API is that, since it's the same for all the models, changing from one to another is as simple as changing the model deployment being use. No further changes are required in the code. When working with LlamaIndex, install the extensions `llama-index-llms-azure-inference` and `llama-index-embeddings-azure-inference`.
 
-- **Using the model's provider specific API:** Some models, like OpenAI, Cohere, or Mistral, offer their own set of APIs and extensions for LlamaIndex. Those extensions may include specific functionalities that the model support and hence are suitable if you want to exploit them. When working with `llama-index`, install the extension specific for the model you want to use, like `llama-index-llms-openai` or `llama-index-llms-cohere`.
+- **Using the model's provider specific API:** Some models, like OpenAI, Cohere, or Mistral, offer their own set of APIs and extensions for LlamaIndex. Those extensions might include specific functionalities that the model support and hence are suitable if you want to exploit them. When working with `llama-index`, install the extension specific for the model you want to use, like `llama-index-llms-openai` or `llama-index-llms-cohere`.
 
-In this example, we are working with the **Azure AI model inference API**.
+In this example, we're working with the **Azure AI model inference API**.
 
 ## Prerequisites
 
@@ -42,7 +42,7 @@ To run this tutorial, you need:
 pip install llama-index
 ```
 
-* In this example, we are working with the Azure AI model inference API, hence we install the following packages:
+* In this example, we're working with the Azure AI model inference API, hence we install the following packages:
 
 ```bash
 pip install -U llama-index-llms-azure-inference
@@ -117,7 +117,7 @@ llm = AzureAICompletionsModel(
 > [!NOTE]
 > When using Microsoft Entra ID, make sure that the endpoint was deployed with that authentication method and that you have the required permissions to invoke it.
 
-If you are planning to use asynchronous calling, it's a best practice to use the asynchronous version for the credentials:
+If you're planning to use asynchronous calling, it's a best practice to use the asynchronous version for the credentials:
 
 ```python
 from azure.identity.aio import (
@@ -133,7 +133,7 @@ llm = AzureAICompletionsModel(
 
 ### Azure OpenAI models and Azure AI model inference service
 
-If you are using Azure OpenAI service or [Azure AI model inference service](../../model-inference/overview.md), ensure you have at least version `0.2.4` of the LlamaIndex integration. Use `api_version` parameter in case you need to select a specific `api_version`.
+If you're using Azure OpenAI service or [Azure AI model inference service](../../model-inference/overview.md), ensure you have at least version `0.2.4` of the LlamaIndex integration. Use `api_version` parameter in case you need to select a specific `api_version`.
 
 For the [Azure AI model inference service](../../model-inference/overview.md), you need to pass `model_name` parameter:

@@ -216,7 +216,7 @@ The `complete` method is still available for model of type `chat-completions`. O
 
 ## Use embeddings models
 
-In the same way you create an LLM client, you can connect to an embeddings model. In the following example, we are setting the environment variable to now point to an embeddings model:
+In the same way you create an LLM client, you can connect to an embeddings model. In the following example, we're setting the environment variable to now point to an embeddings model:
 
 ```bash
 export AZURE_INFERENCE_ENDPOINT="<your-model-endpoint-goes-here>"
@@ -260,7 +260,7 @@ Settings.llm = llm
 Settings.embed_model = embed_model
 ```
 
-However, there are scenarios where you want to use a general model for most of the operations but a specific one for a given task. On those cases, it's useful to set the LLM or embedding model you are using for each LlamaIndex construct. In the following example, we set a specific model:
+However, there are scenarios where you want to use a general model for most of the operations but a specific one for a given task. On those cases, it's useful to set the LLM or embedding model you're using for each LlamaIndex construct. In the following example, we set a specific model:
 
 ```python
 from llama_index.core.evaluation import RelevancyEvaluator
