
Commit 916d8f5

Merge pull request #5752 from sdgilley/sdg-freshness

freshness update - articles/ai-foundry/how-to/develop/llama-index.md

2 parents: 68014d7 + bd0ca97

File tree

3 files changed: +40 −46 lines


articles/ai-foundry/how-to/develop/langchain.md

Lines changed: 1 addition & 23 deletions
````diff
@@ -47,29 +47,7 @@ To run this tutorial, you need:
 
 ## Configure the environment
 
-To use LLMs deployed in Azure AI Foundry portal, you need the endpoint and credentials to connect to it. Follow these steps to get the information you need from the model you want to use:
-
-[!INCLUDE [tip-left-pane](../../includes/tip-left-pane.md)]
-
-1. Go to the [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs).
-
-1. Open the project where the model is deployed, if it isn't already open.
-
-1. Go to **Models + endpoints** and select the model you deployed as indicated in the prerequisites.
-
-1. Copy the endpoint URL and the key.
-
-:::image type="content" source="../../media/how-to/inference/serverless-endpoint-url-keys.png" alt-text="Screenshot of the option to copy endpoint URI and keys from an endpoint." lightbox="../../media/how-to/inference/serverless-endpoint-url-keys.png":::
-
-> [!TIP]
-> If your model was deployed with Microsoft Entra ID support, you don't need a key.
-
-In this scenario, set the endpoint URL and key as environment variables. (If the endpoint you copied includes additional text after `/models`, remove it so the URL ends at `/models` as shown below.)
-
-```bash
-export AZURE_INFERENCE_ENDPOINT="https://<resource>.services.ai.azure.com/models"
-export AZURE_INFERENCE_CREDENTIAL="<your-key-goes-here>"
-```
+[!INCLUDE [set-endpoint](../../includes/set-endpoint.md)]
 
 
 Once configured, create a client to connect with the chat model by using the `init_chat_model`. For Azure OpenAI models, configure the client as indicated at [Using Azure OpenAI models](#using-azure-openai-models).
````

articles/ai-foundry/how-to/develop/llama-index.md

Lines changed: 4 additions & 23 deletions
````diff
@@ -7,7 +7,7 @@ ms.service: azure-ai-foundry
 ms.custom:
 - ignite-2024
 ms.topic: how-to
-ms.date: 03/11/2025
+ms.date: 06/26/2025
 ms.reviewer: fasantia
 ms.author: sgilley
 author: sdgilley
@@ -54,26 +54,7 @@ To run this tutorial, you need:
 
 ## Configure the environment
 
-To use LLMs deployed in Azure AI Foundry portal, you need the endpoint and credentials to connect to it. Follow these steps to get the information you need from the model you want to use:
-
-[!INCLUDE [tip-left-pane](../../includes/tip-left-pane.md)]
-
-1. Go to the [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs).
-1. Open the project where the model is deployed, if it isn't already open.
-1. Go to **Models + endpoints** and select the model you deployed as indicated in the prerequisites.
-1. Copy the endpoint URL and the key.
-
-:::image type="content" source="../../media/how-to/inference/serverless-endpoint-url-keys.png" alt-text="Screenshot of the option to copy endpoint URI and keys from an endpoint." lightbox="../../media/how-to/inference/serverless-endpoint-url-keys.png":::
-
-> [!TIP]
-> If your model was deployed with Microsoft Entra ID support, you don't need a key.
-
-In this scenario, we placed both the endpoint URL and key in the following environment variables:
-
-```bash
-export AZURE_INFERENCE_ENDPOINT="<your-model-endpoint-goes-here>"
-export AZURE_INFERENCE_CREDENTIAL="<your-key-goes-here>"
-```
+[!INCLUDE [set-endpoint](../../includes/set-endpoint.md)]
 
 
 Once configured, create a client to connect to the endpoint.
@@ -100,7 +81,7 @@ from llama_index.llms.azure_inference import AzureAICompletionsModel
 llm = AzureAICompletionsModel(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model_name="mistral-large-2407",
+    model_name="mistral-large-2411",
 )
 ```
 
@@ -146,7 +127,7 @@ from llama_index.llms.azure_inference import AzureAICompletionsModel
 llm = AzureAICompletionsModel(
     endpoint="https://<resource>.services.ai.azure.com/models",
     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model_name="mistral-large-2407",
+    model_name="mistral-large-2411",
 )
 ```
 
````
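Both client snippets in the diff above pull the credential (and, in the first case, the endpoint) from the environment variables set during configuration. As a minimal sketch, assuming the helper name `inference_settings` (ours, not part of the docs), the values can be validated once and then splatted into the constructor, failing fast with a clear message when a variable is missing:

```python
import os

REQUIRED_VARS = ("AZURE_INFERENCE_ENDPOINT", "AZURE_INFERENCE_CREDENTIAL")

def inference_settings(model_name: str = "mistral-large-2411") -> dict:
    """Collect connection settings for the inference client from the
    environment, raising a single clear error listing anything unset."""
    missing = [name for name in REQUIRED_VARS if name not in os.environ]
    if missing:
        raise RuntimeError(
            "Set these environment variables first: " + ", ".join(missing)
        )
    return {
        "endpoint": os.environ["AZURE_INFERENCE_ENDPOINT"],
        "credential": os.environ["AZURE_INFERENCE_CREDENTIAL"],
        "model_name": model_name,
    }

# The resulting dict can then be splatted into the client constructor:
# llm = AzureAICompletionsModel(**inference_settings())
```

This keeps the missing-variable check in one place instead of letting a bare `KeyError` surface from inside the client call.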

articles/ai-foundry/includes/set-endpoint.md (new file)

Lines changed: 35 additions & 0 deletions

````diff
@@ -0,0 +1,35 @@
+---
+title: Include file
+description: Include file
+author: sdgilley
+ms.reviewer: sgilley
+ms.author: sgilley
+ms.service: azure-ai-foundry
+ms.topic: include
+ms.date: 06/26/2025
+ms.custom: include
+---
+
+To use LLMs deployed in Azure AI Foundry portal, you need the endpoint and credentials to connect to it. Follow these steps to get the information you need from the model you want to use:
+
+[!INCLUDE [tip-left-pane](tip-left-pane.md)]
+
+1. Go to the [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs).
+
+1. Open the project where the model is deployed, if it isn't already open.
+
+1. Go to **Models + endpoints** and select the model you deployed as indicated in the prerequisites.
+
+1. Copy the endpoint URL and the key.
+
+:::image type="content" source="../media/how-to/inference/serverless-endpoint-url-keys.png" alt-text="Screenshot of the option to copy endpoint URI and keys from an endpoint." lightbox="../media/how-to/inference/serverless-endpoint-url-keys.png":::
+
+> [!TIP]
+> If your model was deployed with Microsoft Entra ID support, you don't need a key.
+
+In this scenario, set the endpoint URL and key as environment variables. (If the endpoint you copied includes additional text after `/models`, remove it so the URL ends at `/models` as shown below.)
+
+```bash
+export AZURE_INFERENCE_ENDPOINT="https://<resource>.services.ai.azure.com/models"
+export AZURE_INFERENCE_CREDENTIAL="<your-key-goes-here>"
+```
````
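The new include asks the reader to trim anything after `/models` by hand. As an illustrative sketch (the helper name `normalized_endpoint` is ours, not from the docs), the same normalization can be done programmatically before exporting the variable:

```python
def normalized_endpoint(raw: str) -> str:
    """Return the endpoint truncated so it ends at '/models'.

    Portal-copied URLs sometimes carry extra path segments
    (for example '/chat/completions'); the inference client
    expects the URL to stop at '/models'.
    """
    marker = "/models"
    idx = raw.find(marker)
    if idx == -1:
        raise ValueError(f"endpoint does not contain {marker!r}: {raw}")
    return raw[: idx + len(marker)]

# A portal-copied URL with a trailing route (hypothetical resource name):
url = "https://contoso.services.ai.azure.com/models/chat/completions"
print(normalized_endpoint(url))  # → https://contoso.services.ai.azure.com/models
```

A URL that already ends at `/models` passes through unchanged.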
