---
ms.author: eur
author: eric-urban
---
<!-- Source file: articles/ai-studio/how-to/develop/llama-index.md -->
# Develop applications with LlamaIndex and Azure AI Studio
In this article, you learn how to use [LlamaIndex](https://github.com/run-llama/llama_index) with models from the Azure AI model catalog deployed to Azure AI Studio.
Models deployed to Azure AI Studio can be used with LlamaIndex in two ways:
- **Using the Azure AI model inference API:** All models deployed to Azure AI Studio support the Azure AI model inference API, which offers a common set of functionality across most of the models in the catalog. Because the API is the same for all models, changing from one model to another is as simple as changing the model deployment in use; no further code changes are required. When working with LlamaIndex, install the extensions `llama-index-llms-azure-inference` and `llama-index-embeddings-azure-inference`.
- **Using the model provider's specific API:** Some models, like OpenAI, Cohere, or Mistral, offer their own set of APIs and extensions for LlamaIndex. Those extensions may include capabilities specific to the model and are suitable if you want to exploit them. When working with LlamaIndex, install the extension specific to the model you want to use, such as `llama-index-llms-openai` or `llama-index-llms-cohere`.
In this example, we work with the Azure AI model inference API.
## Prerequisites
To run this tutorial, you need:
1. An [Azure subscription](https://azure.microsoft.com).
2. An Azure AI hub resource, as explained in [How to create and manage an Azure AI Studio hub](../how-to/create-azure-ai-resource).
3. A model deployment that supports the [Azure AI model inference API](https://aka.ms/azureai/modelinference). In this example, we use a `Mistral-Large` deployment, but you can use any model of your preference. To use embedding capabilities in LlamaIndex, you need an embedding model such as `cohere-embed-v3-multilingual`.
   * You can follow the instructions at [Deploy models as serverless APIs](../how-to/deploy-models-serverless).
4. Python 3.8 or later installed, including pip.
5. LlamaIndex installed. You can install it with:
   ```bash
   pip install llama-index
   ```
6. In this example, we work with the Azure AI model inference API, so we install the following packages:
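   The packages in question are presumably the two Azure AI inference extensions named earlier in this article:

   ```bash
   pip install llama-index-llms-azure-inference llama-index-embeddings-azure-inference
   ```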