Commit fa1420f ("clearnup")
1 parent 0bc97d0

4 files changed: +26 −2,103 lines

articles/ai-studio/how-to/develop/llama-index.md

Lines changed: 22 additions & 25 deletions
````diff
@@ -11,44 +11,41 @@ ms.author: eur
 author: eric-urban
 ---
 
-# Develop application with LlamaIndex and Azure AI studio
+# Develop applications with LlamaIndex and Azure AI studio
 
-In this article, you learn how to use [`llama-index`](https://github.com/run-llama/llama_index) with models deployed from the Azure AI model catalog deployed to Azure AI studio.
+In this article, you learn how to use [LlamaIndex](https://github.com/run-llama/llama_index) with models from the Azure AI model catalog deployed to Azure AI studio.
 
-## Prerequisites
-
-To run this tutorial you need:
+Models deployed to Azure AI studio can be used with LlamaIndex in two ways:
 
-1. An [Azure subscription](https://azure.microsoft.com).
-2. An Azure AI hub resource as explained at [How to create and manage an Azure AI Studio hub](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/create-azure-ai-resource).
-3. A model supporting the [Azure AI model inference API](https://aka.ms/azureai/modelinference) deployed. In this example we use a `Mistral-Large` deployment, but use any model of your preference. For using embeddings capabilities in LlamaIndex, you need an embedding model like Cohere Embed V3.
+- **Using the Azure AI model inference API:** All models deployed to Azure AI studio support the Azure AI model inference API, which offers a common set of functionalities that can be used for most of the models in the catalog. The benefit of this API is that, since it's the same for all the models, changing from one to another is as simple as changing the model deployment being used. No further changes are required in the code. When working with LlamaIndex, install the extensions `llama-index-llms-azure-inference` and `llama-index-embeddings-azure-inference`.
 
-    * You can follow the instructions at [Deploy models as serverless APIs](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-serverless).
+- **Using the model's provider-specific API:** Some models, like OpenAI, Cohere, or Mistral, offer their own set of APIs and extensions for LlamaIndex. Those extensions may include specific functionalities that the model supports and hence are suitable if you want to exploit them. When working with `llama-index`, install the extension specific to the model you want to use, like `llama-index-llms-openai` or `llama-index-llms-cohere`.
 
-4. A Python environment.
+In this example, we are working with the Azure AI model inference API.
 
+## Prerequisites
 
-## Install dependencies
-
-Ensure you have `llama-index` installed:
-
-```bash
-pip install llama-index
-```
+To run this tutorial, you need:
 
-Models deployed to Azure AI studio or Azure Machine Learning can be used with LlamaIndex in two ways:
+1. An [Azure subscription](https://azure.microsoft.com).
+2. An Azure AI hub resource as explained at [How to create and manage an Azure AI Studio hub](../how-to/create-azure-ai-resource).
+3. A model supporting the [Azure AI model inference API](https://aka.ms/azureai/modelinference) deployed. In this example, we use a `Mistral-Large` deployment, but you can use any model of your preference. To use embedding capabilities in LlamaIndex, you need an embedding model like `cohere-embed-v3-multilingual`.
 
-- **Using the Azure AI model inference API:** All models deployed to Azure AI studio and Azure Machine Learning support the Azure AI model inference API, which offers a common set of functionalities that can be used for most of the models in the catalog. The benefit of this API is that, since it's the same for all the models, changing from one to another is as simple as changing the model deployment being use. No further changes are required in the code. When working with `llama-index`, install the extensions `llama-index-llms-azure-inference` and `llama-index-embeddings-azure-inference`.
+    * You can follow the instructions at [Deploy models as serverless APIs](../how-to/deploy-models-serverless).
 
-- **Using the model's provider specific API:** Some models, like OpenAI, Cohere, or Mistral, offer their own set of APIs and extensions for `llama-index`. Those extensions may include specific functionalities that the model support and hence are suitable if you want to exploit them. When working with `llama-index`, install the extension specific for the model you want to use, like `llama-index-llms-openai` or `llama-index-llms-cohere`.
+4. Python 3.8 or later installed, including pip.
+5. LlamaIndex installed. You can install it with:
 
+```bash
+pip install llama-index
+```
 
-In this example, we are working with the Azure AI model inference API, hence we install the following packages:
+6. In this example, we are working with the Azure AI model inference API, so we install the following packages:
 
-```bash
-pip install -U llama-index-llms-azure-inference
-pip install -U llama-index-embeddings-azure-inference
-```
+```bash
+pip install -U llama-index-llms-azure-inference
+pip install -U llama-index-embeddings-azure-inference
+```
 
 ## Configure the environment
 
````
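The "Configure the environment" section that the diff ends on is not included in this hunk, but the prerequisites above imply wiring the deployed endpoint and key into code. A minimal sketch of what that configuration step typically looks like; the environment variable names (`AZURE_INFERENCE_ENDPOINT`, `AZURE_INFERENCE_CREDENTIAL`) and placeholder values are assumptions for illustration, not taken from the commit:

```python
import os

# Hypothetical environment variables pointing at the serverless deployment.
# Names and placeholder values are assumptions; substitute the endpoint URL
# and API key shown on your deployment's page in Azure AI studio.
os.environ.setdefault(
    "AZURE_INFERENCE_ENDPOINT", "https://<your-deployment>.inference.ai.azure.com"
)
os.environ.setdefault("AZURE_INFERENCE_CREDENTIAL", "<your-api-key>")

# Read the values back the way later client code would consume them.
endpoint = os.environ["AZURE_INFERENCE_ENDPOINT"]
credential = os.environ["AZURE_INFERENCE_CREDENTIAL"]
print(endpoint.startswith("https://"))
```

Keeping the endpoint and key in environment variables (rather than hardcoding them) lets the same code run against different model deployments, which is the portability benefit the Azure AI model inference API bullet above describes.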