
Commit a12267f

committed: fixes
1 parent 69e29ed commit a12267f

File tree

1 file changed (+10 −10 lines)


articles/ai-studio/how-to/develop/llama-index.md

Lines changed: 10 additions & 10 deletions
@@ -25,11 +25,11 @@ In this example, we are working with the **Azure AI model inference API**.
 
 ## Prerequisites
 
-To run this tutorial you need:
+To run this tutorial, you need:
 
 1. An [Azure subscription](https://azure.microsoft.com).
 2. An Azure AI hub resource as explained at [How to create and manage an Azure AI Studio hub](../create-azure-ai-resource.md).
-3. A model supporting the [Azure AI model inference API](https://aka.ms/azureai/modelinference) deployed. In this example we use a `Mistral-Large` deployment, but use any model of your preference. For using embeddings capabilities in LlamaIndex, you need an embedding model like `cohere-embed-v3-multilingual`.
+3. A model supporting the [Azure AI model inference API](https://aka.ms/azureai/modelinference) deployed. In this example, we use a `Mistral-Large` deployment, but you can use any model of your preference. To use embeddings capabilities in LlamaIndex, you need an embedding model like `cohere-embed-v3-multilingual`.
 
    * You can follow the instructions at [Deploy models as serverless APIs](../deploy-models-serverless.md).
 
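The diff skips the article's setup steps between this hunk and the next. For context, a minimal install sketch, assuming the tutorial relies on the LlamaIndex Azure AI inference integration packages (the package list is an assumption, not part of this diff):

```bash
# Assumed dependencies: core LlamaIndex plus the Azure AI inference
# LLM and embeddings integrations used later in the article.
pip install llama-index llama-index-llms-azure-inference llama-index-embeddings-azure-inference
```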
@@ -49,18 +49,18 @@ To run this tutorial you need:
 
 ## Configure the environment
 
-To use LLMs deployed in Azure AI studio you need the endpoint and credentials to connect to it. The parameter `model_name` is not required for endpoints serving a single model, like Managed Online Endpoints. Follow this steps to get the information you need from the model you want to use:
+To use LLMs deployed in Azure AI studio, you need the endpoint and credentials to connect to it. The parameter `model_name` is not required for endpoints serving a single model, like Managed Online Endpoints. Follow these steps to get the information you need from the model you want to use:
 
 1. Go to the [Azure AI studio](https://ai.azure.com/).
-2. Go to deployments and select the model you have deployed as indicated in the prerequisites.
+2. Go to deployments and select the model you deployed as indicated in the prerequisites.
 3. Copy the endpoint URL and the key.
 
 :::image type="content" source="../../media/how-to/inference/serverless-endpoint-url-keys.png" alt-text="Screenshot of the option to copy endpoint URI and keys from an endpoint." lightbox="../../media/how-to/inference/serverless-endpoint-url-keys.png":::
 
 > [!TIP]
 > If your model was deployed with Microsoft Entra ID support, you don't need a key.
 
-In this scenario, we have placed both the endpoint URL and key in the following environment variables:
+In this scenario, we placed both the endpoint URL and key in the following environment variables:
 
 ```bash
 export AZURE_INFERENCE_ENDPOINT="<your-model-endpoint-goes-here>"
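The hunk cuts the `bash` block short. Assuming the companion variable follows the same `AZURE_INFERENCE_*` naming (the credential name here is an assumption, not shown in this hunk), the complete block would look like:

```bash
# Endpoint URL and key, both copied from the deployment page.
export AZURE_INFERENCE_ENDPOINT="<your-model-endpoint-goes-here>"
# Assumed variable name for the key; it falls outside this hunk.
export AZURE_INFERENCE_CREDENTIAL="<your-key-goes-here>"
```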
@@ -79,7 +79,7 @@ llm = AzureAICompletionsModel(
 )
 ```
 
-Alternatively, if you endpoint support Microsoft Entra ID, you can use the following code to create the client:
+Alternatively, if your endpoint supports Microsoft Entra ID, you can use the following code to create the client:
 
 ```python
 from azure.identity import DefaultAzureCredential
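A minimal sketch of how the truncated Entra ID example plausibly continues, assuming `AzureAICompletionsModel` accepts a `credential` argument as in the key-based example above:

```python
import os

from azure.identity import DefaultAzureCredential
from llama_index.llms.azure_inference import AzureAICompletionsModel

# With Entra ID, no key is needed: DefaultAzureCredential resolves the
# signed-in identity (Azure CLI login, managed identity, and so on).
llm = AzureAICompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=DefaultAzureCredential(),
)
```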
@@ -119,7 +119,7 @@ llm = AzureAICompletionsModel(
 )
 ```
 
-For parameters extra parameters that are not supported by the Azure AI model inference API but that are available in the underlying model, you can use the `model_extras` argument. In the following example, the parameter `safe_prompt`, only available for Mistral models, is being passed.
+For parameters that are not supported by the Azure AI model inference API ([reference](../../reference/reference-model-inference-chat-completions.md)) but that are available in the underlying model, you can use the `model_extras` argument. In the following example, the parameter `safe_prompt`, only available for Mistral models, is being passed.
 
 ```python
 llm = AzureAICompletionsModel(
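A sketch of how the truncated block plausibly continues, reusing the environment variables above; the only assumption is that `model_extras` sits alongside the constructor arguments already shown:

```python
import os

from llama_index.llms.azure_inference import AzureAICompletionsModel

llm = AzureAICompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    # safe_prompt is not part of the Azure AI model inference API; it is
    # forwarded to the underlying Mistral model through model_extras.
    model_extras={"safe_prompt": True},
)
```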
@@ -178,7 +178,7 @@ embed_model = AzureAIEmbeddingsModel(
 
 ## Configure the models used by your code
 
-You can use the LLM or embeddings model client individually in the code you develop with LlamaIndex or you can configure the entire session using the `Settings` options. Configuring the session has the advantage that then all your code will use the same models for all the operations.
+You can use the LLM or embeddings model client individually in the code you develop with LlamaIndex, or you can configure the entire session using the `Settings` options. Configuring the session has the advantage that all your code uses the same models for all the operations.
 
 ```python
 from llama_index.core import Settings
@@ -187,15 +187,15 @@ Settings.llm = llm
 Settings.embed_model = embed_model
 ```
 
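To illustrate what session-wide configuration buys you, a small usage sketch, assuming `llm` and `embed_model` were created as earlier in the article; `VectorStoreIndex` then picks both up from `Settings` without per-call wiring:

```python
from llama_index.core import Document, Settings, VectorStoreIndex

Settings.llm = llm
Settings.embed_model = embed_model

# The index embeds documents with Settings.embed_model and the query
# engine answers with Settings.llm; neither is passed explicitly.
index = VectorStoreIndex.from_documents(
    [Document(text="LlamaIndex can use models deployed in Azure AI studio.")]
)
print(index.as_query_engine().query("Where can the models be deployed?"))
```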
-However, there are scenarios where you want to use a general model for most of the operations but an specific one for a given task. On those cases, it's useful to set the LLM or embedding model your are using for each LlamaIndex construct. In the following example, we set an specific model:
+However, there are scenarios where you want to use a general model for most of the operations but a specific one for a given task. In those cases, it's useful to set the LLM or embedding model you are using for each LlamaIndex construct. In the following example, we set a specific model:
 
 ```python
 from llama_index.core.evaluation import RelevancyEvaluator
 
 relevancy_evaluator = RelevancyEvaluator(llm=llm)
 ```
 
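A hedged usage sketch for the evaluator above, showing the two strategies combined: `Settings` supplies the session defaults while the evaluator pins its own `llm`. The field names follow the `llama_index.core.evaluation` evaluator interface:

```python
# The evaluator uses the explicitly pinned llm even when Settings.llm differs.
result = relevancy_evaluator.evaluate(
    query="Where can the models be deployed?",
    response="The models are deployed in Azure AI studio.",
    contexts=["LlamaIndex can use models deployed in Azure AI studio."],
)
print(result.passing)  # True when the response is relevant to query and contexts
```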
-In general, you will use a combination of both strategies.
+In general, you use a combination of both strategies.
 
 ## Related content
 