- An Azure AI project as explained at [Create a project for Azure AI Foundry](../create-projects.md).
- A model that supports the [Azure AI Model Inference API](../../../ai-foundry/model-inference/reference/reference-model-inference-api.md?tabs=python) deployed. This article uses a `Mistral-Large` deployment, but you can use any model. To use embedding capabilities in Semantic Kernel, you need an embedding model like `cohere-embed-v3-multilingual`.

  You can follow the instructions at [Deploy models as serverless API deployments](../deploy-models-serverless.md).

- Python **3.10** or later installed, including pip.
- Semantic Kernel installed. You can use the following command:

```bash
pip install semantic-kernel
```

- This article uses the Model Inference API, so install the relevant Azure dependencies. You can use the following command:

```bash
pip install semantic-kernel[azure]
```
## Configure the environment

To use language models deployed in Azure AI Foundry portal, you need the endpoint and credentials to connect to your project. Follow these steps to get the information you need from the model:

> The client automatically reads the environment variables `AZURE_AI_INFERENCE_ENDPOINT` and `AZURE_AI_INFERENCE_API_KEY` to connect to the model. You could instead pass the endpoint and key directly to the client by using the `endpoint` and `api_key` parameters on the constructor.
Alternatively, if your endpoint supports Microsoft Entra ID, you can use the following code to create the client:

> If you use Microsoft Entra ID, make sure that the endpoint was deployed with that authentication method and that you have the required permissions to invoke it.