
Commit cc53a2d

Merge pull request #7737 from TimShererWithAquent/us496641-10
Freshness Edit: AI Foundry: Develop applications with Semantic Kernel and Azure AI Foundry
2 parents 7a84b82 + a99a371 commit cc53a2d

1 file changed (+14 −14 lines)


articles/ai-foundry/how-to/develop/semantic-kernel.md

Lines changed: 14 additions & 14 deletions
@@ -1,11 +1,11 @@
 ---
-title: Develop applications with Semantic Kernel and Azure AI Foundry
+title: Develop Applications with Semantic Kernel and Azure AI Foundry
 titleSuffix: Azure AI Foundry
-description: Develop applications with Semantic Kernel and Azure AI Foundry.
+description: Learn how to develop applications with Semantic Kernel and Azure AI Foundry, using models deployed from the Azure AI model catalog.
 author: lgayhardt
 ms.author: lagayhar
 ms.reviewer: taochen
-ms.date: 02/27/2025
+ms.date: 10/20/2025
 ms.topic: how-to
 ms.service: azure-ai-foundry
 ---
@@ -17,27 +17,27 @@ In this article, you learn how to use [Semantic Kernel](/semantic-kernel/overvie
 ## Prerequisites

 - [!INCLUDE [azure-subscription](../../includes/azure-subscription.md)]
-- An Azure AI project as explained at [Create a project in Azure AI Foundry portal](../create-projects.md).
-- A model supporting the [Azure AI Model Inference API](../../../ai-foundry/model-inference/reference/reference-model-inference-api.md?tabs=python) deployed. In this example, we use a `Mistral-Large` deployment, but use any model of your preference. For using embeddings capabilities in LlamaIndex, you need an embedding model like `cohere-embed-v3-multilingual`.
+- An Azure AI project as explained at [Create a project for Azure AI Foundry](../create-projects.md).
+- A deployed model that supports the [Azure AI Model Inference API](../../../ai-foundry/model-inference/reference/reference-model-inference-api.md?tabs=python). This article uses a `Mistral-Large` deployment, but you can use any model you prefer. To use embeddings capabilities, you need an embedding model like `cohere-embed-v3-multilingual`.

 - You can follow the instructions at [Deploy models as serverless API deployments](../deploy-models-serverless.md).

 - Python **3.10** or later installed, including pip.
-- Semantic Kernel installed. You can do it with:
+- Semantic Kernel installed. You can use the following command:

 ```bash
 pip install semantic-kernel
 ```

-- In this example, we're working with the Model Inference API, so we need to install the relevant Azure dependencies. You can do it with:
+- This article uses the Model Inference API, so install the relevant Azure dependencies. You can use the following command:

 ```bash
 pip install semantic-kernel[azure]
 ```

 ## Configure the environment

-To use LLMs deployed in Azure AI Foundry portal, you need the endpoint and credentials to connect to it. Follow these steps to get the information you need from the model you want to use:
+To use language models deployed in the Azure AI Foundry portal, you need the endpoint and credentials to connect to your project. Follow these steps to get the information you need from the model:

 [!INCLUDE [tip-left-pane](../../includes/tip-left-pane.md)]

@@ -49,14 +49,14 @@ To use LLMs deployed in Azure AI Foundry portal, you need the endpoint and crede
 > [!TIP]
 > If your model was deployed with Microsoft Entra ID support, you don't need a key.

-In this scenario, we placed both the endpoint URL and key in the following environment variables:
+This example uses environment variables for both the endpoint URL and key:

 ```bash
 export AZURE_AI_INFERENCE_ENDPOINT="<your-model-endpoint-goes-here>"
 export AZURE_AI_INFERENCE_API_KEY="<your-key-goes-here>"
 ```

-Once configured, create a client to connect to the endpoint:
+After you configure the endpoint and key, create a client to connect to the endpoint:

 ```python
 from semantic_kernel.connectors.ai.azure_ai_inference import AzureAIInferenceChatCompletion
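
The hunk ends inside the client snippet. For reference, a minimal runnable sketch of the finished client creation, assuming key-based authentication; `mistral-large` is a hypothetical deployment name:

```python
from semantic_kernel.connectors.ai.azure_ai_inference import AzureAIInferenceChatCompletion

# The connector reads AZURE_AI_INFERENCE_ENDPOINT and AZURE_AI_INFERENCE_API_KEY
# from the environment when endpoint and api_key aren't passed to the constructor.
chat_completion_service = AzureAIInferenceChatCompletion(
    ai_model_id="mistral-large",  # hypothetical deployment name
)
```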
@@ -65,7 +65,7 @@ chat_completion_service = AzureAIInferenceChatCompletion(ai_model_id="<deploymen
 ```

 > [!TIP]
-> The client automatically reads the environment variables `AZURE_AI_INFERENCE_ENDPOINT` and `AZURE_AI_INFERENCE_API_KEY` to connect to the model. However, you can also pass the endpoint and key directly to the client via the `endpoint` and `api_key` parameters on the constructor.
+> The client automatically reads the environment variables `AZURE_AI_INFERENCE_ENDPOINT` and `AZURE_AI_INFERENCE_API_KEY` to connect to the model. You can instead pass the endpoint and key directly to the client by using the `endpoint` and `api_key` parameters on the constructor.

 Alternatively, if your endpoint supports Microsoft Entra ID, you can use the following code to create the client:

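The diff doesn't show the full Entra ID block. A sketch of what it can look like, assuming the async `ChatCompletionsClient` from the `azure-ai-inference` package and `DefaultAzureCredential` from `azure-identity`; the credential scope shown is an assumption:

```python
from azure.ai.inference.aio import ChatCompletionsClient
from azure.identity.aio import DefaultAzureCredential
from semantic_kernel.connectors.ai.azure_ai_inference import AzureAIInferenceChatCompletion

# Token-based authentication instead of an API key; no
# AZURE_AI_INFERENCE_API_KEY is needed for this path.
chat_completion_service = AzureAIInferenceChatCompletion(
    ai_model_id="mistral-large",  # hypothetical deployment name
    client=ChatCompletionsClient(
        endpoint="<your-model-endpoint-goes-here>",
        credential=DefaultAzureCredential(),
        credential_scopes=["https://cognitiveservices.azure.com/.default"],  # assumed scope
    ),
)
```
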
@@ -80,7 +80,7 @@ chat_completion_service = AzureAIInferenceChatCompletion(ai_model_id="<deploymen
 ```

 > [!NOTE]
-> When using Microsoft Entra ID, make sure that the endpoint was deployed with that authentication method and that you have the required permissions to invoke it.
+> If you use Microsoft Entra ID, make sure that the endpoint was deployed with that authentication method and that you have the required permissions to invoke it.

 ### Azure OpenAI models

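The diff doesn't include this section's code. A sketch under the assumption that an Azure OpenAI deployment is reached through the same connector with a key credential; the endpoint shape and placeholder names are assumptions:

```python
from azure.ai.inference.aio import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential
from semantic_kernel.connectors.ai.azure_ai_inference import AzureAIInferenceChatCompletion

chat_completion_service = AzureAIInferenceChatCompletion(
    ai_model_id="<deployment-name>",
    client=ChatCompletionsClient(
        # Assumed endpoint shape for an Azure OpenAI deployment.
        endpoint="https://<resource>.openai.azure.com/openai/deployments/<deployment-name>",
        credential=AzureKeyCredential("<your-key-goes-here>"),
    ),
)
```
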
@@ -104,7 +104,7 @@ chat_completion_service = AzureAIInferenceChatCompletion(

 ## Inference parameters

-You can configure how inference is performed by using the `AzureAIInferenceChatPromptExecutionSettings` class:
+You can configure how to perform inference by using the `AzureAIInferenceChatPromptExecutionSettings` class:

 ```python
 from semantic_kernel.connectors.ai.azure_ai_inference import AzureAIInferenceChatPromptExecutionSettings
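
The hunk truncates the settings snippet. A minimal sketch of plausible execution settings; `max_tokens`, `temperature`, and `top_p` are common sampling parameters and assumed to be fields on this class:

```python
from semantic_kernel.connectors.ai.azure_ai_inference import AzureAIInferenceChatPromptExecutionSettings

# Assumed sampling parameters; tune them for your scenario.
execution_settings = AzureAIInferenceChatPromptExecutionSettings(
    max_tokens=100,
    temperature=0.5,
    top_p=0.9,
)
```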
@@ -119,7 +119,7 @@ execution_settings = AzureAIInferenceChatPromptExecutionSettings(

 ## Calling the service

-Let's first call the chat completion service with a simple chat history:
+First, call the chat completion service with a simple chat history:

 > [!TIP]
 > Semantic Kernel is an asynchronous library, so you need to use the asyncio library to run the code.
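
A minimal end-to-end sketch of the call, assuming the `chat_completion_service` and `execution_settings` objects from the earlier steps; the greeting is placeholder input:

```python
import asyncio

from semantic_kernel.contents import ChatHistory

# Build a simple history with a single user turn.
chat_history = ChatHistory()
chat_history.add_user_message("Hello, how are you?")


async def main() -> None:
    # get_chat_message_content awaits a single completion for the history.
    response = await chat_completion_service.get_chat_message_content(
        chat_history=chat_history,
        settings=execution_settings,
    )
    print(response)


# asyncio.run drives the coroutine, matching the tip above.
asyncio.run(main())
```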
