Commit 13d407e (merge of 2 parents: da7c9b3 + 7d8e2be)
62 files changed: +4069 additions, -402 deletions

articles/ai-foundry/agents/how-to/tools/bing-custom-search.md

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@ manager: nitinme
 ms.service: azure-ai-foundry
 ms.subservice: azure-ai-foundry-agent-service
 ms.topic: how-to
-ms.date: 08/15/2025
+ms.date: 09/26/2025
 author: aahill
 ms.author: aahi
 ms.custom: azure-ai-agents
@@ -40,7 +40,7 @@ The authorization will happen between Grounding with Bing Custom Search service
 
 Developers and end users don't have access to raw content returned from Grounding with Bing Custom Search. The model response, however, includes citations with links to the websites used to generate the response and is allowed to be stored using the mechanisms provided by the Agents Service. You can retrieve the model response by accessing the data in the thread that was created. These references must be retained and displayed in the exact form provided by Microsoft, as per Grounding with Bing Custom Search's Use and Display Requirements.
 
-
+Transactions with your Grounding with Bing resource are counted by the number of tool calls per run. You can see how many tool calls are made from the run step.
 
 ## Setup
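The billing note added in this hunk ("tool calls per run ... from the run step") can be sanity-checked programmatically once you retrieve the run steps for a thread. The following is a minimal, hypothetical sketch: the run-step dictionary shape and the `bing_custom_search` type string are illustrative assumptions for this example, not the exact Agents SDK schema.

```python
# Hypothetical sketch: tally tool calls across run steps to estimate billable
# Grounding with Bing transactions for a run. The dict shape below is an
# assumption for illustration, not the exact Agents SDK run-step schema.

def count_tool_calls(run_steps):
    """Count tool calls recorded in a list of run-step dicts."""
    total = 0
    for step in run_steps:
        details = step.get("step_details", {})
        if details.get("type") == "tool_calls":
            total += len(details.get("tool_calls", []))
    return total


# Example run steps: one tool-call step, one message-creation step.
steps = [
    {"step_details": {"type": "tool_calls",
                      "tool_calls": [{"type": "bing_custom_search"}]}},
    {"step_details": {"type": "message_creation"}},
]
print(count_tool_calls(steps))
```

In a real application you would feed this function the run steps listed from your run, rather than the hard-coded sample above.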

articles/ai-foundry/agents/how-to/tools/bing-grounding.md

Lines changed: 3 additions & 1 deletion
@@ -7,7 +7,7 @@ manager: nitinme
 ms.service: azure-ai-foundry
 ms.subservice: azure-ai-foundry-agent-service
 ms.topic: how-to
-ms.date: 08/07/2025
+ms.date: 09/26/2025
 author: aahill
 ms.author: aahi
 ms.custom: azure-ai-agents
@@ -38,6 +38,8 @@ Grounding with Bing returns relevant search results to the customer's model depl
 
 The authorization will happen between Grounding with Bing Search service and Azure AI Foundry Agent Service. Any Bing search query that is generated and sent to Bing for the purposes of grounding is transferred, along with the resource key, outside of the Azure compliance boundary to the Grounding with Bing Search service. Grounding with Bing Search is subject to Bing's terms and do not have the same compliance standards and certifications as the Azure AI Foundry Agent Service, as described in the [Grounding with Bing Search Terms of Use](https://www.microsoft.com/bing/apis/grounding-legal). It is your responsibility to assess whether the use of Grounding with Bing Search in your agent meets your needs and requirements.
 
+Transactions with your Grounding with Bing resource are counted by the number of tool calls per run. You can see how many tool calls are made from the run step.
+
 ## Supported capabilities and known issues
 - Grounding with Bing Search tool is designed to retrieve real-time information from web, NOT specific web domains.
 - NOT Recommended to **summarize** an entire web page.

articles/ai-foundry/agents/how-to/tools/model-context-protocol-samples.md

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ Use this article to find code samples for connecting Azure AI Foundry Agent Serv
 Create a client object that contains the endpoint for connecting to your AI project and other resources.
 
 > [!NOTE]
-> You can find an asynchronous example on [GitHub](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/ai/Azure.AI.Agents.Persistent/samples/Sample26_PersistentAgents_MCP.md)
+> You can find an asynchronous example on [GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/ai/Azure.AI.Agents.Persistent/samples)
 
 ```csharp
 var projectEndpoint = System.Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");

articles/ai-foundry/agents/overview.md

Lines changed: 15 additions & 1 deletion
@@ -8,7 +8,7 @@ ms.author: aahi
 ms.service: azure-ai-foundry
 ms.subservice: azure-ai-foundry-agent-service
 ms.topic: overview
-ms.date: 06/25/2025
+ms.date: 09/26/2025
 ms.custom: azure-ai-agents
 ---

@@ -117,6 +117,20 @@ Start with the [environment setup](environment-setup.md) and [quickstart](quicks
 1. After you create a project, you can deploy a compatible model such as GPT-4o.
 1. When you have a deployed model, you can also start making API calls to the service using the SDKs.
 
+## Business Continuity and Disaster Recovery (BCDR) for Agents
+
+To support service resilience, the Azure AI Foundry Agent service relies on customer-provisioned Cosmos DB accounts. This ensures that your agent state can be preserved and recovered in the event of a regional outage.
+
+### Use your own Cosmos DB account
+
+* As an Azure Standard customer, you provision and manage your own single-tenant Cosmos DB account. All agent state is stored in your Cosmos DB.
+* Backup and recovery rely on Cosmos DB's native capabilities, which you control.
+* If the primary region becomes unavailable, the agent will automatically become available in the secondary region by connecting to the same Cosmos DB account.
+* Since all history is preserved in Cosmos DB, the agent can continue operation with minimal disruption.
+
+### Current guidance
+
+We recommend customers provision and maintain their Cosmos DB account and ensure appropriate backup and recovery policies are configured. This ensures seamless continuity if the primary region becomes unavailable.
 
 ## Next steps

articles/ai-foundry/foundry-models/how-to/configure-entra-id.md

Lines changed: 6 additions & 3 deletions
@@ -1,18 +1,21 @@
 ---
 title: Configure key-less authentication with Microsoft Entra ID
 titleSuffix: Azure AI Foundry
-description: Learn how to configure key-less authorization to use Azure AI Foundry Models with Microsoft Entra ID.
+description: Learn how to configure key-less authorization to use Azure AI Foundry Models with Microsoft Entra ID and enhance security.
 ms.service: azure-ai-foundry
 ms.subservice: azure-ai-foundry-model-inference
 ms.topic: how-to
-ms.date: 08/29/2025
+ms.date: 09/26/2025
 ms.custom: ignite-2024, github-universe-2024
 author: msakande
 ms.author: mopeakande
 recommendations: false
 zone_pivot_groups: azure-ai-models-deployment
 ms.reviewer: fasantia
 reviewer: santiagxf
+ai-usage: ai-assisted
+
+#CustomerIntent: As a developer, I want to configure keyless authentication with Microsoft Entra ID for Azure AI Foundry Models so that I can secure my AI model deployments without relying on API keys and leverage role-based access control for better security and compliance.
 ---
 
 # Configure key-less authentication with Microsoft Entra ID
@@ -29,6 +32,6 @@ reviewer: santiagxf
 [!INCLUDE [bicep](../../foundry-models/includes/configure-entra-id/bicep.md)]
 ::: zone-end
 
-## Next steps
+## Next step
 
 * [Develop applications using Azure AI Foundry Models](../../model-inference/supported-languages.md)

articles/ai-foundry/foundry-models/includes/code-create-chat-client-entra.md

Lines changed: 10 additions & 10 deletions
@@ -3,20 +3,20 @@ manager: nitinme
 ms.service: azure-ai-foundry
 ms.subservice: azure-ai-foundry-model-inference
 ms.topic: include
-ms.date: 1/21/2025
+ms.date: 09/26/2025
 ms.author: fasantia
 author: santiagxf
 ---
 
 # [Python](#tab/python)
 
-Install the package `azure-ai-inference` using your package manager, like pip:
+Install the `azure-ai-inference` package, using a package manager like pip:
 
 ```bash
 pip install azure-ai-inference
 ```
 
-Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions with Entra ID:
+Then, use the package to consume the model. The following example shows how to create a client to consume chat completions with Microsoft Entra ID:
 
 ```python
 import os
@@ -32,13 +32,13 @@ client = ChatCompletionsClient(
 
 # [JavaScript](#tab/javascript)
 
-Install the package `@azure-rest/ai-inference` using npm:
+Install the `@azure-rest/ai-inference` package with npm:
 
 ```bash
 npm install @azure-rest/ai-inference
 ```
 
-Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions with Entra ID:
+Then, use the package to consume the model. The following example shows how to create a client to consume chat completions with Microsoft Entra ID:
 
 ```javascript
 import ModelClient from "@azure-rest/ai-inference";
@@ -76,7 +76,7 @@ using Azure.Identity;
 using Azure.AI.Inference;
 ```
 
-Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions with Entra ID:
+Then, use the package to consume the model. The following example shows how to create a client to consume chat completions with Microsoft Entra ID:
 
 ```csharp
 TokenCredential credential = new DefaultAzureCredential();
@@ -108,7 +108,7 @@ Add the package to your project:
 </dependency>
 ```
 
-Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions:
+Then, use the package to consume the model. The following example shows how to create a client to consume chat completions:
 
 ```java
 TokenCredential defaultCredential = new DefaultAzureCredentialBuilder().build();
@@ -118,11 +118,11 @@ ChatCompletionsClient client = new ChatCompletionsClientBuilder()
     .buildClient();
 ```
 
-Explore our [samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-inference/src/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/java/reference) to get yourself started.
+Explore our [samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-inference/src/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/java/reference) to get started.
 
 # [REST](#tab/rest)
 
-Use the reference section to explore the API design and which parameters are available and indicate authentication token in the header `Authorization`. For example, the reference section for [Chat completions](../../model-inference/reference/reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions. Notice that the path `/models` is included to the root of the URL:
+Use the reference section to explore the API design and see which parameters are available. Indicate the authentication token in the header `Authorization`. For example, the reference section for [Chat completions](../../model-inference/reference/reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions. The path `/models` is included in the root of the URL:
 
 __Request__
 
@@ -132,7 +132,7 @@ Authorization: Bearer <bearer-token>
 Content-Type: application/json
 ```
 
-Tokens have to be issued with scope `https://cognitiveservices.azure.com/.default`.
+Tokens must be issued with scope `https://cognitiveservices.azure.com/.default`.
 
 For testing purposes, the easiest way to get a valid token for your user account is to use the Azure CLI. In a console, run the following Azure CLI command:

articles/ai-foundry/foundry-models/includes/configure-entra-id/about-credentials.md

Lines changed: 3 additions & 3 deletions
@@ -4,15 +4,15 @@ author: santiagxf
 ms.author: fasantia
 ms.service: azure-ai-foundry
 ms.subservice: azure-ai-foundry-model-inference
-ms.date: 01/23/2025
+ms.date: 09/26/2025
 ms.topic: include
 ---
 
 ### Options for credential when using Microsoft Entra ID
 
-`DefaultAzureCredential` is an opinionated, ordered sequence of mechanisms for authenticating to Microsoft Entra ID. Each authentication mechanism is a class derived from the `TokenCredential` class and is known as a credential. At runtime, `DefaultAzureCredential` attempts to authenticate using the first credential. If that credential fails to acquire an access token, the next credential in the sequence is attempted, and so on, until an access token is successfully obtained. In this way, your app can use different credentials in different environments without writing environment-specific code.
+`DefaultAzureCredential` is an opinionated, ordered sequence of mechanisms for authenticating to Microsoft Entra ID. Each authentication mechanism is a class derived from the `TokenCredential` class and is known as a credential. At runtime, `DefaultAzureCredential` attempts to authenticate by using the first credential. If that credential fails to acquire an access token, the next credential in the sequence is attempted, and so on, until an access token is successfully obtained. In this way, your app can use different credentials in different environments without writing environment-specific code.
 
-When the preceding code runs on your local development workstation, it looks in the environment variables for an application service principal or at locally installed developer tools, such as Visual Studio, for a set of developer credentials. Either approach can be used to authenticate the app to Azure resources during local development.
+When the preceding code runs on your local development workstation, it looks in the environment variables for an application service principal or at locally installed developer tools, such as Visual Studio, for a set of developer credentials. You can use either approach to authenticate the app to Azure resources during local development.
 
 When deployed to Azure, this same code can also authenticate your app to other Azure resources. `DefaultAzureCredential` can retrieve environment settings and managed identity configurations to authenticate to other services automatically.
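The fallback-chain behavior described in this hunk can be illustrated with a small conceptual sketch. This is not the `azure-identity` implementation; the classes below are made up for illustration and only model the "try each credential in order until one succeeds" logic.

```python
# Conceptual sketch of the DefaultAzureCredential fallback chain described
# above. All class names here are hypothetical; only the ordering logic
# mirrors the documented behavior.

class CredentialUnavailableError(Exception):
    """Raised when a mechanism can't produce a token in this environment."""


class EnvironmentCredential:
    def get_token(self, scope):
        # Simulates a workstation with no service principal configured.
        raise CredentialUnavailableError("no service principal in environment")


class DeveloperToolCredential:
    def get_token(self, scope):
        # Simulates developer tools (for example, Visual Studio) signed in.
        return f"token-for-{scope}-from-developer-tools"


class ChainedCredential:
    def __init__(self, *credentials):
        self.credentials = credentials

    def get_token(self, scope):
        for credential in self.credentials:
            try:
                return credential.get_token(scope)
            except CredentialUnavailableError:
                continue  # fall through to the next mechanism in the chain
        raise CredentialUnavailableError("no credential succeeded")


chain = ChainedCredential(EnvironmentCredential(), DeveloperToolCredential())
print(chain.get_token("https://cognitiveservices.azure.com/.default"))
```

Here the first mechanism fails to acquire a token, so the chain silently falls through to the next one, which is exactly why the same application code works across local and deployed environments.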

articles/ai-foundry/foundry-models/includes/configure-entra-id/bicep.md

Lines changed: 17 additions & 17 deletions
@@ -4,7 +4,7 @@ author: santiagxf
 ms.author: fasantia
 ms.service: azure-ai-foundry
 ms.subservice: azure-ai-foundry-model-inference
-ms.date: 12/15/2024
+ms.date: 09/26/2025
 ms.topic: include
 zone_pivot_groups: azure-ai-models-deployment
 ---
@@ -19,7 +19,7 @@ zone_pivot_groups: azure-ai-models-deployment
 
 ## About this tutorial
 
-The example in this article is based on code samples contained in the [Azure-Samples/azureai-model-inference-bicep](https://github.com/Azure-Samples/azureai-model-inference-bicep) repository. To run the commands locally without having to copy or paste file content, use the following commands to clone the repository and go to the folder for your coding language:
+The example in this article is based on code samples in the [Azure-Samples/azureai-model-inference-bicep](https://github.com/Azure-Samples/azureai-model-inference-bicep) repository. To run the commands locally without copying or pasting file content, use the following commands to clone the repository and go to the folder for your coding language:
 
 ```azurecli
 git clone https://github.com/Azure-Samples/azureai-model-inference-bicep
@@ -33,24 +33,24 @@ cd azureai-model-inference-bicep/infra
 
 ## Understand the resources
 
-The tutorial helps you create:
+In this tutorial, you create the following resources:
 
-> [!div class="checklist"]
-> * An Azure AI Foundry (formerly known Azure AI Services) resource with key access disabled. For simplicity, this template doesn't deploy models.
-> * A role-assignment for a given security principal with the role **Cognitive Services User**.
+* An Azure AI Foundry resource (formerly known as Azure AI Services resource) with key access disabled. For simplicity, this template doesn't deploy models.
+* A role-assignment for a given security principal with the role **Cognitive Services User**.
 
-You are using the following assets to create those resources:
+To create these resources, use the following assets:
 
-1. Use the template `modules/ai-services-template.bicep` to describe your Azure AI Foundry (formerly known Azure AI Services) resource:
+1. Use the template `modules/ai-services-template.bicep` to describe your Azure AI Foundry resource:
 
 __modules/ai-services-template.bicep__
 
 :::code language="bicep" source="~/azureai-model-inference-bicep/infra/modules/ai-services-template.bicep":::
 
 > [!TIP]
-> Notice that this template can take the parameter `allowKeys` which, when `false` will disable the use of keys in the resource. This configuration is optional.
+> This template accepts the `allowKeys` parameter. Set it to `false` to disable key access in the resource. This configuration is optional.
 
-2. Use the template `modules/role-assignment-template.bicep` to describe a role assignment in Azure:
+1. Use the template `modules/role-assignment-template.bicep` to describe a role assignment in Azure:
 
 __modules/role-assignment-template.bicep__

@@ -66,36 +66,36 @@ In your console, follow these steps:
 
 :::code language="bicep" source="~/azureai-model-inference-bicep/infra/deploy-entra-id.bicep":::
 
-2. Log into Azure:
+1. Sign in to Azure:
 
 ```azurecli
 az login
 ```
 
-3. Ensure you are in the right subscription:
+1. Make sure you're in the right subscription:
 
 ```azurecli
 az account set --subscription "<subscription-id>"
 ```
 
-4. Run the deployment:
+1. Run the deployment:
 
 ```azurecli
 RESOURCE_GROUP="<resource-group-name>"
 SECURITY_PRINCIPAL_ID="<your-security-principal-id>"
 
 az deployment group create \
     --resource-group $RESOURCE_GROUP \
-    --securityPrincipalId $SECURITY_PRINCIPAL_ID
+    --parameters securityPrincipalId=$SECURITY_PRINCIPAL_ID \
     --template-file deploy-entra-id.bicep
 ```
 
-7. The template outputs the Azure AI Foundry Models endpoint that you can use to consume any of the model deployments you have created.
+1. The template outputs the Azure AI Foundry Models endpoint that you can use to consume any of the model deployments you created.
 
 ## Use Microsoft Entra ID in your code
 
-Once you configured Microsoft Entra ID in your resource, you need to update your code to use it when consuming the inference endpoint. The following example shows how to use a chat completions model:
+After you configure Microsoft Entra ID in your resource, update your code to use it when consuming the inference endpoint. The following example shows how to use a chat completions model:
 
 [!INCLUDE [code](../code-create-chat-client-entra.md)]
 
@@ -107,7 +107,7 @@ Once you configured Microsoft Entra ID in your resource, you need to update your
 
 ## Disable key-based authentication in the resource
 
-Disabling key-based authentication is advisable when you implemented Microsoft Entra ID and fully addressed compatibility or fallback concerns in all the applications that consume the service. You can achieve it by changing the property `disableLocalAuth`:
+Disable key-based authentication when you implement Microsoft Entra ID and fully address compatibility or fallback concerns in all the applications that consume the service. Change the `disableLocalAuth` property to disable key-based authentication:
 
 __modules/ai-services-template.bicep__
