
Commit 543d2d6

Merge pull request #6334 from MicrosoftDocs/main
Auto Publish – main to live - 2025-07-31 22:13 UTC
2 parents (5239fb9 + 3fe9473), commit 543d2d6

131 files changed: +336 −292 lines changed


articles/ai-foundry/.openpublishing.redirection.ai-studio.json

Lines changed: 4 additions & 4 deletions

@@ -467,7 +467,7 @@
     },
     {
       "source_path_from_root": "/articles/ai-foundry/foundry-models/supported-languages-openai.md",
-      "redirect_url": "/azure/ai-services/openai/supported-languages",
+      "redirect_url": "/azure/ai-foundry/openai/supported-languages",
       "redirect_document_id": false
     },
     {
@@ -945,8 +945,8 @@
     },
     {
       "source_path_from_root": "/articles/ai-studio/quickstarts/assistants.md",
-      "redirect_url": "/azure/ai-services/openai/assistants-quickstart",
-      "redirect_document_id": true
+      "redirect_url": "/azure/ai-foundry/openai/how-to/assistant",
+      "redirect_document_id": false
     },
     {
       "source_path_from_root": "/articles/ai-studio/how-to/prompt-flow-tools/vector-db-lookup-tool.md",
@@ -1110,7 +1110,7 @@
     },
     {
       "source_path_from_root": "/articles/ai-studio/quickstarts/multimodal-vision.md",
-      "redirect_url": "/azure/ai-services/openai/gpt-v-quickstart",
+      "redirect_url": "/azure/ai-foundry/openai/gpt-v-quickstart",
       "redirect_document_id": false
     },
     {
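The bulk of this commit swaps redirect targets from the old `/azure/ai-services/openai/` prefix to `/azure/ai-foundry/openai/`. A migration like this is easy to sketch as a small script; the prefixes and the entry shape below mirror this redirection file, but the script itself is an illustration, not the tooling the docs team actually uses.

```python
OLD_PREFIX = "/azure/ai-services/openai/"
NEW_PREFIX = "/azure/ai-foundry/openai/"

def migrate_redirects(entries):
    """Rewrite redirect_url values that still point at the old prefix.

    Returns the number of entries changed.
    """
    changed = 0
    for entry in entries:
        url = entry.get("redirect_url", "")
        if url.startswith(OLD_PREFIX):
            entry["redirect_url"] = NEW_PREFIX + url[len(OLD_PREFIX):]
            changed += 1
    return changed

# Entry shaped like the ones in this file:
entries = [
    {
        "source_path_from_root": "/articles/ai-foundry/foundry-models/supported-languages-openai.md",
        "redirect_url": "/azure/ai-services/openai/supported-languages",
        "redirect_document_id": False,
    }
]
print(migrate_redirects(entries))   # 1
print(entries[0]["redirect_url"])   # /azure/ai-foundry/openai/supported-languages
```

Note that a plain prefix rewrite would not produce the second hunk above, where the target page itself changed (`assistants-quickstart` → `how-to/assistant`) and `redirect_document_id` flipped to `false`; those edits need per-entry attention.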

articles/ai-foundry/agents/how-to/tools/bing-grounding.md

Lines changed: 1 addition & 1 deletion

@@ -21,7 +21,7 @@ You can ask questions such as "*what is the top news today*" or "*what is the re
 Developers and end users don't have access to raw content returned from Grounding with Bing Search. The model response, however, includes citations with links to the websites used to generate the response, and a link to the Bing query used for the search. You can retrieve the **model response** by accessing the data in the thread that was created. These two *references* must be retained and displayed in the exact form provided by Microsoft, as per Grounding with Bing Search's [Use and Display Requirements](https://www.microsoft.com/en-us/bing/apis/grounding-legal#use-and-display-requirements). See the [how to display Grounding with Bing Search results](#how-to-display-grounding-with-bing-search-results) section for details.
 
 >[!IMPORTANT]
-> 1. Your usage of Grounding with Bing Search can incur costs. See the [pricing page](https://www.microsoft.com/bing/apis/grounding-pricing) for details.
+> 1. Your usage of Grounding with Bing Search can incur costs. See the [pricing page](https://www.microsoft.com/en-us/bing/apis/grounding-pricing) for details.
 > 1. By creating and using a Grounding with Bing Search resource through code-first experience, such as Azure CLI, or deploying through deployment template, you agree to be bound by and comply with the terms available at https://www.microsoft.com/en-us/bing/apis/grounding-legal, which may be updated from time to time.
 > 1. When you use Grounding with Bing Search, your customer data is transferred outside of the Azure compliance boundary to the Grounding with Bing Search service. Grounding with Bing Search is not subject to the same data processing terms (including location of processing) and does not have the same compliance standards and certifications as the Azure AI Foundry Agent Service, as described in the [Grounding with Bing Search Terms of Use](https://www.microsoft.com/en-us/bing/apis/grounding-legal). It is your responsibility to assess whether use of Grounding with Bing Search in your agent meets your needs and requirements.
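The display requirement in the changed article boils down to: render the model text, then the website citations and the Bing query link exactly as returned, with no rewriting. The payload shape below is a simplified stand-in (the real thread message format comes from the Agent Service SDK), so treat this as a sketch of the rendering step only.

```python
def render_with_citations(message: dict) -> str:
    """Render model text plus the required Bing references, unmodified.

    `message` is a hypothetical, simplified stand-in for a thread message:
    {"text": ..., "citations": [{"title": ..., "url": ...}], "bing_query_url": ...}
    """
    lines = [message["text"], "", "Sources:"]
    for citation in message["citations"]:
        # Citations must be displayed as provided -- no shortening or rewriting.
        lines.append(f"- [{citation['title']}]({citation['url']})")
    lines.append(f"Bing query: {message['bing_query_url']}")
    return "\n".join(lines)

msg = {
    "text": "Here is today's top news...",
    "citations": [{"title": "Example News", "url": "https://example.com/article"}],
    "bing_query_url": "https://www.bing.com/search?q=top+news+today",
}
print(render_with_citations(msg))
```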

articles/ai-foundry/concepts/ai-resources.md

Lines changed: 1 addition & 1 deletion

@@ -90,6 +90,6 @@ If not provided by you, the following dependent resources are automatically crea
 ## Next steps
 
 - [Create a [!INCLUDE [hub-project-name](../includes/hub-project-name.md)]](../how-to/create-projects.md?pivots=hub-project)
-- [Quickstart: Analyze images and video in the chat playground](/azure/ai-services/openai/gpt-v-quickstart)
+- [Quickstart: Analyze images and video in the chat playground](/azure/ai-foundry/openai/gpt-v-quickstart)
 - [Learn more about Azure AI Foundry](../what-is-azure-ai-foundry.md)
 - [Learn more about projects](../how-to/create-projects.md?pivots=hub-project)

articles/ai-foundry/concepts/foundry-models-overview.md

Lines changed: 1 addition & 1 deletion

@@ -236,7 +236,7 @@ To set the public network access flag for the Azure AI Foundry hub:
 
 * If you have an Azure AI Foundry hub with MaaS deployments created before July 11, 2024, and you enable a private endpoint on this hub, the existing serverless API deployments won't follow the hub's networking configuration. For serverless API deployments in the hub to follow the hub's networking configuration, you need to create the deployments again.
 
-* Currently, [Azure OpenAI On Your Data](/azure/ai-services/openai/concepts/use-your-data) support isn't available for serverless API deployments in private hubs, because private hubs have the public network access flag disabled.
+* Currently, [Azure OpenAI On Your Data](/azure/ai-foundry/openai/concepts/use-your-data) support isn't available for serverless API deployments in private hubs, because private hubs have the public network access flag disabled.
 
 * Any network configuration change (for example, enabling or disabling the public network access flag) might take up to five minutes to propagate.

articles/ai-foundry/concepts/model-benchmarks.md

Lines changed: 1 addition & 1 deletion

@@ -107,7 +107,7 @@ Performance metrics are calculated as an aggregate over 14 days, based on 24 tra
 
 | Parameter | Value | Applicable For |
 |-----------|-------|----------------|
-| Region | East US/East US2 | [serverless API deployments](../how-to/model-catalog-overview.md#serverless-api-deployment-pay-per-token-offer-billing) and [Azure OpenAI](/azure/ai-services/openai/overview) |
+| Region | East US/East US2 | [serverless API deployments](../how-to/model-catalog-overview.md#serverless-api-deployment-pay-per-token-offer-billing) and [Azure OpenAI](/azure/ai-foundry/openai/overview) |
 | Tokens per minute (TPM) rate limit | 30k (180 RPM based on Azure OpenAI) for non-reasoning and 100k for reasoning models <br> N/A (serverless API deployments) | For Azure OpenAI models, selection is available for users with rate limit ranges based on deployment type (serverless API, global, global standard, and so on.) <br> For serverless API deployments, this setting is abstracted. |
 | Number of requests | Two requests in a trail for every hour (24 trails per day) | serverless API deployments, Azure OpenAI |
 | Number of trails/runs | 14 days with 24 trails per day for 336 runs | serverless API deployments, Azure OpenAI |
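The run counts in that table are internally consistent and easy to verify: 24 trails per day over 14 days gives the quoted 336 runs, and at two requests per trail, 672 requests feed each 14-day aggregate. A quick sanity check:

```python
DAYS = 14
TRAILS_PER_DAY = 24        # one trail every hour
REQUESTS_PER_TRAIL = 2

runs = DAYS * TRAILS_PER_DAY
requests = runs * REQUESTS_PER_TRAIL

print(runs)      # 336, matching the "Number of trails/runs" row
print(requests)  # 672 total requests behind each aggregate
```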

articles/ai-foundry/concepts/model-catalog-content-safety.md

Lines changed: 3 additions & 3 deletions

@@ -2,10 +2,10 @@
 title: Guardrails & controls for Models Sold Directly by Azure
 titleSuffix: Azure AI Foundry
 description: Learn about content safety for models deployed using serverless API deployments, using Azure AI Foundry.
-manager: scottpolly
+manager: nitinme
 ms.service: azure-ai-foundry
 ms.topic: concept-article
-ms.date: 05/19/2025
+ms.date: 07/31/2025
 ms.author: mopeakande
 author: msakande
 ms.reviewer: ositanachi
@@ -22,7 +22,7 @@ In this article, learn about content safety capabilities for models from the mod
 
 ## Content filter defaults
 
-Azure AI uses a default configuration of [Azure AI Content Safety](/azure/ai-services/content-safety/overview) content filters to detect harmful content across four categories including hate and fairness, self-harm, sexual, and violence for models deployed via serverless API deployments. To learn more about content filtering, see [Understand harm categories](#understand-harm-categories).
+Azure AI uses a default configuration of [Azure AI Content Safety](/azure/ai-services/content-safety/overview) content filters to detect harmful content across four categories including hate and fairness, self-harm, sexual, and violence for models deployed via [serverless API deployments](deployments-overview.md#serverless-api-endpoint). To learn more about content filtering, see [Understand harm categories](#understand-harm-categories).
 
 The default content filtering configuration for text models is set to filter at the medium severity threshold, filtering any detected content at this level or higher. For image models, the default content filtering configuration is set at the low configuration threshold, filtering at this level or higher. For models deployed using the [Azure AI Foundry Models](../../ai-foundry/model-inference/how-to/configure-content-filters.md), you can create configurable filters by selecting the **Content filters** tab within the **Guardrails & controls** page of the Azure AI Foundry portal.

articles/ai-foundry/concepts/rbac-azure-ai-foundry.md

Lines changed: 3 additions & 3 deletions

@@ -590,7 +590,7 @@ When using Microsoft Entra ID authenticated connections in the chat playground,
 
 ## Scenario: Use an existing Azure OpenAI resource
 
-When you create a connection to an existing Azure OpenAI resource, you must also assign roles to your users so they can access the resource. You should assign either the **Cognitive Services OpenAI User** or **Cognitive Services OpenAI Contributor** role, depending on the tasks they need to perform. For information on these roles and the tasks they enable, see [Azure OpenAI roles](/azure/ai-services/openai/how-to/role-based-access-control#azure-openai-roles).
+When you create a connection to an existing Azure OpenAI resource, you must also assign roles to your users so they can access the resource. You should assign either the **Cognitive Services OpenAI User** or **Cognitive Services OpenAI Contributor** role, depending on the tasks they need to perform. For information on these roles and the tasks they enable, see [Azure OpenAI roles](/azure/ai-foundry/openai/how-to/role-based-access-control#azure-openai-roles).
 
 ## Scenario: Use Azure Container Registry
 
@@ -615,7 +615,7 @@ Azure Application Insights is an optional dependency for Azure AI Foundry hub. T
 
 ## Scenario: Provisioned throughput unit procurer
 
-The following example defines a custom role that can procure [provisioned throughput units (PTU)](/azure/ai-services/openai/concepts/provisioned-throughput).
+The following example defines a custom role that can procure [provisioned throughput units (PTU)](/azure/ai-foundry/openai/concepts/provisioned-throughput).
 
 ```json
 {
@@ -659,7 +659,7 @@ The following example defines a custom role that can procure [provisioned throug
 
 ## Scenario: Azure OpenAI Assistants API
 
-The following example defines a role for a developer using [Azure OpenAI Assistants](/azure/ai-services/openai/how-to/assistant).
+The following example defines a role for a developer using [Azure OpenAI Assistants](/azure/ai-foundry/openai/how-to/assistant).
 
 ```json
 {

articles/ai-foundry/concepts/vulnerability-management.md

Lines changed: 1 addition & 0 deletions

@@ -6,6 +6,7 @@ manager: scottpolly
 ms.service: azure-ai-foundry
 ms.custom:
   - build-2024
+  - hub-only
 ms.topic: concept-article
 ms.date: 04/29/2025
 ms.reviewer: deeikele

articles/ai-foundry/faq.yml

Lines changed: 1 addition & 1 deletion

@@ -14,7 +14,7 @@ metadata:
   author: sdgilley
   title: Azure AI Foundry frequently asked questions
 summary: |
-  FAQ for [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs). If you can't find answers to your questions in this document, and still need help check the [Azure AI services support options guide](../ai-services/cognitive-services-support-options.md?context=/azure/ai-services/openai/context/context). Azure OpenAI is part of Azure AI services.
+  FAQ for [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs). If you can't find answers to your questions in this document, and still need help check the [Azure AI services support options guide](../ai-services/cognitive-services-support-options.md?context=/azure/ai-foundry/openai/context/context). Azure OpenAI is part of Azure AI services.
 sections:
   - name: General questions
     questions:

articles/ai-foundry/foundry-models/concepts/deployment-types.md

Lines changed: 2 additions & 2 deletions

@@ -78,7 +78,7 @@ Key use cases include:
 
 Data zone standard deployments are available in the same Azure AI Foundry resource as all other AI Foundry Models deployment types but allow you to leverage Azure global infrastructure to dynamically route traffic to the data center within the Microsoft defined data zone with the best availability for each request. Data zone standard provides higher default quotas than our Azure geography-based deployment types.
 
-Customers with high consistent volume may experience greater latency variability. The threshold is set per model. See the [Quotas and limits](/azure/ai-services/openai/quotas-limits#usage-tiers) page to learn more. For workloads that require low latency variance at large volume, we recommend leveraging the provisioned deployment offerings.
+Customers with high consistent volume may experience greater latency variability. The threshold is set per model. See the [Quotas and limits](/azure/ai-foundry/openai/quotas-limits#usage-tiers) page to learn more. For workloads that require low latency variance at large volume, we recommend leveraging the provisioned deployment offerings.
 
 ## Data zone provisioned
 
@@ -110,7 +110,7 @@ Standard deployments are optimized for low to medium volume workloads with high
 
 **SKU name in code:** `ProvisionedManaged`
 
-Provisioned deployments allow you to specify the amount of throughput you require in a deployment. The service then allocates the necessary model processing capacity and ensures it's ready for you. Throughput is defined in terms of provisioned throughput units (PTU) which is a normalized way of representing the throughput for your deployment. Each model-version pair requires different amounts of PTU to deploy and provide different amounts of throughput per PTU. Learn more from our [Provisioned throughput concepts article](/azure/ai-services/openai/concepts/provisioned-throughput).
+Provisioned deployments allow you to specify the amount of throughput you require in a deployment. The service then allocates the necessary model processing capacity and ensures it's ready for you. Throughput is defined in terms of provisioned throughput units (PTU) which is a normalized way of representing the throughput for your deployment. Each model-version pair requires different amounts of PTU to deploy and provide different amounts of throughput per PTU. Learn more from our [Provisioned throughput concepts article](/azure/ai-foundry/openai/concepts/provisioned-throughput).
 
 
 ## Control deployment options
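The PTU paragraph in this file says each model-version pair has its own throughput-per-PTU ratio, so a capacity estimate reduces to dividing required throughput by that ratio and rounding up to the deployment's minimum and increment. The ratios, floor, and step size below are made-up placeholders, not real Azure OpenAI figures; consult the provisioned-throughput article for actual values.

```python
import math

# Hypothetical throughput-per-PTU ratios (tokens/min per PTU) -- placeholders only.
TOKENS_PER_MIN_PER_PTU = {
    "model-a:2024-05": 2500,
    "model-b:2024-07": 900,
}

def estimate_ptus(model_version: str, required_tokens_per_min: int,
                  min_ptus: int = 15, increment: int = 5) -> int:
    """Estimate PTUs for a target throughput, honoring a floor and a step size."""
    per_ptu = TOKENS_PER_MIN_PER_PTU[model_version]
    raw = required_tokens_per_min / per_ptu
    stepped = math.ceil(raw / increment) * increment  # round up to the increment
    return max(min_ptus, stepped)

print(estimate_ptus("model-a:2024-05", 100_000))  # 40
print(estimate_ptus("model-b:2024-07", 10_000))   # 15 (minimum applies)
```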
