Commit ef9d48b

Merge pull request #5832 from MicrosoftDocs/main
7/2/2025 11:00 AM IST Publish
2 parents: aa4c8ea + c0dbc78

645 files changed: 1,705 additions and 393 deletions


articles/ai-foundry/agents/concepts/model-region-support.md

Lines changed: 3 additions & 3 deletions
@@ -1,4 +1,4 @@
----
+---
 title: Supported models in Azure AI Foundry Agent Service
 titleSuffix: Azure AI Foundry
 description: Learn about the models you can use with Azure AI Foundry Agent Service.
@@ -22,12 +22,12 @@ Azure OpenAI provides customers with choices on the hosting structure that fits
 - **Standard** is offered with a global deployment option, routing traffic globally to provide higher throughput.
 - **Provisioned** is also offered with a global deployment option, allowing customers to purchase and deploy provisioned throughput units across Azure global infrastructure.

-All deployments can perform the exact same inference operations, however the billing, scale, and performance are substantially different. To learn more about Azure OpenAI deployment types see [deployment types guide](../../../ai-services/openai/how-to/deployment-types.md).
+All deployments can perform the exact same inference operations, however the billing, scale, and performance are substantially different. To learn more about Azure OpenAI deployment types see [deployment types guide](../../openai/how-to/deployment-types.md).

 Azure AI Foundry Agent Service supports the following Azure OpenAI models in the listed regions.

 > [!NOTE]
-> * The following table is for serverless API deployment availability. For information on Provisioned Throughput Unit (PTU) availability, see [provisioned throughput](../../../ai-services/openai/concepts/provisioned-throughput.md) in the Azure OpenAI documentation. `GlobalStandard` customers also have access to [global standard models](../../../ai-services/openai/concepts/models.md#global-standard-model-availability).
+> * The following table is for serverless API deployment availability. For information on Provisioned Throughput Unit (PTU) availability, see [provisioned throughput](../../openai/concepts/provisioned-throughput.md) in the Azure OpenAI documentation. `GlobalStandard` customers also have access to [global standard models](../../openai/concepts/models.md#global-standard-model-availability).
 > * [Hub based projects](../../what-is-azure-ai-foundry.md#project-types) are limited to the following models: gpt-4o, gpt-4o-mini, gpt-4, gpt-35-turbo

 | REGION | o1 | o3-mini | gpt-4.1, 2025-04-14 | gpt-4.1-mini, 2025-04-14 | gpt-4.1-nano, 2025-04-14 | gpt-4o, 2024-05-13 | gpt-4o, 2024-08-06 | gpt-4o, 2024-11-20 | gpt-4o-mini, 2024-07-18 | gpt-4, 0613 | gpt-4, turbo-2024-04-09 | gpt-4-32k, 0613 | gpt-35-turbo, 1106 | gpt-35-turbo, 0125 |

articles/ai-foundry/agents/faq.yml

Lines changed: 2 additions & 1 deletion
@@ -25,7 +25,8 @@ sections:
 - question: |
     Where is this data stored?
   answer: |
-    Data is stored in a secure, Microsoft-managed storage account that is logically separated.
+    Basic Setup: Data is stored in a secure, Microsoft-managed storage account that is logically separated.
+    Standard Setup: Data is stored in your own Azure resources, giving you full ownership and control.
 - question: |
     How long is this data stored?
   answer: |

articles/ai-foundry/agents/how-to/tools/fabric.md

Lines changed: 1 addition & 1 deletion
@@ -462,4 +462,4 @@ curl --request GET \

 ## Next steps

-[See the full sample for Fabric data agent.](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-projects/samples/agents/sample_agents_fabric.py)
+[See the full sample for Fabric data agent.](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-agents/samples/agents_tools/sample_agents_fabric.py)

articles/ai-foundry/agents/how-to/tools/logic-apps.md

Lines changed: 2 additions & 2 deletions
@@ -1,4 +1,4 @@
----
+---
 title: 'How to use Logic Apps with Azure AI Foundry Agent Service'
 titleSuffix: Azure AI Foundry
 description: Learn how to integrate Logic Apps with Azure AI Agents to execute tasks like sending emails.
@@ -21,7 +21,7 @@ This article demonstrates how to integrate Logic Apps with Azure AI Agents to ex
 ## Prerequisites

 1. Create a Logic App within the same resource group as your Azure AI Project in the Azure portal.
-1. Configure your Logic App to send emails by including an HTTP request trigger that accepts JSON with `to`, `subject`, and `body`. See the [Logic App Workflow guide](../../../../ai-services/openai/how-to/assistants-logic-apps.md) for more information.
+1. Configure your Logic App to send emails by including an HTTP request trigger that accepts JSON with `to`, `subject`, and `body`. See the [Logic App Workflow guide](../../../openai/how-to/assistants-logic-apps.md) for more information.
 1. Set the following environment variables:
    - `PROJECT_ENDPOINT`: The Azure AI Agents endpoint.
    - `MODEL_DEPLOYMENT_NAME`: The deployment name of the AI model.
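As a quick sanity check for the prerequisites shown in this hunk, the sketch below exercises the two documented environment variables and the HTTP request trigger's JSON contract (`to`, `subject`, `body`) before wiring them into an agent. It is a minimal sketch, not part of the documented setup: it assumes the Python `requests` package is installed and uses a hypothetical `LOGIC_APP_TRIGGER_URL` variable for the trigger's callback URL.

```python
import os

import requests  # assumed to be installed; used only to exercise the trigger

# The two environment variables named in the prerequisites above must be set
# before running the agent samples.
for required in ("PROJECT_ENDPOINT", "MODEL_DEPLOYMENT_NAME"):
    if required not in os.environ:
        raise RuntimeError(f"{required} must be set before running the agent samples")

# Hypothetical: the HTTP request trigger's callback URL, copied from the Logic
# App designer. This is not one of the documented variables.
logic_app_trigger_url = os.environ["LOGIC_APP_TRIGGER_URL"]

# The trigger described above accepts JSON with `to`, `subject`, and `body`.
payload = {
    "to": "recipient@example.com",
    "subject": "Test email from an Azure AI agent workflow",
    "body": "If you received this, the Logic App trigger is wired up correctly.",
}

response = requests.post(logic_app_trigger_url, json=payload, timeout=30)
response.raise_for_status()
print(f"Logic App accepted the request with status {response.status_code}")
```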

articles/ai-foundry/agents/how-to/virtual-networks.md

Lines changed: 1 addition & 1 deletion
@@ -167,7 +167,7 @@ Virtual networks enable you to specify which endpoints can make API calls to you

 ### Network rules

-All accounts and their corresponding projects are protected by default with **deny-by-default network rules**, requiring explicit configuration to allow access through private endpoints.
+All accounts and their corresponding projects are protected by default with **Public network access Disabled flag**, requiring explicit configuration to allow access through private endpoints.

 These rules apply to **all protocols**, including REST and WebSocket. Even internal testing tools like Azure portal's test consoles require explicit permission to access your account and its child resources—ensuring complete security across all agent projects.

articles/ai-foundry/agents/overview.md

Lines changed: 2 additions & 2 deletions
@@ -1,4 +1,4 @@
----
+---
 title: What is Azure AI Foundry Agent Service?
 titleSuffix: Azure AI Foundry
 description: Learn how to create agents that apply advanced language models for workflow automation.
@@ -102,7 +102,7 @@ Azure AI Foundry Agent Service provides a production-ready foundation for deploy
 | **1. Visibility into conversations** | Full access to structured [threads](./concepts/threads-runs-messages.md#threads), including both user↔agent and agent↔agent messages. Ideal for UIs, debugging, and training |
 | **2. Multi-agent coordination** | Built-in support for agent-to-agent messaging. |
 | **3. Tool orchestration** | Server-side execution and retry of [tool calls](how-to\tools\overview.md) with structured logging. No manual orchestration required. |
-| **4. Trust and safety** | Integrated [content filters](../../ai-services/openai/how-to/content-filters.md) help prevent misuse and mitigate prompt injection risks (XPIA). all outputs are policy-governed. |
+| **4. Trust and safety** | Integrated [content filters](../openai/how-to/content-filters.md) help prevent misuse and mitigate prompt injection risks (XPIA). all outputs are policy-governed. |
 | **5. Enterprise integration** | Bring your own [storage](./how-to/use-your-own-resources.md#use-an-existing-azure-cosmos-db-for-nosql-account-for-thread-storage), [Azure AI Search index](./how-to/use-your-own-resources.md#use-an-existing-azure-ai-search-resource), and [virtual network](how-to\virtual-networks.md) to meet compliance needs. |
 | **6. Observability and debugging** | Threads, tool invocations, and message traces are [fully traceable](concepts\tracing.md); [Application Insights integration](./how-to/metrics.md) for telemetry |
 | **7. Identity and policy control** | Built on Microsoft Entra with full support for RBAC, audit logs, and enterprise conditional access. |

articles/ai-foundry/agents/quotas-limits.md

Lines changed: 2 additions & 2 deletions
@@ -1,4 +1,4 @@
----
+---
 title: Quotas and limits for Azure AI Foundry Agent Service
 titleSuffix: Azure AI Foundry
 description: Learn about the quotas and limits for when you use Azure AI Foundry Agent Service.
@@ -30,7 +30,7 @@ The 2,000,000 agent limit refers to the maximum number of distinct Agent resourc

 ## Quotas and limits for Azure OpenAI models

-See the [Azure OpenAI](../../ai-services/openai/quotas-limits.md) for current quotas and limits for the Azure OpenAI models that you can use with Azure AI Foundry Agent Service.
+See the [Azure OpenAI](../openai/quotas-limits.md) for current quotas and limits for the Azure OpenAI models that you can use with Azure AI Foundry Agent Service.

 ## Next steps

articles/ai-foundry/concepts/ai-red-teaming-agent.md

Lines changed: 3 additions & 3 deletions
@@ -1,4 +1,4 @@
----
+---
 title: AI Red Teaming Agent
 titleSuffix: Azure AI Foundry
 description: This article provides conceptual overview of the AI Red Teaming Agent.
@@ -47,7 +47,7 @@ We encourage teams to use the AI Red Teaming Agent to run automated scans throug
 - Development: Upgrading models within your application or creating fine-tuned models for your specific application.
 - Pre-deployment: Before deploying GenAI applications to productions.

-In production, we recommend implementing **safety mitigations** such as [Azure AI Content Safety filters](../../ai-services/content-safety/overview.md) or implementing safety system messages using our [templates](../../ai-services/openai/concepts/safety-system-message-templates.md).
+In production, we recommend implementing **safety mitigations** such as [Azure AI Content Safety filters](../../ai-services/content-safety/overview.md) or implementing safety system messages using our [templates](../openai/concepts/safety-system-message-templates.md).

 ## How AI Red Teaming works

@@ -109,6 +109,6 @@ Learn more about the tools leveraged by the AI Red Teaming Agent.

 The most effective strategies for risk assessment we’ve seen leverage automated tools to surface potential risks, which are then analyzed by expert human teams for deeper insights. If your organization is just starting with AI red teaming, we encourage you to explore the resources created by our own AI red team at Microsoft to help you get started.

-- [Planning red teaming for large language models (LLMs) and their applications](../../ai-services/openai/concepts/red-teaming.md)
+- [Planning red teaming for large language models (LLMs) and their applications](../openai/concepts/red-teaming.md)
 - [Three takeaways from red teaming 100 generative AI products](https://www.microsoft.com/security/blog/2025/01/13/3-takeaways-from-red-teaming-100-generative-ai-products/)
 - [Microsoft AI Red Team building future of safer AI](https://www.microsoft.com/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/)

articles/ai-foundry/concepts/content-filtering.md

Lines changed: 4 additions & 4 deletions
@@ -1,4 +1,4 @@
----
+---
 title: Azure AI Foundry content filtering
 titleSuffix: Azure AI Foundry
 description: Learn about the content filtering capabilities of Azure OpenAI in Azure AI Foundry portal.
@@ -20,7 +20,7 @@ author: PatrickFarley
 [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs) includes a content filtering system that works alongside core models and image generation models.

 > [!IMPORTANT]
-> The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure OpenAI in Azure AI Foundry Models. Learn more about the [Whisper model in Azure OpenAI](../../ai-services/openai/concepts/models.md).
+> The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure OpenAI in Azure AI Foundry Models. Learn more about the [Whisper model in Azure OpenAI](../openai/concepts/models.md).

 ## How it works

@@ -73,12 +73,12 @@ You can also enable the following special output filters:

 ### Configurability (preview)

-[!INCLUDE [content-filter-configurability](../../ai-services/openai/includes/content-filter-configurability.md)]
+[!INCLUDE [content-filter-configurability](../openai/includes/content-filter-configurability.md)]


 ## Related content

-- Learn more about the [underlying models that power Azure OpenAI](../../ai-services/openai/concepts/models.md).
+- Learn more about the [underlying models that power Azure OpenAI](../openai/concepts/models.md).
 - Azure AI Foundry content filtering is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md).
 - Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/azure/ai-foundry/responsible-ai/openai/overview).
 - Learn more about evaluating your generative AI models and AI systems via [Azure AI Evaluation](https://aka.ms/genaiopsevals).

articles/ai-foundry/concepts/fine-tuning-overview.md

Lines changed: 5 additions & 5 deletions
@@ -1,4 +1,4 @@
----
+---
 title: Fine-tune models with Azure AI Foundry
 titleSuffix: Azure AI Foundry
 description: This article explains what fine-tuning is and under what circumstances you should consider doing it.
@@ -39,7 +39,7 @@ Before picking a model, it's important to select the fine-tuning product that ma

 For most customers, serverless provides the best balance of ease-of-use, cost efficiency, and access to premium models. This document focuses on serverless options.

-To find steps to fine-tuning a model in AI Foundry, see [Fine-tune Models in AI Foundry](../how-to/fine-tune-serverless.md) or [Fine-tune models using managed compute](../how-to/fine-tune-managed-compute.md). For detailed guidance on OpenAI fine-tuning see [Fine-tune Azure OpenAI Models](../../ai-services/openai/how-to/fine-tuning.md).
+To find steps to fine-tuning a model in AI Foundry, see [Fine-tune Models in AI Foundry](../how-to/fine-tune-serverless.md) or [Fine-tune models using managed compute](../how-to/fine-tune-managed-compute.md). For detailed guidance on OpenAI fine-tuning see [Fine-tune Azure OpenAI Models](../openai/how-to/fine-tuning.md).

 ## Training Techniques

@@ -90,21 +90,21 @@ This table provides an overview of the models available
 3. **Choose your technique:** Begin with Supervised Fine-Tuning (SFT) unless you have specific requirements for reasoning models / RFT.
 4. **Iterate and evaluate:** Fine-tuning is an iterative process—start with a baseline, measure performance, and refine your approach based on results.

-To find steps to fine-tuning a model in AI Foundry, see [Fine-tune Models in AI Foundry](../how-to/fine-tune-serverless.md), [Fine-tune Azure OpenAI Models](../../ai-services/openai/how-to/fine-tuning.md), or [Fine-tune models using managed compute](../how-to/fine-tune-managed-compute.md).
+To find steps to fine-tuning a model in AI Foundry, see [Fine-tune Models in AI Foundry](../how-to/fine-tune-serverless.md), [Fine-tune Azure OpenAI Models](../openai/how-to/fine-tuning.md), or [Fine-tune models using managed compute](../how-to/fine-tune-managed-compute.md).

 ## Fine-Tuning Availability

 Now that you know when to use fine-tuning for your use case, you can go to Azure AI Foundry to find models available to fine-tune.

 **To fine-tune an AI Foundry model using Serverless** you must have a hub/project in the region where the model is available for fine tuning. See [Region availability for models in serverless API deployment](../how-to/deploy-models-serverless-availability.md) for detailed information on model and region availability, and [How to Create a Hub based project](../how-to/create-projects.md) to create your project.

-**To fine-tune an OpenAI model** you can use an Azure OpenAI Resource, a Foundry resource or default project, or a hub/project. GPT 4.1, 4.1-mini and 4.1-nano are available in all regions with Global Training. For regional availability, see [Regional Availability and Limits for Azure OpenAI Fine Tuning](../../ai-services/openai/concepts/models.md). See [Create a project for Azure AI Foundry](../how-to/create-projects.md) for instructions on creating a new project.
+**To fine-tune an OpenAI model** you can use an Azure OpenAI Resource, a Foundry resource or default project, or a hub/project. GPT 4.1, 4.1-mini and 4.1-nano are available in all regions with Global Training. For regional availability, see [Regional Availability and Limits for Azure OpenAI Fine Tuning](../openai/concepts/models.md). See [Create a project for Azure AI Foundry](../how-to/create-projects.md) for instructions on creating a new project.

 **To fine-tune a model using Managed Compute** you must have a hub/project and available VM quota for training and inferencing. See [Fine-tune models using managed compute (preview)](../how-to/fine-tune-managed-compute.md) for more details on how to use managed compute fine tuning, and [How to Create a Hub based project](../how-to/create-projects.md) to create your project.


 ## Related content

 - [Fine-tune models using managed compute (preview)](../how-to/fine-tune-managed-compute.md)
-- [Fine-tune an Azure OpenAI model in Azure AI Foundry portal](../../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context)
+- [Fine-tune an Azure OpenAI model in Azure AI Foundry portal](../openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context)
 - [Fine-tune models using serverless API deployment](../how-to/fine-tune-serverless.md)
