
Commit 9e5ba29

Merge branch 'MicrosoftDocs:main' into heidist-rag
2 parents d057a03 + 27ff02a commit 9e5ba29


126 files changed: +2419 / -2590 lines


.github/policies/disallow-edits.yml

Lines changed: 6 additions & 48 deletions
@@ -19,18 +19,14 @@ configuration:
  @${issueAuthor} - You tried to add an index file to this repository; this is not permitted so your pull request will be closed automatically.
  - closePullRequest

- - description: Close PRs to the "personalizer" and "responsible-ai" folders where the author isn't a member of the MicrosoftDocs org (i.e. PRs in public repo).
+ - description: Close PRs to the "responsible-ai" folders where the author isn't a member of the MicrosoftDocs org (i.e. PRs in public repo).
  if:
  - payloadType: Pull_Request
  - isAction:
  action: Opened
- - or:
- - filesMatchPattern:
- matchAny: true
- pattern: articles/ai-foundry/responsible-ai/*
- - filesMatchPattern:
- matchAny: true
- pattern: articles/ai-services/personalizer/*
+ - filesMatchPattern:
+ matchAny: true
+ pattern: articles/ai-foundry/responsible-ai/*
  - not:
  activitySenderHasAssociation:
  association: Member
@@ -40,50 +36,15 @@ configuration:
  @${issueAuthor} - Pull requests that modify files in this folder aren't accepted from public contributors.
  - closePullRequest

- - description: \@mention specific people when a PR is opened in the "personalizer" or "responsible-ai" folder.
- if:
- - payloadType: Pull_Request
- - isAction:
- action: Opened
- - or:
- - filesMatchPattern:
- matchAny: true
- pattern: articles/ai-foundry/responsible-ai/*
- - filesMatchPattern:
- matchAny: true
- pattern: articles/ai-services/personalizer/*
- - activitySenderHasAssociation:
- association: Member
- - not:
- or:
- - isActivitySender:
- user: eric-urban
- - isActivitySender:
- user: nitinme
- - isActivitySender:
- user: mrbullwinkle
- then:
- - addReply:
- reply: >-
- @${issueAuthor} - Please don't sign off on this PR. The area owners will sign off once they've reviewed your contribution.
- - mentionUsers:
- mentionees:
- - eric-urban
- - nitinme
- - mrbullwinkle
- replyTemplate: ${mentionees} - Please review this PR and sign off when you're ready to merge it.
- assignMentionees: True # This part probably won't work since the bot doesn't have write perms.
- - addLabel:
- label: needs-human-review

- - description: \@mention specific people when a PR is opened in the "ai-services/responsible-ai" folder.
+ - description: \@mention specific people when a PR is opened in the "ai-foundry/responsible-ai" folder.
  if:
  - payloadType: Pull_Request
  - isAction:
  action: Opened
  - filesMatchPattern:
  matchAny: true
- pattern: articles/ai-services/responsible-ai/*
+ pattern: articles/ai-foundry/responsible-ai/*
  - activitySenderHasAssociation:
  association: Member
  - not:
@@ -100,8 +61,6 @@ configuration:
  user: laujan
  - isActivitySender:
  user: patrickfarley
- - isActivitySender:
- user: jboback
  - isActivitySender:
  user: heidisteen
  - isActivitySender:
@@ -118,7 +77,6 @@ configuration:
  - aahill
  - laujan
  - patrickfarley
- - jboback
  - heidisteen
  - haileytap
  replyTemplate: ${mentionees} - Please review this PR and sign off when you're ready to merge it.

.openpublishing.publish.config.json

Lines changed: 12 additions & 0 deletions
@@ -188,6 +188,12 @@
  "branch": "main",
  "branch_mapping": {}
  },
+ {
+ "path_to_root": "azure-search-dotnet-samples",
+ "url": "https://github.com/Azure-Samples/azure-search-dotnet-samples",
+ "branch": "main",
+ "branch_mapping": {}
+ },
  {
  "path_to_root": "azureai-model-inference-bicep",
  "url": "https://github.com/Azure-Samples/azureai-model-inference-bicep",
@@ -199,6 +205,12 @@
  "url": "https://github.com/MicrosoftDocs/azure-docs-pr",
  "branch": "main",
  "branch_mapping": {}
+ },
+ {
+ "path_to_root": "azure-policy-autogen-docs",
+ "url": "https://github.com/MicrosoftDocs/azure-policy-autogen-docs",
+ "branch": "main",
+ "branch_mapping": {}
  }
  ],
  "branch_target_mapping": {},

.openpublishing.redirection.json

Lines changed: 6 additions & 1 deletion
@@ -460,6 +460,11 @@
  "redirect_url": "../foundry-models/overview",
  "redirect_document_id": false
  },
+ {
+ "source_path": "articles/ai-foundry/concepts/concept-model-distillation.md",
+ "redirect_url": "../openai/how-to/stored-completions",
+ "redirect_document_id": false
+ },
  {
  "source_path": "articles/ai-foundry/model-inference/quotas-limits.md",
  "redirect_url": "../foundry-models/quotas-limits",
@@ -486,4 +491,4 @@
  "redirect_document_id": false
  }
  ]
- }
+ }

articles/ai-foundry/.openpublishing.redirection.ai-studio.json

Lines changed: 21 additions & 1 deletion
@@ -450,9 +450,19 @@
  "redirect_url": "/azure/ai-foundry/how-to/troubleshoot-secure-connection-project",
  "redirect_document_id": true
  },
+ {
+ "source_path_from_root": "/articles/ai-foundry/foundry-models/supported-languages-openai.md",
+ "redirect_url": "/azure/ai-services/openai/supported-languages",
+ "redirect_document_id": false
+ },
  {
  "source_path_from_root": "/articles/ai-studio/how-to/use-blocklists.md",
- "redirect_url": "/azure/ai-foundry/how-to/use-blocklists",
+ "redirect_url": "/azure/ai-foundry/foundry-models/how-to/use-blocklists",
+ "redirect_document_id": false
+ },
+ {
+ "source_path_from_root": "/articles/ai-foundry/how-to/use-blocklists.md",
+ "redirect_url": "/azure/ai-foundry/foundry-models/how-to/use-blocklists",
  "redirect_document_id": true
  },
  {
@@ -772,6 +782,16 @@
  "redirect_url": "/azure/ai-foundry/context/context.json",
  "redirect_document_id": false
  },
+ {
+ "source_path_from_root": "/articles/ai-foundry/foundry-models/index.yml",
+ "redirect_url": "/azure/ai-foundry/",
+ "redirect_document_id": false
+ },
+ {
+ "source_path_from_root": "/articles/ai-foundry/foundry-models/context/context.yml",
+ "redirect_url": "/azure/ai-foundry/context/context",
+ "redirect_document_id": false
+ },
  {
  "source_path_from_root": "/articles/ai-studio/responsible-use-of-ai-overview.md",
  "redirect_url": "/azure/ai-foundry/responsible-use-of-ai-overview",

articles/ai-foundry/agents/how-to/tools/deep-research-samples.md

Lines changed: 3 additions & 0 deletions
@@ -17,6 +17,9 @@ Use this article to learn how to use the Deep Research tool with the Azure AI Pr

  ## Prerequisites

+ > [!NOTE]
+ > * The `o3-deep-research` model and the GPT model deployments should be part of your AI Foundry project, so that all three resources are in the same Azure subscription and region. Supported regions are **West US** and **Norway East**.
+
  * The requirements in the [Deep Research overview](./deep-research.md).
  * The Deep Research tool requires the latest prerelease versions of the `azure-ai-projects` library. You can install it with the following command:

articles/ai-foundry/agents/how-to/tools/deep-research.md

Lines changed: 20 additions & 12 deletions
@@ -35,24 +35,29 @@ The deep research tool is a **code-only release** and available for use using th
  ## Knowledge source support
  The deep research tool is tightly integrated with Grounding with Bing Search and only supports web-based research.

- ## How Deep Research works
-
- At its core, the Deep Research tool orchestrates a multi-step research pipeline uses the Azure OpenAI's `o3-deep-research` model together with Grounding with Bing Search to autonomously search for and read information from multiple online sources appropriate to the user prompt. This enables the Deep Research tool to generate thorough, documented, and cited reports on complex topics.
+ ## Region support
+ The Deep Research tool is supported in the following regions where the deep research model is available for deployment.

- ### Deep research model deployment
+ | West US | Norway East |
+ |---------|---------|
+ | ✔️ | ✔️ |

- The Deep Research tool uses the Azure OpenAI `o3-deep-research` model for its research tasks. The model was fine-tuned on the Azure OpenAI `o3` reasoning model.
+ ## Deep research model
+ The Deep Research tool uses the Azure OpenAI `o3-deep-research` model for its research tasks. The model was fine-tuned on the Azure OpenAI `o3` reasoning model. Key features:

- **Key features**:
  - Handles data as part of its research tasks.
  - 200-K context length, 100-K completion tokens, and May 31, 2024 knowledge cutoff.
  - Outputs its thinking as a reasoning summary as it analyzes information.
  - Delivers a synthesized report at the end of the research task.

- **Deployment information**:
- - Deployment type: Global Standard
- - Available regions: West US, Norway East
- - Quotas and limits: Enterprise: `30K RPS / 30M TPM`, Default: `3K RPS / 3M TPM`
+ ### Deployment information
+ - **Deployment type**: Global Standard
+ - **Available regions**: West US, Norway East
+ - **Quotas and limits**: Enterprise: `30K RPS / 30M TPM`, Default: `3K RPS / 3M TPM`
+
+ ## How Deep Research works
+
+ At its core, the Deep Research tool orchestrates a multi-step research pipeline that uses the Azure OpenAI `o3-deep-research` model and any GPT model together with Grounding with Bing Search to autonomously search for and read information from multiple online sources appropriate to the user prompt. This enables the Deep Research tool to generate thorough, documented, and cited reports on complex topics.

  ### GPT model deployment for clarifying intent

@@ -74,16 +79,19 @@ The output is a structured report that documents not only the comprehensive answ

  ## Prerequisites
  - If you already have access to the Azure OpenAI `o3` model, no request is required to access the `o3-deep-research` model. Otherwise, fill out the [request form](https://aka.ms/OAI/deepresearchaccess).
- - An Azure subscription with the ability to create resources [Set up your environment](../../environment-setup.md)
+ - An Azure subscription with the ability to create AI Foundry project, Grounding with Bing Search, deep research model, and GPT model resources in the **West US** and **Norway East** regions. See [Set up your environment](../../environment-setup.md).
  - [Grounding with Bing Search tool](./bing-grounding.md) resource for connecting to your Azure AI Foundry project.
  - [Model deployments](../../../model-inference/how-to/create-model-deployments.md) for the following models
  - `o3-deep-research` version `2025-06-26`. This model is available in `West US` and `Norway East`.
- - Any Azure OpenAI GPT model like `gpt-4o` for intent clarification.
+ - Any Azure OpenAI GPT model like `gpt-4o` for intent clarification. Deploy it in the same regions.

  ## Setup

  To use the Deep Research tool, you need to create the Azure AI Foundry type project, add your Grounding with Bing Search resource as a new connection, deploy the `o3-deep-research` model, and deploy the selected Azure OpenAI GPT model.

+ > [!NOTE]
+ > * The `o3-deep-research` model and the GPT model deployments should be part of your AI Foundry project, so that all three resources are in the same Azure subscription and region. Supported regions are **West US** and **Norway East**.
+
  :::image type="content" source="../../media/tools/deep-research/setup-deep-research-tool.png" alt-text="A diagram of the steps to set up the deep research tool." lightbox="../../media/tools/deep-research/setup-deep-research-tool.png":::

  1. Navigate to the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) and create a new project.
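
The setup described in this article uses the Azure AI Foundry portal. As a rough, hedged sketch only (not part of this commit), the same two model deployments could also be created with the Azure CLI; the resource name, resource group, capacity values, and the GPT model version below are placeholders and may differ in your environment:

```bash
# Hedged sketch: deploy o3-deep-research (version 2025-06-26) as a Global Standard deployment.
# <foundry-resource> and <resource-group> are placeholders; capacity values are examples only.
az cognitiveservices account deployment create \
  --name <foundry-resource> \
  --resource-group <resource-group> \
  --deployment-name o3-deep-research \
  --model-name o3-deep-research \
  --model-version 2025-06-26 \
  --model-format OpenAI \
  --sku-name GlobalStandard \
  --sku-capacity 250

# Deploy a GPT model (for example, gpt-4o) in the same resource for intent clarification.
az cognitiveservices account deployment create \
  --name <foundry-resource> \
  --resource-group <resource-group> \
  --deployment-name gpt-4o \
  --model-name gpt-4o \
  --model-version <model-version> \
  --model-format OpenAI \
  --sku-name GlobalStandard \
  --sku-capacity 100
```

Either way, both deployments need to land in the same Azure subscription and region (West US or Norway East), as the note added in this commit requires.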

articles/ai-foundry/agents/how-to/tools/model-context-protocol-samples.md

Lines changed: 67 additions & 3 deletions
@@ -17,7 +17,7 @@ ms.custom: azure-ai-agents-code

  Use this article to find step-by-step instructions and code samples for connecting Foundry Agent service with MCP.

- Follow the [REST API Quickstart](../../quickstart.md?pivots=rest-api#api-call-information) to set the right values for the environment variables `AGENT_TOKEN`, `AZURE_AI_FOUNDRY_PROJECT_ENDPOINT` and `API_VERSION`.
+ Follow the [REST API Quickstart](../../quickstart.md?pivots=rest-api#api-call-information) to set the right values for the environment variables `AGENT_TOKEN`, `AZURE_AI_FOUNDRY_PROJECT_ENDPOINT`, and `API_VERSION`.


  ## Create an Agent with the MCP tool enabled
@@ -35,7 +35,7 @@ curl --request POST \
  "type": "mcp",
  "server_label": "<unique name for your MCP server>",
  "server_url": "<your MCP server url>",
- "require_approval": "never",
+ "allowed_tools": ["<tool_name>"], # optional
  }
  ],
  "name": "my-assistant",
@@ -68,7 +68,13 @@ curl --request POST \

  ## Create a run and check the output

- Create a run to pass headers for the tool and observe that the model uses the Grounding with Bing Search tool to provide a response to the user's question.
+ Create a run to pass headers for the tool and observe that the model uses the Grounding with Bing Search tool to provide a response to the user's question.
+ The `require_approval` parameter is optional. If it isn't provided, `always` is the default value, meaning the developer needs to approve each tool call. Supported values:
+
+ - `always` by default
+ - `never` meaning no approval is required
+ - `{"never": [<tool_name_1>, <tool_name_2>]}` you can also provide a list of tools that don't require approval
+ - `{"always": [<tool_name_1>, <tool_name_2>]}` you can provide a list of tools that require approval

  ```bash
  curl --request POST \
@@ -81,9 +87,11 @@ curl --request POST \
  "mcp": [
  {
  "server_label": "<the same unique name you provided during agent creation>",
+ "require_approval": "always", # always by default
  "headers": {
  "Authorization": "Bearer <token>",
  }
+
  }
  ]
  },
@@ -97,6 +105,62 @@ curl --request GET \
  -H "Authorization: Bearer $AGENT_TOKEN"
  ```

+ If the model is trying to invoke a tool in your MCP server that requires approval, you get a run with `requires_action` status.
+ ```bash
+ {
+ "id": "run_123",
+ "object": "thread.run",
+ ...
+ "status": "requires_action",
+ ...
+ "required_action": {
+ "type": "submit_tool_approval",
+ "submit_tool_approval": {
+ "tool_calls": [
+ {
+ "id": "call_123",
+ "type": "mcp",
+ "arguments": "{...}",
+ "name": "<tool_name>",
+ "server_label": "<server_label_you_provided>"
+ }
+ ]
+ }
+ },
+ ...
+ "tools": [
+ {
+ "type": "mcp",
+ "server_label": "<server_label_you_provided>",
+ "server_url": "<server_url_you_provided>",
+ "allowed_tools": null
+ }
+ ],
+ ...
+ }
+ ```
+ Make sure you carefully review the tool and argument(s) to be passed, and make an informed decision about approval.
+
+ ## Submit your approval
+ If you decide to approve, set the `approve` parameter to `true` with the `id` of the preceding tool call.
+ ```bash
+ curl --request POST \
+ --url $AZURE_AI_FOUNDRY_PROJECT_ENDPOINT/threads/thread_abc123/runs/run_abc123/submit_tool_outputs?api-version=$API_VERSION \
+ -H "Authorization: Bearer $AGENT_TOKEN" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "tool_approvals": [
+ {
+ "tool_call_id": "call_abc123",
+ "approve": true,
+ "headers": {
+ }
+ }
+ ]
+ }'
+ ```
+

  ## Retrieve the agent response

  ```bash
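
For reference, the `require_approval` values introduced in this file also accept the per-tool list form. The following is a minimal sketch built from the run-creation call shown above; the thread ID, assistant ID, and `fetch_data` tool name are placeholders, and the `tool_resources` wrapper is assumed to match the shape in the samples:

```bash
# Sketch: create a run where only the listed MCP tools skip the approval step.
# thread_abc123, asst_abc123, and fetch_data are placeholders.
curl --request POST \
  --url $AZURE_AI_FOUNDRY_PROJECT_ENDPOINT/threads/thread_abc123/runs?api-version=$API_VERSION \
  -H "Authorization: Bearer $AGENT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "assistant_id": "asst_abc123",
    "tool_resources": {
      "mcp": [
        {
          "server_label": "<the same unique name you provided during agent creation>",
          "require_approval": {"never": ["fetch_data"]},
          "headers": {
            "Authorization": "Bearer <token>"
          }
        }
      ]
    }
  }'
```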

articles/ai-foundry/agents/how-to/tools/model-context-protocol.md

Lines changed: 11 additions & 2 deletions
@@ -27,7 +27,7 @@ You can bring multiple remote MCP servers to Foundry Agent service by adding the

  > [!Note]
  > * You need to bring a remote MCP server (an existing MCP server endpoint)
- > * With current MCP tool in Foundry Agent, explicit approval is not supported (only `never` is accepted for `require_approval` parameter). Please review carefully what MCP server(s) you added to Foundry Agent service. We recommend reviewing all data being shared with remote MCP servers and optionally logging it for auditing purposes.
+ > * With the current MCP tool in Foundry Agent, approval is required by default. Please review carefully what MCP server(s) you added to Foundry Agent service. We recommend reviewing all data being shared with remote MCP servers and optionally logging it for auditing purposes.
  > * Supported regions: `westus`, `westus2`, `uaenorth`, `southindia` and `switzerlandnorth`
  > * The MCP tool supports custom headers for a specific run, allowing you to pass headers as needed by MCP server, such as authentication schema. Headers you pass in will only be available for the current run and will not be persisted.

@@ -43,11 +43,20 @@ You can bring multiple remote MCP servers to Foundry Agent service by adding the
  1. Find the remote MCP server you want to connect to, such as GitHub MCP Server. Create or update a Foundry Agent with a `mcp` tool with the following information:
  1. `server_url`: the url of the MCP server, for example, `https://api.githubcopilot.com/mcp/`
  2. `server_label`: a unique identifier of this MCP server to the agent, for example, `github`
- 3. `require_approval`: only `never` is supported right now
+ 3. `allowed_tools`: optional, a list of tools you want to allow without approval

  1. Create a run and pass additional information about the `mcp` tool in `tool_resources` with headers
  1. `tool_label`: use the identifier you provided during create/update agent
  2. `headers`: pass a set of headers required by the MCP server
+ 3. `require_approval`: optional; if not provided, `always` is the default value, meaning the developer needs to approve each tool call. Supported values:
+ 1. `always` by default
+ 2. `never` meaning no approval is required
+ 3. `{"never": [<tool_name_1>, <tool_name_2>]}` you can also provide a list of tools that don't require approval
+ 4. `{"always": [<tool_name_1>, <tool_name_2>]}` you can provide a list of tools that require approval
+
+ 1. If the model tries to invoke a tool in your MCP server that requires approval, you get a run with `requires_action` status. Within the `required_action` field, you can see which tool in the MCP server is being called, the argument(s) to be passed, and the `call_id`. Make sure you review the tool and argument(s) and make an informed decision about approval.
+
+ 1. Submit your approval to the agent with the `call_id` by setting `approve` to `true`.

  ## Next steps

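
To make step 1 of the updated how-to concrete, here's a hedged sketch of an agent-creation request carrying the `mcp` tool definition described above; the `/assistants` route, the `gpt-4o` deployment name, and the `search_issues` allow-list entry are assumptions drawn from the samples article, not part of this commit:

```bash
# Sketch: create an agent with an MCP tool and an optional allow-list of tools.
# The route, model deployment name, and tool name are assumptions; server_label and
# server_url reuse the GitHub MCP Server example from the how-to.
curl --request POST \
  --url $AZURE_AI_FOUNDRY_PROJECT_ENDPOINT/assistants?api-version=$API_VERSION \
  -H "Authorization: Bearer $AGENT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "name": "my-assistant",
    "instructions": "You are a helpful assistant that uses the GitHub MCP server.",
    "tools": [
      {
        "type": "mcp",
        "server_label": "github",
        "server_url": "https://api.githubcopilot.com/mcp/",
        "allowed_tools": ["search_issues"]
      }
    ]
  }'
```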