
Commit 5f4e67b

Merge pull request #8763 from MicrosoftDocs/main

Auto Publish – main to live - 2025-11-21 18:12 UTC

2 parents 8728ab3 + e9725b5

28 files changed: +97 −90 lines

.openpublishing.redirection.json

Lines changed: 2 additions & 2 deletions

```diff
@@ -302,12 +302,12 @@
     },
     {
       "source_path": "articles/ai-foundry/openai/concepts/models.md",
-      "redirect_url": "/azure/ai-foundry/foundry-models/concepts/models-sold-directly-by-azure?pivots=azure-openai",
+      "redirect_url": "/azure/ai-foundry/foundry-models/concepts/models-sold-directly-by-azure",
      "redirect_document_id": false
     },
     {
       "source_path": "articles/ai-foundry/foundry-models/concepts/models.md",
-      "redirect_url": "/azure/ai-foundry/foundry-models/concepts/models-sold-directly-by-azure?pivots=azure-direct-others",
+      "redirect_url": "/azure/ai-foundry/foundry-models/concepts/models-sold-directly-by-azure",
      "redirect_document_id": false
     },
     {
```

articles/ai-foundry/agents/quickstart.md

Lines changed: 4 additions & 1 deletion

```diff
@@ -8,7 +8,7 @@ manager: nitinme
 ms.service: azure-ai-foundry
 ms.subservice: azure-ai-foundry-agent-service
 ms.topic: quickstart
-ms.date: 09/25/2025
+ms.date: 11/21/2025
 ms.custom:
 - azure-ai-agents
 - build-2025
@@ -17,6 +17,9 @@ zone_pivot_groups: agents-quickstart
 
 # Quickstart: Create a new agent
 
+> [!NOTE]
+> This quickstart is for the previous version of agents. See the [**quickstart for Microsoft Foundry**](../quickstarts/get-started-code.md?view=foundry&preserve-view=true) to use the new version of the API.
+
 Foundry Agent Service allows you to create AI agents tailored to your needs through custom instructions and augmented by advanced tools like code interpreter, and custom functions.
 
 ::: zone pivot="ai-foundry-portal"
```

articles/ai-foundry/concepts/evaluation-evaluators/general-purpose-evaluators.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -315,7 +315,7 @@ if __name__ == "__main__":
     main()
 ```
 
-For more details, see the [complete working sample.](https://github.com/Azure/azure-sdk-for-python/blob/evaluation_samples_graders/sdk/ai/azure-ai-projects/samples/evaluation/agentic_evaluators/sample_coherence.py)
+For more details, see the [complete working sample](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-projects/samples/evaluations/agentic_evaluators/sample_coherence.py).
 
 ::: moniker-end
 
````

articles/ai-foundry/default/agents/how-to/tools/knowledge-retrieval.md

Lines changed: 24 additions & 5 deletions

````diff
@@ -141,6 +141,25 @@ Content-Type: application/json
 
 ---
 
+## Optimize agent instructions for knowledge base retrieval
+
+To maximize the accuracy of knowledge base invocations and ensure proper citation formatting, use optimized agent instructions. Based on our experiments, we recommend the following instruction template as a starting point:
+
+```plaintext
+You are a helpful assistant that must use the knowledge base to answer all the questions from user. You must never answer from your own knowledge under any circumstances.
+Every answer must always provide annotations for using the MCP knowledge base tool and render them as: `【message_idx:search_idx†source_name】`
+If you cannot find the answer in the provided knowledge base you must respond with "I don't know".
+"""
+```
+
+This instruction template optimizes for:
+
++ **Higher MCP tool invocation rates**: Explicit directives ensure the agent consistently calls the knowledge base tool rather than relying on its training data.
++ **Proper annotation formatting**: The specified citation format ensures the agent includes provenance information in responses, making it clear which knowledge sources were used.
+
+> [!TIP]
+> While this template provides a strong foundation, we recommend evaluating and iterating on the instructions based on your specific use case and the tasks you're trying to accomplish. Test different variations to find what works best for your scenarios.
+
 ## Create an agent with the MCP tool
 
 The next step is to create an agent that integrates the knowledge base as an MCP tool. The agent uses a system prompt to instruct when and how to call the knowledge base, follows instructions on how to answer questions, and automatically maintains its tool configuration and settings across conversation sessions.
@@ -168,11 +187,11 @@ agent_model = "{deployed_LLM}" # e.g. gpt-4.1-mini
 # Create project client
 project_client = AIProjectClient(endpoint = project_endpoint, credential = credential)
 
-# Define agent instructions
+# Define agent instructions (see "Optimize agent instructions" section for guidance)
 instructions = """
-A Q&A agent that can answer questions based on the attached knowledge base.
-Always provide references to the ID of the data source used to answer the question.
-If you don't have the answer, respond with "I don't know".
+You are a helpful assistant that must use the knowledge base to answer all the questions from user. You must never answer from your own knowledge under any circumstances.
+Every answer must always provide annotations for using the MCP knowledge base tool and render them as: `【message_idx:search_idx†source_name】`
+If you cannot find the answer in the provided knowledge base you must respond with "I don't know".
 """
 
 # Create MCP tool with knowledge base connection
@@ -215,7 +234,7 @@ Content-Type: application/json
 {
   "definition": {
     "model": "{deployed_llm}",
-    "instructions": "\nA Q&A agent that can answer questions based on the attached knowledge base.\nAlways provide references to the ID of the data source used to answer the question.\nIf you don't have the answer, respond with \"I don't know\".\n",
+    "instructions": "\nYou are a helpful assistant that must use the knowledge base to answer all the questions from user. You must never answer from your own knowledge under any circumstances.\nEvery answer must always provide annotations for using the MCP knowledge base tool and render them as: `【message_idx:search_idx†source_name】`\nIf you cannot find the answer in the provided knowledge base you must respond with \"I don't know\".\n",
     "tools": [
       {
        "server_label": "knowledge-base",
```

articles/ai-foundry/default/includes/agent-v2-switch.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -11,4 +11,4 @@ ms.custom: include
 ---
 
 > [!TIP]
-> Code uses **Agents v2 API (preview)** and is incompatible with Agents v1. [Switch to (classic)](?view=foundry-classic&preserve-view=true) for the Agents v1 API version.
+> Code uses **Agents v2 API (preview)** and is incompatible with Agents v1. [Switch to Foundry (classic) documentation](?view=foundry-classic&preserve-view=true) for the Agents v1 API version.
```

articles/ai-foundry/includes/agent-v1-switch.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -11,4 +11,4 @@ ms.custom: include
 ---
 
 > [!TIP]
-> Code uses **Agents v1 API** and is incompatible with Agents v2 (preview). [Switch to (new)](?view=foundry&preserve-view=true) for the Agents v2 API (preview) version.
+> Code uses **Agents v1 API** and is incompatible with Agents v2 (preview). [Switch to Foundry (new) documentation](?view=foundry&preserve-view=true) for the Agents v2 API (preview) version.
```

articles/ai-foundry/openai/how-to/responses.md

Lines changed: 5 additions & 0 deletions

```diff
@@ -1057,6 +1057,11 @@ response = client.responses.create(
 
 ### Authentication
 
+> [!IMPORTANT]
+> - The MCP client within the Responses API requires TLS 1.2 or greater.
+> - mutual TLS (mTLS) is currently not supported.
+> - [Azure service tags](/azure/virtual-network/service-tags-overview) are currently not supported for MCP client traffic.
+
 Unlike the GitHub MCP server, most remote MCP servers require authentication. The MCP tool in the Responses API supports custom headers, allowing you to securely connect to these servers using the authentication scheme they require.
 
 You can specify headers such as API keys, OAuth access tokens, or other credentials directly in your request. The most commonly used header is the `Authorization` header.
```
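
As a usage illustration (not part of this commit), here's a minimal sketch of passing an `Authorization` header to a remote MCP server through the Responses API. The client construction, deployment name, server label, and server URL are all assumptions; substitute your own values:

```python
import os
from openai import OpenAI

# Sketch only: the article configures `client` earlier against the
# Azure OpenAI v1 endpoint; this construction is an assumption.
client = OpenAI(
    base_url=os.environ["AZURE_OPENAI_ENDPOINT"] + "/openai/v1/",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

response = client.responses.create(
    model="gpt-4.1",  # your model deployment name (assumed)
    tools=[
        {
            "type": "mcp",
            "server_label": "my_mcp_server",          # assumed label
            "server_url": "https://example.com/mcp",  # assumed URL; must support TLS 1.2+
            # Custom headers carry whatever credential the server expects;
            # `Authorization` is the most common.
            "headers": {
                "Authorization": f"Bearer {os.environ['MCP_SERVER_TOKEN']}"
            },
        }
    ],
    input="Answer using the remote MCP server's tools.",
)

print(response.output_text)
```

Header values are sent to the MCP server on each request, so treat them like any other secret and source them from environment variables or a key vault rather than hard-coding them.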

articles/search/agentic-retrieval-overview.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -131,7 +131,7 @@ Currently, Azure portal support for agentic retrieval is limited to the 2025-08-
 + [Quickstart-Agentic-Retrieval: Python](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Agentic-Retrieval)
 + [Quickstart-Agentic-Retrieval: .NET](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/main/quickstart-agentic-retrieval)
 + [Quickstart-Agentic-Retrieval: REST](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/Quickstart-agentic-retrieval)
-+ [End-to-end with Azure AI Search and Azure AI Agent Service](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/agentic-retrieval-pipeline-example)
++ [End-to-end with Azure AI Search and Foundry Agent Service](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/agentic-retrieval-pipeline-example)
 
 ### [**REST API references**](#tab/rest-api-references)
 
```

articles/search/cognitive-search-concept-image-scenarios.md

Lines changed: 9 additions & 9 deletions

````diff
@@ -6,7 +6,7 @@ author: HeidiSteen
 ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: how-to
-ms.date: 05/01/2025
+ms.date: 11/21/2025
 ms.update-cycle: 180-days
 ms.custom:
 - devx-track-csharp
@@ -15,16 +15,16 @@ ms.custom:
 
 # Extract text and information from images by using AI enrichment
 
-Images often contain useful information that's relevant in search scenarios. You can [vectorize images](search-get-started-portal-image-search.md) to represent visual content in your search index. Or, you can use [AI enrichment and skillsets](cognitive-search-concept-intro.md) to create and extract searchable *text* from images, including:
+Images often contain useful information that's relevant in search scenarios. Azure AI Search doesn't query image content in real time, but you can extract information about an image during indexing and make that content searchable. To represent images in a search index, you can use these approaches:
 
-+ [GenAI Prompt](cognitive-search-skill-genai-prompt.md) to pass a prompt to a chat completion skill, requesting a description of image content.
-+ [OCR](cognitive-search-skill-ocr.md) for optical character recognition of text and digits
-+ [Image Analysis](cognitive-search-skill-image-analysis.md) that describes images through visual features
-+ [Custom skills](#passing-images-to-custom-skills) to invoke any external image processing that you want to provide
++ [Vectorize images](search-get-started-portal-image-search.md) to represent visual content as a searchable vector.
++ [Verbalize images](cognitive-search-skill-genai-prompt.md) using the GenAI Prompt skill that sends a verbalization request to a chat completion model to describe the image.
++ [Analyze images](cognitive-search-skill-image-analysis.md) using an image analysis skill to generate a text representation of an image, such as *dandelion* for a photo of a dandelion, or the color *yellow*. You can also extract metadata about the image, such as its size.
++ [Use OCR](cognitive-search-skill-ocr.md) to extract text and from photos or pictures, such as the word *STOP* in a stop sign.
 
-By using OCR, you can extract text and from photos or pictures, such as the word *STOP* in a stop sign. Through image analysis, you can generate a text representation of an image, such as *dandelion* for a photo of a dandelion, or the color *yellow*. You can also extract metadata about the image, such as its size.
+You can also create a [custom skill](#passing-images-to-custom-skills) to invoke any external image processing that you want to provide.
 
-This article covers the fundamentals of working with images in skillsets, and also describes several common scenarios, such as working with embedded images, custom skills, and overlaying visualizations on original images.
+This article focuses on image analysis and OCR, custom skills that provide external processing, working with embedded images, and overlaying visualizations on original images. If verbalization or vectorization is your preferred approach, see [Multimodal search](multimodal-search-overview.md) instead.
 
 To work with image content in a skillset, you need:
 
@@ -102,7 +102,7 @@ Metadata adjustments are captured in a complex type created for each image. You
 
 The default of 2,000 pixels for the normalized images maximum width and height is based on the maximum sizes supported by the [OCR skill](cognitive-search-skill-ocr.md) and the [image analysis skill](cognitive-search-skill-image-analysis.md). The [OCR skill](cognitive-search-skill-ocr.md) supports a maximum width and height of 4,200 for non-English languages, and 10,000 for English. If you increase the maximum limits, processing could fail on larger images depending on your skillset definition and the language of the documents.
 
-1. Optionally, [set file type criteria](search-blob-storage-integration.md#PartsOfBlobToIndex) if the workload targets a specific file type. Blob indexer configuration includes file inclusion and exclusion settings. You can filter out files you don't want.
+1. Optionally, [set file type criteria](search-blob-storage-integration.md#PartsOfBlobToIndex) if the workload targets a specific file type. Blob indexer configuration includes file inclusion and exclusion settings. You can filter out files you don't want.
 
 ```json
 {
```
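
As an aside (not part of this commit), the normalized-image settings and file-type criteria that the article describes can also be set from the azure-search-documents Python SDK rather than raw JSON. This is a minimal sketch under assumed service, data source, index, and skillset names:

```python
from azure.identity import DefaultAzureCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    IndexingParameters,
    IndexingParametersConfiguration,
    SearchIndexer,
)

# Assumed endpoint; replace with your search service.
client = SearchIndexerClient(
    endpoint="https://<your-service>.search.windows.net",
    credential=DefaultAzureCredential(),
)

configuration = IndexingParametersConfiguration(
    image_action="generateNormalizedImages",  # crack documents and emit normalized images
    normalized_image_max_width=2000,   # defaults shown; raising them can fail on large
    normalized_image_max_height=2000,  # images depending on skill and document language
    indexed_file_name_extensions=".pdf,.png,.jpg",  # optional file inclusion criteria
    excluded_file_name_extensions=".csv",           # optional file exclusion criteria
    query_timeout=None,  # clear the SQL-only default when indexing blobs
)

indexer = SearchIndexer(
    name="image-enrichment-indexer",        # assumed name
    data_source_name="my-blob-datasource",  # assumed existing data source
    target_index_name="my-index",           # assumed existing index
    skillset_name="my-image-skillset",      # assumed existing skillset
    parameters=IndexingParameters(configuration=configuration),
)

client.create_or_update_indexer(indexer)
```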

articles/search/includes/how-tos/knowledge-source-delete-csharp.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -81,7 +81,7 @@ To delete a knowledge source:
 }
 ```
 
-1. Either delete the knowledge base or [update the knowledge base](/dotnet/api/azure.search.documents.indexes.searchindexclient.createorupdateknowledgebaseasync?view=azure-dotnet-preview) to remove the knowledge source if you have multiple sources. This example shows deletion.
+1. Either delete the knowledge base or [update the knowledge base](/dotnet/api/azure.search.documents.indexes.searchindexclient.createorupdateknowledgebaseasync?view=azure-dotnet-preview&preserve-view=true) to remove the knowledge source if you have multiple sources. This example shows deletion.
 
 ```csharp
 using Azure.Search.Documents.Indexes;
````
