> This quickstart is for the previous version of agents. See the [**quickstart for Microsoft Foundry**](../quickstarts/get-started-code.md?view=foundry&preserve-view=true) to use the new version of the API.
Foundry Agent Service allows you to create AI agents tailored to your needs through custom instructions and augmented by advanced tools like the code interpreter and custom functions.
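In Python, a minimal agent-creation flow might look like the following sketch, assuming the `azure-ai-projects` (classic Agents v1 surface) and `azure-identity` packages; the project endpoint and model deployment name are placeholders to replace with your own values.

```python
# Minimal sketch: create and then delete a basic agent with the azure-ai-projects SDK.
# The endpoint and model deployment name below are placeholders.
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project = AIProjectClient(
    endpoint="https://<your-project>.services.ai.azure.com/api/projects/<project-name>",
    credential=DefaultAzureCredential(),
)

agent = project.agents.create_agent(
    model="gpt-4o-mini",                         # your model deployment name
    name="quickstart-agent",
    instructions="You are a helpful assistant.",
    # Tools such as the code interpreter or custom functions can be added
    # through the tools parameter.
)
print(f"Created agent: {agent.id}")

# Clean up when you're done experimenting.
project.agents.delete_agent(agent.id)
```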
**articles/ai-foundry/concepts/evaluation-evaluators/general-purpose-evaluators.md** (+1 -1)
@@ -315,7 +315,7 @@ if __name__ == "__main__":
    main()
```

-For more details, see the [complete working sample.](https://github.com/Azure/azure-sdk-for-python/blob/evaluation_samples_graders/sdk/ai/azure-ai-projects/samples/evaluation/agentic_evaluators/sample_coherence.py)
+For more details, see the [complete working sample](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-projects/samples/evaluations/agentic_evaluators/sample_coherence.py).
## Optimize agent instructions for knowledge base retrieval
To maximize the accuracy of knowledge base invocations and ensure proper citation formatting, use optimized agent instructions. Based on our experiments, we recommend the following instruction template as a starting point:
```plaintext
You are a helpful assistant that must use the knowledge base to answer all the questions from user. You must never answer from your own knowledge under any circumstances.
Every answer must always provide annotations for using the MCP knowledge base tool and render them as: `【message_idx:search_idx†source_name】`
If you cannot find the answer in the provided knowledge base you must respond with "I don't know".
"""
153
+
```
This instruction template optimizes for:
- **Higher MCP tool invocation rates**: Explicit directives ensure the agent consistently calls the knowledge base tool rather than relying on its training data.
- **Proper annotation formatting**: The specified citation format ensures the agent includes provenance information in responses, making it clear which knowledge sources were used.
> [!TIP]
> While this template provides a strong foundation, we recommend evaluating and iterating on the instructions based on your specific use case and the tasks you're trying to accomplish. Test different variations to find what works best for your scenarios.
## Create an agent with the MCP tool
The next step is to create an agent that integrates the knowledge base as an MCP tool. The agent uses a system prompt that instructs it when and how to call the knowledge base, follows instructions on how to answer questions, and automatically maintains its tool configuration and settings across conversation sessions.
@@ -168,11 +187,11 @@ agent_model = "{deployed_LLM}" # e.g. gpt-4.1-mini
# Define agent instructions (see "Optimize agent instructions" section for guidance)
instructions = """
-A Q&A agent that can answer questions based on the attached knowledge base.
-Always provide references to the ID of the data source used to answer the question.
-If you don't have the answer, respond with "I don't know".
+You are a helpful assistant that must use the knowledge base to answer all the questions from user. You must never answer from your own knowledge under any circumstances.
+Every answer must always provide annotations for using the MCP knowledge base tool and render them as: `【message_idx:search_idx†source_name】`
+If you cannot find the answer in the provided knowledge base you must respond with "I don't know".
"instructions": "\nA Q&A agent that can answer questions based on the attached knowledge base.\nAlways provide references to the ID of the data source used to answer the question.\nIf you don't have the answer, respond with \"I don't know\".\n",
237
+
"instructions": "\nYou are a helpful assistant that must use the knowledge base to answer all the questions from user. You must never answer from your own knowledge under any circumstances.\nEvery answer must always provide annotations for using the MCP knowledge base tool and render them as: `【message_idx:search_idx†source_name】`\nIf you cannot find the answer in the provided knowledge base you must respond with \"I don't know\".\n",
**articles/ai-foundry/default/includes/agent-v2-switch.md** (+1 -1)
@@ -11,4 +11,4 @@ ms.custom: include
---
> [!TIP]
-> Code uses **Agents v2 API (preview)** and is incompatible with Agents v1. [Switch to (classic)](?view=foundry-classic&preserve-view=true) for the Agents v1 API version.
+> Code uses **Agents v2 API (preview)** and is incompatible with Agents v1. [Switch to Foundry (classic) documentation](?view=foundry-classic&preserve-view=true) for the Agents v1 API version.
**articles/ai-foundry/includes/agent-v1-switch.md** (+1 -1)
@@ -11,4 +11,4 @@ ms.custom: include
---
> [!TIP]
-> Code uses **Agents v1 API** and is incompatible with Agents v2 (preview). [Switch to (new)](?view=foundry&preserve-view=true) for the Agents v2 API (preview) version.
+> Code uses **Agents v1 API** and is incompatible with Agents v2 (preview). [Switch to Foundry (new) documentation](?view=foundry&preserve-view=true) for the Agents v2 API (preview) version.
> - The MCP client within the Responses API requires TLS 1.2 or greater.
> - Mutual TLS (mTLS) is currently not supported.
> - [Azure service tags](/azure/virtual-network/service-tags-overview) are currently not supported for MCP client traffic.
Unlike the GitHub MCP server, most remote MCP servers require authentication. The MCP tool in the Responses API supports custom headers, allowing you to securely connect to these servers using the authentication scheme they require.
You can specify headers such as API keys, OAuth access tokens, or other credentials directly in your request. The most commonly used header is the `Authorization` header.
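For example, a request that authenticates to a remote MCP server with a bearer token might look like the following sketch, assuming the `openai` Python package against an Azure OpenAI v1-style endpoint; the resource name, model deployment, MCP server URL, and environment variable names are placeholders.

```python
# Sketch: call a remote MCP server that requires an API key, passing the key
# in a custom Authorization header on the MCP tool definition.
import os
from openai import OpenAI

# Azure OpenAI v1-style endpoint; replace <your-resource> with your resource name.
client = OpenAI(
    base_url="https://<your-resource>.openai.azure.com/openai/v1/",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

response = client.responses.create(
    model="gpt-4.1",  # your model deployment name
    input="What tools does this MCP server expose?",
    tools=[
        {
            "type": "mcp",
            "server_label": "partner_api",               # placeholder label
            "server_url": "https://example.com/mcp",     # placeholder MCP server
            "require_approval": "never",
            # Custom headers are forwarded on every call to the MCP server;
            # Authorization is the most common one.
            "headers": {
                "Authorization": f"Bearer {os.environ['MCP_SERVER_API_KEY']}"
            },
        }
    ],
)

print(response.output_text)
```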
-[End-to-end with Azure AI Search and Azure AI Agent Service](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/agentic-retrieval-pipeline-example)
+[End-to-end with Azure AI Search and Foundry Agent Service](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/agentic-retrieval-pipeline-example)
### [**REST API references**](#tab/rest-api-references)
**articles/search/cognitive-search-concept-image-scenarios.md** (+9 -9)
@@ -6,7 +6,7 @@ author: HeidiSteen
ms.author: heidist
ms.service: azure-ai-search
ms.topic: how-to
-ms.date: 05/01/2025
+ms.date: 11/21/2025
ms.update-cycle: 180-days
ms.custom:
- devx-track-csharp
@@ -15,16 +15,16 @@ ms.custom:
# Extract text and information from images by using AI enrichment

-Images often contain useful information that's relevant in search scenarios. You can [vectorize images](search-get-started-portal-image-search.md) to represent visual content in your search index. Or, you can use [AI enrichment and skillsets](cognitive-search-concept-intro.md) to create and extract searchable *text* from images, including:
+Images often contain useful information that's relevant in search scenarios. Azure AI Search doesn't query image content in real time, but you can extract information about an image during indexing and make that content searchable. To represent images in a search index, you can use these approaches:

-[GenAI Prompt](cognitive-search-skill-genai-prompt.md) to pass a prompt to a chat completion skill, requesting a description of image content.
-[OCR](cognitive-search-skill-ocr.md) for optical character recognition of text and digits.
-[Image Analysis](cognitive-search-skill-image-analysis.md) that describes images through visual features.
-[Custom skills](#passing-images-to-custom-skills) to invoke any external image processing that you want to provide.
+[Vectorize images](search-get-started-portal-image-search.md) to represent visual content as a searchable vector.
+[Verbalize images](cognitive-search-skill-genai-prompt.md) using the GenAI Prompt skill, which sends a verbalization request to a chat completion model to describe the image.
+[Analyze images](cognitive-search-skill-image-analysis.md) using an image analysis skill to generate a text representation of an image, such as *dandelion* for a photo of a dandelion, or the color *yellow*. You can also extract metadata about the image, such as its size.
+[Use OCR](cognitive-search-skill-ocr.md) to extract text from photos or pictures, such as the word *STOP* in a stop sign.

-By using OCR, you can extract text from photos or pictures, such as the word *STOP* in a stop sign. Through image analysis, you can generate a text representation of an image, such as *dandelion* for a photo of a dandelion, or the color *yellow*. You can also extract metadata about the image, such as its size.
+You can also create a [custom skill](#passing-images-to-custom-skills) to invoke any external image processing that you want to provide.

-This article covers the fundamentals of working with images in skillsets, and also describes several common scenarios, such as working with embedded images, custom skills, and overlaying visualizations on original images.
+This article focuses on image analysis and OCR, custom skills that provide external processing, working with embedded images, and overlaying visualizations on original images. If verbalization or vectorization is your preferred approach, see [Multimodal search](multimodal-search-overview.md) instead.
To work with image content in a skillset, you need:
@@ -102,7 +102,7 @@ Metadata adjustments are captured in a complex type created for each image. You
The default of 2,000 pixels for the normalized images maximum width and height is based on the maximum sizes supported by the [OCR skill](cognitive-search-skill-ocr.md) and the [image analysis skill](cognitive-search-skill-image-analysis.md). The [OCR skill](cognitive-search-skill-ocr.md) supports a maximum width and height of 4,200 for non-English languages, and 10,000 for English. If you increase the maximum limits, processing could fail on larger images depending on your skillset definition and the language of the documents.
1. Optionally, [set file type criteria](search-blob-storage-integration.md#PartsOfBlobToIndex) if the workload targets a specific file type. Blob indexer configuration includes file inclusion and exclusion settings. You can filter out files you don't want.
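As a sketch of how these settings come together, the following example configures a blob indexer through the REST API with normalized image generation and file type filters, assuming the `requests` package; the service URL, resource names, API version, and key handling are placeholders.

```python
# Sketch: configure a blob indexer for image extraction with the Azure AI Search
# REST API. Service, resource names, and key handling are placeholders; the
# configuration keys mirror the settings described above.
import os
import requests

service = "https://<your-search-service>.search.windows.net"
api_version = "2024-07-01"
headers = {
    "Content-Type": "application/json",
    "api-key": os.environ["SEARCH_ADMIN_KEY"],
}

indexer = {
    "name": "image-blob-indexer",
    "dataSourceName": "image-blob-datasource",     # existing blob data source
    "targetIndexName": "image-index",              # existing search index
    "skillsetName": "image-skillset",              # skillset with OCR or image analysis
    "parameters": {
        "configuration": {
            "dataToExtract": "contentAndMetadata",
            # Generate normalized images so image skills can process them.
            "imageAction": "generateNormalizedImages",
            "normalizedImageMaxWidth": 2000,
            "normalizedImageMaxHeight": 2000,
            # Optional file type criteria: include image-bearing formats,
            # exclude file types you don't want to index.
            "indexedFileNameExtensions": ".pdf,.png,.jpg,.tiff",
            "excludedFileNameExtensions": ".csv,.json",
        }
    },
}

resp = requests.put(
    f"{service}/indexers/{indexer['name']}",
    params={"api-version": api_version},
    headers=headers,
    json=indexer,
)
resp.raise_for_status()
print(f"Indexer '{indexer['name']}' created or updated.")
```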
**articles/search/includes/how-tos/knowledge-source-delete-csharp.md** (+1 -1)
@@ -81,7 +81,7 @@ To delete a knowledge source:
}
```

-1. Either delete the knowledge base or [update the knowledge base](/dotnet/api/azure.search.documents.indexes.searchindexclient.createorupdateknowledgebaseasync?view=azure-dotnet-preview) to remove the knowledge source if you have multiple sources. This example shows deletion.
+1. Either delete the knowledge base or [update the knowledge base](/dotnet/api/azure.search.documents.indexes.searchindexclient.createorupdateknowledgebaseasync?view=azure-dotnet-preview&preserve-view=true) to remove the knowledge source if you have multiple sources. This example shows deletion.