Commit 831d7a7

Merge pull request #5275 from MicrosoftDocs/main
5/29/2025 AM Publish
2 parents: cfe2517 + 561257d

36 files changed: +164, -137 lines

articles/ai-services/agents/quotas-limits.md

Lines changed: 2 additions & 2 deletions

@@ -7,7 +7,7 @@ author: aahill
 ms.author: aahi
 ms.service: azure-ai-agent-service
 ms.topic: conceptual
-ms.date: 04/25/2025
+ms.date: 05/29/2025
 ms.custom: azure-ai-agents
 ---

@@ -23,7 +23,7 @@ The following sections provide you with a guide to the default quotas and limits
 |--|--|
 | Max files per agent/thread | 10,000 |
 | Max file size for agents & fine-tuning | 512 MB |
-| Max size for all uploaded files for agents |100 GB |
+| Max size for all uploaded files for agents |200 GB |
 | agents token limit | 2,000,000 token limit |

 The 2,000,000 agent limit refers to the maximum number of distinct Agent resources that can be created within a single Azure subscription per region. It does not apply to threads or token usage.
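The quota rows in this diff can be read as a simple pre-upload check. Here's a minimal sketch; the constants mirror the updated table, but the function and its name are illustrative, not part of any Azure SDK:

```python
# Hypothetical pre-upload check against the documented agent file quotas.
MAX_FILE_BYTES = 512 * 1024**2    # 512 MB max file size for agents & fine-tuning
MAX_TOTAL_BYTES = 200 * 1024**3   # 200 GB max size for all uploaded files (raised from 100 GB)
MAX_FILES_PER_AGENT = 10_000      # max files per agent/thread

def can_upload(new_file_bytes: int, existing_sizes: list[int]) -> bool:
    """Return True if one more file would still fit within the documented quotas."""
    if new_file_bytes > MAX_FILE_BYTES:
        return False
    if len(existing_sizes) + 1 > MAX_FILES_PER_AGENT:
        return False
    return sum(existing_sizes) + new_file_bytes <= MAX_TOTAL_BYTES
```

A client-side check like this avoids a round trip for uploads the service would reject anyway.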

articles/ai-services/openai/quotas-limits.md

Lines changed: 3 additions & 3 deletions

@@ -8,7 +8,7 @@ ms.custom:
   - ignite-2023
   - references_regions
 ms.topic: conceptual
-ms.date: 04/23/2025
+ms.date: 05/29/2025
 ms.author: mbullwin
 ---

@@ -43,9 +43,9 @@ The following sections provide you with a quick guide to the default quotas and
 | Max number of `/chat/completions` functions | 128 |
 | Max number of `/chat completions` tools | 128 |
 | Maximum number of Provisioned throughput units per deployment | 100,000 |
-| Max files per Assistant/thread | 10,000 when using the API or [Azure AI Foundry portal](https://ai.azure.com/). In Azure OpenAI Studio the limit was 20.|
+| Max files per Assistant/thread | 10,000 when using the API or [Azure AI Foundry portal](https://ai.azure.com/).|
 | Max file size for Assistants & fine-tuning | 512 MB<br/><br/>200 MB via [Azure AI Foundry portal](https://ai.azure.com/) |
-| Max size for all uploaded files for Assistants |100 GB |
+| Max size for all uploaded files for Assistants |200 GB |
 | Assistants token limit | 2,000,000 token limit |
 | GPT-4o max images per request (# of images in the messages array/conversation history) | 50 |
 | GPT-4 `vision-preview` & GPT-4 `turbo-2024-04-09` default max tokens | 16 <br><br> Increase the `max_tokens` parameter value to avoid truncated responses. GPT-4o max tokens defaults to 4096. |
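Two of the context rows above (the 50-image cap per request and the low default `max_tokens` on some GPT-4 variants) lend themselves to a client-side guard. A minimal sketch, assuming a plain request-body dict rather than any particular SDK:

```python
# Illustrative limits taken from the quotas table above.
MAX_IMAGES_PER_REQUEST = 50   # GPT-4o max images across the messages array
DEFAULT_MAX_TOKENS = 16       # GPT-4 vision-preview / turbo-2024-04-09 default

def build_chat_body(messages: list[dict], max_tokens: int = 1024) -> dict:
    """Assemble a /chat/completions body, enforcing the documented image limit.

    max_tokens is set explicitly because the 16-token default on some models
    truncates responses.
    """
    image_count = sum(
        1
        for m in messages
        for part in (m["content"] if isinstance(m.get("content"), list) else [])
        if isinstance(part, dict) and part.get("type") == "image_url"
    )
    if image_count > MAX_IMAGES_PER_REQUEST:
        raise ValueError(
            f"{image_count} images exceeds the per-request limit of {MAX_IMAGES_PER_REQUEST}"
        )
    return {"messages": messages, "max_tokens": max_tokens}
```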

articles/search/.openpublishing.redirection.search.json

Lines changed: 20 additions & 0 deletions

@@ -395,6 +395,26 @@
       "source_path_from_root": "/articles/search/search-data-sources-terms-of-use.md",
       "redirect_url": "https://partner.microsoft.com/partnership/find-a-partner",
       "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/search/tutorial-multimodal-indexing-with-embedding-and-doc-extraction.md",
+      "redirect_url": "/azure/search/tutorial-document-extraction-multimodal-embeddings",
+      "redirect_document_id": true
+    },
+    {
+      "source_path_from_root": "/articles/search/tutorial-multimodal-indexing-with-image-verbalization-and-doc-extraction.md",
+      "redirect_url": "/azure/search/tutorial-document-extraction-image-verbalization",
+      "redirect_document_id": true
+    },
+    {
+      "source_path_from_root": "/articles/search/tutorial-multimodal-index-embeddings-skill.md",
+      "redirect_url": "/azure/search/tutorial-document-layout-multimodal-embeddings",
+      "redirect_document_id": true
+    },
+    {
+      "source_path_from_root": "/articles/search/tutorial-multimodal-index-image-verbalization-skill.md",
+      "redirect_url": "/azure/search/tutorial-document-layout-image-verbalization",
+      "redirect_document_id": true
     }
   ]
 }

articles/search/cognitive-search-concept-annotations-syntax.md

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@ ms.author: heidist
 ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
-ms.topic: how-to
+ms.topic: reference
 ms.date: 05/27/2025
 ---

articles/search/cognitive-search-skill-azure-openai-embedding.md

Lines changed: 1 addition & 1 deletion

@@ -41,7 +41,7 @@ Parameters are case-sensitive.

 | Inputs | Description |
 |---------------------|-------------|
-| `resourceUri` | The URI of the model provider, in this case, an Azure OpenAI resource. This parameter only supports URLs with domain `openai.azure.com`, such as `https://<resourcename>.openai.azure.com`. If the Azure OpenAI endpoint has a URL with domain `cognitiveservices.azure.com`, like `https://<resourcename>.cognitiveservices.azure.com`, a [custom subdomain](/azure/ai-services/openai/how-to/use-your-data-securely#enabled-custom-subdomain) with `openai.azure.com` must be created first for the Azure OpenAI resource and use `https://<resourcename>.openai.azure.com` instead. |
+| `resourceUri` | The URI of the model provider, in this case, an Azure OpenAI resource. This parameter only supports URLs with domain `openai.azure.com`, such as `https://<resourcename>.openai.azure.com`. If the Azure OpenAI endpoint has a URL with domain `cognitiveservices.azure.com`, like `https://<resourcename>.cognitiveservices.azure.com`, a [custom subdomain](/azure/ai-services/openai/how-to/use-your-data-securely#enabled-custom-subdomain) with `openai.azure.com` must be created first for the Azure OpenAI resource and use `https://<resourcename>.openai.azure.com` instead. This field is required if your Azure OpenAI resource is deployed behind a Private Endpoint or uses Virtual Network (VNet) integration. |
 | `apiKey` | The secret key used to access the model. If you provide a key, leave `authIdentity` empty. If you set both the `apiKey` and `authIdentity`, the `apiKey` is used on the connection. |
 | `deploymentId` | The name of the deployed Azure OpenAI embedding model. The model should be an embedding model, such as text-embedding-ada-002. See the [List of Azure OpenAI models](/azure/ai-services/openai/concepts/models) for supported models.|
 | `authIdentity` | A user-managed identity used by the search service for connecting to Azure OpenAI. You can use either a [system or user managed identity](search-howto-managed-identities-data-sources.md). To use a system managed identity, leave `apiKey` and `authIdentity` blank. The system-managed identity is used automatically. A managed identity must have [Cognitive Services OpenAI User](/azure/ai-services/openai/how-to/role-based-access-control#azure-openai-roles) permissions to send text to Azure OpenAI. |
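The `resourceUri` rule amended above (only `openai.azure.com` domains are accepted, and `cognitiveservices.azure.com` endpoints need a custom subdomain first) can be sketched as a validation helper. This is an illustration of the documented rule, not Azure code:

```python
from urllib.parse import urlparse

def check_resource_uri(uri: str) -> str:
    """Validate that an endpoint uses the openai.azure.com domain the skill requires.

    A cognitiveservices.azure.com endpoint is rejected with a pointer to the
    documented fix: create a custom subdomain, then use the openai.azure.com
    form of the same resource name.
    """
    host = urlparse(uri).hostname or ""
    if host.endswith(".openai.azure.com"):
        return uri
    if host.endswith(".cognitiveservices.azure.com"):
        name = host.split(".")[0]
        raise ValueError(
            f"Create a custom subdomain, then use https://{name}.openai.azure.com"
        )
    raise ValueError(f"Unsupported domain: {host}")
```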

articles/search/cognitive-search-skill-document-extraction.md

Lines changed: 2 additions & 2 deletions

@@ -19,9 +19,9 @@ The **Document Extraction** skill extracts content from a file within the enrich

 For [vector](vector-search-overview.md) and [multimodal search](multimodal-search-overview.md), Document Extraction combined with the [Text Split skill](cognitive-search-skill-textsplit.md) is more affordable than other [data chunking approaches](vector-search-how-to-chunk-documents.md). The following tutorials demonstrate skill usage for different scenarios:

-+ [Tutorial: Index mixed content using multimodal embeddings and the Document Extraction skill](tutorial-multimodal-indexing-with-embedding-and-doc-extraction.md)
++ [Tutorial: Index mixed content using multimodal embeddings and the Document Extraction skill](tutorial-document-extraction-multimodal-embeddings.md)

-+ [Tutorial: Index mixed content using image verbalizations and the Document Extraction skill](tutorial-multimodal-indexing-with-image-verbalization-and-doc-extraction.md)
++ [Tutorial: Index mixed content using image verbalizations and the Document Extraction skill](tutorial-document-extraction-image-verbalization.md)

 > [!NOTE]
 > This skill isn't bound to Azure AI services and has no Azure AI services key requirement.

articles/search/cognitive-search-skill-document-intelligence-layout.md

Lines changed: 2 additions & 2 deletions

@@ -24,9 +24,9 @@ This article is the reference documentation for the Document Layout skill. For u

 It's common to use this skill on content such as PDFs that have structure and images. The following tutorials demonstrate several scenarios:

-+ [Tutorial: Index mixed content using image verbalizations and the Document Layout skill](tutorial-multimodal-index-image-verbalization-skill.md)
++ [Tutorial: Index mixed content using image verbalizations and the Document Layout skill](tutorial-document-layout-image-verbalization.md)

-+ [Tutorial: Index mixed content using multimodal embeddings and the Document Layout skill](tutorial-multimodal-index-embeddings-skill.md)
++ [Tutorial: Index mixed content using multimodal embeddings and the Document Layout skill](tutorial-document-layout-multimodal-embeddings.md)

 > [!NOTE]
 > This skill uses the [Document Intelligence layout model](/azure/ai-services/document-intelligence/concept-layout) provided in [Azure AI Document Intelligence](/azure/ai-services/document-intelligence/overview).

articles/search/cognitive-search-skill-genai-prompt.md

Lines changed: 2 additions & 2 deletions

@@ -19,9 +19,9 @@ The **GenAI (Generative AI) Prompt** skill executes a *chat completion* request

 Use this capability to create new information that can be indexed and stored as searchable content. Examples include verbalize images, summarize larger passages, simplify complex content, or any other task that an LLM can perform. The skill supports text, image, and multimodal content such as a PDF that contains text and images. It's common to use this skill combined with a data chunking skill. The following tutorials demonstrate the image verbalization scenarios with two different data chunking techniques:

-- [Tutorial: Index mixed content using image verbalizations and the Document Layout skill](tutorial-multimodal-index-image-verbalization-skill.md)
+- [Tutorial: Index mixed content using image verbalizations and the Document Layout skill](tutorial-document-layout-image-verbalization.md)

-- [Tutorial: Index mixed content using image verbalizations and the Document Extraction skill](tutorial-multimodal-indexing-with-image-verbalization-and-doc-extraction.md)
+- [Tutorial: Index mixed content using image verbalizations and the Document Extraction skill](tutorial-document-extraction-image-verbalization.md)

 The GenAI Prompt skill is available in the [2025-05-01-preview REST API](/rest/api/searchservice/skillsets/create?view=rest-searchservice-2025-05-01-preview&preserve-view=true) only.
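Since the GenAI Prompt skill is only exposed through the 2025-05-01-preview REST API, a skillset that uses it must target that api-version explicitly. A sketch of the request URL, following the standard Azure AI Search REST pattern; the service and skillset names are placeholders:

```python
# Preview api-version that exposes the GenAI Prompt skill, per the doc above.
API_VERSION = "2025-05-01-preview"

def skillset_url(service: str, skillset: str) -> str:
    """Build the Skillsets Create/Update endpoint URL for the preview API."""
    return (
        f"https://{service}.search.windows.net/skillsets/{skillset}"
        f"?api-version={API_VERSION}"
    )
```

A PUT to this URL with the skillset definition (and an `api-key` header or bearer token) creates or updates the skillset.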

articles/search/multimodal-search-overview.md

Lines changed: 7 additions & 7 deletions

@@ -1,17 +1,17 @@
 ---
-title: Multimodal search concepts and guidance
+title: Multimodal Search Concepts and Guidance
 titleSuffix: Azure AI Search
 description: Learn what multimodal search is, how Azure AI Search supports it for text and image content, and where to find detailed concepts, tutorials, and samples.
 ms.service: azure-ai-search
 ms.topic: conceptual
-ms.date: 05/28/2025
+ms.date: 05/29/2025
 author: gmndrg
 ms.author: gimondra
 ---

 # Multimodal search in Azure AI Search

-Multimodal search refers to the ability to ingest, understand, and retrieve content across multiple data types, including text, images, video, and audio. In Azure AI Search, multimodal search natively supports the ingestion of documents containing text and images and the retrieval of their content, enabling you to perform searches that combine both modalities.
+Multimodal search refers to the ability to ingest, understand, and retrieve information across multiple content types, including text, images, video, and audio. In Azure AI Search, multimodal search natively supports the ingestion of documents containing text and images and the retrieval of their content, enabling you to perform searches that combine both modalities.

 Building a robust multimodal pipeline typically involves:

@@ -115,8 +115,8 @@ To help you get started with multimodal search in Azure AI Search, here's a coll
 | Content | Description |
 |--|--|
 | [Quickstart: Multimodal search in the Azure portal](search-get-started-portal-image-search.md) | Create and test a multimodal index in the Azure portal using the wizard and Search Explorer. |
-| [Tutorial: Image verbalization and Document Extraction skill](tutorial-multimodal-indexing-with-image-verbalization-and-doc-extraction.md) | Extract text and images, verbalize diagrams, and embed the resulting descriptions and text into a searchable index. |
-| [Tutorial: Multimodal embeddings and Document Extraction skill](tutorial-multimodal-indexing-with-embedding-and-doc-extraction.md) | Use a vision-text model to embed both text and images directly, enabling visual-similarity search over scanned PDFs. |
-| [Tutorial: Image verbalization and Document Layout skill](tutorial-multimodal-index-image-verbalization-skill.md) | Apply layout-aware chunking and diagram verbalization, capture location metadata, and store cropped images for precise citations and page highlights. |
-| [Tutorial: Multimodal embeddings and Document Layout skill](tutorial-multimodal-index-embeddings-skill.md) | Combine layout-aware chunking with unified embeddings for hybrid semantic and keyword search that returns exact hit locations. |
+| [Tutorial: Image verbalization and Document Extraction skill](tutorial-document-extraction-image-verbalization.md) | Extract text and images, verbalize diagrams, and embed the resulting descriptions and text into a searchable index. |
+| [Tutorial: Multimodal embeddings and Document Extraction skill](tutorial-document-extraction-multimodal-embeddings.md) | Use a vision-text model to embed both text and images directly, enabling visual-similarity search over scanned PDFs. |
+| [Tutorial: Image verbalization and Document Layout skill](tutorial-document-layout-image-verbalization.md) | Apply layout-aware chunking and diagram verbalization, capture location metadata, and store cropped images for precise citations and page highlights. |
+| [Tutorial: Multimodal embeddings and Document Layout skill](tutorial-document-layout-multimodal-embeddings.md) | Combine layout-aware chunking with unified embeddings for hybrid semantic and keyword search that returns exact hit locations. |
 | [Sample app: Multimodal RAG GitHub repository](https://aka.ms/azs-multimodal-sample-app-repo) | An end-to-end, code-ready RAG application with multimodal capabilities that surfaces both text snippets and image annotations. Ideal for jump-starting enterprise copilots. |

articles/search/query-lucene-syntax.md

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ ms.author: beloh
 ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
-ms.topic: concept-article
+ms.topic: reference
 ms.date: 12/11/2024
 ---
