The **Document Layout** skill analyzes a document to detect structure and characteristics, and produces a syntactical representation of the document in Markdown or Text format. You can use it to extract text and images, where image extraction includes location metadata that preserves image position within the document. Image proximity to related content is beneficial in Retrieval Augmented Generation (RAG) workloads and [multimodal search](multimodal-search-overview.md) scenarios.
This article is the reference documentation for the Document Layout skill. For usage information, see [How to chunk and vectorize by document layout](search-how-to-semantic-chunking.md).
This skill is bound to a billable Azure AI multi-service resource.
This skill has the following limitations:
+ The skill isn't suitable for large documents that require more than 5 minutes of processing in the AI Document Intelligence layout model. The skill times out, but charges still apply to the Azure AI multi-service resource if it's attached to the skillset for billing purposes. Ensure documents are optimized to stay within processing limits to avoid unnecessary costs.
+ Because this skill calls the Azure AI Document Intelligence layout model, all documented [service behaviors for different file types](/azure/ai-services/document-intelligence/prebuilt/layout#pages) apply to its output. For example, Word (DOCX) and PDF files can produce different results due to differences in how images are handled. If consistent image behavior across DOCX and PDF is required, consider converting documents to PDF or reviewing the [multimodal search documentation](multimodal-search-overview.md) for alternative approaches.
## Supported regions
The Document Layout skill calls the [Document Intelligence 2024-11-30 API](/rest/api/aiservices/operation-groups).
Supported regions vary by modality and how the skill connects to the Document Intelligence layout model.
This skill recognizes the following file formats.
Refer to [Azure AI Document Intelligence layout model supported languages](/azure/ai-services/document-intelligence/language-support/ocr?view=doc-intel-3.1.0&tabs=read-print%2Clayout-print%2Cgeneral#layout&preserve-view=true) for printed text.
## Supported parameters
Parameters are case-sensitive. Several parameters were introduced in specific preview versions of the REST API. We recommend using the generally available version (2025-09-01) or the latest preview (2025-08-01-preview) for full access to all parameters.
| Parameter name | Allowed values | Description |
|----------------|----------------|-------------|
| `outputMode` | `oneToMany` | Controls the cardinality of the output produced by the skill. |
| `markdownHeaderDepth` | `h1`, `h2`, `h3`, `h4`, `h5`, `h6` (default) | Only applies if `outputFormat` is set to `markdown`. Describes the deepest nesting level that should be considered. For instance, if `markdownHeaderDepth` is `h3`, any sections that are deeper, such as `h4`, are rolled into `h3`. |
| `outputFormat` | `markdown` (default), `text` | Controls the format of the output generated by the skill. |
| `extractionOptions` | `["images"]`, `["images", "locationMetadata"]`, `["locationMetadata"]` | Identifies extra content to extract from the document. Define an array of enums that correspond to the content to include in the output. For instance, if `extractionOptions` is `["images", "locationMetadata"]`, the output includes images and location metadata that provides page location information about where the content was extracted, such as a page number or section. This parameter applies to both output formats. |
| `chunkingProperties` | See below. | Only applies if `outputFormat` is set to `text`. Options that encapsulate how to chunk text content while recomputing other metadata. |
| `unit` | `Characters`, currently the only allowed value. Chunk length is measured in characters, as opposed to words or tokens. | Controls the cardinality of the chunk unit. |
| `maximumLength` | Any integer between 300 and 50,000 | The maximum chunk length in characters, as measured by `String.Length`. |
| `overlapLength` | Integer. The value must be less than half of `maximumLength`. | The length of overlap between two text chunks. |
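As a sketch, here's how the core parameters fit together in a skillset definition for Markdown output. The skill name, context, and target names are illustrative; confirm the input and output names (`file_data`, `markdown_document`) against the skill reference for your API version.

```json
{
  "@odata.type": "#Microsoft.Skills.Util.DocumentIntelligenceLayoutSkill",
  "name": "document-layout-skill",
  "description": "Analyzes document structure and outputs Markdown sections",
  "context": "/document",
  "outputMode": "oneToMany",
  "outputFormat": "markdown",
  "markdownHeaderDepth": "h3",
  "inputs": [
    { "name": "file_data", "source": "/document/file_data" }
  ],
  "outputs": [
    { "name": "markdown_document", "targetName": "markdownDocument" }
  ]
}
```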
## Skill inputs
The value of the `markdownHeaderDepth` controls the number of keys in the "sections" dictionary.
## Example for text output mode and image and metadata extraction
This example demonstrates how to output text content in fixed-sized chunks and extract images along with location metadata from the document.
### Sample definition for text output mode and image and metadata extraction
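A minimal sketch of a skill definition for this scenario, using parameter values from the table above. The output names (`text_sections`, `normalized_images`) and chunk sizes are illustrative assumptions; check them against the skill reference before use.

```json
{
  "@odata.type": "#Microsoft.Skills.Util.DocumentIntelligenceLayoutSkill",
  "name": "document-layout-text-skill",
  "context": "/document",
  "outputMode": "oneToMany",
  "outputFormat": "text",
  "extractionOptions": ["images", "locationMetadata"],
  "chunkingProperties": {
    "unit": "Characters",
    "maximumLength": 2000,
    "overlapLength": 200
  },
  "inputs": [
    { "name": "file_data", "source": "/document/file_data" }
  ],
  "outputs": [
    { "name": "text_sections", "targetName": "text_sections" },
    { "name": "normalized_images", "targetName": "normalized_images" }
  ]
}
```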
articles/search/cognitive-search-skill-genai-prompt.md
The GenAI Prompt skill is available in the latest preview REST API.
- For image verbalization, the model you use to analyze the image determines what image formats are supported.
41
41
42
-
- For GPT-5 model, the `temperature` parameter is not supported in the same way as previous models. If defined, it must be set to `1.0`, as other values will result in errors.
42
+
- For GPT-5 models, the `temperature` parameter is not supported in the same way as previous models. If defined, it must be set to `1.0`, as other values will result in errors.
- Billing is based on the pricing of the model you use.
articles/search/hybrid-search-ranking.md
ms.service: azure-ai-search
ms.custom:
  - ignite-2023
ms.topic: conceptual
ms.date: 09/28/2025
---
# Relevance scoring in hybrid search using Reciprocal Rank Fusion (RRF)
Reciprocal Rank Fusion (RRF) is an algorithm that evaluates the search scores from multiple, previously ranked results to produce a unified result set.
RRF is based on the concept of *reciprocal rank*, which is the inverse of the rank of the first relevant document in a list of search results. The goal of the technique is to take into account the position of the items in the original rankings, and give higher importance to items that are ranked higher in multiple lists. This can help improve the overall quality and reliability of the final ranking, making it more useful for the task of fusing multiple ordered search results.
19
19
20
-
> [!NOTE]
21
-
> The [latest preview REST API](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2025-08-01-preview&preserve-view=true) can deconstruct an RRF-ranked search score into its component subscores. This gives you transparency into all-up score composition. For more information, see [Unpack search scores (preview)](#unpack-a-search-score-into-subscores-preview) in this article.
22
-
23
20
## How RRF ranking works
RRF works by taking the search results from multiple methods, assigning a reciprocal rank score to each document in the results, and then combining the scores to create a new ranking. The concept is that documents appearing in the top positions across multiple search methods are likely to be more relevant and should be ranked higher in the combined result.
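The scoring step described above can be sketched in Python. The constant `k=60` is the value commonly cited for RRF; treat it as an assumption here rather than the engine's exact internal setting.

```python
def rrf(rankings, k=60):
    """Fuse multiple ranked lists of document IDs with Reciprocal Rank Fusion.

    Each document's fused score is the sum of 1 / (k + rank) over every
    result list it appears in, where rank starts at 1.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Sort by fused score, highest first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

bm25_results = ["docA", "docB", "docC"]
vector_results = ["docA", "docC", "docD"]
fused = rrf([bm25_results, vector_results])
# docA ranks first because it tops both lists
```

A document that appears near the top of several lists accumulates more reciprocal-rank mass than one that appears high in only a single list, which is exactly the behavior the paragraph above describes.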
The following chart identifies the scoring property returned on each match, algorithm, and range.
Semantic ranking occurs after RRF merging of results. Its score (`@search.rerankerScore`) is always reported separately in the query response. Semantic ranker can rerank full text and hybrid search results, assuming those results include fields having semantically rich content. It can rerank pure vector queries if the search documents include text fields that contain semantically relevant content.
## Unpack a search score into subscores

You can deconstruct a search score to view its subscores. For vector queries, this information can help you determine an appropriate value for [vector weighting](vector-search-how-to-query.md#vector-weighting) or [setting minimum thresholds](vector-search-how-to-query.md#set-thresholds-to-exclude-low-scoring-results-preview).
To get subscores:
+ Use the [Search Documents REST API](/rest/api/searchservice/documents/search-post#request-body) or an Azure SDK package that provides the feature.
+ Modify a query request, adding a new `debug` parameter set to `vector`, `semantic` (if using semantic ranker), or `all`.
Here's an example of a hybrid query that returns subscores in debug mode:

```http
POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2025-09-01

{
  "vectorQueries": [ ... ],
  "debug": "all"
}
```
## Weighted scores
You can also [weight vector queries](vector-search-how-to-query.md#vector-weighting) to increase or decrease their importance in a hybrid query.
Recall that when computing RRF for a certain document, the search engine looks at the rank of that document for each result set where it shows up. Assume a document shows up in three separate search results, where the results are from two vector queries and one text BM25-ranked query. The position of the document varies in each result.
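Assuming that a query's weight simply multiplies its reciprocal-rank contribution, per the vector weighting documentation, the combined score for a document that appears in all three result sets can be sketched as:

```python
def weighted_rrf_score(contributions, k=60):
    """Combined RRF score for one document.

    contributions: (rank, weight) pairs, one per result set in which
    the document appears; rank starts at 1. k=60 is the commonly cited
    RRF constant, assumed here rather than taken from the engine.
    """
    return sum(weight / (k + rank) for rank, weight in contributions)

# Document ranked 1st and 3rd in two vector queries (weights 0.5 and 2.0)
# and 2nd in the BM25-ranked text query (default weight 1.0):
score = weighted_rrf_score([(1, 0.5), (3, 2.0), (2, 1.0)])
```

A larger weight amplifies that query's contribution, so a document ranked modestly in a heavily weighted vector query can still outscore one ranked higher in a lightly weighted query.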
For more information, see How to work with search results.
## See also
+ [Learn more about hybrid search](hybrid-search-overview.md)
+ [Learn more about vector search](vector-search-overview.md)
articles/search/search-agentic-retrieval-how-to-index.md
Here's an example index that works for agentic retrieval. It meets the criteria for agentic retrieval.

```json
{
  "name": "earth_at_night",
  "description": "Contains images and descriptions of our planet in darkness as captured from space by Earth-observing satellites and astronauts on the International Space Station over the past 25 years.",
  "fields": [
    {
      "name": "id", "type": "Edm.String", ...
    },
    ...
  ]
}
```

All `searchable` fields are included in query execution.
## Add a description
An index `description` field is exposed programmatically, which means you can pass this description to LLMs and Model Context Protocol (MCP) servers as an input when deciding to use a specific index for a query. This human-readable text is invaluable when a system must access several indexes and make a decision based on the description.
An index description is a schema update, and you can add it without having to rebuild the entire index.
+ String length is 4,000 characters maximum.
+ Content must be human-readable, in Unicode. Your use case should determine which language to use (for example, English or another language).
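As a sketch, adding a description is just an index update request. The index name reuses the example index shown earlier; replace the ellipsis with the rest of your existing index definition, and confirm the API version for your deployment.

```http
PUT https://{{search-service-name}}.search.windows.net/indexes/earth_at_night?api-version=2025-09-01

{
  "name": "earth_at_night",
  "description": "Contains images and descriptions of Earth at night, captured from space over the past 25 years.",
  "fields": [ ... ]
}
```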
## Add a semantic configuration
The index must have at least one semantic configuration. The semantic configuration must have:
articles/search/search-api-migration.md
ms.custom:
  - build-2024
  - ignite-2024
ms.topic: conceptual
ms.date: 09/27/2025
---
# Upgrade to the latest REST API in Azure AI Search
Here are the most recent versions of the REST APIs:

| Targeted operations | REST API | Status |
|---------------------|----------|--------|
| Data plane | [`2025-09-01`](/rest/api/searchservice/search-service-api-versions#2025-09-01) | Stable |
| Data plane | [`2025-08-01-preview`](/rest/api/searchservice/search-service-api-versions#2025-08-01-preview&preserve-view=true) | Preview |
| Control plane | [`2025-05-01`](/rest/api/searchmanagement/operation-groups?view=rest-searchmanagement-2025-05-01&preserve-view=true) | Stable |
| Control plane | [`2025-02-01-preview`](/rest/api/searchmanagement/operation-groups?view=rest-searchmanagement-2025-02-01-preview&preserve-view=true) | Preview |
See [Migrate from preview version](semantic-code-migration.md) to transition your code.
Upgrade guidance assumes an upgrade from the most recent previous version. If your code is based on an old API version, we recommend upgrading through each successive version to get to the newest version.
92
92
93
+
### Upgrade to 2025-09-01
[`2025-09-01`](/rest/api/searchservice/search-service-api-versions#2025-09-01) is the latest stable REST API version, and it adds general availability for the OneLake indexer, Document Layout skill, and other APIs.
There are no breaking changes if you're upgrading from `2024-07-01` and not using any preview features. To use the new stable release, change the API version and test your code.
### Upgrade to 2025-08-01-preview
[`2025-08-01-preview`](/rest/api/searchservice/search-service-api-versions#2025-08-01-preview) introduces the following breaking changes to knowledge agents created using `2025-05-01-preview`: