Commit 74ea1e4

Merge pull request #4736 from HeidiSteen/heidist-rb-rag
Preview slug, plus GenAI prompt name consistency
2 parents 00f0263 + dbfca6a commit 74ea1e4

9 files changed: +14 -8 lines

articles/search/chat-completion-skill-example-usage.md

Lines changed: 1 addition & 1 deletion

@@ -284,6 +284,6 @@ POST /indexes/[index name]/docs/search?api-version=[api-version]
 
 ## Related content
 + [Create indexer (REST)](/rest/api/searchservice/indexers/create)
-+ [Gen AI Prompt Skill](cognitive-search-skill-genai-prompt.md)
++ [GenAI Prompt skill](cognitive-search-skill-genai-prompt.md)
 + [How to create a skillset](cognitive-search-defining-skillset.md)
 + [Map enriched output to fields](cognitive-search-output-field-mapping.md)

articles/search/cognitive-search-skill-genai-prompt.md

Lines changed: 3 additions & 2 deletions

@@ -13,12 +13,13 @@ ms.date: 04/29/2025
 
 # GenAI Prompt skill
 
-> [!IMPORTANT]
-> **Preview** – The GenAI Prompt skill is first available in the **2025-05-01-preview** REST API. Skillsets that include this skill aren’t supported in earlier API versions.
+[!INCLUDE [Feature preview](./includes/previews/preview-generic.md)]
 
 The **GenAI (Generative AI) Prompt** skill executes a *chat completion* request against a Large Language Model (LLM) deployed in **Azure AI Foundry** or **Azure OpenAI Service**.
 Use it to summarize, transform, enrich, or extract structured data from text-only or *text + image* inputs to augment your data for higher relevant context in your index.
 
+The GenAI Prompt skill is available in the **2025-05-01-preview** REST API. You can't use this skill in skillsets created with earlier API versions.
+
 ## Prerequisites
 
 * A deployed chat-completion model (for example *gpt-4o* or any compatible Open Source Software (OSS) model) in Azure AI Foundry or Azure OpenAI.
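To make the skill described in this diff more concrete, here is a rough sketch of a skillset payload containing a chat-completion skill. The `@odata.type` value, property names, and endpoint URI below are assumptions for illustration only, not confirmed API contracts; check the 2025-05-01-preview skillset reference before relying on any of them.

```python
import json

def build_genai_prompt_skillset(name: str, chat_completion_uri: str) -> dict:
    """Return an illustrative skillset body with one chat-completion skill.

    All property names here are assumptions drawn from the article's
    description (image verbalization via an LLM), not a verified schema.
    """
    return {
        "name": name,
        "skills": [
            {
                # Assumed type name for the GenAI Prompt skill (unverified).
                "@odata.type": "#Microsoft.Skills.Custom.ChatCompletionSkill",
                # Endpoint of a deployed gpt-4o or compatible chat model.
                "uri": chat_completion_uri,
                "inputs": [
                    {"name": "systemMessage",
                     "source": "='Describe the image in one sentence.'"},
                    {"name": "image", "source": "/document/normalized_images/*"},
                ],
                "outputs": [
                    {"name": "response", "targetName": "verbalizedImage"},
                ],
            }
        ],
    }

skillset = build_genai_prompt_skillset(
    "demo-skillset",
    "https://example.openai.azure.com/openai/deployments/gpt-4o/chat/completions",
)
print(json.dumps(skillset, indent=2))
```

The body would be sent with `PUT /skillsets/{name}?api-version=2025-05-01-preview`, per the preview requirement stated above.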

articles/search/search-index-access-control-lists-and-rbac-push-api.md

Lines changed: 2 additions & 0 deletions

@@ -11,6 +11,8 @@ ms.author: admayber
 
 # Indexing Access Control Lists (ACLs) and Role-Based Access Control (RBAC) using REST API in Azure AI Search
 
+[!INCLUDE [Feature preview](./includes/previews/preview-generic.md)]
+
 Indexing documents, along with their associated [Access Control Lists (ACLs)](/azure/storage/blobs/data-lake-storage-access-control) and container [Role-Based Access Control (RBAC) roles](/azure/role-based-access-control/overview), into an Azure AI Search index via the [REST API](/rest/api/searchservice/) offers fine-grained control over the indexing pipeline. This approach enables the inclusion of document entries with precise, document-level permissions directly within the index. This article explains how to use the REST API to index document-level permissions' metadata in Azure AI Search. This process prepares your index to query and enforce end-user permissions.
 
 ## Supported scenarios
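The article this diff touches describes pushing documents together with permission metadata through the REST API. As a loose sketch of that idea, the batch below attaches a group-ID list to a document; the permission field name (`GroupIds`) and the index schema are illustrative assumptions, not the documented contract.

```python
def build_upload_batch(docs: list[dict]) -> dict:
    """Wrap documents in an index-actions batch for the documents index API."""
    return {"value": [dict(doc, **{"@search.action": "upload"}) for doc in docs]}

batch = build_upload_batch([
    {
        "id": "1",
        "content": "Quarterly report",
        # Hypothetical document-level permission field: only callers whose
        # Entra group object IDs match would see this document at query time.
        "GroupIds": ["11111111-2222-3333-4444-555555555555"],
    }
])
# The batch would be POSTed to:
# https://{service}.search.windows.net/indexes/{index}/docs/index?api-version=2025-05-01-preview
print(batch)
```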

articles/search/search-indexer-access-control-lists-and-role-based-access.md

Lines changed: 1 addition & 2 deletions

@@ -11,8 +11,7 @@ ms.author: wli
 
 # Use an ADLS Gen2 indexer to ingest permission metadata and filter search results based on user access rights
 
-> [!IMPORTANT]
-> This feature is in public preview. It's offered "as-is", under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) and supported on best effort only. Preview features aren't recommended for production workloads and aren't guaranteed to become generally available.
+[!INCLUDE [Feature preview](./includes/previews/preview-generic.md)]
 
 The permission model in Azure Data Lake Storage (ADLS) Gen2 can be used for per-user access to specific directories or files. Starting in 2025-05-01-preview, you can now include user permissions alongside document ingestion in Azure AI Search and use those permissions to control access to search results. If a user lacks permissions on a specific directory or file in ADLS Gen2, that user doesn't have access to the corresponding documents in Azure AI Search results.
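For the permission-filtered querying that the article above describes, a query request would need to carry the end user's identity so the service can trim results. The sketch below assumes a `x-ms-query-source-authorization` header carrying the user's token; that header name is an assumption drawn from the preview security model, so verify it against the 2025-05-01-preview reference.

```python
def build_query_request(endpoint: str, index_name: str,
                        user_token: str, search_text: str = "*") -> dict:
    """Assemble an illustrative permission-aware search request (not sent)."""
    return {
        "url": f"{endpoint}/indexes/{index_name}/docs/search"
               f"?api-version=2025-05-01-preview",
        "headers": {
            "Content-Type": "application/json",
            # Assumed header name: forwards the end user's access token so
            # results are filtered to documents the user can access.
            "x-ms-query-source-authorization": user_token,
        },
        "body": {"search": search_text},
    }

req = build_query_request(
    "https://example.search.windows.net", "acl-index", "<user-access-token>"
)
print(req["url"])
```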

articles/search/semantic-how-to-enable-scoring-profiles.md

Lines changed: 2 additions & 0 deletions

@@ -11,6 +11,8 @@ ms.date: 05/07/2025
 
 # Integrating scoring profiles with semantic ranker in Azure AI Search
 
+[!INCLUDE [Feature preview](./includes/previews/preview-generic.md)]
+
 Integrating [scoring profiles](index-add-scoring-profiles.md) with [semantic ranker](semantic-search-overview.md) is now possible in Azure AI Search. Semantic ranker adds a new field, `@search.rerankerBoostedScore`, to help you maintain consistent relevance and greater control over final ranking outcomes in your search pipeline.
 
 Before this integration, scoring profiles only influenced the initial ranking phase of search results. The boost values they applied affected:
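The `@search.rerankerBoostedScore` field named in this diff appears on each result alongside the standard reranker score. As a small illustration (the sample response shape here is assumed, not taken from the article), a client could order a page of results by the boosted score:

```python
def order_by_boosted_score(results: list[dict]) -> list[dict]:
    """Sort search results by the boosted reranker score, highest first.

    Results missing the field fall back to 0.0 and sink to the bottom.
    """
    return sorted(
        results,
        key=lambda doc: doc.get("@search.rerankerBoostedScore", 0.0),
        reverse=True,
    )

# Hypothetical response page: "b" ranks higher on the plain reranker score,
# but "a" wins once the scoring-profile boost is applied.
page = [
    {"id": "a", "@search.rerankerScore": 2.1, "@search.rerankerBoostedScore": 3.4},
    {"id": "b", "@search.rerankerScore": 2.5, "@search.rerankerBoostedScore": 2.9},
]
print([d["id"] for d in order_by_boosted_score(page)])  # ['a', 'b']
```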

articles/search/tutorial-multimodal-index-embeddings-skill.md

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@ ms.date: 05/05/2025
 
 # Tutorial: Index multimodal content using multimodal embedding and document layout skill
 
-Multimodal plays an essential role in Gen AI apps and the user experience as it enables the extraction of information not only from text but also from complex images embedded within documents. In this Azure AI Search tutorial, learn how to build a multimodal retrieval pipeline that chunks data based on document structure, and uses a multimodal embedding model to vectorize text and images in a searchable index.
+Multimodality plays an essential role in generative AI apps and the user experience as it enables the extraction of information not only from text but also from complex images embedded within documents. In this Azure AI Search tutorial, learn how to build a multimodal retrieval pipeline that chunks data based on document structure, and uses a multimodal embedding model to vectorize text and images in a searchable index.
 
 You’ll work with a 36-page PDF document that combines rich visual content—such as charts, infographics, and scanned pages—with traditional text. Using the [Document Layout skill](cognitive-search-skill-document-intelligence-layout.md)(currently in public preview), you’ll extract both text and normalized images with its locationMetadata. Each modality is then embedded using the same [Azure AI Vision multimodal embeddings skill](cognitive-search-skill-vision-vectorize.md), which generates dense vector representations suitable for semantic and hybrid search scenarios.

articles/search/tutorial-multimodal-index-image-verbalization-skill.md

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@ ms.date: 05/05/2025
 
 # Tutorial: Index multimodal content using image verbalization and document layout skill
 
-Multi-modality plays an essential role in Gen AI apps and the user experience as it enables the extraction of information not only from text but also from complex images embedded within documents. "In this Azure AI Search tutorial, learn how to build a multimodal retrieval pipeline that that chunks data based on document structure, and =uses image verbalization to describe images. Cropped images are stored in a knowledge store, and visual content is described in natural language and ingested alongside text in a searchable index.
+Multi-modality plays an essential role in generative AI apps and the user experience as it enables the extraction of information not only from text but also from complex images embedded within documents. In this Azure AI Search tutorial, learn how to build a multimodal retrieval pipeline that chunks data based on document structure, and uses image verbalization to describe images. Cropped images are stored in a knowledge store, and visual content is described in natural language and ingested alongside text in a searchable index.
 
 You’ll work with a 36-page PDF document that combines rich visual content—such as charts, infographics, and scanned pages—with traditional text. Using the [Document Layout skill](cognitive-search-skill-document-intelligence-layout.md)(currently in public preview), you’ll extract both text and normalized images with its locationMetadata. Each image is passed to the [GenAI Prompt skill](cognitive-search-skill-genai-prompt.md) (currently in public preview) to generate a concise textual description. These descriptions, along with the original document text, are then embedded into vector representations using Azure OpenAI’s text-embedding-3-large model. The result is a single index containing semantically searchable content from both modalities—text and verbalized images.

articles/search/vector-search-multi-vector-fields.md

Lines changed: 2 additions & 0 deletions

@@ -11,6 +11,8 @@ ms.date: 05/07/2025
 
 # Multi-vector field support in Azure AI Search
 
+[!INCLUDE [Feature preview](./includes/previews/preview-generic.md)]
+
 The multi-vector field support feature in Azure AI Search enables you to index multiple child vectors within a single document field. This feature is valuable for use cases like multi-modal data or long-form documents, where representing the content with a single vector would lead to loss of important detail.
 
 ## Understanding multi-vector field support
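The multi-vector capability above places vector fields inside complex collections, so a single document can carry one child vector per chunk. The index fragment below sketches that shape; field names, the profile name, and the dimension count are illustrative assumptions, and the preview reference defines the exact schema rules.

```python
# Illustrative index fragment: a complex collection ("chunks") whose nested
# "embedding" field is a vector, giving each document many child vectors.
index_fragment = {
    "name": "multivector-demo",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {
            "name": "chunks",
            # Complex collection: many child entries per document.
            "type": "Collection(Edm.ComplexType)",
            "fields": [
                {"name": "text", "type": "Edm.String", "searchable": True},
                {
                    "name": "embedding",
                    "type": "Collection(Edm.Single)",  # child vector
                    "dimensions": 1536,                # assumed model size
                    "vectorSearchProfile": "default-profile",
                    "searchable": True,
                },
            ],
        },
    ],
}
print(index_fragment["name"])
```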

articles/search/whats-new.md

Lines changed: 1 addition & 1 deletion

@@ -28,7 +28,7 @@ Learn about the latest updates to Azure AI Search functionality, docs, and samples
 | [Multivector support (preview)](vector-search-multi-vector-fields.md) | Indexing | Index multiple child vectors within a single document field. You can now use vector types in nested fields of complex collections, effectively allowing multiple vectors to be associated with a single document.|
 | [Scoring profiles with semantic ranking (preview)](semantic-how-to-enable-scoring-profiles.md) | Relevance | Semantic ranker adds a new field, `@search.rerankerBoostedScore`, to help you maintain consistent relevance and greater control over final ranking outcomes in your search pipeline. |
 | [Logic Apps integration (preview)](search-how-to-index-logic-apps-indexers.md) | Indexing | Create an automated indexing pipeline that retrieves content using a logic app workflow. Use the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) in the Azure portal to build an indexing pipeline based on Logic Apps. |
-| [Gen AI prompt skill (preview)](cognitive-search-skill-genai-prompt.md) | Skills | A new skill that connects to a large language model (LLM) for information, using a prompt you provide. With this skill, you can populate a searchable field using content from an LLM. A primary use case for this skill is *image verbalization*, using an LLM to describe images and send the description to a searchable field in your index. |
+| [GenAI prompt skill (preview)](cognitive-search-skill-genai-prompt.md) | Skills | A new skill that connects to a large language model (LLM) for information, using a prompt you provide. With this skill, you can populate a searchable field using content from an LLM. A primary use case for this skill is *image verbalization*, using an LLM to describe images and send the description to a searchable field in your index. |
 | Import and vectorize data wizard enhancements | Portal | This wizard provides two paths for creating and populating vector indexes: Retrieval Augmented Generation (RAG) and multimodal support. Logic apps integration is through the RAG path. |
 | [Index "description" support (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2025-05-01-preview&preserve-view=true#request-body) | REST | The latest preview API adds a description to an index. A description is useful in agentic solutions, where the agent reads the description to decide whether to run a query or move on to another index. |
 | [2025-05-01-preview](/rest/api/searchservice/operation-groups?view=rest-searchservice-2025-05-01-preview&preserve-view=true) | REST | New data plane preview REST API version providing programmatic access to the preview features announced in this release. |
