Commit 1810b5e

Merge pull request #909 from MicrosoftDocs/main

10/18/2024 AM Publish

2 parents: bee27f1 + e077e11

11 files changed: +116 −109 lines

articles/ai-services/document-intelligence/versioning/v3-1-migration-guide.md

1 addition, 1 deletion

```diff
@@ -131,7 +131,7 @@ Starting from v3.0, [Document Intelligence REST API](../quickstarts/get-started-

 ## Changes to the REST API endpoints

-The v3.1 REST API combines the analysis operations for layout analysis, prebuilt models, and custom models into a single pair of operations by assigning **`documentModels`** and **`modelId`** to the layout analysis (../prebuilt-layout) and prebuilt models.
+The v3.1 REST API combines the analysis operations for layout analysis, prebuilt models, and custom models into a single pair of operations by assigning **`documentModels`** and **`modelId`** to the [layout analysis](../prebuilt/layout.md) and prebuilt models.

 ### POST request
```
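The consolidated endpoint pattern this hunk describes can be sketched as follows. This is a hedged illustration, not the article's own code: it assumes the documented `documentModels/{modelId}:analyze` route and the v3.1 GA `api-version` of `2023-07-31`; `my-resource` is a placeholder resource name.

```python
# Sketch: building the single v3.1 analyze URL that serves layout analysis,
# prebuilt models, and custom models alike, keyed only by modelId.
def build_analyze_url(endpoint: str, model_id: str,
                      api_version: str = "2023-07-31") -> str:
    """Return the POST URL for the shared documentModels analyze operation."""
    return (f"{endpoint}/formrecognizer/documentModels/"
            f"{model_id}:analyze?api-version={api_version}")

# The same operation pair covers layout and prebuilt models:
layout_url = build_analyze_url(
    "https://my-resource.cognitiveservices.azure.com", "prebuilt-layout")
invoice_url = build_analyze_url(
    "https://my-resource.cognitiveservices.azure.com", "prebuilt-invoice")
```

Only `modelId` changes between the two calls, which is the point of the v3.1 consolidation.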

articles/ai-services/speech-service/high-definition-voices.md

1 addition, 1 deletion

```diff
@@ -47,7 +47,7 @@ Here's a comparison of features between Azure AI Speech HD voices, Azure OpenAI
 | **Deployment options** | Cloud only | Cloud only | Cloud, embedded, hybrid, and containers. |
 | **Real-time or batch synthesis** | Real-time only | Real-time and batch synthesis | Real-time and batch synthesis |
 | **Latency** | Less than 300 ms | Greater than 500 ms | Less than 300 ms |
-| **Sample rate of synthesized audio** | 8, 16, 22.05, 24, 44.1, and 48 kHz | 8, 16, 24, and 48 kHz | 8, 16, 22.05, 24, 44.1, and 48 kHz |
+| **Sample rate of synthesized audio** | 8, 16, 24, and 48 kHz | 8, 16, 24, and 48 kHz | 8, 16, 24, and 48 kHz |
 | **Speech output audio format** | opus, mp3, pcm, truesilk | opus, mp3, pcm, truesilk | opus, mp3, pcm, truesilk |

 ## Supported Azure AI Speech HD voices
```

articles/search/cognitive-search-concept-image-scenarios.md

52 additions, 52 deletions (large diffs are not rendered by default)

articles/search/cognitive-search-output-field-mapping.md

19 additions, 19 deletions

```diff
@@ -1,5 +1,5 @@
 ---
-title: Map enrichments in indexers
+title: Map enriched output to fields in a search index
 titleSuffix: Azure AI Search
 description: Export the enriched content created by a skillset by mapping its output fields to fields in a search index.
 author: HeidiSteen
@@ -8,12 +8,12 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: how-to
-ms.date: 07/30/2024
+ms.date: 10/15/2024
 ---

 # Map enriched output to fields in a search index in Azure AI Search

-![Indexer Stages](./media/cognitive-search-output-field-mapping/indexer-stages-output-field-mapping.png "indexer stages")
+:::image type="content" source="media/cognitive-search-output-field-mapping/indexer-stages-output-field-mapping.png" alt-text="Diagram of the Indexer Stages with Output Field Mappings highlighted.":::

 This article explains how to set up *output field mappings*, defining a data path between in-memory data generated during [skillset processing](cognitive-search-concept-intro.md), and target fields in a search index. During indexer execution, skills-generated information exists in memory only. To persist this information in a search index, you need to tell the indexer where to send the data.
```

```diff
@@ -35,14 +35,14 @@ In contrast with a [`fieldMappings`](search-indexer-field-mappings.md) definitio

 - Indexer, index, data source, and skillset.

-- Index fields must be simple or top-level fields. You can't output to a [complex type](search-howto-complex-data-types.md), but if you have a complex type, you can use an output field definition to flatten parts of the complex type and send them to a collection in a search index.
+- Index fields must be simple or top-level fields. You can't output to a [complex type](search-howto-complex-data-types.md). However, if you have a complex type, you can use an output field definition to flatten parts of the complex type and send them to a collection in a search index.

 ## When to use an output field mapping

 Output field mappings are required if your indexer has an attached [skillset](cognitive-search-working-with-skillsets.md) that creates new information that you want in your index. Examples include:

 - Vectors from embedding skills
-- OCR text from image skills
+- Optical character recognition (OCR) text from image skills
 - Locations, organizations, or people from entity recognition skills

 Output field mappings can also be used to:
```
```diff
@@ -95,13 +95,13 @@ You can use the REST API or an Azure SDK to define output field mappings.

 | Property | Description |
 |----------|-------------|
-| sourceFieldName | Required. Specifies a path to enriched content. An example might be `/document/content`. See [Reference enrichments in an Azure AI Search skillset](cognitive-search-concept-annotations-syntax.md) for path syntax and examples. |
+| sourceFieldName | Required. Specifies a path to enriched content. An example might be */document/content*. See [Reference enrichments in an Azure AI Search skillset](cognitive-search-concept-annotations-syntax.md) for path syntax and examples. |
 | targetFieldName | Optional. Specifies the search field that receives the enriched content. Target fields must be top-level simple fields or collections. It can't be a path to a subfield in a complex type. If you want to retrieve specific nodes in a complex structure, you can [flatten individual nodes](#flattening-information-from-complex-types) in memory, and then send the output to a string collection in your index. |
 | mappingFunction | Optional. Adds extra processing provided by [mapping functions](search-indexer-field-mappings.md#mappingFunctions) supported by indexers. For enrichment nodes, encoding and decoding are the most commonly used functions. |

 1. The `targetFieldName` is always the name of the field in the search index.

-1. The `sourceFieldName` is a path to a node in the enriched document. It's the output of a skill. The path always starts with `/document`, and if you're indexing from a blob, the second element of the path is `/content`. The third element is the value produced by the skill. For more information and examples, see [Reference enrichments in an Azure AI Search skillset](cognitive-search-concept-annotations-syntax.md).
+1. The `sourceFieldName` is a path to a node in the enriched document. It's the output of a skill. The path always starts with */document*, and if you're indexing from a blob, the second element of the path is */content*. The third element is the value produced by the skill. For more information and examples, see [Reference enrichments in an Azure AI Search skillset](cognitive-search-concept-annotations-syntax.md).

 This example adds entities and sentiment labels extracted from a blob's content property to fields in a search index.
```
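The mapping shape this hunk documents (required `sourceFieldName` path rooted at `/document`, optional top-level `targetFieldName`) can be sketched as plain dictionaries. This is a hedged illustration only: the node names `organizations` and `sentiment` and the index fields `orgs` and `sentiment` are illustrative stand-ins for the article's elided JSON, not its actual example.

```python
# Sketch (illustrative names): assembling outputFieldMappings that route
# skill outputs from the in-memory enriched document into index fields.
output_field_mappings = [
    {
        # Path into the enriched document tree: starts at /document,
        # /content is the blob's content node, and the last element
        # is the node a skill wrote (hypothetical names here).
        "sourceFieldName": "/document/content/organizations",
        "targetFieldName": "orgs",  # must be a top-level simple field or collection
    },
    {
        "sourceFieldName": "/document/content/sentiment",
        "targetFieldName": "sentiment",
    },
]

# Every source path is rooted at /document, per the numbered list above.
assert all(m["sourceFieldName"].startswith("/document/")
           for m in output_field_mappings)
```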

```diff
@@ -136,7 +136,7 @@ You can use the REST API or an Azure SDK to define output field mappings.

 ### [**.NET SDK (C#)**](#tab/csharp)

-In the Azure SDK for .NET, use the [OutputFieldMappingEntry](/dotnet/api/azure.search.documents.indexes.models.outputfieldmappingentry) class that provides "Name" and "TargetFieldName" properties and an optional "MappingFunction" reference.
+In the Azure SDK for .NET, use the [OutputFieldMappingEntry](/dotnet/api/azure.search.documents.indexes.models.outputfieldmappingentry) class that provides `Name` and `TargetFieldName` properties and an optional `MappingFunction` reference.

 Specify output field mappings when constructing the indexer, or later by directly setting [SearchIndexer.OutputFieldMappings](/dotnet/api/azure.search.documents.indexes.models.searchindexer.outputfieldmappings). The following C# example sets the output field mappings when constructing an indexer.
```

```diff
@@ -177,7 +177,7 @@ Assume a skillset that generates embeddings for a vector field, and an index tha
 ]
 ```

-The source field path is skill output. In this example, the output is `text_vector`. Target name is an optional property. If you don't give the output mapping a target name, the path would be `embedding` or more precisely, `/document/content/embedding`.
+The source field path is skill output. In this example, the output is *text_vector*. Target name is an optional property. If you don't give the output mapping a target name, the path would be *embedding* or more precisely, */document/content/embedding*.

 ```json
 {
```
```diff
@@ -254,7 +254,7 @@ Here's an example of a document in Azure Cosmos DB with nested JSON:
 }
 ```

-If you wanted to fully index the above source document, you'd create an index definition where the field names, levels, and types are reflected as a complex type. Because field mappings aren't supported for complex types in the search index, your index definition must mirror the source document.
+If you wanted to fully index this source document, you'd create an index definition where the field names, levels, and types are reflected as a complex type. Because field mappings aren't supported for complex types in the search index, your index definition must mirror the source document.

 ```json
 {
```
{
```diff
@@ -283,7 +283,7 @@ If you wanted to fully index the above source document, you'd create an index de
 }
 ```

-Here's a sample indexer definition that executes the import (notice there are no field mappings and no skillset).
+Here's a sample indexer definition that executes the import. Notice there are no field mappings and no skillset.

 ```json
 {
```
{
```diff
@@ -304,7 +304,7 @@ The result is the following sample search document, similar to the original in A
       "value": [
         {
           "@search.score": 1,
-          "id": "240a98f5-90c9-406b-a8c8-f50ff86f116c",
+          "id": "11bb11bb-cc22-dd33-ee44-55ff55ff55ff",
           "palette": "primary colors",
           "colors": [
             {
```
{
```diff
@@ -338,9 +338,9 @@ The result is the following sample search document, similar to the original in A

 An alternative rendering in a search index is to flatten individual nodes in the source's nested structure into a string collection in a search index.

-To accomplish this task, you'll need an `outputFieldMappings` that maps an in-memory node to a string collection in the index. Although output field mappings primarily apply to skill outputs, you can also use them to address nodes after ["document cracking"](search-indexer-overview.md#stage-1-document-cracking) where the indexer opens a source document and reads it into memory.
+To accomplish this task, you'll need an `outputFieldMappings` that maps an in-memory node to a string collection in the index. Although output field mappings primarily apply to skill outputs, you can also use them to address nodes after [document cracking](search-indexer-overview.md#stage-1-document-cracking) where the indexer opens a source document and reads it into memory.

-Below is a sample index definition, using string collections to receive flattened output:
+The following sample index definition uses string collections to receive flattened output:

 ```json
 {
```
{
```diff
@@ -378,14 +378,14 @@ Here's the sample indexer definition, using `outputFieldMappings` to associate t
 }
 ```

-Results from the above definition are as follows. Simplifying the structure loses context in this case. There's no longer any associations between a given color and the mediums it's available in. However, depending on your scenario, a result similar to the one shown below might be exactly what you need.
+Results from the definition are as follows. Simplifying the structure loses context in this case. There's no longer any associations between a given color and the mediums it's available in. However, depending on your scenario, a result similar to the following example might be exactly what you need.

 ```json
 {
     "value": [
         {
             "@search.score": 1,
-            "id": "240a98f5-90c9-406b-a8c8-f50ff86f116c",
+            "id": "11bb11bb-cc22-dd33-ee44-55ff55ff55ff",
             "palette": "primary colors",
             "color_names": [
                 "blue",
```

405-
## See also
405+
## Related content
406406

407-
+ [Define field mappings in a search indexer](search-indexer-field-mappings.md)
407+
+ [Field mappings and transformations](search-indexer-field-mappings.md)
408408
+ [AI enrichment overview](cognitive-search-concept-intro.md)
409-
+ [Skillset overview](cognitive-search-working-with-skillsets.md)
409+
+ [Skillset concepts](cognitive-search-working-with-skillsets.md)
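The flattening behavior these last hunks describe — pulling nodes out of a nested structure into independent string collections, which loses the association between sibling values — can be illustrated with a small sketch. It follows the article's palette/colors example; `color_names` appears in the diff, while `color_mediums` and the `medium` node are assumed names for illustration.

```python
# Sketch: what flattening nested nodes into string collections does,
# modeled on the palette/colors document shown in the diff above.
source_doc = {
    "id": "11bb11bb-cc22-dd33-ee44-55ff55ff55ff",
    "palette": "primary colors",
    "colors": [  # nested complex type in the source document
        {"name": "blue", "medium": ["paint", "print"]},
        {"name": "red", "medium": ["paint"]},
    ],
}

# Each output field mapping extracts one node from every array element,
# producing independent flat collections; the link between a color and
# the mediums it's available in is lost.
flattened = {
    "id": source_doc["id"],
    "palette": source_doc["palette"],
    "color_names": [c["name"] for c in source_doc["colors"]],
    "color_mediums": [m for c in source_doc["colors"] for m in c["medium"]],
}
```

Nothing in `flattened` records that "print" belonged to "blue", which is exactly the context loss the article calls out.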
