+ Tables containing text. If you have binary data, consider [AI enrichment](cognitive-search-concept-intro.md) for image analysis.
+ Read permissions on Azure Storage. A "full access" connection string includes a key that gives access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Data and Reader** permissions.
+ Use a REST client, such as the [Postman app](https://www.postman.com/downloads/), if you want to formulate REST calls similar to the ones shown in this article.
## Define the data source
The data source definition specifies the source data to index, credentials, and policies for change detection. A data source is an independent resource that can be used by multiple indexers.
1. [Create or update a data source](/rest/api/searchservice/create-data-source) to set its definition:
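   A minimal sketch of the request, assuming hypothetical resource names (`my-table-datasource`, `my-table`) and a placeholder connection string; the optional `query` element filters rows within the table:

   ```http
   POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
   Content-Type: application/json
   api-key: [admin key]

   {
       "name": "my-table-datasource",
       "type": "azuretable",
       "credentials": { "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account key>;" },
       "container": { "name": "my-table", "query": "PartitionKey eq '123'" }
   }
   ```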
Indexers can connect to a table using several kinds of connections. If the connection uses a shared access signature (SAS), the SAS should have the list and read permissions on the container. For more information, see [Using Shared Access Signatures](../storage/common/storage-sas-overview.md).
> [!NOTE]
> If you use SAS credentials, you'll need to update the data source credentials periodically with renewed signatures to prevent their expiration. When SAS credentials expire, the indexer will fail with an error message similar to "Credentials provided in the connection string are invalid or have expired".
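For reference, a sketch of what a SAS-based connection string can look like in the data source credentials, assuming placeholder account and token values:

```json
{
    "credentials": { "connectionString": "TableEndpoint=https://<account>.table.core.windows.net/;SharedAccessSignature=<sas token>" }
}
```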
<a name="Performance"></a>
In a [search index](search-what-is-an-index.md), add fields to accept the content and metadata of your table entities.
1. Create a document key field ("key": true), but allow the indexer to populate it automatically. A table indexer populates the key field with concatenated partition and row keys from the table. For example, if a row’s PartitionKey is `1` and RowKey is `1_123`, then the key value is `11_123`. If the partition key is null, just the row key is used.
   If you're using the Import data wizard to create the index, the portal infers a "Key" field for the search index and uses an implicit field mapping to connect the source and destination fields. You don't have to add the field yourself, and you don't need to set up a field mapping.

   If you're using the REST APIs and you want implicit field mappings, create and name the document key field "Key" in the search index definition as shown in the previous step (`{ "name": "Key", "type": "Edm.String", "key": true, "searchable": false }`). The indexer populates the Key field automatically, with no field mappings required.

   If you don't want a field named "Key" in your search index, add an explicit field mapping in the indexer definition with the field name you want, setting the source field to "Key":
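   A sketch of that mapping in the indexer definition, assuming a hypothetical target field named `MyDocumentKey`:

   ```json
   "fieldMappings": [
       { "sourceFieldName": "Key", "targetFieldName": "MyDocumentKey" }
   ]
   ```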
:::image type="content" source="media/search-howto-indexing-tables/table.png" alt-text="Screenshot of table content in Storage browser." border="true":::
Using the same names and compatible [data types](/rest/api/searchservice/supported-data-types) minimizes the need for [field mappings](search-indexer-field-mappings.md). When names and types are the same, the indexer can determine the data path automatically.
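For example, a sketch assuming hypothetical table properties named `Description` and `Category`; index fields with the same names and string types pick up those values without explicit mappings:

```json
"fields": [
    { "name": "Key", "type": "Edm.String", "key": true, "searchable": false },
    { "name": "Description", "type": "Edm.String", "searchable": true },
    { "name": "Category", "type": "Edm.String", "filterable": true }
]
```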
## Configure and run the table indexer
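A minimal sketch of an indexer definition that connects the data source to the index, reusing the hypothetical names from earlier; schedule and batch settings are omitted:

```http
POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
    "name": "my-table-indexer",
    "dataSourceName": "my-table-datasource",
    "targetIndexName": "my-search-index"
}
```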
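To check indexer execution, call [Get Indexer Status](/rest/api/searchservice/get-indexer-status); a sketch of the request, assuming the hypothetical indexer name above:

```http
GET https://[service name].search.windows.net/indexers/my-table-indexer/status?api-version=2020-06-30
api-key: [admin key]
```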
The response includes status and the number of items processed. It should look similar to the following example:
```json
{
    "lastResult": {
        "status":"success",
        "errorMessage":null,
        "startTime":"2023-02-21T00:23:24.957Z",
        "endTime":"2023-02-21T00:36:47.752Z",
        "errors":[],
        "itemsProcessed":1599501,
        "itemsFailed":0
    },
    "executionHistory": [
        {
            "status":"success",
            "errorMessage":null,
            "startTime":"2023-02-21T00:23:24.957Z",
            "endTime":"2023-02-21T00:36:47.752Z",
            "errors":[],
            "itemsProcessed":1599501,
            "itemsFailed":0
        }
    ]
}
```
Execution history contains up to 50 of the most recently completed executions, which are sorted in reverse chronological order so that the latest execution appears first.
## Next steps
Learn more about how to [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure Storage:
+ [Index large data sets](search-howto-large-index.md)
+ [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md)
+ [Tutorial: Index JSON blobs from Azure Storage](search-semi-structured-data.md)
+ [Tutorial: Index encrypted blobs in Azure Storage](search-howto-index-encrypted-blobs.md)