Commit d3e94c9
Edits
1 parent 99c24de commit d3e94c9

1 file changed: articles/search/search-howto-indexing-azure-tables.md (+15 -15 lines)
@@ -22,15 +22,15 @@ This article supplements [**Create an indexer**](search-howto-create-indexers.md

 + [Azure Table Storage](../storage/tables/table-storage-overview.md)

-+ Tables containing text. If you have binary data, you can include [AI enrichment](cognitive-search-concept-intro.md) for image analysis.
++ Tables containing text. If you have binary data, consider [AI enrichment](cognitive-search-concept-intro.md) for image analysis.

-+ Read permissions to access Azure Storage. A "full access" connection string includes a key that gives access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Data and Reader** permissions.
++ Read permissions on Azure Storage. A "full access" connection string includes a key that gives access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Data and Reader** permissions.

-+ A REST client, such as [Postman](search-get-started-rest.md), to send REST calls that create the data source, index, and indexer.
++ Use a REST client, such as [Postman app](https://www.postman.com/downloads/), if you want to formulate REST calls similar to the ones shown in this article.

 ## Define the data source

-The data source definition specifies the data to index, credentials, and policies for identifying changes in the data. A data source is an independent resource that can be used by multiple indexers.
+The data source definition specifies the source data to index, credentials, and policies for change detection. A data source is an independent resource that can be used by multiple indexers.

 1. [Create or update a data source](/rest/api/searchservice/create-data-source) to set its definition:
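For context, the body posted to that Create Data Source call looks roughly like the following sketch. The data source name, table name, connection values, and the optional `query` filter are placeholders, not values from this commit:

```json
{
  "name": "my-table-datasource",
  "type": "azuretable",
  "credentials": {
    "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>;"
  },
  "container": {
    "name": "my-table",
    "query": "PartitionKey eq '123'"
  }
}
```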

@@ -92,7 +92,7 @@ Indexers can connect to a table using the following connections.
 | The SAS should have the list and read permissions on the container. For more information, see [Using Shared Access Signatures](../storage/common/storage-sas-overview.md). |

 > [!NOTE]
-> If you use SAS credentials, you will need to update the data source credentials periodically with renewed signatures to prevent their expiration. If SAS credentials expire, the indexer will fail with an error message similar to "Credentials provided in the connection string are invalid or have expired".
+> If you use SAS credentials, you'll need to update the data source credentials periodically with renewed signatures to prevent their expiration. When SAS credentials expire, the indexer will fail with an error message similar to "Credentials provided in the connection string are invalid or have expired".

 <a name="Performance"></a>
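As a reference point, a SAS-based connection for a table data source generally takes this shape; the account name and token below are placeholder assumptions, so check the linked SAS article for the exact format your storage account issues:

```json
{
  "credentials": {
    "connectionString": "TableEndpoint=https://<account>.table.core.windows.net/;SharedAccessSignature=<sas-token>"
  }
}
```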

@@ -133,9 +133,9 @@ In a [search index](search-what-is-an-index.md), add fields to accept the conten

 1. Create a document key field ("key": true), but allow the indexer to populate it automatically. A table indexer populates the key field with concatenated partition and row keys from the table. For example, if a row’s PartitionKey is `1` and RowKey is `1_123`, then the key value is `11_123`. If the partition key is null, just the row key is used.

-If you're using the Import data wizard to create the index, the portal infers a "Key" field for the search index and uses implicit field mapping to connect the source and destination fields. You don't have to add the field yourself, and you don't need to set up a field mapping.
+If you're using the Import data wizard to create the index, the portal infers a "Key" field for the search index and uses an implicit field mapping to connect the source and destination fields. You don't have to add the field yourself, and you don't need to set up a field mapping.

-If you're using the REST APIs and you want implicit field mappings, create and name the document key field "Key" in the search index definition as shown in the previous step (`{ "name": "Key", "type": "Edm.String", "key": true, "searchable": false }`). The indexer populates the Key field automatically.
+If you're using the REST APIs and you want implicit field mappings, create and name the document key field "Key" in the search index definition as shown in the previous step (`{ "name": "Key", "type": "Edm.String", "key": true, "searchable": false }`). The indexer populates the Key field automatically, with no field mappings required.

 If you don't want a field named "Key" in your search index, add an explicit field mapping in the indexer definition with the field name you want, setting the source field to "Key":
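The mapping that the last context line introduces isn't shown in this diff; in the indexer definition it looks roughly like this sketch, where `MyKeyField` is a placeholder target name:

```json
"fieldMappings": [
  { "sourceFieldName": "Key", "targetFieldName": "MyKeyField" }
]
```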
@@ -152,7 +152,7 @@ In a [search index](search-what-is-an-index.md), add fields to accept the conten

 :::image type="content" source="media/search-howto-indexing-tables/table.png" alt-text="Screenshot of table content in Storage browser." border="true":::

-Using the same names and compatible [data types](/rest/api/searchservice/supported-data-types) minimizes the need for [field mappings](search-indexer-field-mappings.md).
+Using the same names and compatible [data types](/rest/api/searchservice/supported-data-types) minimizes the need for [field mappings](search-indexer-field-mappings.md). When names and types are the same, the indexer can determine the data path automatically.

 ## Configure and run the table indexer
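To illustrate the matching-names point in the hunk above: for a table with `Description` and `Price` columns, an index fields collection like the following sketch (the non-key field names are illustrative) would need no field mappings at all:

```json
"fields": [
  { "name": "Key", "type": "Edm.String", "key": true, "searchable": false },
  { "name": "Description", "type": "Edm.String", "searchable": true },
  { "name": "Price", "type": "Edm.Double" }
]
```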

@@ -214,8 +214,8 @@ The response includes status and the number of items processed. It should look s
 "lastResult": {
 "status":"success",
 "errorMessage":null,
-"startTime":"2022-02-21T00:23:24.957Z",
-"endTime":"2022-02-21T00:36:47.752Z",
+"startTime":"2023-02-21T00:23:24.957Z",
+"endTime":"2023-02-21T00:36:47.752Z",
 "errors":[],
 "itemsProcessed":1599501,
 "itemsFailed":0,
@@ -227,8 +227,8 @@ The response includes status and the number of items processed. It should look s
 {
 "status":"success",
 "errorMessage":null,
-"startTime":"2022-02-21T00:23:24.957Z",
-"endTime":"2022-02-21T00:36:47.752Z",
+"startTime":"2023-02-21T00:23:24.957Z",
+"endTime":"2023-02-21T00:36:47.752Z",
 "errors":[],
 "itemsProcessed":1599501,
 "itemsFailed":0,
@@ -244,7 +244,7 @@ Execution history contains up to 50 of the most recently completed executions, w

 ## Next steps

-You can now [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure Storage:
+Learn more about how to [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure Storage:

-+ [Index large data sets](search-howto-large-index.md)
-+ [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md)
++ [Tutorial: Index JSON blobs from Azure Storage](search-semi-structured-data.md)
++ [Tutorial: Index encrypted blobs in Azure Storage](search-howto-index-encrypted-blobs.md)
