Commit 42d4eb0

Merge pull request #271693 from arv100kri/arjagann/adls-gen2-docs
Update ADLS Gen 2 public documentation to exclude container SAS support
2 parents 1e8342d + 41e87de

File tree

1 file changed: +6 -5 lines

articles/search/search-howto-index-azure-data-lake-storage.md

Lines changed: 6 additions & 5 deletions
@@ -137,11 +137,6 @@ Indexers can connect to a blob container using the following connections.
  | `{ "connectionString" : "BlobEndpoint=https://<your account>.blob.core.windows.net/;SharedAccessSignature=?sv=2016-05-31&sig=<the signature>&spr=https&se=<the validity end time>&srt=co&ss=b&sp=rl;" }` |
  | The SAS should have the list and read permissions on containers and objects (blobs in this case). |

- | Container shared access signature |
- |-----------------------------------|
- | `{ "connectionString" : "ContainerSharedAccessUri=https://<your storage account>.blob.core.windows.net/<container name>?sv=2016-05-31&sr=c&sig=<the signature>&se=<the validity end time>&sp=rl;" }` |
- | The SAS should have the list and read permissions on the container. For more information, see [Using Shared Access Signatures](../storage/common/storage-sas-overview.md). |
-
  > [!NOTE]
  > If you use SAS credentials, you will need to update the data source credentials periodically with renewed signatures to prevent their expiration. If SAS credentials expire, the indexer will fail with an error message similar to "Credentials provided in the connection string are invalid or have expired".

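For reference, a blob SAS connection string like the one above is supplied through the data source definition's `credentials` property. A minimal sketch of such a request follows; the service name, admin key, data source name, container, and SAS values are placeholders:

```http
POST https://[service name].search.windows.net/datasources?api-version=2023-11-01
Content-Type: application/json
api-key: [admin key]

{
    "name" : "adlsgen2-datasource",
    "type" : "adlsgen2",
    "credentials" : { "connectionString" : "BlobEndpoint=https://<your account>.blob.core.windows.net/;SharedAccessSignature=?sv=2016-05-31&sig=<the signature>&spr=https&se=<the validity end time>&srt=co&ss=b&sp=rl;" },
    "container" : { "name" : "<your container>" }
}
```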
@@ -311,6 +306,12 @@ PUT /indexers/[indexer name]?api-version=2023-11-01
  |"failOnUnprocessableDocument" | true or false | If the indexer is unable to process a document of an otherwise supported content type, specify whether to continue or fail the job. |
  | "indexStorageMetadataOnlyForOversizedDocuments" | true or false | Oversized blobs are treated as errors by default. If you set this parameter to true, the indexer tries to index the blob's metadata even if the content cannot be indexed. For limits on blob size, see [service limits](search-limits-quotas-capacity.md). |

+ ## Limitations
+
+ 1. Unlike blob indexers, ADLS Gen2 indexers cannot use container-level SAS tokens to enumerate and index content from a storage account. This is because the indexer checks whether the storage account has hierarchical namespaces enabled by calling the [Filesystem - Get Properties API](https://learn.microsoft.com/rest/api/storageservices/datalakestoragegen2/filesystem/get-properties). For storage accounts that don't have hierarchical namespaces enabled, use [blob indexers](search-howto-indexing-azure-blob-storage.md) instead to ensure performant enumeration of blobs.
+
+ 2. If the property `metadata_storage_path` is mapped as the index key field, blobs aren't guaranteed to be reindexed after a directory rename. To reindex the blobs in the renamed directories, update the `LastModified` timestamps on all of them.
+
  ## Next steps

  You can now [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure Storage:
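The two configuration parameters in the table above are set under `parameters.configuration` of the indexer definition. A minimal sketch, with placeholder names and only these two settings shown:

```http
PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2023-11-01
Content-Type: application/json
api-key: [admin key]

{
    "name" : "[indexer name]",
    "dataSourceName" : "[data source name]",
    "targetIndexName" : "[index name]",
    "parameters" : {
        "configuration" : {
            "failOnUnprocessableDocument" : false,
            "indexStorageMetadataOnlyForOversizedDocuments" : true
        }
    }
}
```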

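The hierarchical namespace check described in the first limitation is made against the Data Lake Storage (`dfs`) endpoint rather than the blob endpoint. Per the linked Filesystem - Get Properties API, the request has roughly this shape (account and filesystem names are placeholders; authorization headers are omitted):

```http
HEAD https://<storage account>.dfs.core.windows.net/<filesystem>?resource=filesystem HTTP/1.1
```

A container-scoped SAS issued for blob access doesn't cover this check, which is why the other supported connection options are needed for ADLS Gen2 indexers.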
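For the second limitation, one way to bump a blob's `LastModified` timestamp without changing its content is to re-set its metadata; the Set Blob Metadata operation updates the last-modified time. A sketch with placeholder account, path, and SAS values (`x-ms-meta-touched` is an arbitrary example key, and note that this operation replaces any existing metadata on the blob):

```http
PUT https://<storage account>.blob.core.windows.net/<renamed directory path>/<blob name>?comp=metadata&<SAS token> HTTP/1.1
x-ms-meta-touched: reindex
```

Repeat this for each blob under the renamed directory so the indexer picks them up on its next run.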