# Azure Data Lake Storage Indexing (Preview)
The [DICOM® service](overview.md) automatically uploads DICOM files to Azure Data Lake Storage (ADLS) when using [STOW-RS](dicom-services-conformance-statement-v2.md#store-stow-rs). That way, users can query their data either using [DICOMweb™ APIs](dicomweb-standard-apis-with-dicom-services.md), like [WADO-RS](dicom-services-conformance-statement-v2.md#retrieve-wado-rs), or [Azure Blob/Data Lake APIs](../../storage/blobs/storage-blob-upload.md). However, with storage indexing, the DICOM service automatically indexes DICOM files after they're uploaded directly to the ADLS Gen 2 file system. Whether the files were uploaded using STOW-RS, an Azure Blob SDK, or even [AzCopy](../../storage/common/storage-use-azcopy-v10.md), they can be accessed using DICOMweb™ or ADLS Gen 2 APIs.
## Prerequisites
The DICOM service indexes an ADLS Gen 2 file system by reacting to Blob or Data Lake Storage events.
### Create the Destination for Storage Events
First, create a storage queue in the same Azure Storage Account connected to the DICOM service. The DICOM service also needs access to the queue: it must be able to both dequeue and enqueue messages, including messages for errors and for complex tasks that are broken down into smaller operations. So, make sure the same managed identity used by the DICOM service, either user-assigned or system-assigned, has the [**Storage Queue Data Contributor**](../../role-based-access-control/built-in-roles.md#storage) role assigned.
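
A minimal Azure CLI sketch of these two steps, assuming hypothetical names (`mystorageaccount`, `myresourcegroup`, `dicom-indexing-queue`) and that `<dicom-service-principal-id>` is the principal ID of the DICOM service's managed identity:

```azurecli
# Create the queue in the storage account connected to the DICOM service.
az storage queue create \
  --name dicom-indexing-queue \
  --account-name mystorageaccount \
  --auth-mode login

# Let the DICOM service's managed identity dequeue and enqueue messages.
az role assignment create \
  --assignee "<dicom-service-principal-id>" \
  --role "Storage Queue Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/queueServices/default/queues/dicom-indexing-queue"
```
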
### Publish Storage Events to the Queue
With the Storage Queue in place, events must be published from the Storage Account to an [Azure Event Grid System Topic](../../event-grid/system-topics.md) and routed to the queue using an [Azure Event Grid Subscription](../../event-grid/create-view-manage-event-subscriptions.md). Before creating the event subscription, be sure to grant the [**Storage Queue Data Message Sender**](../../role-based-access-control/built-in-roles.md#storage) role so that the subscription is authorized to enqueue messages. The event subscription can use either a [user-assigned or system-assigned managed identity from the system topic](../../event-grid/enable-identity-system-topics.md) to authenticate its operations.
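
A sketch of these steps with the Azure CLI, reusing the hypothetical names above and a system-assigned identity on the system topic:

```azurecli
# Create a system topic for the storage account with a system-assigned identity.
az eventgrid system-topic create \
  --name mystoragetopic \
  --resource-group myresourcegroup \
  --location westus2 \
  --topic-type Microsoft.Storage.StorageAccounts \
  --source "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" \
  --identity systemassigned

# Allow the topic's identity to enqueue messages on the storage queue.
principalId=$(az eventgrid system-topic show \
  --name mystoragetopic \
  --resource-group myresourcegroup \
  --query identity.principalId --output tsv)

az role assignment create \
  --assignee "$principalId" \
  --role "Storage Queue Data Message Sender" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/queueServices/default/queues/dicom-indexing-queue"
```
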
> [!NOTE]
> By default, event subscriptions send all of the subscribed event types to their designated output. However, while the DICOM service gracefully handles any message, it can only successfully process ones that meet the following criteria:
>- The message must be a Base64-encoded [CloudEvent](../../event-grid/event-schema-subscriptions.md#cloud-event-schema)
>- The event type must be one of the following event types:
>    - `Microsoft.Storage.BlobCreated`
>    - `Microsoft.Storage.BlobDeleted`
>- The file system must be the same one configured for the DICOM service
>- The file path must be within `AHDS/<workspace-name>/dicom/<dicom-service-name>[/<partition-name>]` (see the AzCopy sketch after this note)
>- The file must be a DICOM file as defined in Part 10 of the DICOM standard
>- The operation must not have been performed by the DICOM service itself
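
For example, a file copied directly into the indexed path meets these criteria. A sketch with AzCopy, reusing the hypothetical names and assuming `azcopy login` was run with an identity that can write to the file system:

```bash
# Upload a DICOM Part 10 file into the path watched by the DICOM service.
azcopy copy ./chest-ct.dcm \
  "https://mystorageaccount.blob.core.windows.net/myfilesystem/AHDS/myworkspace/dicom/mydicomservice/chest-ct.dcm"
```
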
The event subscription can be configured to filter out irrelevant data to avoid unnecessary processing and billing; a sample subscription follows the list. Make sure to configure the filter so that:
- The *subject* must begin with `/blobServices/default/containers/<file-system-name>/blobs/AHDS/<workspace-name>/dicom/<dicom-service-name>/`
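
Putting the filters together, a sketch of the subscription with the Azure CLI, again with the hypothetical names from the previous steps (the system topic's identity must already hold the **Storage Queue Data Message Sender** role):

```azurecli
# Route only relevant blob events to the storage queue, delivered as
# CloudEvents using the system topic's system-assigned identity.
az eventgrid system-topic event-subscription create \
  --name dicom-indexing \
  --resource-group myresourcegroup \
  --system-topic-name mystoragetopic \
  --event-delivery-schema cloudeventschemav1_0 \
  --included-event-types Microsoft.Storage.BlobCreated Microsoft.Storage.BlobDeleted \
  --subject-begins-with "/blobServices/default/containers/myfilesystem/blobs/AHDS/myworkspace/dicom/mydicomservice/" \
  --delivery-identity systemassigned \
  --delivery-identity-endpoint-type storagequeue \
  --delivery-identity-endpoint "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/queueServices/default/queues/dicom-indexing-queue"
```
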
## Diagnosing Issues
:::image type="content" source="media/storage-indexing/diagnostic-logs.png" alt-text="A screenshot of the Azure portal showing a Kusto Query Language (KQL) query for the AHDSDicomAuditLogs table. The example query is filtering for all logs where OperationName is the string index-storage. A table of the query results is underneath." lightbox="media/storage-indexing/diagnostic-logs.png":::
If there's an error when processing an event, the problematic event is enqueued in a "poison queue" called `<queue-name>-poison` in the same storage account. Details about every processed event can be found in the `AHDSDicomAuditLogs` and `AHDSDicomDiagnosticLogs` tables by filtering for all logs where `OperationName == 'index-storage'`. The audit logs only record when each operation started and completed, whereas the diagnostic table provides details about each operation, including any errors. Operations can be correlated across the tables using `CorrelationId`.
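
For example, a query along these lines correlates the two tables (a sketch; exact column names may vary):

```kusto
// Pair each audit record with its diagnostic details via CorrelationId.
AHDSDicomAuditLogs
| where OperationName == 'index-storage'
| join kind=inner (
    AHDSDicomDiagnosticLogs
    | where OperationName == 'index-storage'
  ) on CorrelationId
| order by TimeGenerated desc
```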