**articles/healthcare-apis/fhir/copy-to-synapse.md**
# Copy data from FHIR service to Azure Synapse Analytics
In this article, you learn three ways to copy data from the FHIR® service in Azure Health Data Services to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
* Use the [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deploy-FhirToDatalake.md) OSS tool
* Use the [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) OSS tool
* Use `$export` and load data to Synapse using T-SQL

## FHIR to CDM pipeline generator
> [!Note]
> [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
The **FHIR to CDM pipeline generator** is a Microsoft OSS project released under the MIT License. It's a tool to generate an ADF pipeline for copying a snapshot of data from a FHIR server using the `$export` API, transforming it to CSV format, and writing it to a [CDM folder](/common-data-model/data-lake) in Azure Data Lake Storage Gen 2. The tool requires a user-created configuration file containing instructions to project and flatten FHIR resources and fields into tables. You can also follow the instructions for creating a downstream pipeline in a Synapse workspace to move data from a CDM folder to a Synapse dedicated SQL pool.
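The configuration format itself is documented in the tool's repo. As a purely hypothetical sketch (not the tool's actual schema) of the kind of projection such a file expresses:

```yaml
# Hypothetical illustration only, not the FHIR to CDM generator's real schema.
# The idea: pick a resource type, name a target table, and flatten selected
# FHIR fields into columns.
resourceType: Patient
targetTable: Patient
columns:
  - name: PatientId
    fhirPath: Patient.id
  - name: BirthDate
    fhirPath: Patient.birthDate
  - name: Gender
    fhirPath: Patient.gender
```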
This solution enables you to transform the data into tabular format as it gets written to a CDM folder. You should consider this solution if you want to transform FHIR data into a custom schema after it's extracted from the FHIR server.
Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) for installation and usage instructions.
## Loading exported data to Synapse using T-SQL
In this approach, you use the FHIR `$export` operation to copy FHIR resources into an **Azure Data Lake Gen 2 (ADL Gen 2) blob storage** in `NDJSON` format. Then, you load the data from the storage into **serverless or dedicated SQL pools** in Synapse using T-SQL. You can convert these steps into a robust data movement pipeline using [Synapse pipelines](../../synapse-analytics/get-started-pipelines.md).
:::image type="content" source="media/export-data/export-azure-storage-option.png" alt-text="Azure storage to Synapse using $export." lightbox="media/export-data/export-azure-storage-option.png":::
After configuring your FHIR server, you can export your data to the Azure storage account. For example, a group-level `$export` call looks like the following.

```rest
https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}
```
You can also use the `_type` parameter in the preceding `$export` call to restrict the resources that you want to export. For example, the following call exports only `Patient`, `MedicationRequest`, and `Observation` resources:
```rest
https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}&
_type=Patient,MedicationRequest,Observation
```
For more information on the different parameters supported, check out our `$export` documentation.
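Note that `$export` runs asynchronously: the initial call returns `202 Accepted` with a `Content-Location` header that points to a status endpoint. Polling that endpoint returns `202` while the job is running, and `200` with the output file locations once it completes. A sketch of the polling call (use the exact URL returned in the header; the path here is illustrative):

```rest
GET https://{{FHIR service base URL}}/_operations/export/{{exportId}}
Accept: application/fhir+json
```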
#### Creating a Synapse workspace
Before using Synapse, you need a Synapse workspace. Create an Azure Synapse Analytics service in the Azure portal by following this [step-by-step guide](../../synapse-analytics/get-started-create-workspace.md). You need an `ADLSGEN2` account to create a workspace. Your Azure Synapse workspace uses this storage account to store your Synapse workspace data.
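If you prefer scripting, you can also create a workspace with the Azure CLI. A minimal sketch, assuming the resource group and the `ADLSGEN2` account already exist (all names and the password are placeholders):

```azurecli
az synapse workspace create \
  --name my-synapse-workspace \
  --resource-group my-resource-group \
  --storage-account mystorageaccount \
  --file-system myfilesystem \
  --sql-admin-login-user sqladminuser \
  --sql-admin-login-password "<strong-password>" \
  --location eastus
```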
After creating a workspace, you can view your workspace in Synapse Studio by signing into your workspace on [https://web.azuresynapse.net](https://web.azuresynapse.net), or launching Synapse Studio in the Azure portal.
Now that you have a linked service between your ADL Gen 2 storage and Synapse, you're ready to use a Synapse SQL pool to query the exported FHIR data.
#### Decide between serverless and dedicated SQL pool
Azure Synapse Analytics offers two different SQL pools: serverless SQL pool and dedicated SQL pool. Serverless SQL pool gives the flexibility of querying data directly in the blob storage using the serverless SQL endpoint without any resource provisioning. Dedicated SQL pool has the processing power for high performance and concurrency, and is recommended for enterprise-scale data warehousing capabilities. For more details on the two SQL pools, check out the [Synapse documentation page](../../synapse-analytics/sql/overview-architecture.md) on SQL architecture.
#### Using serverless SQL pool
Each `NDJSON` line can be read as a single `NVARCHAR(MAX)` value and parsed with `OPENJSON`. A query along the following lines does the flattening (the storage URL and column projections are illustrative):

```sql
SELECT RES.*
FROM OPENROWSET(
    -- {account} is a placeholder for your ADLS Gen 2 storage account
    BULK 'https://{account}.dfs.core.windows.net/fhir/Patient.ndjson',
    FORMAT = 'CSV',
    -- 0x0b (vertical tab) is unlikely to occur in the data, so each
    -- NDJSON line is read whole into a single column
    FIELDTERMINATOR = '0x0b',
    FIELDQUOTE = '0x0b'
) WITH (doc NVARCHAR(MAX)) AS rows
CROSS APPLY OPENJSON(doc)
WITH (
    ResourceId VARCHAR(64) '$.id',
    FullName VARCHAR(100) '$.name[0].text',
    BirthDate VARCHAR(20) '$.birthDate',
    Gender VARCHAR(20) '$.gender'
) AS RES
```
In the preceding query, the `OPENROWSET` function accesses files in Azure Storage, and `OPENJSON` parses JSON text and returns the JSON input properties as rows and columns. Every time this query is executed, the serverless SQL pool reads the file from the blob storage, parses the JSON, and extracts the fields.
You can also materialize the results in Parquet format in an [External Table](../../synapse-analytics/sql/develop-tables-external-tables.md) to get better query performance. The following CETAS sketch shows the pattern (storage locations and projected columns are illustrative).
```sql
-- Create an external data source pointing to where the Parquet files
-- will be written ({account} and {container} are placeholders)
CREATE EXTERNAL DATA SOURCE [ParquetDestination] WITH (
    LOCATION = 'https://{account}.dfs.core.windows.net/{container}'
);
GO

-- Parquet file format for the external table
CREATE EXTERNAL FILE FORMAT [ParquetFormat] WITH (FORMAT_TYPE = PARQUET);
GO

-- CETAS: materialize the flattened rows as Parquet, then query the
-- external table instead of re-parsing the NDJSON on every query
CREATE EXTERNAL TABLE [dbo].[FhirPatient] WITH (
    LOCATION = 'patientParquet/',
    DATA_SOURCE = [ParquetDestination],
    FILE_FORMAT = [ParquetFormat]
) AS
SELECT RES.*
FROM OPENROWSET(
    BULK 'https://{account}.dfs.core.windows.net/fhir/Patient.ndjson',
    FORMAT = 'CSV', FIELDTERMINATOR = '0x0b', FIELDQUOTE = '0x0b'
) WITH (doc NVARCHAR(MAX)) AS rows
CROSS APPLY OPENJSON(doc)
WITH (
    ResourceId VARCHAR(64) '$.id',
    Gender VARCHAR(20) '$.gender'
) AS RES;
```
#### Using dedicated SQL pool

Dedicated SQL pool supports managed tables and a hierarchical cache for in-memory performance. You can import big data with simple T-SQL queries, and then use the power of the distributed query engine to run high-performance analytics.
The simplest and fastest way to load data from your storage to a dedicated SQL pool is to use the **`COPY`** command in T-SQL, which can read CSV, Parquet, and ORC files. As in the following example query, use the `COPY` command to load the `NDJSON` rows into a tabular structure.
```sql
-- Create table with HEAP, which is not indexed and does not have a column width limitation of NVARCHAR(4000)
CREATE TABLE StagingPatient (
    Resource NVARCHAR(MAX)
) WITH (HEAP)
GO

-- Read each NDJSON line into a single column; FIELDTERMINATOR and
-- FIELDQUOTE are chosen so they never match, so the whole JSON line
-- lands in one value ({account} and {container} are placeholders)
COPY INTO StagingPatient
FROM 'https://{account}.blob.core.windows.net/{container}/Patient.ndjson'
WITH (
    FILE_TYPE = 'CSV',
    ROWTERMINATOR = '0x0a',
    FIELDQUOTE = '',
    FIELDTERMINATOR = '0x00'
)
GO
```
Once you have the JSON rows in the preceding `StagingPatient` table, you can create different tabular formats of the data by using the `OPENJSON` function and storing the results in tables. Here's a sample SQL query to create a `Patient` table by extracting a few fields from the `Patient` resource:
```sql
SELECT RES.*
INTO Patient
FROM StagingPatient
-- The JSON paths are illustrative; extend them with the fields you need
CROSS APPLY OPENJSON(Resource)
WITH (
    ResourceId VARCHAR(64) '$.id',
    FullName VARCHAR(100) '$.name[0].text',
    BirthDate VARCHAR(20) '$.birthDate',
    Gender VARCHAR(20) '$.gender'
) AS RES
GO
```

Next, you can learn about how you can de-identify your FHIR data while exporting it to Synapse.
**articles/healthcare-apis/fhir/customer-managed-keys.md**
# Best practices for using customer-managed keys for the FHIR service
Customer-managed keys (CMK) are encryption keys that you create and manage in your own key store. By using CMK, you have more flexibility and control over the encryption and access of your organization’s data. You use [Azure Key Vault](/azure/key-vault/) to create and manage CMK, and then use the keys to encrypt the data stored by the FHIR® service.
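For example, a key suitable for use as a CMK can be created in Key Vault with the Azure CLI. A minimal sketch (the vault and key names are placeholders):

```azurecli
az keyvault key create \
  --vault-name my-key-vault \
  --name my-cmk \
  --kty RSA \
  --size 2048
```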
## Rotate keys often
To rotate the key by generating a new version of it, use the Azure CLI `az keyvault key` commands, for example `az keyvault key rotate`.
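A minimal sketch (the vault and key names are placeholders):

```azurecli
az keyvault key rotate \
  --vault-name my-key-vault \
  --name my-cmk
```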
## Update the FHIR service after changing a managed identity
If you change the managed identity in any way, such as moving your FHIR service to a different tenant or subscription, the FHIR service isn't able to access your keys. You must update the service manually with an ARM template deployment. For steps, see [Use an ARM template to update the encryption key](configure-customer-managed-keys.md#update-the-key-by-using-an-arm-template).
**articles/healthcare-apis/fhir/davinci-drug-formulary-tutorial.md**
# Tutorial for Da Vinci Drug Formulary
In this tutorial, we'll walk through setting up the FHIR® service in Azure Health Data Services (FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [Da Vinci Payer Data Exchange US Drug Formulary Implementation Guide](http://hl7.org/fhir/us/Davinci-drug-formulary/).
## Touchstone capability statement
The first test focuses on testing the FHIR service against the [Da Vinci Drug Formulary capability statement](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/00-Capability&activeOnly=false&contentEntry=TEST_SCRIPTS). If you run this test without any updates, the test fails due to missing search parameters and missing profiles.
### Define search parameters
As part of the Da Vinci Drug Formulary IG, you'll need to define three [new search parameters](how-to-do-custom-search.md) for the FormularyDrug resource. All three of these are tested in the capability statement.
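You create a search parameter by saving a `SearchParameter` resource to the FHIR service. The following sketch shows the shape of such a request only; the actual `url`, `code`, `base`, `type`, and `expression` values must come from the IG's definitions:

```rest
POST https://{{FHIR service base URL}}/SearchParameter
content-type: application/json

{
  "resourceType": "SearchParameter",
  "url": "http://hl7.org/fhir/us/davinci-drug-formulary/SearchParameter/DrugName",
  "name": "DrugName",
  "status": "active",
  "description": "Search the formulary by drug name",
  "code": "DrugName",
  "base": [ "MedicationKnowledge" ],
  "type": "string",
  "expression": "MedicationKnowledge.code.coding.display"
}
```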
The rest of the search parameters needed for the Da Vinci Drug Formulary IG are defined by the base specification and are already available in the FHIR service without any other updates.
### Store profiles
Outside of defining search parameters, the only other update you need to make to pass the capability statement test is to store the required profiles.
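Profiles are stored the same way as any other resource: save each `StructureDefinition` from the IG's published package to the FHIR service. A sketch of the request shape (in practice, the body is the IG's full `StructureDefinition`, not this skeleton):

```rest
POST https://{{FHIR service base URL}}/StructureDefinition
content-type: application/json

{
  "resourceType": "StructureDefinition",
  "url": "http://hl7.org/fhir/us/davinci-drug-formulary/StructureDefinition/usdf-FormularyDrug",
  "name": "FormularyDrug",
  "status": "active",
  "kind": "resource",
  "abstract": false,
  "type": "MedicationKnowledge",
  "baseDefinition": "http://hl7.org/fhir/StructureDefinition/MedicationKnowledge",
  "derivation": "constraint"
}
```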
### Sample rest file
To assist with creation of these search parameters and profiles, we have the [Da Vinci Formulary](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary.http) sample HTTP file on the open-source site that includes all the steps previously outlined in a single file. Once you've uploaded all the necessary profiles and search parameters, you can run the capability statement test in Touchstone. You should get a successful run:
:::image type="content" source="media/centers-medicare-services-tutorials/davinci-test-script-execution.png" alt-text="Da Vinci test script execution.":::
## Touchstone query test
The second test is the [query capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/01-Query&activeOnly=false&contentEntry=TEST_SCRIPTS). This test validates that you can search for specific Coverage Plan and Drug resources using various parameters. The best path would be to test against resources that you already have in your database, but we also have the [Da VinciFormulary_Sample_Resources](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary_Sample_Resources.http) HTTP file available with sample resources pulled from the examples in the IG, which you can use to create the resources and test against.
:::image type="content" source="media/centers-medicare-services-tutorials/davinci-test-execution-results.png" alt-text="Da Vinci test execution results.":::
In this tutorial, we walked through how to pass the Da Vinci Payer Data Exchange US Drug Formulary tests in Touchstone.
>[!div class="nextstepaction"]
>[Da Vinci PDex](davinci-pdex-tutorial.md)