
Commit c2dfe61

Merge pull request #286317 from KendalBond007/FHIReditsBatch5_Sep24
FHIReditsBatch5_Sep24
2 parents 2d1920a + bd73da7 commit c2dfe61

5 files changed: +32, -34 lines changed

articles/healthcare-apis/fhir/copy-to-synapse.md

Lines changed: 11 additions & 11 deletions
@@ -10,7 +10,7 @@ ms.author: irenejoseph
 ---
 # Copy data from FHIR service to Azure Synapse Analytics
 
-In this article, you’ll learn three ways to copy data from the FHIR service in Azure Health Data Services to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
+In this article, you learn three ways to copy data from the FHIR® service in Azure Health Data Services to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
 
 * Use the [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deploy-FhirToDatalake.md) OSS tool
 * Use the [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) OSS tool
@@ -32,15 +32,15 @@ Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipel
 > [!Note]
 > [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
 
-The **FHIR to CDM pipeline generator** is a Microsoft OSS project released under MIT License. It's a tool to generate an ADF pipeline for copying a snapshot of data from a FHIR server using $export API, transforming it to csv format, and writing to a [CDM folder](/common-data-model/data-lake) in Azure Data Lake Storage Gen 2. The tool requires a user-created configuration file containing instructions to project and flatten FHIR Resources and fields into tables. You can also follow the instructions for creating a downstream pipeline in Synapse workspace to move data from CDM folder to Synapse dedicated SQL pool.
+The **FHIR to CDM pipeline generator** is a Microsoft OSS project released under MIT License. It's a tool to generate an ADF pipeline for copying a snapshot of data from a FHIR server using $export API, transforming it to csv format, and writing to a [CDM folder](/common-data-model/data-lake) in Azure Data Lake Storage Gen 2. The tool requires a user-created configuration file containing instructions to project and flatten FHIR Resources and fields into tables. You can also follow the instructions for creating a downstream pipeline in Synapse workspace to move data from a CDM folder to a Synapse dedicated SQL pool.
 
 This solution enables you to transform the data into tabular format as it gets written to CDM folder. You should consider this solution if you want to transform FHIR data into a custom schema after it's extracted from the FHIR server.
 
 Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) for installation and usage instructions.
 
 ## Loading exported data to Synapse using T-SQL
 
-In this approach, you use the FHIR `$export` operation to copy FHIR resources into a **Azure Data Lake Gen 2 (ADL Gen 2) blob storage** in `NDJSON` format. Subsequently, you load the data from the storage into **serverless or dedicated SQL pools** in Synapse using T-SQL. You can convert these steps into a robust data movement pipeline using [Synapse pipelines](../../synapse-analytics/get-started-pipelines.md).
+In this approach, you use the FHIR `$export` operation to copy FHIR resources into a **Azure Data Lake Gen 2 (ADL Gen 2) blob storage** in `NDJSON` format. Then, you load the data from the storage into **serverless or dedicated SQL pools** in Synapse using T-SQL. You can convert these steps into a robust data movement pipeline using [Synapse pipelines](../../synapse-analytics/get-started-pipelines.md).
 
 :::image type="content" source="media/export-data/export-azure-storage-option.png" alt-text="Azure storage to Synapse using $export." lightbox="media/export-data/export-azure-storage-option.png":::

@@ -62,7 +62,7 @@ After configuring your FHIR server, you can follow the [documentation](./export-
 https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}
 ```
 
-You can also use `_type` parameter in the `$export` call above to restrict the resources that you want to export. For example, the following call will export only `Patient`, `MedicationRequest`, and `Observation` resources:
+You can also use `_type` parameter in the preceding `$export` call to restrict the resources that you want to export. For example, the following call exports only `Patient`, `MedicationRequest`, and `Observation` resources:
 
 ```rest
 https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}&
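The hunk context ends mid-example at the trailing `&`. Based on the resources named in the changed sentence, the completed call presumably continues with the `_type` list, along these lines (a sketch, not the article's verbatim text):

```rest
https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}&
_type=Patient,MedicationRequest,Observation
```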
@@ -75,7 +75,7 @@ For more information on the different parameters supported, check out our `$expo
 
 #### Creating a Synapse workspace
 
-Before using Synapse, you'll need a Synapse workspace. You'll create an Azure Synapse Analytics service on Azure portal. More step-by-step guide can be found [here](../../synapse-analytics/get-started-create-workspace.md). You need an `ADLSGEN2` account to create a workspace. Your Azure Synapse workspace will use this storage account to store your Synapse workspace data.
+Before using Synapse, you'll need a Synapse workspace. Create an Azure Synapse Analytics service on Azure portal. More step-by-step guidance can be found [here](../../synapse-analytics/get-started-create-workspace.md). You need an `ADLSGEN2` account to create a workspace. Your Azure Synapse workspace will use this storage account to store your Synapse workspace data.
 
 After creating a workspace, you can view your workspace in Synapse Studio by signing into your workspace on [https://web.azuresynapse.net](https://web.azuresynapse.net), or launching Synapse Studio in the Azure portal.

@@ -92,7 +92,7 @@ Now that you have a linked service between your ADL Gen 2 storage and Synapse, y
 
 #### Decide between serverless and dedicated SQL pool
 
-Azure Synapse Analytics offers two different SQL pools, serverless SQL pool and dedicated SQL pool. Serverless SQL pool gives the flexibility of querying data directly in the blob storage using the serverless SQL endpoint without any resource provisioning. Dedicated SQL pool has the processing power for high performance and concurrency, and is recommended for enterprise-scale data warehousing capabilities. For more details on the two SQL pools, check out the [Synapse documentation page](../../synapse-analytics/sql/overview-architecture.md) on SQL architecture.
+Azure Synapse Analytics offers two different SQL pools: serverless SQL pool and dedicated SQL pool. Serverless SQL pool gives the flexibility of querying data directly in the blob storage using the serverless SQL endpoint without any resource provisioning. Dedicated SQL pool has the processing power for high performance and concurrency, and is recommended for enterprise-scale data warehousing capabilities. For more details on the two SQL pools, check out the [Synapse documentation page](../../synapse-analytics/sql/overview-architecture.md) on SQL architecture.
 
 #### Using serverless SQL pool

@@ -117,9 +117,9 @@ WITH (
 )
 ```
 
-In the query above, the `OPENROWSET` function accesses files in Azure Storage, and `OPENJSON` parses JSON text and returns the JSON input properties as rows and columns. Every time this query is executed, the serverless SQL pool reads the file from the blob storage, parses the JSON, and extracts the fields.
+In the preceding query, the `OPENROWSET` function accesses files in Azure Storage, and `OPENJSON` parses JSON text and returns the JSON input properties as rows and columns. Every time this query is executed, the serverless SQL pool reads the file from the blob storage, parses the JSON, and extracts the fields.
 
-You can also materialize the results in Parquet format in an [External Table](../../synapse-analytics/sql/develop-tables-external-tables.md) to get better query performance, as shown below:
+You can also materialize the results in Parquet format in an [External Table](../../synapse-analytics/sql/develop-tables-external-tables.md) to get better query performance, as follows.
 
 ```sql
 -- Create External data source where the parquet file will be written
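For reference, the serverless query that the changed paragraph refers to sits above this hunk and has roughly the following shape. This is a minimal sketch: the storage URL and the column mappings (`ResourceId`, `FamilyName`, `BirthDate`) are illustrative placeholders, not the article's exact query.

```sql
-- Read NDJSON from blob storage. With a field terminator that never occurs
-- in the data, each line arrives as a single JSON document in the "doc" column.
SELECT pat.ResourceId, pat.FamilyName, pat.BirthDate
FROM OPENROWSET(
         BULK 'https://{{youraccount}}.blob.core.windows.net/{{yourcontainer}}/Patient.ndjson',
         FORMAT = 'CSV',
         FIELDTERMINATOR = '0x00',
         FIELDQUOTE = '0x00'
     ) WITH (doc NVARCHAR(MAX)) AS rows
-- Shred each JSON document into typed columns.
CROSS APPLY OPENJSON(doc)
WITH (
    ResourceId VARCHAR(64)  '$.id',
    FamilyName VARCHAR(128) '$.name[0].family',
    BirthDate  VARCHAR(10)  '$.birthDate'
) AS pat;
```

The materialization step that the next lines introduce wraps a similar `SELECT` in a `CREATE EXTERNAL TABLE ... AS SELECT` (CETAS) statement, so the results land in storage as Parquet and later queries skip the JSON parsing.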
@@ -149,7 +149,7 @@ OPENROWSET(bulk 'https://{{youraccount}}.blob.core.windows.net/{{yourcontainer}}
 
 Dedicated SQL pool supports managed tables and a hierarchical cache for in-memory performance. You can import big data with simple T-SQL queries, and then use the power of the distributed query engine to run high-performance analytics.
 
-The simplest and fastest way to load data from your storage to a dedicated SQL pool is to use the **`COPY`** command in T-SQL, which can read CSV, Parquet, and ORC files. As in the example query below, use the `COPY` command to load the `NDJSON` rows into a tabular structure.
+The simplest and fastest way to load data from your storage to a dedicated SQL pool is to use the **`COPY`** command in T-SQL, which can read CSV, Parquet, and ORC files. As in the following example query, use the `COPY` command to load the `NDJSON` rows into a tabular structure.
 
 ```sql
 -- Create table with HEAP, which is not indexed and does not have a column width limitation of NVARCHAR(4000)
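The load pattern this hunk abbreviates looks roughly like the following sketch. The table name `StagingPatient` matches the next hunk; the storage URL is a placeholder.

```sql
-- Create table with HEAP: no index, and NVARCHAR(MAX) avoids the
-- NVARCHAR(4000) column width limitation.
CREATE TABLE dbo.StagingPatient (
    Resource NVARCHAR(MAX)
)
WITH (HEAP);
GO

-- Bulk-load the NDJSON export; each line becomes one row in "Resource".
COPY INTO dbo.StagingPatient
FROM 'https://{{youraccount}}.blob.core.windows.net/{{yourcontainer}}/Patient.ndjson'
WITH (
    FILE_TYPE = 'CSV',
    ROWTERMINATOR = '0x0a',   -- newline separates NDJSON documents
    FIELDQUOTE = '',
    FIELDTERMINATOR = '0x00'  -- never matches, so the whole line is one field
);
GO
```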
@@ -167,7 +167,7 @@ FIELDTERMINATOR = '0x00'
 GO
 ```
 
-Once you have the JSON rows in the `StagingPatient` table above, you can create different tabular formats of the data using the `OPENJSON` function and storing the results into tables. Here's a sample SQL query to create a `Patient` table by extracting a few fields from the `Patient` resource:
+Once you have the JSON rows in the preceding `StagingPatient` table, you can create different tabular formats of the data using the `OPENJSON` function and storing the results into tables. Here's a sample SQL query to create a `Patient` table by extracting a few fields from the `Patient` resource:
 
 ```sql
 SELECT RES.*
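The query continues past the diff context. A minimal sketch of the full pattern follows; the JSON paths (`$.id`, `$.name[0].text`, and so on) are chosen for illustration rather than copied from the article.

```sql
-- Flatten the staged JSON into a relational Patient table.
-- SELECT ... INTO creates the target table from the query's result shape.
SELECT RES.*
INTO dbo.Patient
FROM dbo.StagingPatient
CROSS APPLY OPENJSON(Resource)
WITH (
    ResourceId VARCHAR(64)  '$.id',
    FullName   VARCHAR(100) '$.name[0].text',
    Gender     VARCHAR(20)  '$.gender',
    BirthDate  VARCHAR(10)  '$.birthDate'
) AS RES;
GO
```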
@@ -197,4 +197,4 @@ Next, you can learn about how you can de-identify your FHIR data while exporting
 >[!div class="nextstepaction"]
 >[Exporting de-identified data](./de-identified-export.md)
 
-FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+[!INCLUDE [FHIR trademark statement](../includes/healthcare-apis-fhir-trademark.md)]

articles/healthcare-apis/fhir/customer-managed-keys.md

Lines changed: 2 additions & 2 deletions
@@ -11,7 +11,7 @@ ms.author: kesheth
 
 # Best practices for using customer-managed keys for the FHIR service
 
-Customer-managed keys (CMK) are encryption keys that you create and manage in your own key store. By using CMK, you can have more flexibility and control over the encryption and access of your organization’s data. You use [Azure Key Vault](/azure/key-vault/) to create and manage CMK and then use the keys to encrypt the data stored by the FHIR® service.
+Customer-managed keys (CMK) are encryption keys that you create and manage in your own key store. By using CMK, you have more flexibility and control over the encryption and access of your organization’s data. You use [Azure Key Vault](/azure/key-vault/) to create and manage CMK, and then use the keys to encrypt the data stored by the FHIR® service.
 
 ## Rotate keys often

@@ -21,7 +21,7 @@ To rotate the key by generating a new version of the key, use the 'az keyvault k
 
 ## Update the FHIR service after changing a managed identity
 
-If you change the managed identity in any way, such as moving your FHIR service to a different tenant or subscription, the FHIR service isn't able to access your keys until you update the service manually with an ARM template deployment. For steps, see [Use an ARM template to update the encryption key](configure-customer-managed-keys.md#update-the-key-by-using-an-arm-template).
+If you change the managed identity in any way, such as moving your FHIR service to a different tenant or subscription, the FHIR service isn't able to access your keys. You must update the service manually with an ARM template deployment. For steps, see [Use an ARM template to update the encryption key](configure-customer-managed-keys.md#update-the-key-by-using-an-arm-template).
 
 ## Disable public access with a firewall

articles/healthcare-apis/fhir/davinci-drug-formulary-tutorial.md

Lines changed: 8 additions & 10 deletions
@@ -12,24 +12,22 @@ ms.date: 06/06/2022
 
 # Tutorial for Da Vinci Drug Formulary
 
-In this tutorial, we'll walk through setting up the FHIR service in Azure Health Data Services (hereby called FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [Da Vinci Payer Data Exchange US Drug Formulary Implementation Guide](http://hl7.org/fhir/us/Davinci-drug-formulary/).
+In this tutorial, we'll walk through setting up the FHIR® service in Azure Health Data Services (FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [Da Vinci Payer Data Exchange US Drug Formulary Implementation Guide](http://hl7.org/fhir/us/Davinci-drug-formulary/).
 
 ## Touchstone capability statement
 
-The first test that we'll focus on is testing FHIR service against the [Da Vinci Drug Formulary capability
-statement](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/00-Capability&activeOnly=false&contentEntry=TEST_SCRIPTS). If you run this test without any updates, the test will fail due to
-missing search parameters and missing profiles.
+The first test focuses on testing the FHIR service against the [Da Vinci Drug Formulary capability
+statement](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/00-Capability&activeOnly=false&contentEntry=TEST_SCRIPTS). If you run this test without any updates, the test fails due to missing search parameters and missing profiles.
 
 ### Define search parameters
 
-As part of the Da Vinci Drug Formulary IG, you'll need to define three [new search parameters](how-to-do-custom-search.md) for the FormularyDrug resource. All three of these are tested in the
-capability statement.
+As part of the Da Vinci Drug Formulary IG, you'll need to define three [new search parameters](how-to-do-custom-search.md) for the FormularyDrug resource. All three of these are tested in the capability statement.
 
 * [DrugTier](http://hl7.org/fhir/us/davinci-drug-formulary/STU1.0.1/SearchParameter-DrugTier.json.html)
 * [DrugPlan](http://hl7.org/fhir/us/davinci-drug-formulary/STU1.0.1/SearchParameter-DrugPlan.json.html)
 * [DrugName](http://hl7.org/fhir/us/davinci-drug-formulary/STU1.0.1/SearchParameter-DrugName.json.html)
 
-The rest of the search parameters needed for the Da Vinci Drug Formulary IG are defined by the base specification and are already available in FHIR service without any more updates.
+The rest of the search parameters needed for the Da Vinci Drug Formulary IG are defined by the base specification and are already available in FHIR service without updates.
 
 ### Store profiles

@@ -40,13 +38,13 @@ Outside of defining search parameters, the only other update you need to make to
 
 ### Sample rest file
 
-To assist with creation of these search parameters and profiles, we have the [Da Vinci Formulary](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary.http) sample HTTP file on the open-source site that includes all the steps outlined above in a single file. Once you've uploaded all the necessary profiles and search parameters, you can run the capability statement test in Touchstone. You should get a successful run:
+To assist with creation of these search parameters and profiles, we have the [Da Vinci Formulary](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary.http) sample HTTP file on the open-source site that includes all the steps previously outlined in a single file. Once you've uploaded all the necessary profiles and search parameters, you can run the capability statement test in Touchstone. You should get a successful run:
 
 :::image type="content" source="media/centers-medicare-services-tutorials/davinci-test-script-execution.png" alt-text="Da Vinci test script execution.":::
 
 ## Touchstone query test
 
-The second test is the [query capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/01-Query&activeOnly=false&contentEntry=TEST_SCRIPTS). This test validates that you can search for specific Coverage Plan and Drug resources using various parameters. The best path would be to test against resources that you already have in your database, but we also have the [Da VinciFormulary_Sample_Resources](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary_Sample_Resources.http) HTTP file available with sample resources pulled from the examples in the IG that you can use to create the resources and test against.
+The second test is the [query capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/01-Query&activeOnly=false&contentEntry=TEST_SCRIPTS). This test validates that you can search for specific Coverage Plan and Drug resources using various parameters. The best path would be to test against resources that you already have in your database, but we also have the [Da VinciFormulary_Sample_Resources](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary_Sample_Resources.http) HTTP file available with sample resources pulled from the examples in the IG, which you can use to create the resources and test against.
 
 :::image type="content" source="media/centers-medicare-services-tutorials/davinci-test-execution-results.png" alt-text="Da Vinci test execution results.":::

@@ -57,4 +55,4 @@ In this tutorial, we walked through how to pass the Da Vinci Payer Data Exchange
 >[!div class="nextstepaction"]
 >[Da Vinci PDex](davinci-pdex-tutorial.md)
 
-FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+[!INCLUDE [FHIR trademark statement](../includes/healthcare-apis-fhir-trademark.md)]
