
Commit 66d919e

gitName committed:

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into limitcon

2 parents: 34ba90e + df16650

38 files changed: +2026 −571 lines

articles/azure-netapp-files/configure-network-features.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-hchen
 ms.service: azure-netapp-files
 ms.topic: how-to
-ms.date: 10/24/2024
+ms.date: 11/01/2024
 ms.custom: references_regions
 ms.author: anfdocs
 ---
```
```diff
@@ -81,7 +81,7 @@ You can edit the network features option of existing volumes from *Basic* to *St
 
 * You should only use the edit network features option for an [application volume group for SAP HANA](application-volume-group-introduction.md) if you have enrolled in the [extension one preview](application-volume-group-introduction.md#extension-1-features), which adds support for Standard network features.
 * If you enabled both the `ANFStdToBasicNetworkFeaturesRevert` and `ANFBasicToStdNetworkFeaturesUpgrade` AFECs and are using 1 or 2-TiB capacity pools, see [Resize a capacity pool or a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md) for information about sizing your capacity pools.
-* <a name="no-downtime"></a> Azure NetApp Files supports a non-disruptive upgrade to Standard network features and a revert to Basic network features. This operation is expected to take at least 25 minutes. You can't create a regular or data protection volume or application volume group while the edit network feature operation is underway. This feature is currently in **preview** in the Australia East, Central India, North Central US, and Switzerland North regions. In all other regions, updating network features can cause a network disruption on the volumes for up to 5 minutes.
+* <a name="no-downtime"></a> Azure NetApp Files supports a non-disruptive upgrade to Standard network features and a revert to Basic network features. This operation is expected to take at least 25 minutes. You can't create a regular or data protection volume or application volume group while the edit network feature operation is underway. This feature is currently in **preview** in the Australia East, Central India, East Asia, North Central US, and Switzerland North regions. In all other regions, updating network features can cause a network disruption on the volumes for up to 5 minutes.
 
 > [!NOTE]
 > You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files standard networking features (edit volumes) Request Form](https://aka.ms/anfeditnetworkfeaturespreview)**. The feature can take approximately one week to be enabled after you submit the waitlist request. You can check the status of feature registration by using the following command:
```
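The registration-check command itself falls outside the diff context shown above. A minimal sketch of such a check, assuming the `ANFBasicToStdNetworkFeaturesUpgrade` AFEC named earlier in this diff, would be:

```powershell
# Hypothetical check based on the AFEC name above; the actual command is truncated from this diff view
az feature show --namespace Microsoft.NetApp --name ANFBasicToStdNetworkFeaturesUpgrade --query properties.state
```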

articles/azure-netapp-files/whats-new.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -20,7 +20,7 @@ Azure NetApp Files is updated regularly. This article provides a summary about t
 
 Azure NetApp Files now supports the ability to edit network features (that is, upgrade from Basic to Standard network features) with no downtime for Azure NetApp Files volumes. Standard network features provide an enhanced virtual networking experience for seamless and consistent operation, along with an improved security posture, for Azure NetApp Files.
 
-This feature is currently in preview in the Australia East, Central India, North Central US, and Switzerland North regions.
+This feature is currently in preview in the Australia East, Central India, East Asia, North Central US, and Switzerland North regions.
 
 ## September 2024
```

articles/frontdoor/index.yml

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,11 +1,11 @@
 ### YamlMime:Landing
 
 title: Azure Front Door and CDN documentation
-summary: Azure Front Door is a scalable and secure entry point for fast delivery of your global web applications.
+summary: Azure Front Door is a modern cloud content delivery network (CDN) service that delivers high performance, scalability, and secure user experiences for your content and applications.
 
 metadata:
   title: Azure Front Door and CDN Documentation
-  description: Azure Front Door provides a scalable and secure entry point for fast delivery of your global web applications. Learn how to use Front Door with our quickstarts, tutorials, and samples.
+  description: Azure Front Door is a modern cloud content delivery network (CDN) service that delivers high performance, scalability, and secure user experiences for your content and applications. Learn how to use Front Door with our quickstarts, tutorials, and samples.
   ms.service: azure-frontdoor
   ms.topic: landing-page
   author: duongau
```
articles/healthcare-apis/deidentification/configure-storage.md (new file)

Lines changed: 101 additions & 0 deletions

New file content:
---
title: Learn how to configure Azure Storage to de-identify documents with the de-identification service
description: "Learn how to configure Azure Storage to de-identify documents with the de-identification service."
author: jovinson-ms
ms.author: jovinson
ms.service: azure-health-data-services
ms.subservice: deidentification-service
ms.topic: tutorial
ms.date: 11/01/2024

#customer intent: As an IT admin, I want to know how to configure an Azure Storage account to allow access to the de-identification service to de-identify documents.

---

# Tutorial: Configure Azure Storage to de-identify documents

The Azure Health Data Services de-identification service (preview) can de-identify documents in Azure Storage via an asynchronous job. If you have many documents to de-identify, using a job is a good option. Jobs also provide consistent surrogation, meaning that surrogate values in the de-identified output match across all documents. For more information about de-identification, including consistent surrogation, see [What is the de-identification service (preview)?](overview.md)

When you store documents in Azure Blob Storage, you're charged based on Azure Storage pricing. This cost isn't included in the de-identification service pricing. [Explore Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs).

In this tutorial, you:

> [!div class="checklist"]
> * Create a storage account and container
> * Upload a sample document
> * Grant the de-identification service access
> * Configure network isolation

## Prerequisites

* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* A de-identification service with a system-assigned managed identity. [Deploy the de-identification service (preview)](quickstart.md).

## Open Azure CLI

Install the [Azure CLI](/cli/azure/install-azure-cli) and open your terminal of choice. In this tutorial, we're using PowerShell.
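The commands that follow assume you're already signed in to Azure; a minimal sign-in sketch (not part of the original file) is:

```powershell
# Sign in interactively; opens a browser for authentication
az login
```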
## Create a storage account and container

1. Set your context, substituting the subscription name containing your de-identification service for the `<subscription_name>` placeholder:

   ```powershell
   az account set --subscription "<subscription_name>"
   ```

1. Save a variable for the resource group, substituting the resource group containing your de-identification service for the `<resource_group>` placeholder:

   ```powershell
   $ResourceGroup = "<resource_group>"
   ```

1. Create a storage account, providing a value for the `<storage_account_name>` placeholder:

   ```powershell
   $StorageAccountName = "<storage_account_name>"
   $StorageAccountId = $(az storage account create --name $StorageAccountName --resource-group $ResourceGroup --sku Standard_LRS --kind StorageV2 --min-tls-version TLS1_2 --allow-blob-public-access false --query id --output tsv)
   ```

1. Assign yourself a role to perform data operations on the storage account:

   ```powershell
   $UserId = $(az ad signed-in-user show --query id -o tsv)
   az role assignment create --role "Storage Blob Data Contributor" --assignee $UserId --scope $StorageAccountId
   ```

1. Create a container to hold your sample document:

   ```powershell
   az storage container create --account-name $StorageAccountName --name deidtest --auth-mode login
   ```

## Upload a sample document

Next, you upload a document that contains synthetic PHI:

```powershell
$DocumentContent = "The patient came in for a visit on 10/12/2023 and was seen again November 4th at Contoso Hospital."
az storage blob upload --data $DocumentContent --account-name $StorageAccountName --container-name deidtest --name deidsample.txt --auth-mode login
```
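To confirm the upload succeeded, you can list the container's blobs; this optional check isn't part of the original tutorial:

```powershell
# List blobs in the deidtest container, authenticating with your Microsoft Entra identity
az storage blob list --account-name $StorageAccountName --container-name deidtest --auth-mode login --output table
```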

## Grant the de-identification service access to the storage account

In this step, you grant the de-identification service's system-assigned managed identity role-based access to the container. You grant the **Storage Blob Data Contributor** role because the de-identification service both reads the original document and writes de-identified output documents. Substitute the name of your de-identification service for the `<deid_service_name>` placeholder:

```powershell
$DeidServicePrincipalId = $(az resource show -n <deid_service_name> -g $ResourceGroup --resource-type microsoft.healthdataaiservices/deidservices --query identity.principalId --output tsv)
az role assignment create --assignee $DeidServicePrincipalId --role "Storage Blob Data Contributor" --scope $StorageAccountId
```
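As an optional check (not in the original tutorial), you can verify that the role assignment took effect:

```powershell
# Show role assignments for the service's managed identity at the storage account scope
az role assignment list --assignee $DeidServicePrincipalId --scope $StorageAccountId --output table
```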

## Configure network isolation on the storage account

Next, you update the storage account to disable public network access and allow access only from trusted Azure services such as the de-identification service. After running this command, you won't be able to view the storage container contents without setting a network exception. Learn more at [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security).

```powershell
az storage account update --name $StorageAccountName --public-network-access Disabled --bypass AzureServices
```
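If you later need to browse the container yourself, one hedged option, assuming you want to allow only your own client IP (`<your_public_ip>` is a placeholder), is to reopen the firewall to selected networks:

```powershell
# Re-enable public access with a default-deny firewall, keeping the trusted-services bypass
az storage account update --name $StorageAccountName --public-network-access Enabled --default-action Deny --bypass AzureServices

# Then allow only your own client IP through the storage firewall
az storage account network-rule add --account-name $StorageAccountName --resource-group $ResourceGroup --ip-address <your_public_ip>
```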

## Clean up resources

Once you're done with the storage account, you can delete the storage account and role assignments:

```powershell
az role assignment delete --assignee $DeidServicePrincipalId --role "Storage Blob Data Contributor" --scope $StorageAccountId
az role assignment delete --assignee $UserId --role "Storage Blob Data Contributor" --scope $StorageAccountId
az storage account delete --ids $StorageAccountId --yes
```

## Next step

> [!div class="nextstepaction"]
> [Quickstart: Azure Health De-identification client library for .NET](quickstart-sdk-net.md)

articles/healthcare-apis/deidentification/quickstart.md

Lines changed: 2 additions & 1 deletion
```diff
@@ -69,4 +69,5 @@ If you no longer need them, delete the resource group and de-identification serv
 
 ## Related content
 
-[De-identification service overview](overview.md)
+> [!div class="nextstepaction"]
+> [Tutorial: Configure Azure Storage to de-identify documents](configure-storage.md)
```

articles/healthcare-apis/deidentification/toc.yml

Lines changed: 5 additions & 0 deletions
```diff
@@ -15,6 +15,11 @@ items:
     href: quickstart.md
   - name: Azure Health De-identification client library for .NET
     href: quickstart-sdk-net.md
+  - name: Tutorials
+    expanded: true
+    items:
+      - name: Configure Azure Storage to de-identify documents
+        href: configure-storage.md
   - name: How-to
     expanded: true
     items:
```

articles/iot-operations/.openpublishing.redirection.iot-operations.json

Lines changed: 14 additions & 4 deletions
```diff
@@ -202,13 +202,23 @@
   },
   {
     "source_path_from_root": "/articles/iot-operations/manage-mqtt-connectivity/howto-configure-tls-manual.md",
-    "redirect_url": "/azure/iot-operations/manage-mqtt-broker/howto-configure-tls-manual",
-    "redirect_document_id": true
+    "redirect_url": "/azure/iot-operations/manage-mqtt-broker/howto-configure-brokerlistener",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/iot-operations/manage-mqtt-broker/howto-configure-tls-manual.md",
+    "redirect_url": "/azure/iot-operations/manage-mqtt-broker/howto-configure-brokerlistener",
+    "redirect_document_id": false
   },
   {
     "source_path_from_root": "/articles/iot-operations/manage-mqtt-connectivity/howto-configure-tls-auto.md",
-    "redirect_url": "/azure/iot-operations/manage-mqtt-broker/howto-configure-tls-auto",
-    "redirect_document_id": true
+    "redirect_url": "/azure/iot-operations/manage-mqtt-broker/howto-configure-brokerlistener",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/iot-operations/manage-mqtt-broker/howto-configure-tls-auto.md",
+    "redirect_url": "/azure/iot-operations/manage-mqtt-broker/howto-configure-brokerlistener",
+    "redirect_document_id": false
   },
   {
     "source_path_from_root": "/articles/iot-operations/manage-mqtt-connectivity/howto-configure-brokerlistener.md",
```

articles/iot-operations/connect-to-cloud/howto-configure-dataflow-endpoint.md

Lines changed: 19 additions & 1 deletion
```diff
@@ -6,7 +6,7 @@ ms.author: patricka
 ms.service: azure-iot-operations
 ms.subservice: azure-data-flows
 ms.topic: how-to
-ms.date: 10/30/2024
+ms.date: 11/01/2024
 
 #CustomerIntent: As an operator, I want to understand how to configure source and destination endpoints so that I can create a dataflow.
 ---
```
```diff
@@ -27,6 +27,24 @@ Use the following table to choose the endpoint type to configure:
 | [Microsoft Fabric OneLake](howto-configure-fabric-endpoint.md) | For uploading data to Microsoft Fabric OneLake lakehouses. | No | Yes |
 | [Local storage](howto-configure-local-storage-endpoint.md) | For sending data to a locally available persistent volume, through which you can upload data via Azure Container Storage enabled by Azure Arc edge volumes. | No | Yes |
 
+## Dataflows must use local MQTT broker endpoint
+
+When you create a dataflow, you specify the source and destination endpoints. The dataflow moves data from the source endpoint to the destination endpoint. You can use the same endpoint for multiple dataflows, and you can use the same endpoint as both the source and destination in a dataflow.
+
+However, using custom endpoints as both the source and destination in a dataflow isn't supported. This restriction means the built-in MQTT broker in Azure IoT Operations must be either the source or destination for every dataflow. To avoid dataflow deployment failures, use the [default MQTT dataflow endpoint](./howto-configure-mqtt-endpoint.md#default-endpoint) as either the source or destination for every dataflow.
+
+The specific requirement is that each dataflow must have either the source or destination configured with an MQTT endpoint that has the host `aio-broker`. So it isn't strictly required to use the default endpoint, and you can create additional dataflow endpoints pointing to the local MQTT broker as long as the host is `aio-broker`. However, to avoid confusion and manageability issues, the default endpoint is the recommended approach.
+
+The following table shows the supported scenarios:
+
+| Scenario | Supported |
+|----------|-----------|
+| Default endpoint as source | Yes |
+| Default endpoint as destination | Yes |
+| Custom endpoint as source | Yes, if the destination is the default endpoint or an MQTT endpoint with host `aio-broker` |
+| Custom endpoint as destination | Yes, if the source is the default endpoint or an MQTT endpoint with host `aio-broker` |
+| Custom endpoint as source and destination | No, unless one of them is an MQTT endpoint with host `aio-broker` |
+
 ## Reuse endpoints
 
 Think of each dataflow endpoint as a bundle of configuration settings that contains where the data should come from or go to (the `host` value), how to authenticate with the endpoint, and other settings like TLS configuration or batching preference. So you just need to create it once and then you can reuse it in multiple dataflows where these settings would be the same.
```
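To make the `aio-broker` host requirement in the added section concrete, the following is a hedged sketch of a custom local-broker dataflow endpoint as a Kubernetes resource. The `connectivity.iotoperations.azure.com/v1beta1` schema and field names are assumptions based on the Azure IoT Operations preview and may differ between versions; the default endpoint already provides this configuration, so you'd only create such an endpoint for custom settings:

```yaml
apiVersion: connectivity.iotoperations.azure.com/v1beta1   # assumed preview API version
kind: DataflowEndpoint
metadata:
  name: local-broker-custom   # hypothetical name for illustration
  namespace: azure-iot-operations
spec:
  endpointType: Mqtt
  mqttSettings:
    host: "aio-broker:18883"   # the host must be aio-broker for the dataflow to deploy
    authentication:
      method: ServiceAccountToken
    tls:
      mode: Enabled
      trustedCaCertificateConfigMapRef: azure-iot-operations-aio-ca-trust-bundle
```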

articles/iot-operations/connect-to-cloud/howto-configure-mqtt-endpoint.md

Lines changed: 13 additions & 6 deletions
```diff
@@ -6,7 +6,7 @@ ms.author: patricka
 ms.service: azure-iot-operations
 ms.subservice: azure-data-flows
 ms.topic: how-to
-ms.date: 10/30/2024
+ms.date: 11/01/2024
 ai-usage: ai-assisted
 
 #CustomerIntent: As an operator, I want to understand how to configure dataflow endpoints for MQTT sources and destinations in Azure IoT Operations so that I can send data to and from MQTT brokers.
```
```diff
@@ -25,17 +25,24 @@ MQTT dataflow endpoints are used for MQTT sources and destinations. You can conf
 
 ## Azure IoT Operations local MQTT broker
 
+Azure IoT Operations provides a [built-in local MQTT broker](../manage-mqtt-broker/overview-iot-mq.md) that you can use with dataflows. You can use the MQTT broker as a source to receive messages from other systems or as a destination to send messages to other systems.
+
 ### Default endpoint
 
-Azure IoT Operations provides a built-in MQTT broker that you can use with dataflows. When you deploy Azure IoT Operations, an MQTT broker dataflow endpoint named "default" is created with default settings. You can use this endpoint as a source or destination for dataflows. The default endpoint uses the following settings:
+When you deploy Azure IoT Operations, an MQTT broker dataflow endpoint named "default" is created with default settings. You can use this endpoint as a source or destination for dataflows.
+
+> [!IMPORTANT]
+> The default endpoint **must always be used as either the source or destination in every dataflow**. To learn more, see [Dataflows must use local MQTT broker endpoint](./howto-configure-dataflow-endpoint.md#dataflows-must-use-local-mqtt-broker-endpoint).
+
+The default endpoint uses the following settings:
 
 - Host: `aio-broker:18883` through the [default MQTT broker listener](../manage-mqtt-broker/howto-configure-brokerlistener.md#default-brokerlistener)
 - Authentication: service account token (SAT) through the [default BrokerAuthentication resource](../manage-mqtt-broker/howto-configure-authentication.md#default-brokerauthentication-resource)
 - TLS: Enabled
 - Trusted CA certificate: The default CA certificate `azure-iot-operations-aio-ca-trust-bundle` from the [default root CA](../deploy-iot-ops/concept-default-root-ca.md)
 
-> [!IMPORTANT]
-> If any of these default MQTT broker settings change, the dataflow endpoint must be updated to reflect the new settings. For example, if the default MQTT broker listener changes to use a different service name `my-mqtt-broker` and port 8885, you must update the endpoint to use the new host `host: my-mqtt-broker:8885`. Same applies to other settings like authentication and TLS.
+> [!CAUTION]
+> Don't delete the default endpoint. If you delete the default endpoint, you must recreate it with the same settings.
 
 To view or edit the default MQTT broker endpoint settings:
```
```diff
@@ -104,7 +111,7 @@ kubectl get dataflowendpoint default -n azure-iot-operations -o yaml
 
 ### Create new endpoint
 
-You can also create new local MQTT broker endpoints with custom settings. For example, you can create a new MQTT broker endpoint using a different port, authentication, or other settings.
+You can also create new local MQTT broker endpoints with custom settings. For example, you can create a new MQTT broker endpoint using a different port, authentication, or authorization settings. However, you must still always use the default endpoint as either the source or destination in every dataflow, even if you create new endpoints.
 
 # [Portal](#tab/portal)
```
```diff
@@ -340,7 +347,7 @@ Then, follow the steps in [X.509 certificate](#x509-certificate) to configure th
 
 ### Event Grid shared subscription limitation
 
-Azure Event Grid MQTT broker doesn't support shared subscriptions, which means that you can't set the `instanceCount` to more than `1` in the dataflow profile if Event Grid is used as a source (where the dataflow subscribes to messages) for a dataflow. In this case, if you set `instanceCount` greater than `1`, the dataflow fails to start.
+Azure Event Grid MQTT broker [doesn't support shared subscriptions](../../event-grid/mqtt-support.md#mqttv5-current-limitations), which means that you can't set the `instanceCount` to more than `1` in the dataflow profile if Event Grid is used as a source (where the dataflow subscribes to messages) for a dataflow. In this case, if you set `instanceCount` greater than `1`, the dataflow fails to start.
 
 ## Custom MQTT brokers
```
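Because of the limitation described in the hunk above, a dataflow profile that reads from Event Grid has to keep a single instance. A minimal sketch, assuming the preview `DataflowProfile` resource schema (field names may vary by version):

```yaml
apiVersion: connectivity.iotoperations.azure.com/v1beta1   # assumed preview API version
kind: DataflowProfile
metadata:
  name: eventgrid-profile   # hypothetical name for illustration
  namespace: azure-iot-operations
spec:
  instanceCount: 1   # Event Grid doesn't support shared subscriptions, so keep this at 1
```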
