Commit d37991c

Merge pull request #92567 from MicrosoftDocs/master
Merge master to live, 3 AM
2 parents (1bd2207 + c8ee07f), commit d37991c

79 files changed (+683 / -284 lines)

articles/app-service/manage-custom-dns-migrate-domain.md

Lines changed: 8 additions & 3 deletions
@@ -4,16 +4,15 @@ description: Learn how to migrate a custom DNS domain name that is already assig
 services: app-service
 documentationcenter: ''
 author: cephalin
-manager: erikre
-editor: jimbe
+manager: gwallace
 tags: top-support-issue
 
 ms.assetid: 10da5b8a-1823-41a3-a2ff-a0717c2b5c2d
 ms.service: app-service
 ms.workload: na
 ms.tgt_pltfrm: na
 ms.topic: article
-ms.date: 06/28/2017
+ms.date: 10/21/2019
 ms.author: cephalin
 ms.custom: seodec18
 
@@ -127,6 +126,12 @@ Save your settings.
 
 DNS queries should start resolving to your App Service app immediately after DNS propagation happens.
 
+## Active domain in Azure
+
+You can migrate an active custom domain in Azure, between subscriptions or within the same subscription. However, such a migration without downtime requires that the source app and the target app be assigned the same custom domain at a certain time. Therefore, you need to make sure that the two apps are not deployed to the same deployment unit (internally known as a webspace). A domain name can be assigned to only one app in each deployment unit.
+
+You can find the deployment unit for your app by looking at the domain name of the FTP/S URL `<deployment-unit>.ftp.azurewebsites.windows.net`. Check and make sure that the deployment unit is different between the source app and the target app. The deployment unit of an app is determined by the [App Service plan](overview-hosting-plans.md) it's in. It's selected randomly by Azure when you create the plan and can't be changed. Azure only makes sure that two plans are in the same deployment unit when you [create them in the same resource group *and* the same region](app-service-plan-manage.md#create-an-app-service-plan), but it doesn't have any logic to make sure that plans are in different deployment units. The only way to create a plan in a different deployment unit is to keep creating a plan in a new resource group or region until you get a different deployment unit.
+
 ## Next steps
 
 Learn how to bind a custom SSL certificate to App Service.
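
The added "Active domain in Azure" text identifies an app's deployment unit by the `<deployment-unit>` prefix of its FTP/S hostname. As an illustrative aside (not part of the commit), here is a minimal Python sketch of that comparison; the FTP URLs are hypothetical placeholders for the values you'd copy from your own source and target apps:

```python
from urllib.parse import urlparse

def deployment_unit(ftp_url):
    """Return the <deployment-unit> part of an App Service FTP/S hostname."""
    host = urlparse(ftp_url).hostname or ftp_url
    return host.split(".ftp.azurewebsites.windows.net")[0]

# Hypothetical FTP/S URLs copied from each app's Overview page in the portal.
source_ftp = "ftp://waws-prod-bay-011.ftp.azurewebsites.windows.net/site/wwwroot"
target_ftp = "ftp://waws-prod-dm1-137.ftp.azurewebsites.windows.net/site/wwwroot"

if deployment_unit(source_ftp) == deployment_unit(target_ftp):
    print("Same deployment unit: the domain can be bound to only one of the apps at a time.")
else:
    print("Different deployment units: both apps can hold the domain during the migration.")
```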

articles/connectors/connectors-create-api-servicebus.md

Lines changed: 3 additions & 4 deletions
@@ -1,6 +1,6 @@
 ---
-title: Send and receive messages with Azure Service Bus - Azure Logic Apps
-description: Set up enterprise cloud messaging by using Azure Service Bus and Azure Logic Apps
+title: Exchange messages with Azure Service Bus - Azure Logic Apps
+description: Send and receive messages by using Azure Service Bus in Azure Logic Apps
 services: logic-apps
 ms.service: logic-apps
 ms.suite: integration
@@ -9,11 +9,10 @@ ms.author: estfan
 ms.reviewer: klam, LADocs
 ms.topic: conceptual
 ms.date: 09/19/2019
-ms.assetid: d6d14f5f-2126-4e33-808e-41de08e6721f
 tags: connectors
 ---
 
-# Exchange messages in the cloud by using Azure Logic Apps with Azure Service Bus
+# Exchange messages in the cloud by using Azure Logic Apps and Azure Service Bus
 
 With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) connector, you can create automated tasks and workflows that transfer data, such as sales and purchase orders, journals, and inventory movements across applications for your organization. The connector not only monitors, sends, and manages messages, but also performs actions with queues, sessions, topics, subscriptions, and so on, for example:
 

articles/connectors/connectors-integrate-security-operations-create-api-microsoft-graph-security.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 ---
-title: Integrate security operations with Microsoft Graph Security - Azure Logic Apps
-description: Improve your app's threat protection, detection, and response capabilities by managing security operations with Microsoft Graph Security & Azure Logic Apps
+title: Integrate and manage security operations - Azure Logic Apps & Microsoft Graph Security
+description: Improve your app's threat protection, detection, and response with Microsoft Graph Security & Azure Logic Apps
 services: logic-apps
 ms.service: logic-apps
 ms.suite: integration

articles/connectors/connectors-native-recurrence.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: Schedule recurring tasks with Recurrence trigger - Azure Logic Apps
+title: Schedule recurring tasks and workflows - Azure Logic Apps
 description: Schedule and run recurring automated tasks and workflows with the Recurrence trigger in Azure Logic Apps
 services: logic-apps
 ms.service: logic-apps

articles/connectors/connectors-native-sliding-window.md

Lines changed: 4 additions & 4 deletions
@@ -1,6 +1,6 @@
 ---
-title: Schedule recurring tasks with Sliding Window trigger - Azure Logic Apps
-description: Schedule and run recurring automated tasks and workflows with the Sliding Window trigger in Azure Logic Apps
+title: Schedule tasks to handle contiguous data - Azure Logic Apps
+description: Create and run recurring tasks that handle contiguous data by using sliding windows in Azure Logic Apps
 services: logic-apps
 ms.service: logic-apps
 ms.suite: integration
@@ -11,9 +11,9 @@ ms.topic: conceptual
 ms.date: 05/25/2019
 ---
 
-# Create, schedule, and run recurring tasks and workflows with the Sliding Window trigger in Azure Logic Apps
+# Schedule and run tasks for contiguous data by using the Sliding Window trigger in Azure Logic Apps
 
-To regularly run tasks, processes, or jobs that must handle data in continuous chunks, you can start your logic app workflow with the **Sliding Window - Schedule** trigger. You can set a date and time as well as a time zone for starting the workflow and a recurrence for repeating that workflow. If recurrences are missed for whatever reason, this trigger processes those missed recurrences. For example, when synchronizing data between your database and backup storage, use the Sliding Window trigger so that the data gets synchronized without incurring gaps. For more information about the built-in Schedule triggers and actions, see [Schedule and run recurring automated, tasks, and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
+To regularly run tasks, processes, or jobs that must handle data in contiguous chunks, you can start your logic app workflow with the **Sliding Window** trigger. You can set a date and time as well as a time zone for starting the workflow and a recurrence for repeating that workflow. If recurrences are missed for whatever reason, this trigger processes those missed recurrences. For example, when synchronizing data between your database and backup storage, use the Sliding Window trigger so that the data gets synchronized without incurring gaps. For more information about the built-in Schedule triggers and actions, see [Schedule and run recurring automated tasks and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
 
 Here are some patterns that this trigger supports:
 
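The reworded paragraph above describes how the Sliding Window trigger covers time in contiguous, gap-free intervals and catches up on any missed recurrences. As an illustrative aside (plain Python, not Logic Apps code and not part of the commit), a minimal sketch of that windowing behavior:

```python
from datetime import datetime, timedelta, timezone

def sliding_windows(start, interval, until):
    """Yield back-to-back (window_start, window_end) pairs with no gaps,
    including any windows that were "missed" before `until`."""
    window_start = start
    while window_start < until:
        window_end = window_start + interval
        yield window_start, min(window_end, until)
        window_start = window_end

# Hypothetical values: an hourly sync catching up on the last four hours.
start = datetime(2019, 10, 21, 0, 0, tzinfo=timezone.utc)
now = start + timedelta(hours=4)
for begin, end in sliding_windows(start, timedelta(hours=1), now):
    print(f"sync records modified between {begin:%H:%M} and {end:%H:%M} UTC")
```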

articles/data-factory/concepts-data-flow-overview.md

Lines changed: 36 additions & 34 deletions
@@ -11,101 +11,103 @@ ms.date: 10/7/2019
 
 # What are mapping data flows?
 
-Mapping Data Flows are visually designed data transformations in Azure Data Factory. Data flows allow data engineers to develop graphical data transformation logic without writing code. The resulting data flows are executed as activities within Azure Data Factory Pipelines using scaled-out Spark clusters. Data flow activities can be operationalized via existing Data Factory scheduling, control, flow and monitoring capabilities.
+Mapping data flows are visually designed data transformations in Azure Data Factory. Data flows allow data engineers to develop graphical data transformation logic without writing code. The resulting data flows are executed as activities within Azure Data Factory pipelines that use scaled-out Spark clusters. Data flow activities can be operationalized via existing Data Factory scheduling, control, flow, and monitoring capabilities.
 
-Mapping Data Flows provide a fully visual experience with no coding required. Your data flows will execute on your own execution cluster for scaled-out data processing. Azure Data Factory handles all of the code translation, path optimization, and execution of your data flow jobs.
+Mapping data flows provide a fully visual experience with no coding required. Your data flows will run on your own execution cluster for scaled-out data processing. Azure Data Factory handles all the code translation, path optimization, and execution of your data flow jobs.
 
 ## Getting started
 
-To create a data flow, click the plus sign in under Factory Resources.
+To create a data flow, select the plus sign under **Factory Resources**, and then select **Data Flow**.
 
-![new data flow](media/data-flow/newdataflow2.png "new data flow")
+![New data flow](media/data-flow/newdataflow2.png "new data flow")
 
-This takes you to the data flow canvas where you can create your transformation logic. Click the 'Add source' box to start configuring your Source transformation. For more information, see [Source Transformation](data-flow-source.md).
+This takes you to the data flow canvas where you can create your transformation logic. Select **Add source** to start configuring your source transformation. For more information, see [Source transformation](data-flow-source.md).
 
 ## Data flow canvas
 
-The Data Flow canvas is separated into three parts: the top bar, the graph, and the configuration panel.
+The data flow canvas is separated into three parts: the top bar, the graph, and the configuration panel.
 
 ![Canvas](media/data-flow/canvas1.png "Canvas")
 
 ### Graph
 
-The graph displays the transformation stream. It shows the lineage of source data as it flows into one or more sinks. To add a new source, click the 'Add source' box. To add a new transformation, click on the plus sign on the bottom right of an existing transformation.
+The graph displays the transformation stream. It shows the lineage of source data as it flows into one or more sinks. To add a new source, select **Add source**. To add a new transformation, select the plus sign on the lower right of an existing transformation.
 
 ![Canvas](media/data-flow/canvas2.png "Canvas")
 
 ### Configuration panel
 
-The configuration panel shows the settings specific to the currently selected transformation or, if no transformation is selected, the data flow. In the overall data flow configuration, you can edit the name and description under the **General** tab or add parameters via the **Parameters** tab. For more information, see [Mapping Data Flow parameters](parameters-data-flow.md).
+The configuration panel shows the settings specific to the currently selected transformation. If no transformation is selected, it shows the data flow. In the overall data flow configuration, you can edit the name and description under the **General** tab or add parameters via the **Parameters** tab. For more information, see [Mapping data flow parameters](parameters-data-flow.md).
 
-Each transformation has at least four configuration tabs:
+Each transformation has at least four configuration tabs.
 
 #### Transformation settings
 
-The first tab in each transformation's configuration pane contains the settings specific to that transformation. For more information, please refer to that transformation's documentation page.
+The first tab in each transformation's configuration pane contains the settings specific to that transformation. For more information, see that transformation's documentation page.
 
 ![Source settings tab](media/data-flow/source1.png "Source settings tab")
 
 #### Optimize
 
-The _Optimize_ tab contains settings to configure partitioning schemes.
+The **Optimize** tab contains settings to configure partitioning schemes.
 
 ![Optimize](media/data-flow/optimize1.png "Optimize")
 
-The default setting is "Use current partitioning," which instructs Azure Data Factory to use the partitioning scheme native to Data Flows running on Spark. In most scenarios, this setting is the recommended approach.
+The default setting is **Use current partitioning**, which instructs Azure Data Factory to use the partitioning scheme native to data flows running on Spark. In most scenarios, we recommend this setting.
 
-There are instances where you may wish to adjust the partitioning. For instance, if you want to output your transformations to a single file in the lake, choose "Single partition" in a Sink transformation.
+There are instances where you might want to adjust the partitioning. For instance, if you want to output your transformations to a single file in the lake, select **Single partition** in a sink transformation.
 
-Another case where you may wish to control the partitioning schemes is optimizing performance. Adjusting the partitioning provides control over the distribution of your data across compute nodes and data locality optimizations that can have both positive and negative effects on your overall data flow performance. For more information, see the [data Flow performance guide](concepts-data-flow-performance.md).
+Another case where you might want to control the partitioning schemes is optimizing performance. Adjusting the partitioning provides control over the distribution of your data across compute nodes and data locality optimizations that can have both positive and negative effects on your overall data flow performance. For more information, see the [Data flow performance guide](concepts-data-flow-performance.md).
 
-To change the partitioning on any transformation, click the Optimize tab and select the "Set partitioning" radio button. You'll then be presented with a series of options for partitioning. The best method of partitioning will differ based on your data volumes, candidate keys, null values, and cardinality. A best practice is to start with default partitioning and then try different partitioning options. You can test using pipeline debug runs and view execution time and partition usage in each transformation grouping from the Monitoring view. For more information, see [monitoring data flows](concepts-data-flow-monitoring.md).
+To change the partitioning on any transformation, select the **Optimize** tab and select the **Set Partitioning** radio button. You'll then be presented with a series of options for partitioning. The best method of partitioning will differ based on your data volumes, candidate keys, null values, and cardinality.
 
-Below are the available partitioning options.
+A best practice is to start with default partitioning and then try different partitioning options. You can test by using pipeline debug runs, and view execution time and partition usage in each transformation grouping from the monitoring view. For more information, see [Monitoring data flows](concepts-data-flow-monitoring.md).
 
-##### Round Robin
+The following partitioning options are available.
 
-Round Robin is simple partition that automatically distributes data equally across partitions. Use Round Robin when you don't have good key candidates to implement a solid, smart partitioning strategy. You can set the number of physical partitions.
+##### Round robin
+
+Round robin is a simple partition that automatically distributes data equally across partitions. Use round robin when you don't have good key candidates to implement a solid, smart partitioning strategy. You can set the number of physical partitions.
 
 ##### Hash
 
-Azure Data Factory will produce a hash of columns to produce uniform partitions such that rows with similar values will fall in the same partition. When using the Hash option, test for possible partition skew. You can set the number of physical partitions.
+Azure Data Factory will produce a hash of columns to produce uniform partitions such that rows with similar values will fall in the same partition. When you use the Hash option, test for possible partition skew. You can set the number of physical partitions.
 
-##### Dynamic Range
+##### Dynamic range
 
-Dynamic Range will use Spark dynamic ranges based on the columns or expressions that you provide. You can set the number of physical partitions.
+Dynamic range will use Spark dynamic ranges based on the columns or expressions that you provide. You can set the number of physical partitions.
 
-##### Fixed Range
+##### Fixed range
 
-Build an expression that provides a fixed range for values within your partitioned data columns. You should have a good understanding of your data before using this option to avoid partition skew. The values you enter for the expression will be used as part of a partition function. You can set the number of physical partitions.
+Build an expression that provides a fixed range for values within your partitioned data columns. To avoid partition skew, you should have a good understanding of your data before you use this option. The values you enter for the expression will be used as part of a partition function. You can set the number of physical partitions.
 
 ##### Key
 
-If you have a good understanding of the cardinality of your data, key partitioning may be a good partition strategy. Key partitioning will create partitions for each unique value in your column. You can't set the number of partitions because the number will be based on unique values in the data.
+If you have a good understanding of the cardinality of your data, key partitioning might be a good strategy. Key partitioning will create partitions for each unique value in your column. You can't set the number of partitions because the number will be based on unique values in the data.
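
The round-robin and hash descriptions above can be made concrete with a small sketch. As an illustrative aside (plain Python, not Data Factory code and not part of the commit), it shows how the two schemes assign rows to partitions and why a low-cardinality hash key can skew partition sizes:

```python
import zlib

# Hypothetical rows keyed by a low-cardinality column (a customer ID).
rows = [{"order_id": i, "customer": f"C{i % 4}"} for i in range(12)]
num_partitions = 3

# Round robin: rows are dealt out in turn, so partitions stay evenly sized.
round_robin = {p: [] for p in range(num_partitions)}
for i, row in enumerate(rows):
    round_robin[i % num_partitions].append(row["order_id"])

# Hash: the partition is derived from the key, so equal keys always land together,
# but partition sizes can skew if a few key values dominate.
hashed = {p: [] for p in range(num_partitions)}
for row in rows:
    partition = zlib.crc32(row["customer"].encode()) % num_partitions
    hashed[partition].append(row["order_id"])

print("round robin:", round_robin)
print("hash:       ", hashed)
```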

 #### Inspect
 
-The _Inspect_ tab provides a view into the metadata of the data stream that you're transforming. You can see the column counts, columns changed, columns added, data types, column ordering, and column references. Inspect is a read-only view of your metadata. You don't need to have Debug mode enabled to see metadata in the Inspect Pane.
+The **Inspect** tab provides a view into the metadata of the data stream that you're transforming. You can see the column counts, columns changed, columns added, data types, column ordering, and column references. **Inspect** is a read-only view of your metadata. You don't need to have debug mode enabled to see metadata in the **Inspect** pane.
 
 ![Inspect](media/data-flow/inspect1.png "Inspect")
 
-As you change the shape of your data through transformations, you'll see the metadata changes flow through the Inspect Pane. If there isn't a defined schema in your Source transformation, then metadata won't be visible in the Inspect Pane. Lack of metadata is common in Schema Drift scenarios.
+As you change the shape of your data through transformations, you'll see the metadata changes flow in the **Inspect** pane. If there isn't a defined schema in your source transformation, then metadata won't be visible in the **Inspect** pane. Lack of metadata is common in schema drift scenarios.
 
-#### Data Preview
+#### Data preview
 
-If debug mode is on, the _Data Preview_ tab gives you an interactive snapshot of the data at each transform. For more information, see [data preview in debug mode](concepts-data-flow-debug-mode.md#data-preview).
+If debug mode is on, the **Data Preview** tab gives you an interactive snapshot of the data at each transform. For more information, see [Data preview in debug mode](concepts-data-flow-debug-mode.md#data-preview).
 
 ### Top bar
 
-The top bar contains actions that affect the whole data flow such as saving and validation. You can also toggle between graph and configuration modes using the **Show graph** and **Hide graph** buttons.
+The top bar contains actions that affect the whole data flow, like saving and validation. You can also toggle between graph and configuration modes by using the **Show Graph** and **Hide Graph** buttons.
 
-![Hide graph](media/data-flow/hideg.png "Hide Graph")
+![Hide graph](media/data-flow/hideg.png "Hide graph")
 
-If you hide your graph, you can navigate through your transformation nodes laterally via the **previous** and **next** buttons.
+If you hide your graph, you can browse through your transformation nodes laterally via the **Previous** and **Next** buttons.
 
-![Navigate](media/data-flow/showhide.png "navigate")
+![Previous and next buttons](media/data-flow/showhide.png "previous and next buttons")
 
 ## Next steps
 
-* Learn how to create a [Source Transformation](data-flow-source.md)
-* Learn how to build your data flows in [Debug mode](concepts-data-flow-debug-mode.md)
+* Learn how to create a [source transformation](data-flow-source.md).
+* Learn how to build your data flows in [debug mode](concepts-data-flow-debug-mode.md).
