articles/app-service/manage-custom-dns-migrate-domain.md (8 additions & 3 deletions)
@@ -4,16 +4,15 @@ description: Learn how to migrate a custom DNS domain name that is already assig
services: app-service
documentationcenter: ''
author: cephalin
-manager: erikre
-editor: jimbe
+manager: gwallace
tags: top-support-issue

ms.assetid: 10da5b8a-1823-41a3-a2ff-a0717c2b5c2d
ms.service: app-service
ms.workload: na
ms.tgt_pltfrm: na
ms.topic: article
-ms.date: 06/28/2017
+ms.date: 10/21/2019
ms.author: cephalin
ms.custom: seodec18
@@ -127,6 +126,12 @@ Save your settings.

DNS queries should start resolving to your App Service app immediately after DNS propagation happens.

+## Active domain in Azure
+
+You can migrate an active custom domain in Azure, either between subscriptions or within the same subscription. However, migrating without downtime requires that the source app and the target app be assigned the same custom domain at some point during the process. Therefore, you need to make sure that the two apps aren't deployed to the same deployment unit (internally known as a webspace). A domain name can be assigned to only one app in each deployment unit.
+
+You can find the deployment unit for your app by looking at the domain name of the FTP/S URL `<deployment-unit>.ftp.azurewebsites.windows.net`. Check that the deployment unit is different between the source app and the target app. The deployment unit of an app is determined by the [App Service plan](overview-hosting-plans.md) it's in. It's selected randomly by Azure when you create the plan and can't be changed. Azure only makes sure two plans are in the same deployment unit when you [create them in the same resource group *and* the same region](app-service-plan-manage.md#create-an-app-service-plan), but it has no logic to make sure plans land in different deployment units. The only way to get a plan in a different deployment unit is to keep creating plans in a new resource group or region until you get one.
+
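For illustration, here's a minimal Python sketch of the comparison described above: it extracts the deployment unit from each app's FTP/S host name and checks that the two differ. The host names are hypothetical examples, not output from Azure.

```python
def deployment_unit(ftp_host: str) -> str:
    """Return the deployment unit (webspace) from an App Service FTP/S host name.

    App Service FTP/S hosts follow the pattern
    <deployment-unit>.ftp.azurewebsites.windows.net.
    """
    return ftp_host.split(".")[0]

# Hypothetical host names copied from each app's publishing profile.
source_unit = deployment_unit("waws-prod-bay-001.ftp.azurewebsites.windows.net")
target_unit = deployment_unit("waws-prod-blu-137.ftp.azurewebsites.windows.net")

# The no-downtime migration only works when the two apps are in different deployment units.
print("Different deployment units:", source_unit != target_unit)
```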

## Next steps

Learn how to bind a custom SSL certificate to App Service.

articles/connectors/connectors-create-api-servicebus.md (3 additions & 4 deletions)
@@ -1,6 +1,6 @@
---
-title: Send and receive messages with Azure Service Bus - Azure Logic Apps
-description: Set up enterprise cloud messaging by using Azure Service Bus and Azure Logic Apps
+title: Exchange messages with Azure Service Bus - Azure Logic Apps
+description: Send and receive messages by using Azure Service Bus in Azure Logic Apps
services: logic-apps
ms.service: logic-apps
ms.suite: integration
@@ -9,11 +9,10 @@ ms.author: estfan
ms.reviewer: klam, LADocs
ms.topic: conceptual
ms.date: 09/19/2019
-ms.assetid: d6d14f5f-2126-4e33-808e-41de08e6721f
tags: connectors
---

-# Exchange messages in the cloud by using Azure Logic Apps with Azure Service Bus
+# Exchange messages in the cloud by using Azure Logic Apps and Azure Service Bus

With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) connector, you can create automated tasks and workflows that transfer data, such as sales and purchase orders, journals, and inventory movements across applications for your organization. The connector not only monitors, sends, and manages messages, but also performs actions with queues, sessions, topics, subscriptions, and so on, for example:

articles/connectors/connectors-integrate-security-operations-create-api-microsoft-graph-security.md (2 additions & 2 deletions)
@@ -1,6 +1,6 @@
---
-title: Integrate security operations with Microsoft Graph Security - Azure Logic Apps
-description: Improve your app's threat protection, detection, and response capabilities by managing security operations with Microsoft Graph Security & Azure Logic Apps
+title: Integrate and manage security operations - Azure Logic Apps & Microsoft Graph Security
+description: Improve your app's threat protection, detection, and response with Microsoft Graph Security & Azure Logic Apps

-description: Schedule and run recurring automated tasks and workflows with the Sliding Window trigger in Azure Logic Apps
+title: Schedule tasks to handle contiguous data - Azure Logic Apps
+description: Create and run recurring tasks that handle contiguous data by using sliding windows in Azure Logic Apps
services: logic-apps
ms.service: logic-apps
ms.suite: integration
@@ -11,9 +11,9 @@ ms.topic: conceptual
ms.date: 05/25/2019
---

-# Create, schedule, and run recurring tasks and workflows with the Sliding Window trigger in Azure Logic Apps
+# Schedule and run tasks for contiguous data by using the Sliding Window trigger in Azure Logic Apps

-To regularly run tasks, processes, or jobs that must handle data in continuous chunks, you can start your logic app workflow with the **Sliding Window - Schedule** trigger. You can set a date and time as well as a time zone for starting the workflow and a recurrence for repeating that workflow. If recurrences are missed for whatever reason, this trigger processes those missed recurrences. For example, when synchronizing data between your database and backup storage, use the Sliding Window trigger so that the data gets synchronized without incurring gaps. For more information about the built-in Schedule triggers and actions, see [Schedule and run recurring automated, tasks, and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
+To regularly run tasks, processes, or jobs that must handle data in contiguous chunks, you can start your logic app workflow with the **Sliding Window** trigger. You can set a date and time as well as a time zone for starting the workflow and a recurrence for repeating that workflow. If recurrences are missed for whatever reason, this trigger processes those missed recurrences. For example, when synchronizing data between your database and backup storage, use the Sliding Window trigger so that the data gets synchronized without incurring gaps. For more information about the built-in Schedule triggers and actions, see [Schedule and run recurring automated tasks and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
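The trigger itself is configured in the Logic Apps designer rather than in code, but the following Python sketch (hypothetical dates, not Logic Apps syntax) illustrates the sliding-window idea: each window starts exactly where the previous one ended, so missed recurrences are still processed and no data falls into a gap.

```python
from datetime import datetime, timedelta

def sliding_windows(start, interval, now):
    """Yield contiguous (window_start, window_end) pairs from start up to now.

    Because each window begins where the previous one ended, recurrences that
    were missed are still produced, and the data stream has no gaps.
    """
    window_start = start
    while window_start < now:
        window_end = min(window_start + interval, now)
        yield window_start, window_end
        window_start = window_end

# Hypothetical backfill: hourly windows since midnight, including missed runs.
for begin, end in sliding_windows(datetime(2019, 10, 21, 0, 0),
                                  timedelta(hours=1),
                                  datetime(2019, 10, 21, 5, 30)):
    print(f"Process data from {begin} to {end}")
```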

Here are some patterns that this trigger supports:

articles/data-factory/concepts-data-flow-overview.md (36 additions & 34 deletions)
@@ -11,101 +11,103 @@ ms.date: 10/7/2019

# What are mapping data flows?

-Mapping Data Flows are visually designed data transformations in Azure Data Factory. Data flows allow data engineers to develop graphical data transformation logic without writing code. The resulting data flows are executed as activities within Azure Data Factory Pipelines using scaled-out Spark clusters. Data flow activities can be operationalized via existing Data Factory scheduling, control, flow and monitoring capabilities.
+Mapping data flows are visually designed data transformations in Azure Data Factory. Data flows allow data engineers to develop graphical data transformation logic without writing code. The resulting data flows are executed as activities within Azure Data Factory pipelines that use scaled-out Spark clusters. Data flow activities can be operationalized via existing Data Factory scheduling, control, flow, and monitoring capabilities.

-Mapping Data Flows provide a fully visual experience with no coding required. Your data flows will execute on your own execution cluster for scaled-out data processing. Azure Data Factory handles all of the code translation, path optimization, and execution of your data flow jobs.
+Mapping data flows provide a fully visual experience with no coding required. Your data flows will run on your own execution cluster for scaled-out data processing. Azure Data Factory handles all the code translation, path optimization, and execution of your data flow jobs.

## Getting started

-To create a data flow, click the plus sign in under Factory Resources.
+To create a data flow, select the plus sign under **Factory Resources**, and then select **Data Flow**.

-
+

-This takes you to the data flow canvas where you can create your transformation logic. Click the 'Add source' box to start configuring your Source transformation. For more information, see [Source Transformation](data-flow-source.md).
+This takes you to the data flow canvas where you can create your transformation logic. Select **Add source** to start configuring your source transformation. For more information, see [Source transformation](data-flow-source.md).

## Data flow canvas

-The Data Flow canvas is separated into three parts: the top bar, the graph, and the configuration panel.
+The data flow canvas is separated into three parts: the top bar, the graph, and the configuration panel.



### Graph

-The graph displays the transformation stream. It shows the lineage of source data as it flows into one or more sinks. To add a new source, click the 'Add source' box. To add a new transformation, click on the plus sign on the bottom right of an existing transformation.
+The graph displays the transformation stream. It shows the lineage of source data as it flows into one or more sinks. To add a new source, select **Add source**. To add a new transformation, select the plus sign on the lower right of an existing transformation.



### Configuration panel

-The configuration panel shows the settings specific to the currently selected transformation or, if no transformation is selected, the data flow. In the overall data flow configuration, you can edit the name and description under the **General** tab or add parameters via the **Parameters** tab. For more information, see [Mapping Data Flow parameters](parameters-data-flow.md).
+The configuration panel shows the settings specific to the currently selected transformation. If no transformation is selected, it shows the data flow. In the overall data flow configuration, you can edit the name and description under the **General** tab or add parameters via the **Parameters** tab. For more information, see [Mapping data flow parameters](parameters-data-flow.md).

-Each transformation has at least four configuration tabs:
+Each transformation has at least four configuration tabs.

#### Transformation settings

-The first tab in each transformation's configuration pane contains the settings specific to that transformation. For more information, please refer to that transformation's documentation page.
+The first tab in each transformation's configuration pane contains the settings specific to that transformation. For more information, see that transformation's documentation page.

-The default setting is "Use current partitioning," which instructs Azure Data Factory to use the partitioning scheme native to Data Flows running on Spark. In most scenarios, this setting is the recommended approach.
+The default setting is **Use current partitioning**, which instructs Azure Data Factory to use the partitioning scheme native to data flows running on Spark. In most scenarios, we recommend this setting.

-There are instances where you may wish to adjust the partitioning. For instance, if you want to output your transformations to a single file in the lake, choose "Single partition" in a Sink transformation.
+There are instances where you might want to adjust the partitioning. For instance, if you want to output your transformations to a single file in the lake, select **Single partition** in a sink transformation.

-Another case where you may wish to control the partitioning schemes is optimizing performance. Adjusting the partitioning provides control over the distribution of your data across compute nodes and data locality optimizations that can have both positive and negative effects on your overall data flow performance. For more information, see the [data Flow performance guide](concepts-data-flow-performance.md).
+Another case where you might want to control the partitioning schemes is optimizing performance. Adjusting the partitioning provides control over the distribution of your data across compute nodes and data locality optimizations that can have both positive and negative effects on your overall data flow performance. For more information, see the [Data flow performance guide](concepts-data-flow-performance.md).

-To change the partitioning on any transformation, click the Optimize tab and select the "Set partitioning" radio button. You'll then be presented with a series of options for partitioning. The best method of partitioning will differ based on your data volumes, candidate keys, null values, and cardinality. A best practice is to start with default partitioning and then try different partitioning options. You can test using pipeline debug runs and view execution time and partition usage in each transformation grouping from the Monitoring view. For more information, see [monitoring data flows](concepts-data-flow-monitoring.md).
+To change the partitioning on any transformation, select the **Optimize** tab and select the **Set Partitioning** radio button. You'll then be presented with a series of options for partitioning. The best method of partitioning will differ based on your data volumes, candidate keys, null values, and cardinality.

-Below are the available partitioning options.
+A best practice is to start with default partitioning and then try different partitioning options. You can test by using pipeline debug runs, and view execution time and partition usage in each transformation grouping from the monitoring view. For more information, see [Monitoring data flows](concepts-data-flow-monitoring.md).

-##### Round Robin
+The following partitioning options are available.

-Round Robin is simple partition that automatically distributes data equally across partitions. Use Round Robin when you don't have good key candidates to implement a solid, smart partitioning strategy. You can set the number of physical partitions.
+##### Round robin
+
+Round robin is a simple partitioning scheme that automatically distributes data equally across partitions. Use round robin when you don't have good key candidates to implement a solid, smart partitioning strategy. You can set the number of physical partitions.

##### Hash

-Azure Data Factory will produce a hash of columns to produce uniform partitions such that rows with similar values will fall in the same partition. When using the Hash option, test for possible partition skew. You can set the number of physical partitions.
+Azure Data Factory will produce a hash of columns to create uniform partitions such that rows with similar values will fall in the same partition. When you use the Hash option, test for possible partition skew. You can set the number of physical partitions.

-##### Dynamic Range
+##### Dynamic range

-Dynamic Range will use Spark dynamic ranges based on the columns or expressions that you provide. You can set the number of physical partitions.
+Dynamic range will use Spark dynamic ranges based on the columns or expressions that you provide. You can set the number of physical partitions.

-##### Fixed Range
+##### Fixed range

-Build an expression that provides a fixed range for values within your partitioned data columns. You should have a good understanding of your data before using this option to avoid partition skew. The values you enter for the expression will be used as part of a partition function. You can set the number of physical partitions.
+Build an expression that provides a fixed range for values within your partitioned data columns. To avoid partition skew, you should have a good understanding of your data before you use this option. The values you enter for the expression will be used as part of a partition function. You can set the number of physical partitions.

##### Key

-If you have a good understanding of the cardinality of your data, key partitioning may be a good partition strategy. Key partitioning will create partitions for each unique value in your column. You can't set the number of partitions because the number will be based on unique values in the data.
+If you have a good understanding of the cardinality of your data, key partitioning might be a good strategy. Key partitioning will create partitions for each unique value in your column. You can't set the number of partitions because the number will be based on unique values in the data.
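
Data flow partitioning is configured in the **Optimize** tab rather than in code, but because data flows run on Spark, the options above correspond loosely to standard Spark repartitioning calls. The following PySpark sketch (a hypothetical column name, not Data Factory syntax) shows round-robin, hash, and range partitioning side by side.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitioning-sketch").getOrCreate()
df = spark.range(0, 1_000_000).withColumnRenamed("id", "order_id")

# Round robin: spread rows evenly across a fixed number of partitions.
round_robin = df.repartition(8)

# Hash: rows with the same key value land in the same partition.
hashed = df.repartition(8, "order_id")

# Range: Spark samples the column and builds contiguous value ranges per partition.
ranged = df.repartitionByRange(8, "order_id")

print(round_robin.rdd.getNumPartitions(),
      hashed.rdd.getNumPartitions(),
      ranged.rdd.getNumPartitions())
```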

#### Inspect

-The _Inspect_ tab provides a view into the metadata of the data stream that you're transforming. You can see the column counts, columns changed, columns added, data types, column ordering, and column references. Inspect is a read-only view of your metadata. You don't need to have Debug mode enabled to see metadata in the Inspect Pane.
+The **Inspect** tab provides a view into the metadata of the data stream that you're transforming. You can see the column counts, columns changed, columns added, data types, column ordering, and column references. **Inspect** is a read-only view of your metadata. You don't need to have debug mode enabled to see metadata in the **Inspect** pane.



-As you change the shape of your data through transformations, you'll see the metadata changes flow through the Inspect Pane. If there isn't a defined schema in your Source transformation, then metadata won't be visible in the Inspect Pane. Lack of metadata is common in Schema Drift scenarios.
+As you change the shape of your data through transformations, you'll see the metadata changes flow in the **Inspect** pane. If there isn't a defined schema in your source transformation, then metadata won't be visible in the **Inspect** pane. Lack of metadata is common in schema drift scenarios.

-#### Data Preview
+#### Data preview

-If debug mode is on, the _Data Preview_ tab gives you an interactive snapshot of the data at each transform. For more information, see [data preview in debug mode](concepts-data-flow-debug-mode.md#data-preview).
+If debug mode is on, the **Data Preview** tab gives you an interactive snapshot of the data at each transform. For more information, see [Data preview in debug mode](concepts-data-flow-debug-mode.md#data-preview).

### Top bar

-The top bar contains actions that affect the whole data flow such as saving and validation. You can also toggle between graph and configuration modes using the **Show graph** and **Hide graph** buttons.
+The top bar contains actions that affect the whole data flow, like saving and validation. You can also toggle between graph and configuration modes by using the **Show Graph** and **Hide Graph** buttons.