
Commit 30b27a1

Merge pull request #245736 from ShawnJackson/monitor-managed-virtual-network-integration-runtime
[AQ] edit pass: monitor-managed-virtual-network-integration-runtime
2 parents fc502bf + 1195f78 commit 30b27a1

1 file changed: 70 additions and 42 deletions
@@ -1,6 +1,6 @@
 ---
-title: Monitor managed virtual network integration runtime in Azure Data Factory
-description: Learn how to monitor managed virtual network integration runtime in Azure Data Factory.
+title: Monitor an integration runtime within a managed virtual network
+description: Learn how to monitor an integration runtime within an Azure Data Factory managed virtual network.
 ms.service: data-factory
 ms.subservice: monitoring
 ms.topic: conceptual
@@ -10,68 +10,96 @@ ms.author: lle
 ms.custom:
 ---

-# Enhanced monitoring with Managed Virtual Network Integration Runtime
+# Monitor an integration runtime within a managed virtual network
+
 [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Azure Data Factory Managed Virtual Network is a feature that allows you to securely connect your data sources to a virtual network managed by Azure Data Factory service. By using this capability, you can establish a private and isolated environment for your data integration and orchestration processes. By using Azure Data Factory Managed Virtual Network, you can combine the power of Azure Data Factory's data integration and orchestration capabilities with the security and flexibility provided by Azure virtual networks. It empowers you to build robust, scalable, and secure data integration pipelines that seamlessly connect to your network resources, whether they're on-premises or in the cloud.
-One common pain point of managed compute is the lack of visibility into the performance and health especially within a managed virtual network environment. Without proper monitoring, identifying and resolving issues becomes challenging, leading to potential delays, errors, and performance degradation.
-By using our new enhanced monitoring feature, users can gain valuable insights into their data integration processes, leading to improved efficiency, better resource utilization, and enhanced overall performance. With proactive monitoring and timely alerts, users can proactively address issues, optimize workflows, and ensure the smooth execution of their data integration pipelines within the managed virtual network environment.
+
+You can use an Azure Data Factory managed virtual network to securely connect your data sources to a virtual network that the Data Factory service manages. By using this capability, you can establish a private and isolated environment for your data integration and orchestration processes.
+
+When you use a managed virtual network, you combine the data integration and orchestration capabilities in Data Factory with the security and flexibility of Azure virtual networks. It empowers you to build robust, scalable, and secure data integration pipelines that seamlessly connect to your network resources, whether they're on-premises or in the cloud.
+
+One common problem of managed compute is the lack of visibility into performance and health, especially within a managed virtual network environment. Without proper monitoring, identifying and resolving problems becomes challenging and can lead to potential delays, errors, and performance degradation.
+
+By using enhanced monitoring in Data Factory, you can gain valuable insights into your data integration processes. These insights can lead to improved efficiency, better resource utilization, and enhanced overall performance. With proactive monitoring and timely alerts, you can address issues, optimize workflows, and ensure the smooth execution of your data integration pipelines within the managed virtual network environment.

 ## New metrics
-The introduction of the new metrics in the Managed Virtual Network Integration Runtime feature significantly enhances the visibility and monitoring capabilities within virtual network environments. These new metrics have been designed to address the pain point of limited monitoring, providing users with valuable insights into the performance and health of their data integration workflows.
-![NOTE]
-> These metrics are only valid when enabling Time-To-Live in managed virtual network integration runtime.

-Azure Data Factory provides three distinct types of compute pools, each tailored to handle specific activity execution requirements. These compute pools offer flexibility and scalability to accommodate diverse workloads and ensure optimal resource allocation:
-- Compute for Copy activity
-- Compute for Pipeline activity such as Lookup
-- Compute for External activity such as Databricks notebook
+The introduction of new metrics enhances the visibility and monitoring capabilities within managed virtual network environments.
+
+Azure Data Factory provides three distinct types of compute pools:

-To ensure consistent and comprehensive monitoring across all compute pools, we have implemented the same sets of monitoring metrics.
-- Capacity Utilization
-- Available Capacity Percentage
-- Waiting Queue Length
+- Compute for a copy activity
+- Compute for a pipeline activity, such as a lookup
+- Compute for an external activity, such as an Azure Databricks notebook

-Regardless of the type of compute pool being used, users can access and analyze a standardized set of metrics to gain insights into the performance and health of their data integration activities.
+These compute pools offer flexibility and scalability to accommodate diverse workloads and allocate resources optimally. Each is tailored to handle specific activity execution requirements.
+
+To help ensure consistent and comprehensive monitoring across all compute pools, we've implemented the same sets of monitoring metrics:
+
+- Capacity utilization
+- Available capacity percentage
+- Waiting queue length
+
+Regardless of the type of compute pool that you're using, you can access and analyze a standardized set of metrics to gain insights into the performance and health of your data integration activities.
+
+> [!NOTE]
+> These metrics are valid only when you're enabling time-to-live (TTL) in an integration runtime within a managed virtual network.

 |Metric|Unit|Description|
 |------|----|-----------|
-|Copy capacity utilization of MVNet integration runtime|Percent|The maximum percentage of DIU utilization for managed vNet Integration runtime time-to-live copy activities within 1-minute window.|
-|Copy available capacity percentage of MVNet integration runtime|Percent|The maximum percentage of available DIU for managed vNet Integration runtime time-to-live copy activities within 1-minute window.|
-|Copy waiting queue length of MVNet integration runtime|Count|The waiting queue length of managed vNet Integration runtime time-to-live copy activities within 1-minute window.|
-|Pipeline capacity utilization of MVNet integration runtime|Percent|The maximum percentage of DIU utilization for managed vNet Integration runtime pipeline activities within 1-minute window.|
-|Pipeline available capacity percentage of MVNet integration runtime|Percent|The maximum percentage of available DIU for managed vNet Integration runtime pipeline activities within 1-minute window.|
-|Pipeline waiting queue length of MVNet integration runtime|Count|The waiting queue length of managed vNet Integration runtime pipeline activities within 1-minute window.|
-|External capacity utilization of MVNet integration runtime|Percent|The maximum percentage of DIU utilization for managed vNet Integration runtime external activities within 1-minute window.|
-|External available capacity percentage of MVNet integration runtime|Percent|The maximum percentage of available DIU for managed vNet Integration runtime external activities within 1-minute window.|
-|External waiting queue length of MVNet integration runtime|Count|The waiting queue length of managed vNet Integration runtime external activities within 1-minute window.|
+|Copy capacity utilization of MVNet integration runtime|Percent|The maximum percentage of Data Integration Unit (DIU) utilization for TTL copy activities in a managed virtual network's integration runtime within a 1-minute window.|
+|Copy available capacity percentage of MVNet integration runtime|Percent|The maximum percentage of available DIU for TTL copy activities in a managed virtual network's integration runtime within a 1-minute window.|
+|Copy waiting queue length of MVNet integration runtime|Count|The waiting queue length of TTL copy activities in a managed virtual network's integration runtime within a 1-minute window.|
+|Pipeline capacity utilization of MVNet integration runtime|Percent|The maximum percentage of DIU utilization for pipeline activities in a managed virtual network's integration runtime within a 1-minute window.|
+|Pipeline available capacity percentage of MVNet integration runtime|Percent|The maximum percentage of available DIU for pipeline activities in a managed virtual network's integration runtime within a 1-minute window.|
+|Pipeline waiting queue length of MVNet integration runtime|Count|The waiting queue length of pipeline activities in a managed virtual network's integration runtime within a 1-minute window.|
+|External capacity utilization of MVNet integration runtime|Percent|The maximum percentage of DIU utilization for external activities in a managed virtual network's integration runtime within a 1-minute window.|
+|External available capacity percentage of MVNet integration runtime|Percent|The maximum percentage of available DIU for external activities in a managed virtual network's integration runtime within a 1-minute window.|
+|External waiting queue length of MVNet integration runtime|Count|The waiting queue length of external activities in a managed virtual network's integration runtime within a 1-minute window.|
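
The metrics in the table are surfaced through Azure Monitor, so they can be read programmatically as well as in the portal. The following is a minimal sketch using the `azure-monitor-query` Python SDK; the factory resource ID is a placeholder and the metric ID shown is an assumption, so confirm the exact metric IDs for your data factory on the portal's Metrics pane before relying on them.

```python
# A minimal sketch of reading one of these metrics with the azure-monitor-query SDK.
# The resource ID and the metric name below are placeholders/assumptions; confirm the
# exact metric IDs for your factory in the Azure portal (Monitoring > Metrics).
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Placeholder resource ID of the data factory to monitor.
factory_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DataFactory/factories/<factory-name>"
)

# Assumed metric ID for "Copy waiting queue length of MVNet integration runtime".
response = client.query_resource(
    factory_id,
    metric_names=["IntegrationRuntimeCopyWaitingQueueLength"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=1),
    aggregations=[MetricAggregationType.MAXIMUM],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.maximum)
```

The same pattern applies to the other metrics in the table; only the metric name changes.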

 ## Using metrics for performance optimization
-By using these metrics, you can seamlessly track and assess the performance and robustness of your integration runtime within a managed virtual network. Moreover, you can uncover potential areas for continuous improvement by optimizing the compute settings and workflow to maximize efficiency.

-To provide further clarity on the practical application of these metrics, here are a few example scenarios:
+By using the metrics, you can seamlessly track and assess the performance and robustness of your integration runtime within a managed virtual network. You can also uncover potential areas for continuous improvement by optimizing the compute settings and workflow to maximize efficiency.
+
+To provide more clarity on the practical application of these metrics, here are a few example scenarios.

 ### Balanced
-If you observe that the Capacity Utilization is below 100% and the Available Capacity Percentage is high, it indicates that the compute resources you have reserved are being efficiently utilized. Additionally, if the Waiting Queue Length remains consistently low or experiences occasional short spikes, it's advisable to queue other activities until the Capacity Utilization reaches 100%. This ensures optimal utilization of resources and helps maintain a smooth workflow with minimal delays.

-:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-balanced.png" alt-text="Screenshot of managed virtual network integration runtime balanced scenario." lightbox="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-balanced.png":::
+If you observe that capacity utilization is below 100 percent and the available capacity percentage is high, the compute resources that you reserved are being efficiently utilized.

-### Performance-oriented
-If you observe that the Capacity Utilization is consistently low, and the Waiting Queue Length remains consistently low or experiences occasional short spikes, it indicates that the compute resources you have reserved are higher than the actual demand for activities. In such cases, regardless of whether the Available Capacity Percentage is high or low, it's recommended to reduce the allocated compute resources to lower your costs. By rightsizing the compute to match the actual workload requirements, you can optimize your resource utilization and achieve cost savings without compromising the efficiency of your operations.
+If the waiting queue length remains consistently low or experiences occasional short spikes, we advise you to queue other activities until the capacity utilization reaches 100 percent. This approach helps ensure optimal utilization of resources and helps maintain a smooth workflow with minimal delays.

-:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-performance-oriented.png" alt-text="Screenshot of managed virtual network integration runtime performance oriented scenario." lightbox="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-performance-oriented.png":::
+:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-balanced.png" alt-text="Screenshot of a balanced scenario for an integration runtime within a managed virtual network." lightbox="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-balanced.png":::

-### Cost-oriented
-If you notice that all metrics, including Capacity Utilization, Available Capacity Percentage, and Waiting Queue Length, are high, it suggests that the compute resources you have reserved are insufficient for your activities. In this scenario, it's recommended to increase the allocated compute resources to reduce queue time. By adding more compute capacity, you can ensure that your activities have sufficient resources to execute efficiently, minimizing any delays caused by a crowded queue.
+### Performance oriented

-:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-cost-oriented.png" alt-text="Screenshot of managed virtual network integration runtime cost oriented scenario." lightbox="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-cost-oriented.png":::
+If you observe that capacity utilization is consistently low, and the waiting queue length remains consistently low or experiences occasional short spikes, the compute resources that you reserved are higher than the demand for activities.
+
+In such cases, regardless of whether the available capacity percentage is high or low, we recommend that you reduce the allocated compute resources to lower your costs. By rightsizing the compute to match the workload requirements, you can optimize your resource utilization and save costs without compromising the efficiency of your operations.
+
+:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-performance-oriented.png" alt-text="Screenshot of a performance-oriented scenario for an integration runtime within a managed virtual network." lightbox="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-performance-oriented.png":::
+
+### Cost oriented
+
+If you notice that all metrics (including capacity utilization, available capacity percentage, and waiting queue length) are high, the compute resources that you reserved are likely insufficient for your activities.
+
+In this scenario, we recommend that you increase the allocated compute resources to reduce queue time. Adding more compute capacity helps ensure that your activities have sufficient resources to run efficiently, which minimizes any delays that a crowded queue causes.
+
+:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-cost-oriented.png" alt-text="Screenshot of a cost-oriented scenario for an integration runtime within a managed virtual network." lightbox="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-cost-oriented.png":::
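
Taken together, the balanced, performance-oriented, and cost-oriented scenarios amount to a simple decision rule. The sketch below is one illustrative way to encode that rule, assuming you've already aggregated the three metrics for a compute pool over your evaluation window; the thresholds are arbitrary examples, not service defaults.

```python
# An illustrative classification of a compute pool based on the three shared metrics,
# following the scenario guidance above. Thresholds are arbitrary examples; tune them
# for your own workloads before acting on the result.
from dataclasses import dataclass


@dataclass
class PoolMetrics:
    capacity_utilization: float           # percent, aggregated over the window
    available_capacity_percentage: float  # percent, aggregated over the window
    waiting_queue_length: float           # count, aggregated over the window


def classify_pool(m: PoolMetrics) -> str:
    """Map the three metrics onto the scenarios described in this article."""
    if m.capacity_utilization > 90 and m.waiting_queue_length > 0:
        # All metrics high: the reserved compute is likely insufficient.
        return "cost oriented: consider increasing the allocated compute"
    if m.capacity_utilization < 50 and m.waiting_queue_length == 0:
        # Consistently low utilization with an empty queue: over-provisioned.
        return "performance oriented: consider reducing the allocated compute"
    if m.capacity_utilization < 100 and m.available_capacity_percentage > 0:
        # Reserved compute is used efficiently; queue more work toward 100 percent.
        return "balanced: queue more activities to use the remaining capacity"
    return "inconclusive: review the metrics over a longer window"


print(classify_pool(PoolMetrics(95, 5, 12)))  # cost oriented
print(classify_pool(PoolMetrics(30, 70, 0)))  # performance oriented
```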

 ### Intermittent activity execution
-If you notice that the Available Capacity Percentage fluctuates between low and high within a specific time period, it's likely due to the intermittent execution of your activities, where the Time-To-Live (TTL) period you have configured is shorter than the interval between your activities. This can have a significant impact on the performance of your workflow and can increase costs, as we charge for the warm-up time of the compute for up to 2 minutes.
-To address this issue, there are two possible solutions. First, you can queue more activities to maintain a consistent workload and utilize the available compute resources more effectively. By keeping the compute continuously engaged, you can avoid the warm-up time and achieve better performance.
-Alternatively, you can consider enlarging the TTL period to align with the interval between your activities. This ensures that the compute resources remain available for a longer duration, reducing the frequency of warm-up periods and optimizing cost-efficiency.
+
+If you notice that the available capacity percentage fluctuates between low and high within a specific time period, it's likely due to the intermittent execution of your activities. That is, the TTL period that you configured is shorter than the interval between your activities. This problem can have a significant impact on the performance of your workflow and can increase costs, because we charge for the warm-up time of the compute for up to 2 minutes.
+
+To address this problem, there are two possible solutions:
+
+- Queue more activities to maintain a consistent workload and utilize the available compute resources more effectively. By keeping the compute continuously engaged, you can avoid the warm-up time and achieve better performance.
+- Consider enlarging the TTL period to align with the interval between your activities. This approach keeps the compute resources available for a longer duration, which reduces the frequency of warm-up periods and optimizes cost efficiency.
+
 By implementing either of these solutions, you can enhance the performance of your workflow, minimize cost implications, and ensure a smoother execution of your intermittent activities.

-:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-intermittent-activity.png" alt-text="Screenshot of managed virtual network integration runtime intermittent activity scenario." lightbox="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-intermittent-activity.png":::
+:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-intermittent-activity.png" alt-text="Screenshot of an intermittent activity scenario for an integration runtime within a managed virtual network." lightbox="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-intermittent-activity.png":::
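
As a rough way to check whether this situation applies to you, compare the gaps between your activities' start times against the TTL you configured. The sketch below uses made-up timestamps and an assumed 10-minute TTL purely for illustration; substitute the actual start times and TTL from your own integration runtime.

```python
# An illustrative check of whether the configured TTL covers the gaps between
# intermittent activity executions. The timestamps and TTL value are made-up
# inputs for demonstration only.
from datetime import datetime, timedelta

ttl = timedelta(minutes=10)  # assumed TTL configured on the integration runtime

activity_starts = [
    datetime(2023, 7, 1, 9, 0),
    datetime(2023, 7, 1, 9, 25),  # 25-minute gap: the compute goes cold and warms up again
    datetime(2023, 7, 1, 9, 32),
    datetime(2023, 7, 1, 10, 5),  # 33-minute gap: another warm-up
]

gaps = [later - earlier for earlier, later in zip(activity_starts, activity_starts[1:])]
cold_starts = [gap for gap in gaps if gap > ttl]

print(f"{len(cold_starts)} of {len(gaps)} gaps exceed the TTL of {ttl}.")
if cold_starts:
    print(
        f"Consider raising the TTL toward {max(gaps)} or queueing more activities "
        "so the compute stays engaged between runs."
    )
```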

 ## Next steps
-Advance to the following tutorial to learn about Managed Virtual Network: [Managed virtual network and managed private endpoints](managed-virtual-network-private-endpoint.md).
+
+Advance to the following article to learn about managed virtual networks and managed private endpoints: [Azure Data Factory managed virtual network](managed-virtual-network-private-endpoint.md).
