
Commit 60cfea1

lightbox and non-blocking headers
1 parent e24e7f8 commit 60cfea1


articles/data-factory/monitor-managed-virtual-network-integration-runtime.md

Lines changed: 11 additions & 11 deletions
@@ -10,13 +10,13 @@ ms.author: lle
ms.custom:
---

-# Enhanced Monitoring with Managed Virtual Network Integration Runtime
+# Enhanced monitoring with Managed Virtual Network Integration Runtime

[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
Azure Data Factory Managed Virtual Network is a feature that lets you securely connect your data sources to a virtual network managed by the Azure Data Factory service. This capability gives you a private, isolated environment for your data integration and orchestration processes, combining Azure Data Factory's integration and orchestration capabilities with the security and flexibility of Azure virtual networks. It empowers you to build robust, scalable, and secure data integration pipelines that seamlessly connect to your network resources, whether they're on-premises or in the cloud.
One common pain point of managed compute is the lack of visibility into its performance and health, especially within a managed virtual network environment. Without proper monitoring, identifying and resolving issues becomes challenging, leading to potential delays, errors, and performance degradation.

By using the new enhanced monitoring feature, you can gain valuable insights into your data integration processes, leading to improved efficiency, better resource utilization, and enhanced overall performance. With proactive monitoring and timely alerts, you can address issues early, optimize workflows, and ensure the smooth execution of your data integration pipelines within the managed virtual network environment.

-## New Metrics
+## New metrics

The introduction of new metrics in the Managed Virtual Network Integration Runtime feature significantly enhances visibility and monitoring capabilities within virtual network environments. These metrics are designed to address the pain point of limited monitoring, providing valuable insights into the performance and health of your data integration workflows.

Azure Data Factory provides three distinct types of compute pools, each tailored to handle specific activity execution requirements. These compute pools offer flexibility and scalability to accommodate diverse workloads and ensure optimal resource allocation:
- Compute for Copy activity
@@ -42,33 +42,33 @@ Regardless of the type of compute pool being used, users can access and analyze
|External available capacity percentage of MVNet integration runtime|Percent|The maximum percentage of available DIU for managed vNet Integration runtime external activities within a 1-minute window.|
|External waiting queue length of MVNet integration runtime|Count|The waiting queue length of managed vNet Integration runtime external activities within a 1-minute window.|

-## Using Metrics for Performance Optimization
+## Using metrics for performance optimization
By using these metrics, you can seamlessly track and assess the performance and robustness of your integration runtime within a managed virtual network. Moreover, you can uncover potential areas for continuous improvement by optimizing the compute settings and workflow to maximize efficiency.
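
For example, you can pull these metrics into your own dashboards or automation by querying Azure Monitor. The following is a minimal sketch using the `azure-monitor-query` and `azure-identity` Python packages; the resource ID is a placeholder, and the metric names mirror the display names in this article, so confirm the exact metric names in your factory's Metrics blade before relying on them.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

# Placeholder resource ID of your data factory; replace with your own values.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DataFactory/factories/<factory-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Metric names below mirror the display names used in this article and are
# assumptions; check the Metrics blade of your factory for the exact names.
response = client.query_resource(
    resource_id,
    metric_names=[
        "External available capacity percentage of MVNet integration runtime",
        "External waiting queue length of MVNet integration runtime",
    ],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=1),
    aggregations=["Average", "Maximum"],
)

# Print one line per data point for each requested metric.
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.average, point.maximum)
```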

To provide further clarity on the practical application of these metrics, here are a few example scenarios:

### Balanced
If you observe that the Capacity Utilization is below 100% and the Available Capacity Percentage is high, it indicates that the compute resources you have reserved are being efficiently utilized. Additionally, if the Waiting Queue Length remains consistently low or experiences occasional short spikes, it's advisable to queue other activities until the Capacity Utilization reaches 100%. This ensures optimal utilization of resources and helps maintain a smooth workflow with minimal delays.

-:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-balanced.png" alt-text="Screenshot of managed virtual network integration runtime balanced scenario.":::
+:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-balanced.png" alt-text="Screenshot of managed virtual network integration runtime balanced scenario." lightbox="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-balanced.png":::

### Performance-oriented
If you observe that the Capacity Utilization is consistently low, and the Waiting Queue Length remains consistently low or experiences occasional short spikes, it indicates that the compute resources you have reserved are higher than the actual demand for activities. In such cases, regardless of whether the Available Capacity Percentage is high or low, it's recommended to reduce the allocated compute resources to lower your costs. By rightsizing the compute to match the actual workload requirements, you can optimize your resource utilization and achieve cost savings without compromising the efficiency of your operations.

-:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-performance-oriented.png" alt-text="Screenshot of managed virtual network integration runtime performance oriented scenario.":::
+:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-performance-oriented.png" alt-text="Screenshot of managed virtual network integration runtime performance oriented scenario." lightbox="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-performance-oriented.png":::

-### Cost-Oriented
+### Cost-oriented
If you notice that all metrics, including Capacity Utilization, Available Capacity Percentage, and Waiting Queue Length, are high, it suggests that the compute resources you have reserved are insufficient for your activities. In this scenario, it's recommended to increase the allocated compute resources to reduce queue time. By adding more compute capacity, you can ensure that your activities have sufficient resources to execute efficiently, minimizing any delays caused by a crowded queue.

-:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-cost-oriented.png" alt-text="Screenshot of managed virtual network integration runtime cost oriented scenario.":::
+:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-cost-oriented.png" alt-text="Screenshot of managed virtual network integration runtime cost oriented scenario." lightbox="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-cost-oriented.png":::
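
To make the guidance in the preceding three scenarios easier to act on, here's a small illustrative helper that maps a snapshot of the metric values to a suggested action. The function, its thresholds, and its parameter names are assumptions for illustration; tune them to your own workload rather than treating them as product behavior.

```python
def classify_compute_pool(capacity_utilization: float,
                          available_capacity_pct: float,
                          waiting_queue_length: float) -> str:
    """Map sampled metric values to the scenarios described in this article.

    Thresholds are illustrative assumptions, not product-defined values.
    """
    if capacity_utilization >= 100 and waiting_queue_length > 0:
        # Cost-oriented: reserved compute can't keep up with demand.
        return "Increase the allocated compute to reduce queue time."
    if capacity_utilization < 50 and waiting_queue_length == 0:
        # Performance-oriented: reserved compute exceeds actual demand.
        return "Reduce the allocated compute to lower costs."
    if capacity_utilization < 100 and available_capacity_pct > 50 and waiting_queue_length == 0:
        # Balanced: resources are used efficiently with headroom to spare.
        return "Queue more activities until capacity utilization reaches 100%."
    return "Review the metrics over a longer window before changing anything."


# Example: utilization is maxed out and activities are waiting in the queue.
print(classify_compute_pool(capacity_utilization=100,
                            available_capacity_pct=80,
                            waiting_queue_length=12))
```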

-### Intermittent Activity Execution
+### Intermittent activity execution
If you notice that the Available Capacity Percentage fluctuates between low and high within a specific time period, it's likely due to the intermittent execution of your activities, where the Time-To-Live (TTL) period you have configured is shorter than the interval between your activities. This can have a significant impact on the performance of your workflow and can increase costs, as we charge for the warm-up time of the compute for up to 2 minutes.
To address this issue, there are two possible solutions. First, you can queue more activities to maintain a consistent workload and utilize the available compute resources more effectively. By keeping the compute continuously engaged, you can avoid the warm-up time and achieve better performance.
Alternatively, you can consider enlarging the TTL period to align with the interval between your activities. This ensures that the compute resources remain available for a longer duration, reducing the frequency of warm-up periods and optimizing cost-efficiency.
By implementing either of these solutions, you can enhance the performance of your workflow, minimize cost implications, and ensure a smoother execution of your intermittent activities.

-:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-intermittent-activity.png" alt-text="Screenshot of managed virtual network integration runtime intermittent activity scenario.":::
+:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-intermittent-activity.png" alt-text="Screenshot of managed virtual network integration runtime intermittent activity scenario." lightbox="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-intermittent-activity.png":::
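
As a rough way to weigh the two options described above (queueing more activities versus enlarging the TTL), the sketch below compares the configured TTL with the typical gap between activities and estimates the billable warm-up time. The up-to-2-minute warm-up charge comes from the explanation above, while the function and its parameters are illustrative assumptions.

```python
def estimated_warmup_minutes_per_day(ttl_minutes: float,
                                     activity_interval_minutes: float,
                                     activities_per_day: int,
                                     warmup_minutes: float = 2.0) -> float:
    """Estimate billable warm-up minutes per day for intermittent activities.

    If the TTL is shorter than the gap between activities, the compute cools
    down in between and each activity can incur the warm-up charge again.
    """
    if ttl_minutes >= activity_interval_minutes:
        # The compute stays warm between activities, so no repeated warm-up.
        return 0.0
    return warmup_minutes * activities_per_day


# Example: activities every 45 minutes with a 30-minute TTL, 24 runs per day.
print(estimated_warmup_minutes_per_day(ttl_minutes=30,
                                       activity_interval_minutes=45,
                                       activities_per_day=24))  # 48.0
```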

-## Next Steps
-Advance to the following tutorial to learn about Managed Virtual Network: [Managed virtual network and managed private endpoints](managed-virtual-network-private-endpoint.md).
+## Next steps
+Advance to the following tutorial to learn about Managed Virtual Network: [Managed virtual network and managed private endpoints](managed-virtual-network-private-endpoint.md).
