content/learning-paths/servers-and-cloud-computing/irq-tuning-guide/_index.md (5 additions, 6 deletions)

@@ -1,17 +1,16 @@
 ---
-title: Learn about the impact of network interrupts on cloud workloads
+title: Optimize network interrupt handling on Arm servers

-draft: true
-cascade:
-  draft: true
-
+
 minutes_to_complete: 20

-who_is_this_for: This is a specialized topic for developers and performance engineers who are interested in understanding how network interrupt patterns can impact performance on cloud servers.
+who_is_this_for: This is an introductory topic for developers and performance engineers who are interested in understanding how network interrupt patterns can impact performance on cloud servers.

 learning_objectives:
 - Analyze the current interrupt request (IRQ) layout on an Arm Linux system
 - Experiment with different interrupt options and patterns to improve performance
+- Configure optimal IRQ distribution strategies for your workload

content/learning-paths/servers-and-cloud-computing/irq-tuning-guide/conclusion.md (25 additions, 8 deletions)

@@ -34,18 +34,35 @@ For larger systems with more than 16 vCPUs, the findings are different:

 On larger systems, the overhead of interrupt handling is proportionally smaller compared to the available processing power. The main performance bottleneck occurs when multiple high-frequency network interrupts compete for the same core.

-## Implementation Considerations
+## Implementation considerations

-When implementing these IRQ management strategies, there are some important points to keep in mind.
+When implementing these IRQ management strategies, several factors influence your success.

-Pay attention to the workload type. CPU-bound applications may benefit from different IRQ patterns than I/O-bound applications.
+Consider your workload type first, as CPU-bound applications can benefit from different IRQ patterns than I/O-bound applications. Always benchmark your specific workload with different IRQ patterns rather than assuming one approach works universally.

-Always benchmark your specific workload with different IRQ patterns.
+For real-time monitoring, use `watch -n1 'grep . /proc/interrupts'` to observe IRQ distribution as it happens. This helps you verify your changes are working as expected.
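
To make that check easier to read, the distribution can also be summarized per IRQ. The sketch below is illustrative only: the script name is hypothetical, `eth0` is a placeholder interface, and it assumes the NIC's queue interrupts appear by name in `/proc/interrupts`.

```bash
#!/usr/bin/env bash
# irq-layout.sh (hypothetical helper): list a NIC's IRQs and the CPUs they are pinned to.
IFACE=${1:-eth0}   # placeholder; pass your interface name as the first argument

grep "$IFACE" /proc/interrupts | while read -r line; do
    irq=$(echo "$line" | awk -F: '{print $1}' | tr -d ' ')      # IRQ number is the first column
    cpus=$(cat "/proc/irq/$irq/smp_affinity_list" 2>/dev/null)  # current affinity as a CPU list
    echo "IRQ $irq ($IFACE) -> CPUs $cpus"
done
```

Running it under `watch -n1`, next to `/proc/interrupts` itself, shows whether the per-CPU counters are actually growing on the cores you expect.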

-Monitor IRQ counts in real-time using `watch -n1 'grep . /proc/interrupts'` to observe IRQ distribution in real-time.
+On multi-socket systems, NUMA effects become important. Keep IRQs on cores close to the PCIe devices generating them to minimize cross-node memory access latency. Additionally, ensure your IRQ affinity settings persist across reboots by adding them to `/etc/rc.local` or creating a systemd service file.
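
For illustration, pinning a device's IRQs onto its local NUMA node might look like the sketch below. The IRQ numbers and CPU range are placeholders that you would read from `/proc/interrupts` and from `/sys/class/net/<iface>/device/numa_node`; the script could then be called from `/etc/rc.local` or a oneshot systemd service so the affinity is reapplied after a reboot.

```bash
#!/usr/bin/env bash
# Sketch: pin a NIC's IRQs to CPUs on the NUMA node local to the device.
LOCAL_CPUS="0-3"          # assumed cores on the NIC's NUMA node
NIC_IRQS="45 46 47 48"    # example IRQ numbers only; take the real ones from /proc/interrupts

for irq in $NIC_IRQS; do
    # Writing a CPU list here changes which cores service the interrupt.
    echo "$LOCAL_CPUS" > "/proc/irq/$irq/smp_affinity_list"
done
```

Keep in mind that the irqbalance daemon, if it is running, can overwrite manual affinity settings, so it is common to stop it or exclude these IRQs from it before applying a layout like this.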

-Also consider NUMA effects on multi-socket systems. Keep IRQs on cores close to the PCIe devices generating them to minimize cross-node memory access.
+As workloads and hardware evolve, revisiting and adjusting IRQ management strategies may be necessary to maintain optimal performance. What works well today might need refinement as your application scales or changes.

-Make sure to set up IRQ affinity settings in `/etc/rc.local` or a systemd service file to ensure they persist across reboots.
+## Next steps

-Remember that as workloads and hardware evolve, revisiting and adjusting IRQ management strategies may be necessary to maintain optimal performance.
+You have successfully learned how to optimize network interrupt handling on Arm servers. You can now analyze IRQ distributions, implement different management patterns, and configure persistent solutions for your workloads.
+
+### Learn more about Arm server performance
+
+* [Deploy applications on Arm-based cloud instances](../csp/)
+* [Get started with performance analysis using Linux Perf](../../../install-guides/perf/)
+* [Optimize server applications for Arm Neoverse processors](../mongodb/)
+
+### Explore related topics
+
+* [Understanding Arm server architecture](../arm-cloud-native-performance/)
+* [Cloud migration strategies for Arm](../migration/)
+* [Network performance tuning best practices](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/)