Network-related IRQs can be identified by looking at the **Device** column in the output.
You can identify network interfaces using the command:
```bash
ip link show
```
Look for common interface naming patterns in the output. Traditional ethernet interfaces use names like `eth0`, while wireless interfaces typically appear as `wlan0`. Modern Linux systems often use the predictable naming scheme, which creates names like `enP3p3s0f0` and `ens5-Tx-Rx-0`.
The predictable naming scheme encodes the physical location within the interface name. For example, `enP3p3s0f0` breaks down as: `en` for ethernet, `P3` for PCI domain 3, `p3` for PCI bus 3, `s0` for PCI slot 0, and `f0` for function 0. This naming convention helps ensure network interfaces maintain consistent names across reboots by encoding their physical location in the system.
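If you want to confirm the physical location encoded in a name, you can inspect the interface's sysfs entry. This is a minimal sketch using the example name from above; substitute your own interface name:

```bash
# The symlink target shows the PCI address backing the interface.
readlink -f /sys/class/net/enP3p3s0f0/device
```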
## Improve performance
Once you've identified the network IRQs, you can adjust their CPU assignments to improve performance.
Identify the NIC (Network Interface Card) IRQs, then experiment with different CPU assignments and measure whether performance improves.
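For example, the following sketch lists the IRQ numbers for a hypothetical interface named `ens5` together with the CPUs each IRQ is currently allowed to run on; substitute your own interface name:

```bash
IFACE=ens5   # hypothetical interface name; use the one reported by `ip link show`

# Print each of the interface's IRQ numbers and its current CPU affinity list.
for irq in $(awk '/'$IFACE'/ {sub(":","",$1); print $1}' /proc/interrupts); do
    echo "IRQ $irq -> CPUs $(cat /proc/irq/$irq/smp_affinity_list)"
done
```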
You might notice that some NIC IRQs are assigned to the same CPU cores by default, creating duplicate assignments.
When network IRQs are assigned to the same CPU cores (as shown in the example above where IRQ 101 and 104 both use CPU 12), this can potentially degrade performance as multiple interrupts compete for the same resources, while other cores remain underutilized.
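To spot shared cores quickly, you can compare the affinity lists of the NIC's IRQs. This is a minimal sketch assuming a hypothetical interface named `ens5`; it prints any CPU list that more than one network IRQ currently uses:

```bash
IFACE=ens5   # hypothetical interface name; substitute your own

# Collect the CPU affinity list of each network IRQ and print duplicates only.
for irq in $(awk '/'$IFACE'/ {sub(":","",$1); print $1}' /proc/interrupts); do
    cat /proc/irq/$irq/smp_affinity_list
done | sort | uniq -d
```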
By optimizing IRQ distribution, you can achieve more balanced processing and improved throughput. This optimization is especially important for high-traffic servers where network performance is critical.
{{% notice Note %}} There are suggestions for experiments in the next section. {{% /notice %}}
## How can I reset my IRQs if I worsen performance?
If your experiments reduce performance, you can return the IRQs to their default assignments.
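A minimal sketch of one common way to restore defaults is shown below. It assumes the `irqbalance` service is available; the exact steps on your system might differ:

```bash
# Restart automatic balancing so the daemon redistributes IRQs across cores.
sudo systemctl restart irqbalance

# Alternatively, write the system default affinity mask back to every IRQ.
# Some IRQs cannot be changed, so write errors are silenced.
default_mask=$(cat /proc/irq/default_smp_affinity)
for irq_dir in /proc/irq/[0-9]*; do
    echo "$default_mask" | sudo tee "$irq_dir/smp_affinity" > /dev/null 2>&1
done
```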
`content/learning-paths/servers-and-cloud-computing/irq-tuning-guide/conclusion.md`
## Optimal IRQ Management Strategies
Performance testing across multiple cloud platforms shows that IRQ management effectiveness depends heavily on system size and workload characteristics. While no single approach works optimally in all scenarios, clear patterns emerged during testing under heavy network loads.
## Recommendations for systems with 16 vCPUs or fewer
For smaller systems with 16 or fewer vCPUs, the following strategies prove most effective:
- Concentrate network IRQs on just one or two CPU cores rather than spreading them across all available cores.
- Use the `smp_affinity` range assignment pattern with a limited core range (example: `0-1`).
- This approach works best when the number of NIC IRQs exceeds the number of available vCPUs.
- Focus on high-throughput network workloads where concentrated IRQ handling delivers the most significant performance improvements.
Performance improves significantly when network IRQs are concentrated rather than dispersed across all available cores on smaller systems. This concentration reduces context switching overhead and improves cache locality for interrupt handling.
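As an illustration, here is a minimal sketch of the concentrated assignment, assuming a hypothetical interface named `ens5` and cores 0-1 as the target range:

```bash
IFACE=ens5    # hypothetical interface name; substitute your own
CORES="0-1"   # limited core range for concentrated IRQ handling

# Pin every network IRQ for the interface to the same small core range.
for irq in $(awk '/'$IFACE'/ {sub(":","",$1); print $1}' /proc/interrupts); do
    echo "$CORES" | sudo tee /proc/irq/$irq/smp_affinity_list > /dev/null
done
```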
## Recommendations for systems with more than 16 vCPUs
For larger systems with more than 16 vCPUs, different strategies prove more effective:
- Default IRQ distribution typically delivers good performance.
- Focus on preventing multiple network IRQs from sharing the same CPU core.
- Use the diagnostic scripts from the previous section to identify and resolve overlapping IRQ assignments.
- Apply the paired core pattern to ensure balanced distribution across the system.
On larger systems, interrupt handling overhead becomes less significant relative to total processing capacity. The primary performance issue occurs when high-frequency network interrupts compete for the same core, creating bottlenecks.
## Implementation considerations
When implementing these IRQ management strategies, several factors influence your success:
- Consider your workload type first, as CPU-bound applications can benefit from different IRQ patterns than I/O-bound applications. Always benchmark your specific workload with different IRQ patterns rather than assuming one approach works universally.
- For real-time monitoring, use `watch -n1 'grep . /proc/interrupts'` to observe IRQ distribution as it happens. This helps you verify your changes are working as expected.
- On multi-socket systems, NUMA effects become important. Keep IRQs on cores close to the PCIe devices generating them to minimize cross-node memory access latency. Additionally, ensure your IRQ affinity settings persist across reboots by adding them to `/etc/rc.local` or creating a systemd service file (a sketch of the systemd approach follows this list).
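
As an illustration of the persistence step, the sketch below creates a oneshot systemd service that applies your affinity settings at boot. The unit name and script path are assumptions for the example; point `ExecStart` at the script that contains your own affinity commands.

```bash
# Hypothetical unit and script names, shown for illustration only.
sudo tee /etc/systemd/system/irq-affinity.service > /dev/null <<'EOF'
[Unit]
Description=Apply custom network IRQ affinity settings
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/set-irq-affinity.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

# Reload systemd and enable the service so it runs on every boot.
sudo systemctl daemon-reload
sudo systemctl enable irq-affinity.service
```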
As workloads and hardware evolve, revisiting and adjusting IRQ management strategies might be necessary to maintain optimal performance. What works well today might need refinement as your application scales or changes.
## Next Steps
You have successfully learned how to optimize network interrupt handling on Arm servers. You can now analyze IRQ distributions, implement different management patterns, and configure persistent solutions for your workloads.
### Learn more about Arm server performance

- [Deploy applications on Arm-based cloud instances](../csp/)
- [Get started with performance analysis using Linux Perf](../../../install-guides/perf/)
- [Optimize server applications for Arm Neoverse processors](../mongodb/)

### Explore related topics

- [Understanding Arm server architecture](../arm-cloud-native-performance/)
- [Cloud migration strategies for Arm](../migration/)
- [Network performance tuning best practices](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/)
`content/learning-paths/servers-and-cloud-computing/irq-tuning-guide/patterns.md`
Network interrupt requests (IRQs) can be distributed across CPU cores in various ways, each with potential benefits depending on your workload characteristics and system configuration. By strategically assigning network IRQs to specific cores, you can improve cache locality, reduce contention, and potentially boost overall system performance.
The following patterns have been tested on various systems and can be implemented using the provided scripts. An optimal pattern is suggested at the conclusion of this Learning Path, but your specific workload might benefit from a different approach.
## Common IRQ distribution patterns
Four main distribution strategies offer different performance characteristics:
- Default: uses the IRQ pattern provided at boot time by the Linux kernel
- Random: assigns all IRQs to cores without overlap with network IRQs
- Housekeeping: assigns all non-network IRQs to specific dedicated cores
- NIC-focused: assigns network IRQs to single or multiple ranges of cores, including pairs
## Scripts to implement IRQ management patterns
The scripts below demonstrate how to implement different IRQ management patterns on your system. Each script targets a specific distribution strategy. Before running these scripts, identify your network interface name using `ip link show` and determine your system's CPU topology with `lscpu`. Always test these changes in a non-production environment first, as improper IRQ assignment can impact system stability.
## Housekeeping pattern
```bash
# Move each ACPI:Ged (non-network) IRQ onto a dedicated housekeeping core.
# Only the for/done loop comes from the original script; pinning to core 0
# is an illustrative assumption.
for irq in $(awk '/ACPI:Ged/ {sub(":","",$1); print $1}' /proc/interrupts); do
    echo 0 | sudo tee /proc/irq/$irq/smp_affinity_list > /dev/null
done
```
## Paired core pattern
The paired core assignment pattern distributes network IRQs across CPU core pairs for better cache coherency.
```bash
# Only the for/done loop below comes from the original script; the loop body
# and the irqs array (the NIC's IRQ numbers) are assumed here. The full script
# cycles each IRQ onto a different core pair; "0-1" is one example pair.
for irq in "${irqs[@]}"; do
    echo "0-1" | sudo tee /proc/irq/$irq/smp_affinity_list > /dev/null
done
```
## Range assignment pattern
The range assignment pattern assigns network IRQs to a specific range of cores, providing dedicated network processing capacity.
```bash
# Only the for/done loop below comes from the original script; the IFACE
# value and the core range written inside the loop are assumed examples.
IFACE=ens5
for irq in $(awk '/'$IFACE'/ {sub(":","",$1); print $1}' /proc/interrupts); do
    echo "0-3" | sudo tee /proc/irq/$irq/smp_affinity_list > /dev/null
done
```
Each pattern offers different performance characteristics depending on your workload. The housekeeping pattern reduces system noise, paired cores optimize cache usage, and range assignment provides dedicated network processing capacity. Improper configuration can degrade performance or stability, so always test these patterns in a non-production environment to determine which provides the best results for your specific use case.
Continue to the next section for additional guidance.