```console
Connecting to host <server-ip>, port 5201
...
iperf Done.
```

## Troubleshooting steps

Use the following troubleshooting steps to diagnose the cluster.

### Gather information

To assist with troubleshooting, gather and provide the following cluster information:

* Subscription ID: the unique identifier of your Azure subscription.
* Tenant ID: the unique identifier of your Microsoft Entra tenant.
* Undercloud Name: the name of the undercloud resource associated with your deployment.
* Undercloud Resource Group: the resource group containing the undercloud resource.
* NAKS Cluster Name: the name of the NAKS cluster experiencing issues.
* NAKS Cluster Resource Group: the resource group containing the NAKS cluster.
* ISDs connected to NAKS: the details of the isolation domains (ISDs) connected to the NAKS cluster.
* Source and Destination IPs: the source and destination IP addresses where packet drops are observed.
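The subscription and tenant identifiers above can be pulled from the Azure CLI. The following is a minimal sketch; the `networkcloud` CLI extension and the resource-group placeholder are assumptions, so substitute your own values:

```shell
# Subscription and tenant IDs of the current Azure CLI login context.
az account show --query id --output tsv
az account show --query tenantId --output tsv

# List NAKS (Nexus Kubernetes) cluster names in a resource group
# (assumes the 'networkcloud' extension is installed).
az networkcloud kubernetescluster list \
  --resource-group <naks-cluster-resource-group> \
  --query "[].name" --output tsv
```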

### Verify provisioning status of the Network Fabric

In the Azure portal, verify that the Network Fabric (NF) is provisioned: the Provisioning State should be 'Succeeded' and the Configuration State 'Provisioned'.
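The same two states can also be read from the Azure CLI. This is a hedged sketch assuming the `managednetworkfabric` extension and placeholder resource names:

```shell
# Show only the two fabric states of interest; names are placeholders.
az networkfabric fabric show \
  --resource-group <network-fabric-resource-group> \
  --resource-name <network-fabric-name> \
  --query "{provisioningState: provisioningState, configurationState: configurationState}" \
  --output table
```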

### View iperf-client pod events

Use kubectl to inspect events from the iperf-client pod; the event messages can help identify the root cause of the pod's issue.

```console
kubectl get events --namespace default | grep iperf-client
NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE
default 5m39s Warning BackOff pod/iperf-client-8f7974984-xr67p Back-off restarting failed container iperf-client in pod iperf-client-8f7974984-xr67p_default(masked-id)
```
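Beyond events, the pod's description and the logs of the previously crashed container usually explain a `BackOff`. A sketch follows; the `app=iperf-client` label and the `iperf-client` deployment name are assumptions inferred from the pod name above:

```shell
# Full pod status, including the last container's exit code and reason.
kubectl describe pod --namespace default -l app=iperf-client

# Logs from the previous (crashed) container instance.
kubectl logs --namespace default deployment/iperf-client --previous
```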

### Validate L3 ISD configuration

Confirm that the L3 ISD (Layer 3 Isolation Domain) configuration on the devices is correct.
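The L3 isolation domain's state can also be checked from the CLI. A sketch, again assuming the `managednetworkfabric` extension and placeholder names; a healthy domain is administratively enabled with a succeeded provisioning state:

```shell
# L3 isolation domain status; names are placeholders.
az networkfabric l3domain show \
  --resource-group <isd-resource-group> \
  --resource-name <l3-isolation-domain-name> \
  --query "{administrativeState: administrativeState, provisioningState: provisioningState}" \
  --output table
```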

## Potential solutions

If the iperf-client pod is constantly restarted while other resource statuses appear healthy, try the following remedies:

### Adjust network buffer settings

Modify the network buffer settings to improve performance by adjusting the following parameters:
* net.core.rmem_max: Increase the maximum receive buffer size.
* net.core.wmem_max: Increase the maximum send buffer size.
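A common rule of thumb is to size these buffers to at least the link's bandwidth-delay product (BDP). The sketch below uses assumed example values (a 10 Gbit/s link with a 2 ms round-trip time); measure your own bandwidth and RTT before applying:

```shell
# Bandwidth-delay product: the number of bytes the path can hold in flight.
BANDWIDTH_BITS_PER_SEC=10000000000   # assumed 10 Gbit/s link
RTT_MS=2                             # assumed 2 ms round-trip time
BDP_BYTES=$(( BANDWIDTH_BITS_PER_SEC / 8 * RTT_MS / 1000 ))
echo "BDP: ${BDP_BYTES} bytes"       # BDP: 2500000 bytes

# Apply on each affected node (requires root):
#   sysctl -w net.core.rmem_max="${BDP_BYTES}"
#   sysctl -w net.core.wmem_max="${BDP_BYTES}"
```

Values written with `sysctl -w` do not survive a reboot; persist them in `/etc/sysctl.d/` if they resolve the issue.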