Conversation

@rmvasuki
Contributor

HPE Alletra Storage arrays support NVMe-oF block access.

  1. The "round-robin" iopolicy is preferred for performance benefits.
  2. Setting ctrl_loss_tmo to -1 for NVMe/TCP controllers enables indefinite reconnect attempts after a path loss, and disables purging of the path on the host.
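For context, nvme-cli applies vendor-specific iopolicy defaults through udev rules. A hedged sketch of what such a rule can look like follows; the model match string and line layout here are assumptions for illustration, not taken from this PR:

```
# Sketch of a vendor iopolicy udev rule, modeled on nvme-cli's existing
# vendor rules. The ATTR{model} match string is an assumption.
ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", \
  ATTR{model}=="HPE Alletra*", ATTR{iopolicy}="round-robin"
```

The ctrl_loss_tmo side can also be requested at connect time, e.g. `nvme connect -t tcp ... --ctrl-loss-tmo=-1`, which asks the host to keep retrying reconnects indefinitely rather than removing the path.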
Signed-off-by: Vasuki Manikarnike <[email protected]>
@igaw
Collaborator

igaw commented Jul 15, 2025

I don't have any problems with adding this rule, but are you sure about round-robin? Did you have a look at queue-depth? round-robin has severe limits, e.g. it doesn't check the utilization of a link, if one link fails, all others are taken out as well.

@igaw igaw merged commit 15bf951 into linux-nvme:master Jul 25, 2025
17 checks passed
@igaw
Collaborator

igaw commented Jul 25, 2025

I've merged it for now. If someone complains, it's easy to change :)

@rmvasuki
Contributor Author

> I don't have any problems with adding this rule, but are you sure about round-robin? Did you have a look at queue-depth? round-robin has severe limits, e.g. it doesn't check the utilization of a link, if one link fails, all others are taken out as well.

Sorry, I missed responding to this. I will investigate queue-depth further and report back here.
Could you elaborate a bit on a single link failure taking down the other links? We haven't seen that kind of behaviour in our testing so far.

thanks!

@igaw
Collaborator

igaw commented Jul 25, 2025

Ah, no worries.

It's been a while since I last played with round-robin, but IIRC, the failing path significantly impacted the good paths for a while. It might also be related to the transport used, e.g., FC. Anyway, I can't remember the details, but generally, we (SUSE) advise using queue-depth as it results in better performance and handles error cases more effectively. @mwilck has a bit more experience with this.
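For anyone wanting to compare the two policies, the multipath iopolicy is switchable at runtime through sysfs. A minimal sketch, assuming a subsystem named `nvme-subsys0` and a kernel new enough to offer queue-depth:

```shell
# Inspect the current multipath I/O policy for a subsystem (name assumed)
cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy

# Switch to queue-depth for an A/B comparison (needs root and kernel support)
echo queue-depth > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
```

This makes it straightforward to benchmark round-robin against queue-depth on the same host without reconnecting.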

@rmvasuki rmvasuki deleted the udev-rules-hpe-alletra branch July 26, 2025 08:21