
Conversation

@ikegami-t
Contributor

Added because this log page was introduced in NVMe 2.1.

@ikegami-t
Contributor Author

These changes depend on libnvme PR linux-nvme/libnvme#961.

Include nvmf_exat_ptr_next export.

Signed-off-by: Tokunori Ikegami <[email protected]>
Added because this log page was introduced in NVMe 2.1.

Signed-off-by: Tokunori Ikegami <[email protected]>
This was copied from the existing completion but was not updated.

Signed-off-by: Tokunori Ikegami <[email protected]>
Updated both the bash and zsh completions for the command.

Signed-off-by: Tokunori Ikegami <[email protected]>
@igaw
Collaborator

igaw commented Feb 24, 2025

There is already code for printing the discovery log: nvme_show_discovery_log

And the print out can be triggered via nvme discover. Is there something missing?

@ikegami-t
Contributor Author

ikegami-t commented Feb 24, 2025

nvme_show_discovery_log() prints the discovery log page (log identifier 70h), but this command outputs the host discovery log page (log identifier 71h). Sorry, I am not sure whether the host discovery log page (log identifier 71h) can be added to the nvme discover command, so I added the command directly in nvme.c rather than in fabrics.c.
(Added)
Note: I am now also trying to add the AVE discovery log page (log identifier 72h) in the same way.
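For reference, the three discovery-related log page identifiers discussed in this thread can be sketched as a small lookup. The numeric values (70h, 71h, 72h) are from the comments above; the mapping and helper are purely illustrative, not a libnvme or nvme-cli API:

```python
# Discovery-related NVMe log page identifiers mentioned in this thread.
# Illustrative only; not part of libnvme or nvme-cli.
DISCOVERY_LOG_PAGES = {
    0x70: "Discovery",       # printed by nvme_show_discovery_log()
    0x71: "Host Discovery",  # printed by the new command added in nvme.c
    0x72: "AVE Discovery",   # planned follow-up mentioned above
}

def log_page_name(lid: int) -> str:
    """Return a human-readable name for a discovery log identifier."""
    return DISCOVERY_LOG_PAGES.get(lid, f"Unknown (0x{lid:02x})")
```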

@ikegami-t
Contributor Author

Updated the changes to bump the libnvme dependency.

@igaw
Collaborator

igaw commented Feb 24, 2025

Ah I see. So this is something for the MI based devices then.

-----------
Retrieve Host Discovery Log, show it

The <device> parameter is a mandatory NVMe character device (ex: /dev/nvme0).
Collaborator

The spec says "The Host Discovery log page shall not be supported by controllers that expose namespaces for NVMe over PCIe or NVMe over Fabrics", thus the kernel will not expose a /dev/nvme0 interface for this, right? I assume this is for MI devices.

Contributor Author

Yes, it seems so if MI devices are reached through the discovery controller, but I have not actually seen any MI devices. So is it okay to fix the documentation example from /dev/nvme0 to mctp:<net>,<eid>[:ctrl-id], as supported by the discovery controller? I am also thinking about reworking this into the nvme discover command as you mentioned, since that command can actually be given a discovery controller device. Is it okay to do that separately later?

Contributor Author

For now, I just updated the documentation description to change the NVMe character device to the MI device.
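The MI device address format mentioned above, mctp:&lt;net&gt;,&lt;eid&gt;[:ctrl-id], can be illustrated with a small parser. This is only a sketch of the string format discussed in the thread; the function name and the regex are hypothetical, not nvme-cli code:

```python
import re

# Parse the MI device string "mctp:<net>,<eid>[:ctrl-id]" discussed above.
# Illustrative sketch only; nvme-cli's actual parsing lives in its C code.
_MCTP_RE = re.compile(r"^mctp:(\d+),(\d+)(?::(\d+))?$")

def parse_mctp_device(dev: str):
    """Return (net, eid, ctrl_id); ctrl_id is None if :ctrl-id is absent."""
    m = _MCTP_RE.match(dev)
    if not m:
        raise ValueError(f"not an mctp device string: {dev!r}")
    net, eid, ctrl = m.groups()
    return int(net), int(eid), int(ctrl) if ctrl is not None else None
```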

Added the nvme-host-discovery-log.txt file.

Signed-off-by: Tokunori Ikegami <[email protected]>
@igaw
Collaborator

igaw commented Feb 25, 2025

@jk-ozlabs could you have a look here? I think it's correct now, but I don't have any MI devices to play with, so I can't even test it.

@jk-ozlabs
Collaborator

I haven't dealt with discovery controllers previously, so I can't provide much input on the command implementation, but this seems fine with regard to the existing log page support that is used on MI.

We have the base MI command supported in libnvme's nvme_mi_admin_get_log_host_discovery (added at linux-nvme/libnvme@3a304c23d ). So the underlying command implementation is already present here.

However, how has this been tested? You mention you don't have access to MI devices...

@ikegami-t
Contributor Author

However, how has this been tested? You mention you don't have access to MI devices...

The print code can be tested locally with test data for debugging, and the get log page command itself follows the same path as the other log page identifiers that are already supported, so I think the added command can basically be verified that way.
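The "test with local data" approach described here can be sketched as follows. For illustration, this parses the header of the standard Discovery log page (GENCTR, NUMREC, RECFMT, the layout of log identifier 70h) from a crafted buffer; the Host Discovery log (71h) has its own layout in NVMe 2.1, so this demonstrates only the testing approach, not the 71h format:

```python
import struct

def parse_discovery_log_header(buf: bytes):
    """Parse GENCTR (8 bytes), NUMREC (8 bytes) and RECFMT (2 bytes),
    all little-endian, from the start of a Discovery log page (70h)."""
    genctr, numrec, recfmt = struct.unpack_from("<QQH", buf, 0)
    return genctr, numrec, recfmt

# Crafted test data standing in for a real device response:
# generation counter 5, two records, record format 0, zero padding.
test_buf = struct.pack("<QQH", 5, 2, 0) + bytes(1024 - 18)
```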

@igaw
Collaborator

igaw commented Feb 26, 2025

you mention you don't have access to MI devices...

I don't have many devices to test on. Sometimes it would be helpful, but luckily many users report any regression pretty fast :)

@igaw igaw merged commit 2b7a6d7 into linux-nvme:master Feb 26, 2025
18 checks passed
@igaw
Collaborator

igaw commented Feb 26, 2025

Thanks!

@jk-ozlabs
Collaborator

I don't have many devices to test on. Sometimes it would be helpful but luckily there are many users reporting any regression pretty fast :)

oh, that was more for @ikegami-t . Maybe we should hook you up with a bit of a MCTP test environment? :)

@igaw
Collaborator

igaw commented Feb 27, 2025

I wonder if we could setup something similar to the nightly build. Need to talk to @MaisenbacherD about this :)

@ikegami-t
Contributor Author

I just started studying MCTP by searching for the pages below on Google. I have already cloned the GitHub mctp repository and built it, but unfortunately it looks like I do not have an MCTP-capable host PC or device drive locally (also, two drives broke before..). Anyway, I will study more later. Thank you.

  1. https://codeconstruct.com.au/docs/mctp-on-linux-introduction/
  2. https://github.com/CodeConstruct/mctp
  3. https://github.com/linux-nvme/libnvme/blob/master/examples/mi-mctp.c
  4. https://www.dmtf.org/sites/default/files/standards/documents/DSP0236_1.3.1.pdf

@ikegami-t ikegami-t mentioned this pull request Feb 28, 2025