There is a practical maximum data transfer size, which is currently 1024 pages: PRP entries translate into iovecs, and IOV_MAX is 1024. If a guest NVMe driver submits three PRP lists' worth of read/write pages, we'll end up with more than 1024 iovecs, p{read,write}v will fail with EINVAL, and eventually that failure will make its way back to the guest as an error on the NVMe command.
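As a back-of-the-envelope check of the limit described above (constants and helper here are illustrative, not propolis code):

```rust
// With 4KiB pages and IOV_MAX = 1024, the largest transfer we can hand to a
// single preadv/pwritev call is 1024 * 4096 bytes = 4 MiB.
const IOV_MAX: usize = 1024;
const PAGE_SIZE: usize = 4096;

// Worst case, each PRP entry maps one guest page, so one iovec per page.
fn iovec_count(transfer_bytes: usize) -> usize {
    (transfer_bytes + PAGE_SIZE - 1) / PAGE_SIZE
}

fn main() {
    let max_transfer = IOV_MAX * PAGE_SIZE;
    assert_eq!(max_transfer, 4 * 1024 * 1024); // 4 MiB
    assert!(iovec_count(max_transfer) <= IOV_MAX); // fits in one call
    assert!(iovec_count(max_transfer + 1) > IOV_MAX); // would get EINVAL
    println!("max single-call transfer: {} bytes", max_transfer);
}
```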
On MDTS, from the spec:

> The value is in units of the minimum memory page size (CAP.MPSMIN) and is reported as a power of two (2^n). A value of 0h indicates that there is no maximum data transfer size.
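That encoding can be sketched as follows (a hypothetical helper, not the driver's actual code, assuming CAP.MPSMIN is already resolved to bytes):

```rust
// MDTS per the spec text above: the limit is mpsmin_bytes * 2^MDTS,
// and a raw value of 0h means "no maximum data transfer size".
fn mdts_to_bytes(mdts: u8, mpsmin_bytes: u64) -> Option<u64> {
    if mdts == 0 {
        None // 0h: no limit
    } else {
        Some(mpsmin_bytes << mdts)
    }
}

fn main() {
    // With CAP.MPSMIN = 4KiB, MDTS = 10 allows 4KiB * 2^10 = 4 MiB.
    assert_eq!(mdts_to_bytes(10, 4096), Some(4 * 1024 * 1024));
    assert_eq!(mdts_to_bytes(0, 4096), None);
}
```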
The default MDTS of zero, used when the caller of PciNvme::create() doesn't provide one, is definitely misleading: it advertises "no limit" to the guest even though a real limit exists. In practice the bounds here are 0 < MDTS <= [derived from IOV_MAX], which currently works out to 0 < MDTS <= 10.
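The "MDTS <= 10" bound falls out of the iovec limit: a transfer of 2^MDTS pages needs up to 2^MDTS iovecs, so we need 2^MDTS <= IOV_MAX = 2^10. A one-liner sketch (IOV_MAX hardcoded for illustration):

```rust
// Largest MDTS we can honestly advertise: the biggest n with 2^n <= IOV_MAX.
const IOV_MAX: u32 = 1024;

fn main() {
    let max_mdts = IOV_MAX.ilog2(); // 1024 = 2^10, so max_mdts = 10
    assert_eq!(max_mdts, 10);
    println!("max advertisable MDTS: {}", max_mdts);
}
```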
I'm mildly curious what maximum sizes a guest would actually choose to use, because empirically Linux seems to limit itself to 256KiB, well below both 4MiB and "no limit". That observation could, of course, just be fio breaking up larger I/Os; I dunno.