
Commit 7cbafa3

shroffni authored and keithbusch committed
nvme-multipath: Add visibility for queue-depth io-policy
This patch adds nvme native multipath visibility for the queue-depth io-policy. It adds a new attribute file named "queue_depth" under the namespace device path node, which prints the number of active/in-flight I/O requests currently queued for the given path.

For instance, if we have a shared namespace accessible from two different controllers/paths, then listing the head block node of the shared namespace shows the following output:

$ ls -l /sys/block/nvme1n1/multipath/
nvme1c1n1 -> ../../../../../pci052e:78/052e:78:00.0/nvme/nvme1/nvme1c1n1
nvme1c3n1 -> ../../../../../pci058e:78/058e:78:00.0/nvme/nvme3/nvme1c3n1

In the above example, nvme1n1 is the head gendisk node created for the shared namespace, and the namespace is accessible through the nvme1c1n1 and nvme1c3n1 paths. For the queue-depth io-policy we can then refer to the "queue_depth" attribute file created under each namespace path:

$ cat /sys/block/nvme1n1/multipath/nvme1c1n1/queue_depth
518
$ cat /sys/block/nvme1n1/multipath/nvme1c3n1/queue_depth
504

From the above output we can infer that the I/O workload targeted at nvme1n1 uses two paths, nvme1c1n1 and nvme1c3n1, and the current queue depth of each path is 518 and 504 respectively. Reading the "queue_depth" file when the configured io-policy is anything but queue-depth shows no output.

Reviewed-by: Sagi Grimberg <[email protected]>
Reviewed-by: Hannes Reinecke <[email protected]>
Signed-off-by: Nilay Shroff <[email protected]>
Signed-off-by: Keith Busch <[email protected]>
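Reading the attribute is only meaningful once the subsystem's io-policy is actually set to queue-depth. A minimal session, switching the policy through the existing per-subsystem iopolicy sysfs file and assuming the namespace above belongs to a subsystem exposed as nvme-subsys1 (that subsystem name is illustrative):

$ cat /sys/class/nvme-subsystem/nvme-subsys1/iopolicy
numa
$ echo queue-depth > /sys/class/nvme-subsystem/nvme-subsys1/iopolicy
$ cat /sys/block/nvme1n1/multipath/nvme1c1n1/queue_depth
518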
1 parent 6546cc4 commit 7cbafa3

File tree

3 files changed (+15, -1 lines)


drivers/nvme/host/multipath.c

Lines changed: 12 additions & 0 deletions
@@ -976,6 +976,18 @@ static ssize_t ana_state_show(struct device *dev, struct device_attribute *attr,
 }
 DEVICE_ATTR_RO(ana_state);
 
+static ssize_t queue_depth_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
+
+	if (ns->head->subsys->iopolicy != NVME_IOPOLICY_QD)
+		return 0;
+
+	return sysfs_emit(buf, "%d\n", atomic_read(&ns->ctrl->nr_active));
+}
+DEVICE_ATTR_RO(queue_depth);
+
 static ssize_t numa_nodes_show(struct device *dev, struct device_attribute *attr,
 		char *buf)
 {
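For context, the nr_active value read by queue_depth_show() is a per-controller counter maintained by the pre-existing queue-depth iopolicy code in multipath.c; it is not touched by this diff. A simplified sketch of that bookkeeping, paraphrasing rather than quoting the kernel code, with unrelated parts of the real helpers omitted:

/*
 * Sketch of the existing accounting (simplified): when the queue-depth
 * policy is active, each dispatched request bumps the per-controller
 * nr_active counter and is flagged so that the completion side knows
 * to drop the count again.
 */
void nvme_mpath_start_request(struct request *rq)
{
	struct nvme_ns *ns = rq->q->queuedata;

	if (READ_ONCE(ns->head->subsys->iopolicy) == NVME_IOPOLICY_QD) {
		atomic_inc(&ns->ctrl->nr_active);
		nvme_req(rq)->flags |= NVME_MPATH_CNT_ACTIVE;
	}
}

void nvme_mpath_end_request(struct request *rq)
{
	struct nvme_ns *ns = rq->q->queuedata;

	/* Undo the dispatch-side increment so queue_depth reads a live count. */
	if (nvme_req(rq)->flags & NVME_MPATH_CNT_ACTIVE)
		atomic_dec(&ns->ctrl->nr_active);
}

This is also why the new attribute shows nothing under any other policy: the counter is only maintained while queue-depth is selected.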

drivers/nvme/host/nvme.h

Lines changed: 1 addition & 0 deletions
@@ -984,6 +984,7 @@ static inline void nvme_trace_bio_complete(struct request *req)
 extern bool multipath;
 extern struct device_attribute dev_attr_ana_grpid;
 extern struct device_attribute dev_attr_ana_state;
+extern struct device_attribute dev_attr_queue_depth;
 extern struct device_attribute dev_attr_numa_nodes;
 extern struct device_attribute subsys_attr_iopolicy;

drivers/nvme/host/sysfs.c

Lines changed: 2 additions & 1 deletion
@@ -258,6 +258,7 @@ static struct attribute *nvme_ns_attrs[] = {
 #ifdef CONFIG_NVME_MULTIPATH
 	&dev_attr_ana_grpid.attr,
 	&dev_attr_ana_state.attr,
+	&dev_attr_queue_depth.attr,
 	&dev_attr_numa_nodes.attr,
 #endif
 	&dev_attr_io_passthru_err_log_enabled.attr,
@@ -291,7 +292,7 @@ static umode_t nvme_ns_attrs_are_visible(struct kobject *kobj,
 		if (!nvme_ctrl_use_ana(nvme_get_ns_from_dev(dev)->ctrl))
 			return 0;
 	}
-	if (a == &dev_attr_numa_nodes.attr) {
+	if (a == &dev_attr_queue_depth.attr || a == &dev_attr_numa_nodes.attr) {
 		if (nvme_disk_is_ns_head(dev_to_disk(dev)))
 			return 0;
 	}
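Because of the nvme_disk_is_ns_head() check above, the attribute is created only under the per-path device nodes, never under the head gendisk node itself. Using the example paths from the commit message, a read against the head node would therefore be expected to fail:

$ cat /sys/block/nvme1n1/queue_depth
cat: /sys/block/nvme1n1/queue_depth: No such file or directory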
