
Commit 5f9bbea

alanpeterad authored and axboe committed
nvme: Atomic write support
Add support to set block layer request_queue atomic write limits. The limits will be derived from either the namespace or controller atomic parameters.

NVMe atomic-related parameters are grouped into "normal" and "power-fail" (or PF) classes of parameters. For atomic write support, only the PF parameters are of interest. The "normal" parameters are concerned with racing reads and writes (which also applies to PF). See NVM Command Set Specification Revision 1.0d, section 2.1.4, for reference.

Whether to use per-namespace or controller atomic parameters is decided by NSFEAT bit 1 - see Figure 97: Identify – Identify Namespace Data Structure, NVM Command Set.

NVMe namespaces may define an atomic boundary, whereby no atomic guarantees are provided for a write which straddles this per-LBA-space boundary. The block layer merging policy is such that no merges may occur in which the resultant request would straddle such a boundary.

Unlike SCSI, NVMe specifies no granularity or alignment rules apart from the atomic boundary rule. In addition, again unlike SCSI, there is no dedicated atomic write command - a write which adheres to the atomic size limit and boundary is implicitly atomic.

If NSFEAT bit 1 is set, the following parameters are of interest:
- NAWUPF (Namespace Atomic Write Unit Power Fail)
- NABSPF (Namespace Atomic Boundary Size Power Fail)
- NABO (Namespace Atomic Boundary Offset)

and we set the request_queue limits as follows:
- atomic_write_unit_max = rounddown_pow_of_two(NAWUPF)
- atomic_write_max_bytes = NAWUPF
- atomic_write_boundary = NABSPF

In the unlikely scenario that NABO is non-zero, atomic writes will not be supported at all, as dealing with this adds extra complexity. This policy may change in the future.

In all cases, atomic_write_unit_min is set to the logical block size.

If NSFEAT bit 1 is unset, the following parameter is of interest:
- AWUPF (Atomic Write Unit Power Fail)

and we set the request_queue limits as follows:
- atomic_write_unit_max = rounddown_pow_of_two(AWUPF)
- atomic_write_max_bytes = AWUPF
- atomic_write_boundary = 0

A new function, nvme_valid_atomic_write(), is also called from the submission path to verify that a request submitted to the driver will actually be executed atomically. As mentioned, there is no dedicated NVMe atomic write command (which could itself error for a command exceeding the controller atomic write limits).

Note on NABSPF: there seems to be some vagueness in the spec as to whether NABSPF applies when NSFEAT bit 1 is unset. Figure 97 does not explicitly mention NABSPF or how it is affected by bit 1. However, Figure 4 does say to check Figure 97 for information about per-namespace parameters, which NABSPF is, so it is implied. However, nvme_update_disk_info() currently does check the namespace parameter NABO regardless of this bit.

Signed-off-by: Alan Adamson <[email protected]>
Reviewed-by: Keith Busch <[email protected]>
Reviewed-by: Martin K. Petersen <[email protected]>
jpg: total rewrite
Signed-off-by: John Garry <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
1 parent 84f3a3c commit 5f9bbea
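
To make the limit derivation described in the commit message concrete, here is a minimal user-space sketch (not part of the patch). The variable names, sample Identify values, and the rounddown helper are hypothetical stand-ins; the conversions mirror the "(1 + field) * block size" convention used in the diff below.

/*
 * Illustrative sketch only (not part of the patch): derive the atomic write
 * limits described in the commit message from hypothetical Identify values.
 * The field names mirror the spec (NSFEAT bit 1, NAWUPF, NABSPF, AWUPF); the
 * sample numbers and the rounddown helper are stand-ins for kernel code.
 */
#include <stdio.h>

/* User-space stand-in for the kernel's rounddown_pow_of_two(). */
static unsigned int rounddown_pow_of_two(unsigned int v)
{
        unsigned int p = 1;

        while (p <= v / 2)
                p *= 2;
        return p;
}

int main(void)
{
        unsigned int bs = 4096;          /* logical block size in bytes */
        unsigned int nsfeat_bit1 = 1;    /* per-namespace atomic params valid */
        unsigned int nawupf = 15;        /* 0's based: 16 logical blocks */
        unsigned int nabspf = 31;        /* 0's based: 32 logical blocks */
        unsigned int awupf = 7;          /* controller-wide fallback, 0's based */
        unsigned int atomic_bs, boundary = 0;

        if (nsfeat_bit1 && nawupf) {
                atomic_bs = (1 + nawupf) * bs;          /* 65536 bytes */
                if (nabspf)
                        boundary = (1 + nabspf) * bs;   /* 131072 bytes */
        } else {
                atomic_bs = (1 + awupf) * bs;           /* no boundary */
        }

        printf("atomic_write_unit_min  = %u\n", bs);
        printf("atomic_write_unit_max  = %u\n", rounddown_pow_of_two(atomic_bs));
        printf("atomic_write_max_bytes = %u\n", atomic_bs);
        printf("atomic_write_boundary  = %u\n", boundary);
        return 0;
}

With these sample values the sketch reports unit_min 4096, unit_max 65536, max_bytes 65536 and boundary 131072 bytes.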

File tree

1 file changed (+52 lines, -0 lines)


drivers/nvme/host/core.c

Lines changed: 52 additions & 0 deletions
@@ -927,6 +927,36 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
         return BLK_STS_OK;
 }
 
+/*
+ * NVMe does not support a dedicated command to issue an atomic write. A write
+ * which does not adhere to the device atomic limits will silently be executed
+ * non-atomically. The request issuer should ensure that the write is within
+ * the queue atomic write limits, but just validate this in case it is not.
+ */
+static bool nvme_valid_atomic_write(struct request *req)
+{
+        struct request_queue *q = req->q;
+        u32 boundary_bytes = queue_atomic_write_boundary_bytes(q);
+
+        if (blk_rq_bytes(req) > queue_atomic_write_unit_max_bytes(q))
+                return false;
+
+        if (boundary_bytes) {
+                u64 mask = boundary_bytes - 1, imask = ~mask;
+                u64 start = blk_rq_pos(req) << SECTOR_SHIFT;
+                u64 end = start + blk_rq_bytes(req) - 1;
+
+                /* If greater than the boundary size, the write must cross a boundary */
+                if (blk_rq_bytes(req) > boundary_bytes)
+                        return false;
+
+                if ((start & imask) != (end & imask))
+                        return false;
+        }
+
+        return true;
+}
+
 static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
                 struct request *req, struct nvme_command *cmnd,
                 enum nvme_opcode op)
@@ -942,6 +972,9 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
         if (req->cmd_flags & REQ_RAHEAD)
                 dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
 
+        if (req->cmd_flags & REQ_ATOMIC && !nvme_valid_atomic_write(req))
+                return BLK_STS_INVAL;
+
         cmnd->rw.opcode = op;
         cmnd->rw.flags = 0;
         cmnd->rw.nsid = cpu_to_le32(ns->head->ns_id);
@@ -1920,6 +1953,23 @@ static void nvme_configure_metadata(struct nvme_ctrl *ctrl,
         }
 }
 
+
+static void nvme_update_atomic_write_disk_info(struct nvme_ns *ns,
+                        struct nvme_id_ns *id, struct queue_limits *lim,
+                        u32 bs, u32 atomic_bs)
+{
+        unsigned int boundary = 0;
+
+        if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf) {
+                if (le16_to_cpu(id->nabspf))
+                        boundary = (le16_to_cpu(id->nabspf) + 1) * bs;
+        }
+        lim->atomic_write_hw_max = atomic_bs;
+        lim->atomic_write_hw_boundary = boundary;
+        lim->atomic_write_hw_unit_min = bs;
+        lim->atomic_write_hw_unit_max = rounddown_pow_of_two(atomic_bs);
+}
+
 static u32 nvme_max_drv_segments(struct nvme_ctrl *ctrl)
 {
         return ctrl->max_hw_sectors / (NVME_CTRL_PAGE_SIZE >> SECTOR_SHIFT) + 1;
@@ -1966,6 +2016,8 @@ static bool nvme_update_disk_info(struct nvme_ns *ns, struct nvme_id_ns *id,
                         atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs;
                 else
                         atomic_bs = (1 + ns->ctrl->subsys->awupf) * bs;
+
+                nvme_update_atomic_write_disk_info(ns, id, lim, bs, atomic_bs);
         }
 
         if (id->nsfeat & NVME_NS_FEAT_IO_OPT) {
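
As a quick illustration of the boundary check in nvme_valid_atomic_write() above, the following stand-alone sketch (hypothetical values, not part of the commit) applies the same mask arithmetic to show which writes would be flagged as crossing an atomic boundary; as in the driver, the boundary size is assumed to be a power of two.

/*
 * Stand-alone illustration (hypothetical values, not part of the commit) of
 * the mask arithmetic used by nvme_valid_atomic_write(): a write crosses an
 * atomic boundary if its first and last bytes fall into different
 * boundary-sized windows.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool crosses_boundary(uint64_t start, uint64_t len, uint64_t boundary_bytes)
{
        uint64_t mask = boundary_bytes - 1, imask = ~mask;
        uint64_t end = start + len - 1;

        /* A request larger than the boundary size must cross a boundary. */
        if (len > boundary_bytes)
                return true;

        /* Different windows for the first and last byte means a crossing. */
        return (start & imask) != (end & imask);
}

int main(void)
{
        uint64_t boundary = 16 * 1024;  /* hypothetical 16 KiB atomic boundary */

        /* 8 KiB write at offset 12 KiB: spans bytes 12K..20K-1, two windows. */
        printf("12K+8K crosses: %d\n", crosses_boundary(12 * 1024, 8 * 1024, boundary));

        /* 8 KiB write at offset 16 KiB: fits entirely in the second window. */
        printf("16K+8K crosses: %d\n", crosses_boundary(16 * 1024, 8 * 1024, boundary));
        return 0;
}

With a 16 KiB boundary, an 8 KiB write starting at offset 12 KiB is reported as crossing (its first and last bytes land in different windows), while the same write starting at offset 16 KiB is not.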
