
Commit 9da3d1e

johnpgarry authored and axboe committed
block: Add core atomic write support
Add atomic write support, as follows:
- add helper functions to get request_queue atomic write limits
- report request_queue atomic write support limits to sysfs and update Doc
- support to safely merge atomic writes
- deal with splitting atomic writes
- misc helper functions
- add a per-request atomic write flag

New request_queue limits are added, as follows:
- atomic_write_hw_max is set by the block driver and is the maximum length
  of an atomic write which the device may support. It is not necessarily a
  power-of-2.
- atomic_write_max_sectors is derived from atomic_write_hw_max and
  max_hw_sectors. It is always a power-of-2. Atomic writes may be merged,
  and atomic_write_max_sectors would be the limit on a merged atomic write
  request size. This value is not capped at max_sectors, as the value in
  max_sectors can be controlled from userspace, and it would only cause
  trouble if userspace could limit atomic_write_unit_max_bytes and the
  other atomic write limits.
- atomic_write_hw_unit_{min,max} are set by the block driver and are the
  min/max length of an atomic write unit which the device may support.
  They both must be a power-of-2. Typically atomic_write_hw_unit_max will
  hold the same value as atomic_write_hw_max.
- atomic_write_unit_{min,max} are derived from
  atomic_write_hw_unit_{min,max}, max_hw_sectors, and block core limits.
  Both min and max values must be a power-of-2.
- atomic_write_hw_boundary is set by the block driver. If non-zero, it
  indicates an LBA space boundary which an atomic write must not straddle
  if it is to be executed atomically by the disk. The value must be a
  power-of-2. Note that it would be acceptable to enforce a rule that
  atomic_write_hw_boundary_sectors is a multiple of
  atomic_write_hw_unit_max, but the resultant code would be more
  complicated.

All atomic write limits are by default set to 0 to indicate no atomic
write support. Even though Linux assumes that a logical block can always
be written atomically, we ignore this case as it is not of particular
interest. Stacked devices are also not supported for now.

An atomic write must always be submitted to the block driver as part of a
single request. As such, only a single BIO must be submitted to the block
layer for an atomic write. When a single atomic write BIO is submitted, it
cannot be split. As such, atomic_write_unit_{min,max}_bytes are limited by
the maximum guaranteed BIO size which will never need to be split. This
max size is calculated from the request_queue max segments and the number
of bvecs a BIO can fit, BIO_MAX_VECS. Currently we rely on userspace
issuing a write with iovcnt=1 for pwritev2() - as such, we can rely on
each segment containing PAGE_SIZE of data, apart from the first and last,
which each can hold a logical block size of data. The first and last will
be LBS length/aligned as we rely on direct IO alignment rules also.
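A rough standalone sketch of the guaranteed-BIO-size calculation described above, using hypothetical numbers (the in-kernel version is blk_queue_max_guaranteed_bio() in the blk-settings.c hunk below; nothing here is part of the commit itself):

#include <stdio.h>

/*
 * Standalone illustration of the "maximum guaranteed BIO size" reasoning:
 * the first and last segments are only guaranteed one logical block each,
 * every other segment is guaranteed a full page. Sample values are
 * hypothetical, not taken from any particular device.
 */
static unsigned int max_guaranteed_bio_bytes(unsigned int max_segments,
					     unsigned int bio_max_vecs,
					     unsigned int lbs,
					     unsigned int page_size)
{
	unsigned int segs = max_segments < bio_max_vecs ?
			    max_segments : bio_max_vecs;
	unsigned int len = (segs < 2 ? segs : 2) * lbs;

	if (segs > 2)
		len += (segs - 2) * page_size;
	return len;
}

int main(void)
{
	/* e.g. 128 segments, BIO_MAX_VECS = 256, 512 B LBS, 4 KiB pages */
	printf("%u bytes\n", max_guaranteed_bio_bytes(128, 256, 512, 4096));
	/*
	 * Prints 517120; rounded down to a power-of-2 this would cap
	 * atomic_write_unit_max at 256 KiB for such a queue (before also
	 * considering max_hw_sectors).
	 */
	return 0;
}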
New sysfs files are added to report the following atomic write limits:
- atomic_write_unit_max_bytes - same as atomic_write_unit_max_sectors in bytes
- atomic_write_unit_min_bytes - same as atomic_write_unit_min_sectors in bytes
- atomic_write_boundary_bytes - same as atomic_write_hw_boundary_sectors in bytes
- atomic_write_max_bytes - same as atomic_write_max_sectors in bytes

Atomic writes may only be merged with other atomic writes and only under
the following conditions:
- total resultant request length <= atomic_write_max_bytes
- the merged write does not straddle a boundary

Helper function bdev_can_atomic_write() is added to indicate whether
atomic writes may be issued to a bdev. If a bdev is a partition, the
partition start must be aligned with both atomic_write_unit_min_sectors
and atomic_write_hw_boundary_sectors.

FSes will rely on the block layer to validate that an atomic write BIO
submitted will be of valid size, so add blk_validate_atomic_write_op_size()
for this purpose. Userspace expects an atomic write which is of invalid
size to be rejected with -EINVAL, so add BLK_STS_INVAL for this. Also use
BLK_STS_INVAL for when a BIO needs to be split, as this should mean an
invalid size BIO.

Flag REQ_ATOMIC is used for indicating an atomic write.

Co-developed-by: Himanshu Madhani <[email protected]>
Signed-off-by: Himanshu Madhani <[email protected]>
Reviewed-by: Martin K. Petersen <[email protected]>
Signed-off-by: John Garry <[email protected]>
Reviewed-by: Keith Busch <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
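A minimal userspace sketch of discovering the new limits via sysfs. The disk name "sda" is only an example, and since these are queue attributes, whether they appear directly under the disk directory (as in the ABI text) or under its queue/ subdirectory should be confirmed on the target kernel:

#include <stdio.h>

/*
 * Read one of the new atomic write limit attributes for a disk. The path
 * layout and disk name are assumptions for illustration; the attribute
 * names come from the ABI documentation added by this commit.
 */
static unsigned long read_limit(const char *disk, const char *attr)
{
	char path[256];
	unsigned long val = 0;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/%s/%s", disk, attr);
	f = fopen(path, "r");
	if (!f)
		return 0;
	if (fscanf(f, "%lu", &val) != 1)
		val = 0;
	fclose(f);
	return val;
}

int main(void)
{
	static const char *attrs[] = {
		"atomic_write_unit_min_bytes",
		"atomic_write_unit_max_bytes",
		"atomic_write_max_bytes",
		"atomic_write_boundary_bytes",
	};
	unsigned int i;

	for (i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++)
		printf("%-30s %lu\n", attrs[i], read_limit("sda", attrs[i]));
	return 0;
}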
1 parent 0f9ca80 commit 9da3d1e

File tree

8 files changed: +304 -5 lines changed


Documentation/ABI/stable/sysfs-block

Lines changed: 53 additions & 0 deletions
@@ -21,6 +21,59 @@ Description:
 		device is offset from the internal allocation unit's
 		natural alignment.
 
+What:		/sys/block/<disk>/atomic_write_max_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <[email protected]>
+Description:
+		[RO] This parameter specifies the maximum atomic write
+		size reported by the device. This parameter is relevant
+		for merging of writes, where a merged atomic write
+		operation must not exceed this number of bytes.
+		This parameter may be greater than the value in
+		atomic_write_unit_max_bytes as
+		atomic_write_unit_max_bytes will be rounded down to a
+		power-of-two and atomic_write_unit_max_bytes may also be
+		limited by some other queue limits, such as max_segments.
+		This parameter - along with atomic_write_unit_min_bytes
+		and atomic_write_unit_max_bytes - will not be larger than
+		max_hw_sectors_kb, but may be larger than max_sectors_kb.
+
+
+What:		/sys/block/<disk>/atomic_write_unit_min_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <[email protected]>
+Description:
+		[RO] This parameter specifies the smallest block which can
+		be written atomically with an atomic write operation. All
+		atomic write operations must begin at a
+		atomic_write_unit_min boundary and must be multiples of
+		atomic_write_unit_min. This value must be a power-of-two.
+
+
+What:		/sys/block/<disk>/atomic_write_unit_max_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <[email protected]>
+Description:
+		[RO] This parameter defines the largest block which can be
+		written atomically with an atomic write operation. This
+		value must be a multiple of atomic_write_unit_min and must
+		be a power-of-two. This value will not be larger than
+		atomic_write_max_bytes.
+
+
+What:		/sys/block/<disk>/atomic_write_boundary_bytes
+Date:		February 2024
+Contact:	Himanshu Madhani <[email protected]>
+Description:
+		[RO] A device may need to internally split an atomic write I/O
+		which straddles a given logical block address boundary. This
+		parameter specifies the size in bytes of the atomic boundary if
+		one is reported by the device. This value must be a
+		power-of-two and at least the size as in
+		atomic_write_unit_max_bytes.
+		Any attempt to merge atomic write I/Os must not result in a
+		merged I/O which crosses this boundary (if any).
+
 
 What:		/sys/block/<disk>/diskseq
 Date:		February 2021

block/blk-core.c

Lines changed: 19 additions & 0 deletions
@@ -174,6 +174,8 @@ static const struct {
 	/* Command duration limit device-side timeout */
 	[BLK_STS_DURATION_LIMIT]	= { -ETIME, "duration limit exceeded" },
 
+	[BLK_STS_INVAL]		= { -EINVAL,	"invalid" },
+
 	/* everything else not covered above: */
 	[BLK_STS_IOERR]		= { -EIO,	"I/O" },
 };

@@ -739,6 +741,18 @@ void submit_bio_noacct_nocheck(struct bio *bio)
 	__submit_bio_noacct(bio);
 }
 
+static blk_status_t blk_validate_atomic_write_op_size(struct request_queue *q,
+						struct bio *bio)
+{
+	if (bio->bi_iter.bi_size > queue_atomic_write_unit_max_bytes(q))
+		return BLK_STS_INVAL;
+
+	if (bio->bi_iter.bi_size % queue_atomic_write_unit_min_bytes(q))
+		return BLK_STS_INVAL;
+
+	return BLK_STS_OK;
+}
+
 /**
  * submit_bio_noacct - re-submit a bio to the block device layer for I/O
  * @bio: The bio describing the location in memory and on the device.

@@ -797,6 +811,11 @@ void submit_bio_noacct(struct bio *bio)
 	switch (bio_op(bio)) {
 	case REQ_OP_READ:
 	case REQ_OP_WRITE:
+		if (bio->bi_opf & REQ_ATOMIC) {
+			status = blk_validate_atomic_write_op_size(q, bio);
+			if (status != BLK_STS_OK)
+				goto end_io;
+		}
 		break;
 	case REQ_OP_FLUSH:
 		/*
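To make the size rule enforced by blk_validate_atomic_write_op_size() concrete, here is a minimal standalone sketch of the same check with hypothetical limits (unit_min = 4 KiB, unit_max = 64 KiB); it is an illustration, not kernel code:

#include <stdbool.h>
#include <stdio.h>

/*
 * Same rule as blk_validate_atomic_write_op_size(): the bio size must not
 * exceed unit_max and must be a multiple of unit_min.
 */
static bool atomic_write_size_ok(unsigned int size,
				 unsigned int unit_min, unsigned int unit_max)
{
	return size <= unit_max && !(size % unit_min);
}

int main(void)
{
	const unsigned int sizes[] = { 4096, 8192, 6144, 65536, 131072 };
	unsigned int i;

	/* 6144 and 131072 are rejected; the rest are valid atomic sizes */
	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%6u bytes: %s\n", sizes[i],
		       atomic_write_size_ok(sizes[i], 4096, 65536) ?
		       "ok" : "rejected with -EINVAL (BLK_STS_INVAL)");
	return 0;
}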

block/blk-merge.c

Lines changed: 46 additions & 4 deletions
@@ -154,8 +154,16 @@ static struct bio *bio_split_write_zeroes(struct bio *bio,
 	return bio_split(bio, lim->max_write_zeroes_sectors, GFP_NOIO, bs);
 }
 
-static inline unsigned int blk_boundary_sectors(const struct queue_limits *lim)
+static inline unsigned int blk_boundary_sectors(const struct queue_limits *lim,
+						bool is_atomic)
 {
+	/*
+	 * chunk_sectors must be a multiple of atomic_write_boundary_sectors if
+	 * both non-zero.
+	 */
+	if (is_atomic && lim->atomic_write_boundary_sectors)
+		return lim->atomic_write_boundary_sectors;
+
 	return lim->chunk_sectors;
 }
 

@@ -172,8 +180,18 @@ static inline unsigned get_max_io_size(struct bio *bio,
 {
 	unsigned pbs = lim->physical_block_size >> SECTOR_SHIFT;
 	unsigned lbs = lim->logical_block_size >> SECTOR_SHIFT;
-	unsigned boundary_sectors = blk_boundary_sectors(lim);
-	unsigned max_sectors = lim->max_sectors, start, end;
+	bool is_atomic = bio->bi_opf & REQ_ATOMIC;
+	unsigned boundary_sectors = blk_boundary_sectors(lim, is_atomic);
+	unsigned max_sectors, start, end;
+
+	/*
+	 * We ignore lim->max_sectors for atomic writes because it may less
+	 * than the actual bio size, which we cannot tolerate.
+	 */
+	if (is_atomic)
+		max_sectors = lim->atomic_write_max_sectors;
+	else
+		max_sectors = lim->max_sectors;
 
 	if (boundary_sectors) {
 		max_sectors = min(max_sectors,

@@ -311,6 +329,11 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
 	*segs = nsegs;
 	return NULL;
 split:
+	if (bio->bi_opf & REQ_ATOMIC) {
+		bio->bi_status = BLK_STS_INVAL;
+		bio_endio(bio);
+		return ERR_PTR(-EINVAL);
+	}
 	/*
 	 * We can't sanely support splitting for a REQ_NOWAIT bio. End it
 	 * with EAGAIN if splitting is required and return an error pointer.

@@ -596,11 +619,12 @@ static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
 	struct request_queue *q = rq->q;
 	struct queue_limits *lim = &q->limits;
 	unsigned int max_sectors, boundary_sectors;
+	bool is_atomic = rq->cmd_flags & REQ_ATOMIC;
 
 	if (blk_rq_is_passthrough(rq))
 		return q->limits.max_hw_sectors;
 
-	boundary_sectors = blk_boundary_sectors(lim);
+	boundary_sectors = blk_boundary_sectors(lim, is_atomic);
 	max_sectors = blk_queue_get_max_sectors(rq);
 
 	if (!boundary_sectors ||

@@ -806,6 +830,18 @@ static enum elv_merge blk_try_req_merge(struct request *req,
 	return ELEVATOR_NO_MERGE;
 }
 
+static bool blk_atomic_write_mergeable_rq_bio(struct request *rq,
+					      struct bio *bio)
+{
+	return (rq->cmd_flags & REQ_ATOMIC) == (bio->bi_opf & REQ_ATOMIC);
+}
+
+static bool blk_atomic_write_mergeable_rqs(struct request *rq,
+					   struct request *next)
+{
+	return (rq->cmd_flags & REQ_ATOMIC) == (next->cmd_flags & REQ_ATOMIC);
+}
+
 /*
  * For non-mq, this has to be called with the request spinlock acquired.
  * For mq with scheduling, the appropriate queue wide lock should be held.

@@ -829,6 +865,9 @@ static struct request *attempt_merge(struct request_queue *q,
 	if (req->ioprio != next->ioprio)
 		return NULL;
 
+	if (!blk_atomic_write_mergeable_rqs(req, next))
+		return NULL;
+
 	/*
 	 * If we are allowed to merge, then append bio list
 	 * from next to rq and release next. merge_requests_fn

@@ -960,6 +999,9 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
 	if (rq->ioprio != bio_prio(bio))
 		return false;
 
+	if (blk_atomic_write_mergeable_rq_bio(rq, bio) == false)
+		return false;
+
 	return true;
 }
 
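The merge rules stated in the commit message (merged length capped at atomic_write_max and no boundary straddling) can be sketched as a standalone check; this is an illustration of the stated constraints, not the kernel's actual merge path, and it assumes the two writes are contiguous in LBA space:

#include <stdbool.h>
#include <stdio.h>

/*
 * Sketch of the merge constraints: the merged request must not exceed the
 * atomic write max and must not straddle an atomic write boundary. All
 * values are in 512-byte sectors; the numbers in main() are hypothetical.
 */
static bool atomic_writes_can_merge(unsigned long long start_a, unsigned int len_a,
				    unsigned long long start_b, unsigned int len_b,
				    unsigned int max_sectors,
				    unsigned int boundary_sectors)
{
	unsigned long long start = start_a < start_b ? start_a : start_b;
	unsigned long long end = start_a + len_a > start_b + len_b ?
				 start_a + len_a : start_b + len_b;

	if (end - start > max_sectors)
		return false;
	/* merged range must not cross a boundary, if one is reported */
	if (boundary_sectors &&
	    start / boundary_sectors != (end - 1) / boundary_sectors)
		return false;
	return true;
}

int main(void)
{
	/* 64 KiB boundary (128 sectors), 128 KiB max (256 sectors) */
	printf("%d\n", atomic_writes_can_merge(0, 64, 64, 64, 256, 128));   /* 1: ok */
	printf("%d\n", atomic_writes_can_merge(64, 64, 128, 64, 256, 128)); /* 0: crosses */
	return 0;
}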

block/blk-settings.c

Lines changed: 88 additions & 0 deletions
@@ -135,6 +135,92 @@ static int blk_validate_integrity_limits(struct queue_limits *lim)
 	return 0;
 }
 
+/*
+ * Returns max guaranteed bytes which we can fit in a bio.
+ *
+ * We request that an atomic_write is ITER_UBUF iov_iter (so a single vector),
+ * so we assume that we can fit in at least PAGE_SIZE in a segment, apart from
+ * the first and last segments.
+ */
+static
+unsigned int blk_queue_max_guaranteed_bio(struct queue_limits *lim)
+{
+	unsigned int max_segments = min(BIO_MAX_VECS, lim->max_segments);
+	unsigned int length;
+
+	length = min(max_segments, 2) * lim->logical_block_size;
+	if (max_segments > 2)
+		length += (max_segments - 2) * PAGE_SIZE;
+
+	return length;
+}
+
+static void blk_atomic_writes_update_limits(struct queue_limits *lim)
+{
+	unsigned int unit_limit = min(lim->max_hw_sectors << SECTOR_SHIFT,
+					blk_queue_max_guaranteed_bio(lim));
+
+	unit_limit = rounddown_pow_of_two(unit_limit);
+
+	lim->atomic_write_max_sectors =
+		min(lim->atomic_write_hw_max >> SECTOR_SHIFT,
+			lim->max_hw_sectors);
+	lim->atomic_write_unit_min =
+		min(lim->atomic_write_hw_unit_min, unit_limit);
+	lim->atomic_write_unit_max =
+		min(lim->atomic_write_hw_unit_max, unit_limit);
+	lim->atomic_write_boundary_sectors =
+		lim->atomic_write_hw_boundary >> SECTOR_SHIFT;
+}
+
+static void blk_validate_atomic_write_limits(struct queue_limits *lim)
+{
+	unsigned int chunk_sectors = lim->chunk_sectors;
+	unsigned int boundary_sectors;
+
+	if (!lim->atomic_write_hw_max)
+		goto unsupported;
+
+	boundary_sectors = lim->atomic_write_hw_boundary >> SECTOR_SHIFT;
+
+	if (boundary_sectors) {
+		/*
+		 * A feature of boundary support is that it disallows bios to
+		 * be merged which would result in a merged request which
+		 * crosses either a chunk sector or atomic write HW boundary,
+		 * even though chunk sectors may be just set for performance.
+		 * For simplicity, disallow atomic writes for a chunk sector
+		 * which is non-zero and smaller than atomic write HW boundary.
+		 * Furthermore, chunk sectors must be a multiple of atomic
+		 * write HW boundary. Otherwise boundary support becomes
+		 * complicated.
+		 * Devices which do not conform to these rules can be dealt
+		 * with if and when they show up.
+		 */
+		if (WARN_ON_ONCE(do_div(chunk_sectors, boundary_sectors)))
+			goto unsupported;
+
+		/*
+		 * The boundary size just needs to be a multiple of unit_max
+		 * (and not necessarily a power-of-2), so this following check
+		 * could be relaxed in future.
+		 * Furthermore, if needed, unit_max could even be reduced so
+		 * that it is compliant with a !power-of-2 boundary.
+		 */
+		if (!is_power_of_2(boundary_sectors))
+			goto unsupported;
+	}
+
+	blk_atomic_writes_update_limits(lim);
+	return;
+
+unsupported:
+	lim->atomic_write_max_sectors = 0;
+	lim->atomic_write_boundary_sectors = 0;
+	lim->atomic_write_unit_min = 0;
+	lim->atomic_write_unit_max = 0;
+}
+
 /*
  * Check that the limits in lim are valid, initialize defaults for unset
  * values, and cap values based on others where needed.

@@ -272,6 +358,8 @@ static int blk_validate_limits(struct queue_limits *lim)
 	if (!(lim->features & BLK_FEAT_WRITE_CACHE))
 		lim->features &= ~BLK_FEAT_FUA;
 
+	blk_validate_atomic_write_limits(lim);
+
 	err = blk_validate_integrity_limits(lim);
 	if (err)
 		return err;
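For a rough feel of what blk_atomic_writes_update_limits() produces, here is a standalone sketch of the same arithmetic with hypothetical inputs (not kernel code; the kernel version operates on struct queue_limits and uses the kernel's min() and rounddown_pow_of_two() helpers):

#include <stdio.h>

#define SECTOR_SHIFT	9

/* simple stand-in for the kernel's rounddown_pow_of_two() */
static unsigned int rounddown_pow2(unsigned int v)
{
	unsigned int r = 1;

	while (r <= v / 2)
		r <<= 1;
	return r;
}

int main(void)
{
	/* hypothetical device/queue: all values are examples only */
	unsigned int max_hw_sectors = 2048;	/* 1 MiB */
	unsigned int guaranteed_bio = 517120;	/* from the earlier sketch */
	unsigned int hw_unit_min = 4096;	/* bytes */
	unsigned int hw_unit_max = 65536;	/* bytes */
	unsigned int hw_max = 65536;		/* bytes */
	unsigned int hw_boundary = 65536;	/* bytes */

	unsigned int unit_limit = max_hw_sectors << SECTOR_SHIFT;

	if (guaranteed_bio < unit_limit)
		unit_limit = guaranteed_bio;
	unit_limit = rounddown_pow2(unit_limit);	/* 262144 here */

	printf("atomic_write_max_sectors      = %u\n",
	       (hw_max >> SECTOR_SHIFT) < max_hw_sectors ?
	       (hw_max >> SECTOR_SHIFT) : max_hw_sectors);		/* 128 */
	printf("atomic_write_unit_min (bytes) = %u\n",
	       hw_unit_min < unit_limit ? hw_unit_min : unit_limit);	/* 4096 */
	printf("atomic_write_unit_max (bytes) = %u\n",
	       hw_unit_max < unit_limit ? hw_unit_max : unit_limit);	/* 65536 */
	printf("atomic_write_boundary_sectors = %u\n",
	       hw_boundary >> SECTOR_SHIFT);				/* 128 */
	return 0;
}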

block/blk-sysfs.c

Lines changed: 33 additions & 0 deletions
@@ -118,6 +118,30 @@ static ssize_t queue_max_discard_segments_show(struct request_queue *q,
 	return queue_var_show(queue_max_discard_segments(q), page);
 }
 
+static ssize_t queue_atomic_write_max_bytes_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_max_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_boundary_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_boundary_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_unit_min_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_unit_min_bytes(q), page);
+}
+
+static ssize_t queue_atomic_write_unit_max_show(struct request_queue *q,
+						char *page)
+{
+	return queue_var_show(queue_atomic_write_unit_max_bytes(q), page);
+}
+
 static ssize_t queue_max_integrity_segments_show(struct request_queue *q, char *page)
 {
 	return queue_var_show(q->limits.max_integrity_segments, page);

@@ -505,6 +529,11 @@ QUEUE_RO_ENTRY(queue_discard_max_hw, "discard_max_hw_bytes");
 QUEUE_RW_ENTRY(queue_discard_max, "discard_max_bytes");
 QUEUE_RO_ENTRY(queue_discard_zeroes_data, "discard_zeroes_data");
 
+QUEUE_RO_ENTRY(queue_atomic_write_max_bytes, "atomic_write_max_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_boundary, "atomic_write_boundary_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_unit_max, "atomic_write_unit_max_bytes");
+QUEUE_RO_ENTRY(queue_atomic_write_unit_min, "atomic_write_unit_min_bytes");
+
 QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes");
 QUEUE_RO_ENTRY(queue_write_zeroes_max, "write_zeroes_max_bytes");
 QUEUE_RO_ENTRY(queue_zone_append_max, "zone_append_max_bytes");

@@ -626,6 +655,10 @@ static struct attribute *queue_attrs[] = {
 	&queue_discard_max_entry.attr,
 	&queue_discard_max_hw_entry.attr,
 	&queue_discard_zeroes_data_entry.attr,
+	&queue_atomic_write_max_bytes_entry.attr,
+	&queue_atomic_write_boundary_entry.attr,
+	&queue_atomic_write_unit_min_entry.attr,
+	&queue_atomic_write_unit_max_entry.attr,
 	&queue_write_same_max_entry.attr,
 	&queue_write_zeroes_max_entry.attr,
 	&queue_zone_append_max_entry.attr,

block/blk.h

Lines changed: 3 additions & 0 deletions
@@ -194,6 +194,9 @@ static inline unsigned int blk_queue_get_max_sectors(struct request *rq)
 	if (unlikely(op == REQ_OP_WRITE_ZEROES))
 		return q->limits.max_write_zeroes_sectors;
 
+	if (rq->cmd_flags & REQ_ATOMIC)
+		return q->limits.atomic_write_max_sectors;
+
 	return q->limits.max_sectors;
 }
 

include/linux/blk_types.h

Lines changed: 7 additions & 1 deletion
@@ -162,6 +162,11 @@ typedef u16 blk_short_t;
  */
 #define BLK_STS_DURATION_LIMIT	((__force blk_status_t)17)
 
+/*
+ * Invalid size or alignment.
+ */
+#define BLK_STS_INVAL	((__force blk_status_t)19)
+
 /**
  * blk_path_error - returns true if error may be path related
  * @error: status the request was completed with

@@ -370,7 +375,7 @@ enum req_flag_bits {
 	__REQ_SWAP,		/* swap I/O */
 	__REQ_DRV,		/* for driver use */
 	__REQ_FS_PRIVATE,	/* for file system (submitter) use */
-
+	__REQ_ATOMIC,		/* for atomic write operations */
 	/*
 	 * Command specific flags, keep last:
 	 */

@@ -402,6 +407,7 @@ enum req_flag_bits {
 #define REQ_SWAP	(__force blk_opf_t)(1ULL << __REQ_SWAP)
 #define REQ_DRV		(__force blk_opf_t)(1ULL << __REQ_DRV)
 #define REQ_FS_PRIVATE	(__force blk_opf_t)(1ULL << __REQ_FS_PRIVATE)
+#define REQ_ATOMIC	(__force blk_opf_t)(1ULL << __REQ_ATOMIC)
 
 #define REQ_NOUNMAP	(__force blk_opf_t)(1ULL << __REQ_NOUNMAP)
 
