
Commit eded341

Christoph Hellwig authored and axboe committed
block: don't decrement nr_phys_segments for physically contiguous segments
Currently ll_merge_requests_fn, unlike all the other merge functions, reduces nr_phys_segments by one if the last segment of the previous request and the first segment of the next request are physically contiguous. While this seems like a nice way to avoid building smaller requests than necessary, it causes a mismatch between the segments actually present in the request and those iterated over by the bvec iterators, including __rq_for_each_bio. This can, for example, mistrigger the single-segment optimization in the nvme-pci driver, and can lead to a mismatched nr_phys_segments count when the segment count is recalculated while inserting a cloned request.

We could work around this by making the bvec iterators take the front and back segment sizes into account, but that would require moving them from the bio to the bio_iter and spreading this complexity over all users of bvecs. Or we could simply remove the optimization, on the assumption that most users already build good enough bvecs and that the bio merge path never cared about this optimization either. The latter is what this patch does.

Fixes: dff824b ("nvme-pci: optimize mapping of small single segment requests")
Reviewed-by: Ming Lei <[email protected]>
Reviewed-by: Hannes Reinecke <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
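To make the mismatch concrete, here is a standalone sketch in plain userspace C (not kernel code; the struct, addresses, and lengths are invented for illustration) contrasting the count the old ll_merge_requests_fn produced with what the bvec iterators actually walk:

#include <stdio.h>

/* Simplified stand-in for struct bio_vec: a physical address plus a
 * length. The contiguity test below plays the role of
 * biovec_phys_mergeable(). */
struct bvec {
        unsigned long phys;
        unsigned int len;
};

int main(void)
{
        /* A request built from two bios of one bvec each, where the tail
         * of the first bio happens to be physically contiguous with the
         * head of the second (0x1000 + 4096 == 0x2000). */
        struct bvec bio0_tail = { 0x1000, 4096 };
        struct bvec bio1_head = { 0x2000, 4096 };
        int nr_phys_segments = 2;

        /* Old ll_merge_requests_fn behaviour: decrement the segment
         * count when the boundary bvecs are contiguous. */
        if (bio0_tail.phys + bio0_tail.len == bio1_head.phys)
                nr_phys_segments--;

        /* The bvec iterators (__rq_for_each_bio, bio_for_each_bvec) know
         * nothing about that decrement: they still walk one bvec per
         * bio, i.e. two bvecs here. */
        int iterated_bvecs = 2;

        /* Prints "nr_phys_segments=1 iterated_bvecs=2". A driver fast
         * path keyed on nr_phys_segments == 1 that maps only the first
         * bvec would silently drop the second bvec's data. */
        printf("nr_phys_segments=%d iterated_bvecs=%d\n",
               nr_phys_segments, iterated_bvecs);
        return 0;
}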
1 parent a0934fd · commit eded341

1 file changed: +1, -22 lines

block/blk-merge.c (1 addition, 22 deletions)
@@ -358,7 +358,6 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
         unsigned front_seg_size;
         struct bio *fbio, *bbio;
         struct bvec_iter iter;
-        bool new_bio = false;
 
         if (!bio)
                 return 0;
@@ -379,31 +378,12 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
         nr_phys_segs = 0;
         for_each_bio(bio) {
                 bio_for_each_bvec(bv, bio, iter) {
-                        if (new_bio) {
-                                if (seg_size + bv.bv_len
-                                    > queue_max_segment_size(q))
-                                        goto new_segment;
-                                if (!biovec_phys_mergeable(q, &bvprv, &bv))
-                                        goto new_segment;
-
-                                seg_size += bv.bv_len;
-
-                                if (nr_phys_segs == 1 && seg_size >
-                                                front_seg_size)
-                                        front_seg_size = seg_size;
-
-                                continue;
-                        }
-new_segment:
                         bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size,
                                         &front_seg_size, NULL, UINT_MAX);
-                        new_bio = false;
                 }
                 bbio = bio;
-                if (likely(bio->bi_iter.bi_size)) {
+                if (likely(bio->bi_iter.bi_size))
                         bvprv = bv;
-                        new_bio = true;
-                }
         }
 
         fbio->bi_seg_front_size = front_seg_size;
@@ -725,7 +705,6 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
                 req->bio->bi_seg_front_size = seg_size;
                 if (next->nr_phys_segments == 1)
                         next->biotail->bi_seg_back_size = seg_size;
-                total_phys_segments--;
         }
 
         if (total_phys_segments > queue_max_segments(q))
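For readability, this is the recalculation loop in __blk_recalc_rq_segments as it stands after the patch, reconstructed from the + side and the context lines of the hunk above: every bvec now goes through bvec_split_segs(), with no special-casing of the first bvec of each merged bio.

        nr_phys_segs = 0;
        for_each_bio(bio) {
                bio_for_each_bvec(bv, bio, iter) {
                        bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size,
                                        &front_seg_size, NULL, UINT_MAX);
                }
                bbio = bio;
                if (likely(bio->bi_iter.bi_size))
                        bvprv = bv;
        }

        fbio->bi_seg_front_size = front_seg_size;

Correspondingly, ll_merge_requests_fn no longer decrements total_phys_segments when the boundary bvecs are contiguous, so a merged request's nr_phys_segments always matches the number of bvecs the iterators will yield.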
