
Commit 2e227ff

Chanho Min authored and akpm00 committed
squashfs: add optional full compressed block caching
The commit 93e72b3 ("squashfs: migrate from ll_rw_block usage to BIO")
removed caching of compressed blocks in SquashFS, causing a fio performance
regression in workloads with repeated file reads. Without caching, every
read triggers disk I/O, severely impacting performance in tools like fio.

This patch introduces a new CONFIG_SQUASHFS_COMP_CACHE_FULL Kconfig option
to enable caching of all compressed blocks, restoring performance to
pre-BIO-migration levels. When enabled, all pages in a BIO are cached in
the page cache, reducing disk I/O for repeated reads.

fio tests (iodepth=1, numjobs=1, ioengine=psync) confirm the performance
restoration:

With CONFIG_SQUASHFS_COMP_CACHE_FULL disabled:
  IOPS=815, BW=102MiB/s (107MB/s)(6113MiB/60001msec)
With CONFIG_SQUASHFS_COMP_CACHE_FULL enabled:
  IOPS=2223, BW=278MiB/s (291MB/s)(16.3GiB/59999msec)

The tradeoff is increased memory usage due to caching all compressed
blocks. The CONFIG_SQUASHFS_COMP_CACHE_FULL option allows users to enable
this feature selectively, balancing performance and memory usage for
workloads with frequent repeated reads.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Chanho Min <[email protected]>
Reviewed-by: Phillip Lougher <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
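For background on why caching in the page cache helps here: on a repeated
read, the path in fs/squashfs/block.c can find the compressed pages already
present in the per-superblock cache_mapping and skip the disk I/O. The
following is a minimal sketch of that lookup pattern, not the upstream
code; the helper name squashfs_get_comp_page is illustrative.

#include <linux/pagemap.h>
#include <linux/gfp.h>

/*
 * Illustrative sketch: fetch one page of a compressed block, preferring
 * the page cache. A page inserted earlier via add_to_page_cache_lru()
 * satisfies a repeated read of the same block without disk I/O.
 */
static struct page *squashfs_get_comp_page(struct address_space *cache_mapping,
					   pgoff_t index)
{
	struct page *page = NULL;

	if (cache_mapping)
		page = find_get_page(cache_mapping, index); /* cache hit */
	if (!page)
		page = alloc_page(GFP_NOIO); /* miss: page will be filled by a BIO */

	return page;
}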

2 files changed: +49 -0

fs/squashfs/Kconfig

Lines changed: 21 additions & 0 deletions

@@ -149,6 +149,27 @@ config SQUASHFS_XATTR
 
 	  If unsure, say N.
 
+config SQUASHFS_COMP_CACHE_FULL
+	bool "Enable full caching of compressed blocks"
+	depends on SQUASHFS
+	default n
+	help
+	  This option enables caching of all compressed blocks. Without caching,
+	  repeated reads of the same files trigger excessive disk I/O, significantly
+	  reducing performance in workloads like fio-based benchmarks.
+
+	  For example, fio tests (iodepth=1, numjobs=1, ioengine=psync) show:
+	  With caching: IOPS=2223, BW=278MiB/s (291MB/s)
+	  Without caching: IOPS=815, BW=102MiB/s (107MB/s)
+
+	  Enabling this option restores performance to pre-regression levels by
+	  caching all compressed blocks in the page cache, reducing disk I/O for
+	  repeated reads. However, this increases memory usage, which may be a
+	  concern in memory-constrained environments.
+
+	  Enable this option if your workload involves frequent repeated reads and
+	  memory usage is not a limiting factor. If unsure, say N.
+
 config SQUASHFS_ZLIB
 	bool "Include support for ZLIB compressed file systems"
 	depends on SQUASHFS
fs/squashfs/block.c

Lines changed: 28 additions & 0 deletions

@@ -88,6 +88,10 @@ static int squashfs_bio_read_cached(struct bio *fullbio,
 	struct bio_vec *bv;
 	int idx = 0;
 	int err = 0;
+#ifdef CONFIG_SQUASHFS_COMP_CACHE_FULL
+	struct page **cache_pages = kmalloc_array(page_count,
+			sizeof(void *), GFP_KERNEL | __GFP_ZERO);
+#endif
 
 	bio_for_each_segment_all(bv, fullbio, iter_all) {
 		struct page *page = bv->bv_page;
@@ -110,6 +114,11 @@ static int squashfs_bio_read_cached(struct bio *fullbio,
 			head_to_cache = page;
 		else if (idx == page_count - 1 && index + length != read_end)
 			tail_to_cache = page;
+#ifdef CONFIG_SQUASHFS_COMP_CACHE_FULL
+		/* Cache all pages in the BIO for repeated reads */
+		else if (cache_pages)
+			cache_pages[idx] = page;
+#endif
 
 		if (!bio || idx != end_idx) {
 			struct bio *new = bio_alloc_clone(bdev, fullbio,
@@ -163,6 +172,25 @@ static int squashfs_bio_read_cached(struct bio *fullbio,
 		}
 	}
 
+#ifdef CONFIG_SQUASHFS_COMP_CACHE_FULL
+	if (!cache_pages)
+		goto out;
+
+	for (idx = 0; idx < page_count; idx++) {
+		if (!cache_pages[idx])
+			continue;
+		int ret = add_to_page_cache_lru(cache_pages[idx], cache_mapping,
+						(read_start >> PAGE_SHIFT) + idx,
+						GFP_NOIO);
+
+		if (!ret) {
+			SetPageUptodate(cache_pages[idx]);
+			unlock_page(cache_pages[idx]);
+		}
+	}
+	kfree(cache_pages);
+out:
+#endif
 	return 0;
 }
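Two details in the hunks above are worth spelling out: cache_pages is
allocated with __GFP_ZERO so that slots which never receive a page stay
NULL and are skipped, and add_to_page_cache_lru() is treated as
best-effort, since -EEXIST (another reader cached this offset first) or
-ENOMEM is harmless once the read itself has succeeded. A hedged sketch of
that insertion pattern in isolation; squashfs_cache_page is an
illustrative name, not an upstream helper:

#include <linux/pagemap.h>
#include <linux/page-flags.h>

/*
 * Illustrative sketch: best-effort insertion of one freshly read page
 * into the compressed-block cache. On failure the read has still
 * succeeded, so the error is simply ignored.
 */
static void squashfs_cache_page(struct page *page,
				struct address_space *cache_mapping,
				pgoff_t index)
{
	if (add_to_page_cache_lru(page, cache_mapping, index, GFP_NOIO))
		return; /* already cached, or no memory: not fatal */

	SetPageUptodate(page); /* the BIO filled this page with valid data */
	unlock_page(page);     /* add_to_page_cache_lru() returned it locked */
}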
