
Commit 6f019c0

adam900710 authored and kdave committed
btrfs: fix an out-of-bounds access in copy_compressed_data_to_page()
[BUG]
The following script can cause btrfs to crash:

  $ mount -o compress-force=lzo $DEV /mnt
  $ dd if=/dev/urandom of=/mnt/foo bs=4k count=1
  $ sync

The call trace looks like this:

  general protection fault, probably for non-canonical address 0xe04b37fccce3b000: 0000 [#1] PREEMPT SMP NOPTI
  CPU: 5 PID: 164 Comm: kworker/u20:3 Not tainted 5.15.0-rc7-custom+ #4
  Workqueue: btrfs-delalloc btrfs_work_helper [btrfs]
  RIP: 0010:__memcpy+0x12/0x20
  Call Trace:
   lzo_compress_pages+0x236/0x540 [btrfs]
   btrfs_compress_pages+0xaa/0xf0 [btrfs]
   compress_file_range+0x431/0x8e0 [btrfs]
   async_cow_start+0x12/0x30 [btrfs]
   btrfs_work_helper+0xf6/0x3e0 [btrfs]
   process_one_work+0x294/0x5d0
   worker_thread+0x55/0x3c0
   kthread+0x140/0x170
   ret_from_fork+0x22/0x30
  ---[ end trace 63c3c0f131e61982 ]---

[CAUSE]
In lzo_compress_pages(), the parameter @out_pages is not only an output parameter (the number of compressed pages) but also an input parameter: the upper limit on how many compressed pages we may use.

The refactoring in commit d408880 ("btrfs: subpage: make lzo_compress_pages() compatible") doesn't take @out_pages as an input, thus completely ignoring the limit.

In the compress-force case we can hit incompressible data whose compressed size goes beyond the page limit, causing the above crash.

[FIX]
Save @out_pages as @max_nr_page, pass it down to copy_compressed_data_to_page(), and check whether we are beyond the limit before accessing the pages.

Note: this also fixes a crash on 32bit architectures that was suspected to be caused by the merge of btrfs patches into 5.16-rc1. Reported in https://lore.kernel.org/all/[email protected]/ .

Reported-by: Omar Sandoval <[email protected]>
Fixes: d408880 ("btrfs: subpage: make lzo_compress_pages() compatible")
Reviewed-by: Omar Sandoval <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
[ add note ]
Signed-off-by: David Sterba <[email protected]>
1 parent d1ed82f commit 6f019c0
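The heart of the bug is the in/out convention on @out_pages: on entry it carries the caller's page budget, on return the number of pages actually used. Below is a minimal userspace C sketch, not the btrfs code itself, mimicking the page-copy loop of copy_compressed_data_to_page() to show why the bound check against max_nr_page matters: with compress-force, LZO output for incompressible data can exceed the input size, so the page index derived from *cur_out can walk past the caller's array unless checked first.

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #define PAGE_SIZE 4096

  /*
   * Copy @len bytes of @data into the lazily allocated @out_pages array,
   * mirroring the indexing scheme of copy_compressed_data_to_page():
   * the page index is derived from the running output offset @cur_out.
   * Refuse with -E2BIG instead of indexing past @max_nr_page pages.
   */
  static int copy_data_to_pages(const char *data, size_t len,
                                char **out_pages, unsigned long max_nr_page,
                                size_t *cur_out)
  {
          size_t copied = 0;

          while (copied < len) {
                  unsigned long page_idx = *cur_out / PAGE_SIZE;
                  size_t off = *cur_out % PAGE_SIZE;
                  size_t copy_len = PAGE_SIZE - off;

                  /* The check the refactoring dropped: without it,
                   * page_idx silently runs past the caller's array. */
                  if (page_idx >= max_nr_page)
                          return -E2BIG;

                  if (!out_pages[page_idx]) {
                          out_pages[page_idx] = malloc(PAGE_SIZE);
                          if (!out_pages[page_idx])
                                  return -ENOMEM;
                  }
                  if (copy_len > len - copied)
                          copy_len = len - copied;

                  memcpy(out_pages[page_idx] + off, data + copied, copy_len);
                  copied += copy_len;
                  *cur_out += copy_len;
          }
          return 0;
  }

  int main(void)
  {
          char *pages[1] = { NULL };  /* caller budgeted exactly one page */
          char data[2 * PAGE_SIZE];   /* "compressed" output larger than the
                                         budget, as compress-force on random
                                         data can produce */
          size_t cur_out = 0;

          memset(data, 0xaa, sizeof(data));
          if (copy_data_to_pages(data, sizeof(data), pages, 1, &cur_out) == -E2BIG)
                  printf("page budget exhausted, returning -E2BIG\n");
          free(pages[0]);
          return 0;
  }

Run as written, the sketch stops at the one-page budget and returns -E2BIG instead of writing past pages[0], which is the behavior the patch below restores in copy_compressed_data_to_page().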


fs/btrfs/lzo.c

Lines changed: 11 additions & 1 deletion
@@ -125,13 +125,17 @@ static inline size_t read_compress_length(const char *buf)
 static int copy_compressed_data_to_page(char *compressed_data,
 					size_t compressed_size,
 					struct page **out_pages,
+					unsigned long max_nr_page,
 					u32 *cur_out,
 					const u32 sectorsize)
 {
 	u32 sector_bytes_left;
 	u32 orig_out;
 	struct page *cur_page;
 
+	if ((*cur_out / PAGE_SIZE) >= max_nr_page)
+		return -E2BIG;
+
 	/*
 	 * We never allow a segment header crossing sector boundary, previous
 	 * run should ensure we have enough space left inside the sector.
@@ -158,6 +162,9 @@ static int copy_compressed_data_to_page(char *compressed_data,
 		u32 copy_len = min_t(u32, sectorsize - *cur_out % sectorsize,
 				     orig_out + compressed_size - *cur_out);
 
+		if ((*cur_out / PAGE_SIZE) >= max_nr_page)
+			return -E2BIG;
+
 		cur_page = out_pages[*cur_out / PAGE_SIZE];
 		/* Allocate a new page */
 		if (!cur_page) {
@@ -195,13 +202,15 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 	struct workspace *workspace = list_entry(ws, struct workspace, list);
 	const u32 sectorsize = btrfs_sb(mapping->host->i_sb)->sectorsize;
 	struct page *page_in = NULL;
+	const unsigned long max_nr_page = *out_pages;
 	int ret = 0;
 	/* Points to the file offset of input data */
 	u64 cur_in = start;
 	/* Points to the current output byte */
 	u32 cur_out = 0;
 	u32 len = *total_out;
 
+	ASSERT(max_nr_page > 0);
 	*out_pages = 0;
 	*total_out = 0;
 	*total_in = 0;
@@ -237,7 +246,8 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 	}
 
 	ret = copy_compressed_data_to_page(workspace->cbuf, out_len,
-					   pages, &cur_out, sectorsize);
+					   pages, max_nr_page,
+					   &cur_out, sectorsize);
 	if (ret < 0)
 		goto out;
 
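With the patch applied, lzo_compress_pages() latches the caller's budget from *out_pages before zeroing it (hence the ASSERT(max_nr_page > 0) ahead of the reset), and the -E2BIG from copy_compressed_data_to_page() propagates out through the existing "if (ret < 0) goto out;". Here is a small self-contained sketch of that in/out convention, with hypothetical names rather than the real btrfs call chain:

  #include <assert.h>
  #include <errno.h>
  #include <stdio.h>

  #define PAGE_SIZE 4096

  /* Hypothetical stand-in for lzo_compress_pages(): reads the caller's
   * page budget from *out_pages on entry (the input half of the in/out
   * convention) and writes back the number of pages used on success. */
  static int compress_pages(size_t input_len, unsigned long *out_pages)
  {
          const unsigned long max_nr_page = *out_pages;
          /* worst case: output as large as input, rounded up to pages */
          unsigned long needed = (input_len + PAGE_SIZE - 1) / PAGE_SIZE;

          assert(max_nr_page > 0);  /* mirrors ASSERT(max_nr_page > 0) */
          *out_pages = 0;           /* from here on, pure output */

          if (needed > max_nr_page)
                  return -E2BIG;    /* output would not fit in the budget */

          *out_pages = needed;
          return 0;
  }

  int main(void)
  {
          unsigned long nr_pages = 1;  /* in: budget of one page */

          if (compress_pages(2 * PAGE_SIZE, &nr_pages) == -E2BIG)
                  printf("does not fit in 1 page, treat as incompressible\n");
          else
                  printf("used %lu page(s)\n", nr_pages);
          return 0;
  }

The assert documents that a zero budget is a caller bug; -E2BIG lets the caller treat the range as incompressible rather than overrun the page array.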