Commit 8f9da81

btrfs: speed up btrfs_get_token_##bits helpers
The set/get token helpers either use the cached address stored in the token, or unconditionally call map_private_extent_buffer to get the address of the page containing the requested offset, plus the mapping start and length. Depending on the return value, the fast path uses an unaligned read to get data within a page, or falls back to read_extent_buffer, which can handle reads spanning more pages.

This is all wasteful. We know the number of bytes to read (1/2/4/8) and can find out the page directly, then simply check whether the read is contained in that page or the fallback is needed. The token address is updated to that page, or to the one at the next index, expecting that the next read will use it.

This saves one function call to map_private_extent_buffer and several unnecessary temporary variables.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: David Sterba <[email protected]>
1 parent 1441ed9 commit 8f9da81

fs/btrfs/struct-funcs.c

Lines changed: 16 additions & 27 deletions
@@ -62,39 +62,28 @@ static bool check_setget_bounds(const struct extent_buffer *eb,
 u##bits btrfs_get_token_##bits(struct btrfs_map_token *token,		\
			       const void *ptr, unsigned long off)	\
 {									\
-	unsigned long part_offset = (unsigned long)ptr;			\
-	unsigned long offset = part_offset + off;			\
-	void *p;							\
-	int err;							\
-	char *kaddr;							\
-	unsigned long map_start;					\
-	unsigned long map_len;						\
-	int size = sizeof(u##bits);					\
-	u##bits res;							\
+	const unsigned long member_offset = (unsigned long)ptr + off;	\
+	const unsigned long idx = member_offset >> PAGE_SHIFT;		\
+	const unsigned long oip = offset_in_page(member_offset);	\
+	const int size = sizeof(u##bits);				\
+	__le##bits leres;						\
 									\
 	ASSERT(token);							\
 	ASSERT(token->kaddr);						\
 	ASSERT(check_setget_bounds(token->eb, ptr, off, size));		\
-	if (token->offset <= offset &&					\
-	    (token->offset + PAGE_SIZE >= offset + size)) {		\
-		kaddr = token->kaddr;					\
-		p = kaddr + part_offset - token->offset;		\
-		res = get_unaligned_le##bits(p + off);			\
-		return res;						\
+	if (token->offset <= member_offset &&				\
+	    member_offset + size <= token->offset + PAGE_SIZE) {	\
+		return get_unaligned_le##bits(token->kaddr + oip);	\
 	}								\
-	err = map_private_extent_buffer(token->eb, offset, size,	\
-					&kaddr, &map_start, &map_len);	\
-	if (err) {							\
-		__le##bits leres;					\
-									\
-		read_extent_buffer(token->eb, &leres, offset, size);	\
-		return le##bits##_to_cpu(leres);			\
+	if (oip + size <= PAGE_SIZE) {					\
+		token->kaddr = page_address(token->eb->pages[idx]);	\
+		token->offset = idx << PAGE_SHIFT;			\
+		return get_unaligned_le##bits(token->kaddr + oip);	\
 	}								\
-	p = kaddr + part_offset - map_start;				\
-	res = get_unaligned_le##bits(p + off);				\
-	token->kaddr = kaddr;						\
-	token->offset = map_start;					\
-	return res;							\
+	token->kaddr = page_address(token->eb->pages[idx + 1]);		\
+	token->offset = (idx + 1) << PAGE_SHIFT;			\
+	read_extent_buffer(token->eb, &leres, member_offset, size);	\
+	return le##bits##_to_cpu(leres);				\
 }									\
 u##bits btrfs_get_##bits(const struct extent_buffer *eb,		\
			 const void *ptr, unsigned long off)		\
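The restructured fast path can be modeled in user space. The sketch below is an illustration only, not the btrfs code: it uses tiny made-up 16-byte "pages" so a read can cross a boundary, and memcpy stands in for get_unaligned_le##bits (so byte order follows the host rather than being forced to little-endian). The three branches mirror the new logic: cached-page hit, single-page map-and-cache, and the cross-page fallback that caches the *next* page.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SHIFT 4			/* tiny 16-byte pages, illustration only */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define NR_PAGES   4

struct map_token {
	char *kaddr;			/* cached page address */
	unsigned long offset;		/* buffer offset of that page */
};

static char pages[NR_PAGES][PAGE_SIZE];	/* stand-in for eb->pages[] */

/* Slow path stand-in for read_extent_buffer: copy across page boundaries. */
static void read_buffer(void *dst, unsigned long offset, int size)
{
	char *d = dst;

	while (size--) {
		*d++ = pages[offset >> PAGE_SHIFT][offset & (PAGE_SIZE - 1)];
		offset++;
	}
}

static uint32_t get_token_u32(struct map_token *token, unsigned long member_offset)
{
	const unsigned long idx = member_offset >> PAGE_SHIFT;
	const unsigned long oip = member_offset & (PAGE_SIZE - 1);
	const int size = sizeof(uint32_t);
	uint32_t res;

	/* Fast path: the cached page already covers [member_offset, +size). */
	if (token->kaddr && token->offset <= member_offset &&
	    member_offset + size <= token->offset + PAGE_SIZE) {
		memcpy(&res, token->kaddr + oip, size);	/* unaligned read */
		return res;
	}
	/* Read fits in one page: map it and cache it for the next call. */
	if (oip + size <= PAGE_SIZE) {
		token->kaddr = pages[idx];
		token->offset = idx << PAGE_SHIFT;
		memcpy(&res, token->kaddr + oip, size);
		return res;
	}
	/* Read spans two pages: slow copy, then cache the *next* page,
	 * expecting the following read to land there. */
	token->kaddr = pages[idx + 1];
	token->offset = (idx + 1) << PAGE_SHIFT;
	read_buffer(&res, member_offset, size);
	return res;
}
```

A read at offset 14 with these 16-byte pages spans pages 0 and 1, takes the fallback, and leaves the token pointing at page 1, so a following read at offset 20 hits the fast path with no mapping call at all.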
