
Commit de1fafa

Coly Li authored and Jens Axboe committed
bcache: introduce meta_bucket_pages() related helper routines
Currently the in-memory metadata such as c->uuids or c->disk_buckets is allocated by alloc_bucket_pages(). The macro alloc_bucket_pages() calls __get_free_pages() to allocate contiguous pages with the order given by ilog2(bucket_pages(c)):

	#define alloc_bucket_pages(gfp, c) \
		((void *) __get_free_pages(__GFP_ZERO|gfp, ilog2(bucket_pages(c))))

The maximum order is defined as MAX_ORDER, whose default value is 11 (and which can be overridden by CONFIG_FORCE_MAX_ZONEORDER). In the bcache code the maximum bucket size width is 16 bits, restricted both by KEY_SIZE and by the bucket_size field of struct cache_sb_disk. The largest power-of-2 value that fits in 16 bits is (1<<15), in units of sectors (512 bytes), so the maximum bucket size is (1<<24) bytes, i.e. 4096 pages. When the bucket size is set to this maximum, ilog2(4096) is 12, which exceeds the default maximum order __get_free_pages() can accept; the failed page allocation aborts the cache set registration procedure and prints a kernel oops message about the excessive page order.

This patch introduces the helper routines meta_bucket_pages(), meta_bucket_bytes(), and alloc_meta_bucket_pages(). meta_bucket_pages() returns the maximum number of pages that can be allocated for a metadata bucket, meta_bucket_bytes() returns the corresponding maximum in bytes, and alloc_meta_bucket_pages() performs the page allocation for a metadata bucket. Because meta_bucket_pages() takes the smaller of the bucket size (in pages) and MAX_ORDER_NR_PAGES, it still works when MAX_ORDER is overridden by CONFIG_FORCE_MAX_ZONEORDER.

Following patches will use these helper routines to decide the maximum number of pages that can be allocated for the different metadata buckets. If the bucket size is larger than meta_bucket_bytes(), bcache registration can still succeed; the space beyond meta_bucket_bytes() inside the bucket is simply wasted. Compared with bcache failing outright for large bucket sizes, wasting some space in the metadata buckets is acceptable for now.

Signed-off-by: Coly Li <[email protected]>
Reviewed-by: Hannes Reinecke <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
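To make the arithmetic above concrete, here is a minimal user-space sketch, assuming 4 KiB pages and the default MAX_ORDER of 11; the macro names mirror the kernel's, but the program is only illustrative and is not kernel code.

	#include <stdio.h>

	#define PAGE_SHIFT          12                          /* assume 4 KiB pages */
	#define PAGE_SECTORS        (1U << (PAGE_SHIFT - 9))    /* 8 sectors per page */
	#define MAX_ORDER           11                          /* default, no CONFIG_FORCE_MAX_ZONEORDER */
	#define MAX_ORDER_NR_PAGES  (1U << (MAX_ORDER - 1))

	/* integer log2, mirroring the kernel's ilog2() for this example */
	static unsigned int ilog2(unsigned int n)
	{
		unsigned int r = 0;

		while (n >>= 1)
			r++;
		return r;
	}

	int main(void)
	{
		/* largest power-of-2 bucket size a 16-bit field can hold, in sectors */
		unsigned int bucket_size = 1U << 15;
		unsigned int bucket_pages = bucket_size / PAGE_SECTORS;
		unsigned int meta_pages;

		/* old path: order 12 exceeds what the default MAX_ORDER permits */
		printf("bucket_pages = %u, order = %u\n", bucket_pages, ilog2(bucket_pages));

		/* new path: meta_bucket_pages() caps the allocation at MAX_ORDER_NR_PAGES */
		meta_pages = bucket_pages > MAX_ORDER_NR_PAGES ? MAX_ORDER_NR_PAGES : bucket_pages;
		printf("meta_bucket_pages = %u (%u bytes)\n", meta_pages, meta_pages << PAGE_SHIFT);

		return 0;
	}

Under these assumptions it prints bucket_pages = 4096 with order 12, and a capped meta_bucket_pages of 1024 pages (4 MiB), which is the space-for-robustness trade-off described above.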
1 parent 4c1ccd0 commit de1fafa

File tree

2 files changed: 23 additions, 0 deletions


drivers/md/bcache/bcache.h

Lines changed: 20 additions & 0 deletions
@@ -762,6 +762,26 @@ struct bbio {
 #define bucket_bytes(c)		((c)->sb.bucket_size << 9)
 #define block_bytes(c)		((c)->sb.block_size << 9)
 
+static inline unsigned int meta_bucket_pages(struct cache_sb *sb)
+{
+	unsigned int n, max_pages;
+
+	max_pages = min_t(unsigned int,
+			  __rounddown_pow_of_two(USHRT_MAX) / PAGE_SECTORS,
+			  MAX_ORDER_NR_PAGES);
+
+	n = sb->bucket_size / PAGE_SECTORS;
+	if (n > max_pages)
+		n = max_pages;
+
+	return n;
+}
+
+static inline unsigned int meta_bucket_bytes(struct cache_sb *sb)
+{
+	return meta_bucket_pages(sb) << PAGE_SHIFT;
+}
+
 #define prios_per_bucket(c)				\
	((bucket_bytes(c) - sizeof(struct prio_set)) /	\
	 sizeof(struct bucket_disk))
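In meta_bucket_pages(), the first min_t() operand is the ceiling implied by the on-disk format: __rounddown_pow_of_two(USHRT_MAX) is (1 << 15) sectors, the largest power-of-2 value the 16-bit bucket_size field can hold, which divided by PAGE_SECTORS gives 4096 pages with 4 KiB pages. The second operand, MAX_ORDER_NR_PAGES, is the page allocator's own limit. A few representative values, assuming 4 KiB pages and the default MAX_ORDER of 11 (illustrative, not part of the patch):

	/*
	 * Assuming PAGE_SIZE = 4 KiB (PAGE_SECTORS = 8) and MAX_ORDER = 11,
	 * so MAX_ORDER_NR_PAGES = 1024:
	 *
	 *   sb->bucket_size           meta_bucket_pages(sb)   meta_bucket_bytes(sb)
	 *   1024 sectors  (512 KiB)   128                     512 KiB
	 *   8192 sectors  (4 MiB)     1024                    4 MiB
	 *   32768 sectors (16 MiB)    1024 (capped)           4 MiB
	 */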

drivers/md/bcache/super.c

Lines changed: 3 additions & 0 deletions
@@ -1829,6 +1829,9 @@ void bch_cache_set_unregister(struct cache_set *c)
 #define alloc_bucket_pages(gfp, c)					\
 	((void *) __get_free_pages(__GFP_ZERO|__GFP_COMP|gfp, ilog2(bucket_pages(c))))
 
+#define alloc_meta_bucket_pages(gfp, sb)				\
+	((void *) __get_free_pages(__GFP_ZERO|__GFP_COMP|gfp, ilog2(meta_bucket_pages(sb))))
+
 struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
 {
 	int iter_size;
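The new macro is not called yet in this commit; the conversions arrive in the follow-up patches. As a rough usage sketch (hypothetical here, not the exact hunk from those patches), an allocation in bch_cache_set_alloc() for a metadata region such as c->uuids would switch from the bucket-sized macro to the capped one:

	/* sketch only: how a follow-up patch might allocate a metadata bucket */
	c->uuids = alloc_meta_bucket_pages(GFP_KERNEL, sb);
	if (!c->uuids)
		goto err;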
