
Commit b912061

Merge series "SLUB percpu sheaves"
This series adds an opt-in percpu array-based caching layer to SLUB. It has evolved to a state where kmem caches with sheaves are compatible with all SLUB features (slub_debug, SLUB_TINY, NUMA locality considerations). The plan is therefore to later enable it for all kmem caches and replace the complicated cpu (partial) slabs code.

Note the name "sheaf" was invented by Matthew Wilcox so we don't call the arrays "magazines" like the original Bonwick paper does. The per-NUMA-node cache of sheaves is thus called a "barn".

This caching may seem similar to the arrays we had in SLAB, but there are some important differences:

- it deals differently with NUMA locality of freed objects, so there are no per-node "shared" arrays (with possible lock contention) and no "alien" arrays that would need periodic flushing
- instead, freeing remote objects (which is rare) bypasses the sheaves; percpu sheaves thus contain only local objects (modulo rare races and local node exhaustion)
- NUMA-restricted allocations and strict_numa mode are still honoured
- it improves kfree_rcu() handling by reusing whole sheaves
- there is an API for obtaining a preallocated sheaf that can be used for guaranteed and efficient allocations in a restricted context, when the upper bound of needed objects is known but rarely reached
- it is opt-in and not used for every cache (for now)

The motivation comes mainly from the ongoing work on VMA locking scalability and the related maple tree operations. This is why the VMA and maple node caches are sheaf-enabled in the patchset.

A sheaf-enabled cache has the following expected advantages:

- Cheaper fast paths. For allocations, instead of a local double cmpxchg, thanks to local_trylock() the fast path becomes a preempt_disable() with no atomic operations. The same applies to freeing, which is otherwise a local double cmpxchg only for short-term allocations (where the same slab is still active on the same cpu when the object is freed) and a more costly locked double cmpxchg otherwise.

- kfree_rcu() batching and recycling. kfree_rcu() puts objects into a separate percpu sheaf and only submits the whole sheaf to call_rcu() when it is full. After the grace period, the sheaf can be used for allocations, which is more efficient than freeing and reallocating individual slab objects (even with the batching done by the kfree_rcu() implementation itself). In case only some cpus are allowed to handle rcu callbacks, the sheaf can still be made available to other cpus on the same node via the shared barn. The maple_node cache uses kfree_rcu() and thus can benefit from this. Note: this path is currently limited to !PREEMPT_RT.

- Preallocation support. A prefilled sheaf can be privately borrowed to perform a short-term operation that is not allowed to block in the middle and may need to allocate some objects. If an upper bound (worst case) for the number of allocations is known, but far fewer allocations are typically needed, borrowing and returning a sheaf is much more efficient than a bulk allocation for the worst case followed by a bulk free of the many unused objects. Maple tree write operations should benefit from this.

- Compatibility with slub_debug. When slub_debug is enabled for a cache, we simply don't create the percpu sheaves so that the debugging hooks (at the node partial list slowpaths) are reached as before. The same is done for CONFIG_SLUB_TINY. Sheaf preallocation still works by reusing the (ineffective) paths for requests exceeding the cache's sheaf_capacity. This is in line with the existing approach where debugging bypasses the fast paths and SLUB_TINY prefers memory savings over performance.

The above is adapted from the cover letter [1], which also contains in-kernel microbenchmark results showing the lower overhead of sheaves. Results from Suren Baghdasaryan [2] using a mmap/munmap microbenchmark also show improvements. Results from Sudarsan Mahendran [3] using will-it-scale show both benefits and regressions, probably due to the overall noisiness of those tests.

Link: https://lore.kernel.org/all/[email protected]/ [1]
Link: https://lore.kernel.org/all/CAJuCfpEQ%3DRUgcAvRzE5jRrhhFpkm8E2PpBK9e9GhK26ZaJQt%[email protected]/ [2]
Link: https://lore.kernel.org/all/[email protected]/ [3]
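As a rough usage sketch of the preallocation API described above (and declared in the include/linux/slab.h hunk further down), not taken from the patchset: "my_cache", "my_init_obj()" and the bound of 8 objects are made-up placeholders, and the NULL-on-failure check is an assumption.

#include <linux/slab.h>

static int do_short_nonblocking_op(struct kmem_cache *my_cache)
{
	struct slab_sheaf *sheaf;
	void *obj;
	int i;

	/* May block: borrow a sheaf prefilled with at least 8 objects. */
	sheaf = kmem_cache_prefill_sheaf(my_cache, GFP_KERNEL, 8);
	if (!sheaf)	/* assumption: NULL is returned on failure */
		return -ENOMEM;

	/*
	 * Non-blocking section: up to 8 allocations are expected to be
	 * satisfied from the prefilled sheaf without entering the slab
	 * slow paths; typically far fewer than the worst case are used.
	 */
	for (i = 0; i < 3; i++) {
		obj = kmem_cache_alloc_from_sheaf(my_cache, GFP_NOWAIT, sheaf);
		my_init_obj(obj);
	}

	/* Unused objects stay in the sheaf and are recycled, not bulk-freed. */
	kmem_cache_return_sheaf(my_cache, GFP_KERNEL, sheaf);
	return 0;
}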
2 parents f7381b9 + 719a42e commit b912061

15 files changed: +2280 additions, -1458 deletions

include/linux/local_lock_internal.h

Lines changed: 6 additions & 3 deletions
@@ -17,7 +17,10 @@ typedef struct {
 
 /* local_trylock() and local_trylock_irqsave() only work with local_trylock_t */
 typedef struct {
-	local_lock_t llock;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map dep_map;
+	struct task_struct *owner;
+#endif
 	u8 acquired;
 } local_trylock_t;
 
@@ -31,7 +34,7 @@ typedef struct {
 	.owner = NULL,
 
 # define LOCAL_TRYLOCK_DEBUG_INIT(lockname)		\
-	.llock = { LOCAL_LOCK_DEBUG_INIT((lockname).llock) },
+	LOCAL_LOCK_DEBUG_INIT(lockname)
 
 static inline void local_lock_acquire(local_lock_t *l)
 {
@@ -81,7 +84,7 @@ do { \
 	local_lock_debug_init(lock);		\
 } while (0)
 
-#define __local_trylock_init(lock)	__local_lock_init(lock.llock)
+#define __local_trylock_init(lock)	__local_lock_init((local_lock_t *)lock)
 
 #define __spinlock_nested_bh_init(lock)		\
 do {						\
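
The hunk above gives local_trylock_t the same leading fields as local_lock_t instead of a nested llock member, which is why __local_trylock_init() can now simply cast. For context on the "cheaper fast paths" point in the commit message, here is a minimal sketch of a percpu structure guarded by a local_trylock_t; the struct, variable and function names are illustrative and not from the patch.

#include <linux/local_lock.h>
#include <linux/percpu.h>

struct obj_stock {			/* illustrative only */
	local_trylock_t lock;
	unsigned int nr;
	void *objs[8];
};

static DEFINE_PER_CPU(struct obj_stock, obj_stock) = {
	.lock = INIT_LOCAL_TRYLOCK(lock),
};

static void *obj_stock_get(void)
{
	struct obj_stock *stock;
	void *obj = NULL;

	/*
	 * On !PREEMPT_RT this is a preempt_disable() plus a plain flag
	 * store, with no atomic operations; it can only fail when
	 * interrupting a holder on the same cpu, in which case the
	 * caller would take a slower fallback path.
	 */
	if (!local_trylock(&obj_stock.lock))
		return NULL;

	stock = this_cpu_ptr(&obj_stock);
	if (stock->nr)
		obj = stock->objs[--stock->nr];

	local_unlock(&obj_stock.lock);
	return obj;
}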

include/linux/maple_tree.h

Lines changed: 5 additions & 1 deletion
@@ -442,7 +442,9 @@ struct ma_state {
 	struct maple_enode *node;	/* The node containing this entry */
 	unsigned long min;		/* The minimum index of this node - implied pivot min */
 	unsigned long max;		/* The maximum index of this node - implied pivot max */
-	struct maple_alloc *alloc;	/* Allocated nodes for this operation */
+	struct slab_sheaf *sheaf;	/* Allocated nodes for this operation */
+	struct maple_node *alloc;	/* A single allocated node for fast path writes */
+	unsigned long node_request;	/* The number of nodes to allocate for this operation */
 	enum maple_status status;	/* The status of the state (active, start, none, etc) */
 	unsigned char depth;		/* depth of tree descent during write */
 	unsigned char offset;
@@ -490,7 +492,9 @@ struct ma_wr_state {
 		.status = ma_start,				\
 		.min = 0,					\
 		.max = ULONG_MAX,				\
+		.sheaf = NULL,					\
 		.alloc = NULL,					\
+		.node_request = 0,				\
 		.mas_flags = 0,					\
 		.store_type = wr_invalid,			\
 	}

include/linux/slab.h

Lines changed: 47 additions & 0 deletions
@@ -335,6 +335,37 @@ struct kmem_cache_args {
 	 * %NULL means no constructor.
	 */
	void (*ctor)(void *);
+	/**
+	 * @sheaf_capacity: Enable sheaves of given capacity for the cache.
+	 *
+	 * With a non-zero value, allocations from the cache go through caching
+	 * arrays called sheaves. Each cpu has a main sheaf that's always
+	 * present, and a spare sheaf that may be not present. When both become
+	 * empty, there's an attempt to replace an empty sheaf with a full sheaf
+	 * from the per-node barn.
+	 *
+	 * When no full sheaf is available, and gfp flags allow blocking, a
+	 * sheaf is allocated and filled from slab(s) using bulk allocation.
+	 * Otherwise the allocation falls back to the normal operation
+	 * allocating a single object from a slab.
+	 *
+	 * Analogically when freeing and both percpu sheaves are full, the barn
+	 * may replace it with an empty sheaf, unless it's over capacity. In
+	 * that case a sheaf is bulk freed to slab pages.
+	 *
+	 * The sheaves do not enforce NUMA placement of objects, so allocations
+	 * via kmem_cache_alloc_node() with a node specified other than
+	 * NUMA_NO_NODE will bypass them.
+	 *
+	 * Bulk allocation and free operations also try to use the cpu sheaves
+	 * and barn, but fallback to using slab pages directly.
+	 *
+	 * When slub_debug is enabled for the cache, the sheaf_capacity argument
+	 * is ignored.
+	 *
+	 * %0 means no sheaves will be created.
+	 */
+	unsigned int sheaf_capacity;
 };
 
 struct kmem_cache *__kmem_cache_create_args(const char *name,
@@ -798,6 +829,22 @@ void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t flags,
 			int node) __assume_slab_alignment __malloc;
 #define kmem_cache_alloc_node(...)	alloc_hooks(kmem_cache_alloc_node_noprof(__VA_ARGS__))
 
+struct slab_sheaf *
+kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size);
+
+int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
+		struct slab_sheaf **sheafp, unsigned int size);
+
+void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
+		struct slab_sheaf *sheaf);
+
+void *kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *cachep, gfp_t gfp,
+		struct slab_sheaf *sheaf) __assume_slab_alignment __malloc;
+#define kmem_cache_alloc_from_sheaf(...)	\
+		alloc_hooks(kmem_cache_alloc_from_sheaf_noprof(__VA_ARGS__))
+
+unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf);
+
 /*
  * These macros allow declaring a kmem_buckets * parameter alongside size, which
  * can be compiled out with CONFIG_SLAB_BUCKETS=n so that a large number of call
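
The @sheaf_capacity documentation above shows that sheaves are opted into per cache at creation time. Below is a hedged sketch of such an opt-in via struct kmem_cache_args; "foo_cache", struct foo and the capacity of 32 are illustrative placeholders (the patchset itself enables sheaves this way for the vm_area_struct and maple_node caches).

#include <linux/slab.h>

struct foo {				/* illustrative object type */
	unsigned long a, b;
};

static struct kmem_cache *foo_cachep;

static int __init foo_cache_init(void)
{
	struct kmem_cache_args args = {
		.align		= __alignof__(struct foo),
		/*
		 * Opt into percpu sheaves holding up to 32 objects each;
		 * leaving this 0 (the default) keeps the cache sheaf-less.
		 */
		.sheaf_capacity	= 32,
	};

	foo_cachep = kmem_cache_create("foo_cache", sizeof(struct foo),
				       &args, SLAB_HWCACHE_ALIGN);
	if (!foo_cachep)
		return -ENOMEM;
	return 0;
}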
