Commit 0ea680e

Merge tag 'slab-for-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab
Pull slab updates from Vlastimil Babka:

 - Freelist loading optimization (Chengming Zhou)

   When the per-cpu slab is depleted and a new one loaded from the cpu
   partial list, optimize the loading to avoid an irq enable/disable
   cycle. This results in a 3.5% performance improvement on the "perf
   bench sched messaging" test.

 - Kernel boot parameters cleanup after SLAB removal (Xiongwei Song)

   Due to two different main slab implementations we've had boot
   parameters prefixed either slab_ and slub_ with some later becoming
   an alias as both implementations gained the same functionality (i.e.
   slab_nomerge vs slub_nomerge).

   In order to eventually get rid of the implementation-specific names,
   the canonical and documented parameters are now all prefixed slab_
   and the slub_ variants become deprecated but still working aliases.

 - SLAB_ kmem_cache creation flags cleanup (Vlastimil Babka)

   The flags had hardcoded #define values which became tedious and
   error-prone when adding new ones. Assign the values via an enum that
   takes care of providing unique bit numbers. Also deprecate
   SLAB_MEM_SPREAD which was only used by SLAB, so it's a no-op since
   SLAB removal. Assign it an explicit zero value. The removals of the
   flag usage are handled independently in the respective subsystems,
   with a final removal of any leftover usage planned for the next
   release.

 - Misc cleanups and fixes (Chengming Zhou, Xiaolei Wang, Zheng Yejian)

   Includes removal of unused code or function parameters and a fix of
   a memleak.

* tag 'slab-for-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab:
  slab: remove PARTIAL_NODE slab_state
  mm, slab: remove memcg_from_slab_obj()
  mm, slab: remove the corner case of inc_slabs_node()
  mm/slab: Fix a kmemleak in kmem_cache_destroy()
  mm, slab, kasan: replace kasan_never_merge() with SLAB_NO_MERGE
  mm, slab: use an enum to define SLAB_ cache creation flags
  mm, slab: deprecate SLAB_MEM_SPREAD flag
  mm, slab: fix the comment of cpu partial list
  mm, slab: remove unused object_size parameter in kmem_cache_flags()
  mm/slub: remove parameter 'flags' in create_kmalloc_caches()
  mm/slub: remove unused parameter in next_freelist_entry()
  mm/slub: remove full list manipulation for non-debug slab
  mm/slub: directly load freelist from cpu partial slab in the likely case
  mm/slub: make the description of slab_min_objects helpful in doc
  mm/slub: replace slub_$params with slab_$params in slub.rst
  mm/slub: unify all sl[au]b parameters with "slab_$param"
  Documentation: kernel-parameters: remove noaliencache
2 parents cc4a875 + 1a1c4e4 commit 0ea680e
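
To make the flags cleanup above concrete, here is a minimal C sketch of the enum-based pattern the message describes. The identifiers below are illustrative assumptions, not the exact definitions from include/linux/slab.h (upstream additionally carries sparse __bitwise/__force annotations):

    /*
     * Sketch only: an enum hands out unique, consecutive bit numbers,
     * so adding a new SLAB_ flag cannot collide the way hardcoded
     * #define values could.
     */
    enum _slab_flag_bits {
            _SLAB_CONSISTENCY_CHECKS,
            _SLAB_RED_ZONE,
            _SLAB_POISON,
            _SLAB_STORE_USER,
            _SLAB_NO_MERGE,
            _SLAB_FLAGS_LAST_BIT
    };

    #define __SLAB_FLAG_BIT(nr)     ((slab_flags_t)(1U << (nr)))
    #define __SLAB_FLAG_UNUSED      ((slab_flags_t)0U)

    #define SLAB_RED_ZONE           __SLAB_FLAG_BIT(_SLAB_RED_ZONE)
    #define SLAB_POISON             __SLAB_FLAG_BIT(_SLAB_POISON)
    #define SLAB_NO_MERGE           __SLAB_FLAG_BIT(_SLAB_NO_MERGE)
    /* Deprecated and SLAB-only, hence an explicit zero (no-op): */
    #define SLAB_MEM_SPREAD         __SLAB_FLAG_UNUSED

The enum guarantees every flag a unique bit without manual bookkeeping, and a deprecated flag like SLAB_MEM_SPREAD can be pinned to zero so any leftover users compile into a no-op until they are removed.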

11 files changed: 210 additions, 215 deletions


Documentation/admin-guide/kernel-parameters.txt (32 additions, 43 deletions)

@@ -3771,10 +3771,6 @@
 	no5lvl		[X86-64,RISCV,EARLY] Disable 5-level paging mode. Forces
 			kernel to use 4-level paging instead.
 
-	noaliencache	[MM, NUMA, SLAB] Disables the allocation of alien
-			caches in the slab allocator. Saves per-node memory,
-			but will impact performance.
-
 	noalign		[KNL,ARM]
 
 	noaltinstr	[S390,EARLY] Disables alternative instructions
@@ -5930,65 +5926,58 @@
 	simeth=		[IA-64]
 	simscsi=
 
-	slram=		[HW,MTD]
-
-	slab_merge	[MM]
-			Enable merging of slabs with similar size when the
-			kernel is built without CONFIG_SLAB_MERGE_DEFAULT.
-
-	slab_nomerge	[MM]
-			Disable merging of slabs with similar size. May be
-			necessary if there is some reason to distinguish
-			allocs to different slabs, especially in hardened
-			environments where the risk of heap overflows and
-			layout control by attackers can usually be
-			frustrated by disabling merging. This will reduce
-			most of the exposure of a heap attack to a single
-			cache (risks via metadata attacks are mostly
-			unchanged). Debug options disable merging on their
-			own.
-			For more information see Documentation/mm/slub.rst.
-
-	slab_max_order=	[MM, SLAB]
-			Determines the maximum allowed order for slabs.
-			A high setting may cause OOMs due to memory
-			fragmentation. Defaults to 1 for systems with
-			more than 32MB of RAM, 0 otherwise.
-
-	slub_debug[=options[,slabs][;[options[,slabs]]...]	[MM, SLUB]
-			Enabling slub_debug allows one to determine the
+	slab_debug[=options[,slabs][;[options[,slabs]]...]	[MM]
+			Enabling slab_debug allows one to determine the
 			culprit if slab objects become corrupted. Enabling
-			slub_debug can create guard zones around objects and
+			slab_debug can create guard zones around objects and
 			may poison objects when not in use. Also tracks the
 			last alloc / free. For more information see
 			Documentation/mm/slub.rst.
+			(slub_debug legacy name also accepted for now)
 
-	slub_max_order=	[MM, SLUB]
+	slab_max_order=	[MM]
 			Determines the maximum allowed order for slabs.
 			A high setting may cause OOMs due to memory
 			fragmentation. For more information see
 			Documentation/mm/slub.rst.
+			(slub_max_order legacy name also accepted for now)
+
+	slab_merge	[MM]
+			Enable merging of slabs with similar size when the
+			kernel is built without CONFIG_SLAB_MERGE_DEFAULT.
+			(slub_merge legacy name also accepted for now)
 
-	slub_min_objects=	[MM, SLUB]
+	slab_min_objects=	[MM]
 			The minimum number of objects per slab. SLUB will
-			increase the slab order up to slub_max_order to
+			increase the slab order up to slab_max_order to
 			generate a sufficiently large slab able to contain
 			the number of objects indicated. The higher the number
 			of objects the smaller the overhead of tracking slabs
 			and the less frequently locks need to be acquired.
 			For more information see Documentation/mm/slub.rst.
+			(slub_min_objects legacy name also accepted for now)
 
-	slub_min_order=	[MM, SLUB]
+	slab_min_order=	[MM]
 			Determines the minimum page order for slabs. Must be
-			lower than slub_max_order.
-			For more information see Documentation/mm/slub.rst.
+			lower or equal to slab_max_order. For more information see
+			Documentation/mm/slub.rst.
+			(slub_min_order legacy name also accepted for now)
 
-	slub_merge	[MM, SLUB]
-			Same with slab_merge.
+	slab_nomerge	[MM]
+			Disable merging of slabs with similar size. May be
+			necessary if there is some reason to distinguish
+			allocs to different slabs, especially in hardened
+			environments where the risk of heap overflows and
+			layout control by attackers can usually be
+			frustrated by disabling merging. This will reduce
+			most of the exposure of a heap attack to a single
+			cache (risks via metadata attacks are mostly
+			unchanged). Debug options disable merging on their
+			own.
+			For more information see Documentation/mm/slub.rst.
+			(slub_nomerge legacy name also accepted for now)
 
-	slub_nomerge	[MM, SLUB]
-			Same with slab_nomerge. This is supported for legacy.
-			See slab_nomerge for more information.
+	slram=		[HW,MTD]
 
 	smart2=		[HW]
 			Format: <io1>[,<io2>[,...,<io8>]]
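
As a quick illustration of the renamed parameters (an example command line, not part of the patch), the canonical spellings can now be combined as, e.g.:

    slab_debug=FZ,dentry slab_min_objects=16 slab_max_order=2

while the legacy slub_debug, slub_min_objects and slub_max_order spellings remain accepted as deprecated aliases for now.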

Documentation/mm/slub.rst (30 additions, 30 deletions)

@@ -9,7 +9,7 @@ SLUB can enable debugging only for selected slabs in order to avoid
 an impact on overall system performance which may make a bug more
 difficult to find.
 
-In order to switch debugging on one can add an option ``slub_debug``
+In order to switch debugging on one can add an option ``slab_debug``
 to the kernel command line. That will enable full debugging for
 all slabs.
 
@@ -26,16 +26,16 @@ be enabled on the command line. F.e. no tracking information will be
 available without debugging on and validation can only partially
 be performed if debugging was not switched on.
 
-Some more sophisticated uses of slub_debug:
+Some more sophisticated uses of slab_debug:
 -------------------------------------------
 
-Parameters may be given to ``slub_debug``. If none is specified then full
+Parameters may be given to ``slab_debug``. If none is specified then full
 debugging is enabled. Format:
 
-slub_debug=<Debug-Options>
+slab_debug=<Debug-Options>
 	Enable options for all slabs
 
-slub_debug=<Debug-Options>,<slab name1>,<slab name2>,...
+slab_debug=<Debug-Options>,<slab name1>,<slab name2>,...
 	Enable options only for select slabs (no spaces
 	after a comma)
 
@@ -60,52 +60,52 @@ Possible debug options are::
 
 F.e. in order to boot just with sanity checks and red zoning one would specify::
 
-	slub_debug=FZ
+	slab_debug=FZ
 
 Trying to find an issue in the dentry cache? Try::
 
-	slub_debug=,dentry
+	slab_debug=,dentry
 
 to only enable debugging on the dentry cache. You may use an asterisk at the
 end of the slab name, in order to cover all slabs with the same prefix. For
 example, here's how you can poison the dentry cache as well as all kmalloc
 slabs::
 
-	slub_debug=P,kmalloc-*,dentry
+	slab_debug=P,kmalloc-*,dentry
 
 Red zoning and tracking may realign the slab. We can just apply sanity checks
 to the dentry cache with::
 
-	slub_debug=F,dentry
+	slab_debug=F,dentry
 
 Debugging options may require the minimum possible slab order to increase as
 a result of storing the metadata (for example, caches with PAGE_SIZE object
 sizes). This has a higher liklihood of resulting in slab allocation errors
 in low memory situations or if there's high fragmentation of memory. To
 switch off debugging for such caches by default, use::
 
-	slub_debug=O
+	slab_debug=O
 
 You can apply different options to different list of slab names, using blocks
 of options. This will enable red zoning for dentry and user tracking for
 kmalloc. All other slabs will not get any debugging enabled::
 
-	slub_debug=Z,dentry;U,kmalloc-*
+	slab_debug=Z,dentry;U,kmalloc-*
 
 You can also enable options (e.g. sanity checks and poisoning) for all caches
 except some that are deemed too performance critical and don't need to be
 debugged by specifying global debug options followed by a list of slab names
 with "-" as options::
 
-	slub_debug=FZ;-,zs_handle,zspage
+	slab_debug=FZ;-,zs_handle,zspage
 
 The state of each debug option for a slab can be found in the respective files
 under::
 
 	/sys/kernel/slab/<slab name>/
 
 If the file contains 1, the option is enabled, 0 means disabled. The debug
-options from the ``slub_debug`` parameter translate to the following files::
+options from the ``slab_debug`` parameter translate to the following files::
 
 	F sanity_checks
 	Z red_zone
@@ -129,7 +129,7 @@ in order to reduce overhead and increase cache hotness of objects.
 Slab validation
 ===============
 
-SLUB can validate all object if the kernel was booted with slub_debug. In
+SLUB can validate all object if the kernel was booted with slab_debug. In
 order to do so you must have the ``slabinfo`` tool. Then you can do
 ::
 
@@ -150,29 +150,29 @@ list_lock once in a while to deal with partial slabs. That overhead is
 governed by the order of the allocation for each slab. The allocations
 can be influenced by kernel parameters:
 
-.. slub_min_objects=x		(default 4)
-.. slub_min_order=x		(default 0)
-.. slub_max_order=x		(default 3 (PAGE_ALLOC_COSTLY_ORDER))
+.. slab_min_objects=x		(default: automatically scaled by number of cpus)
+.. slab_min_order=x		(default 0)
+.. slab_max_order=x		(default 3 (PAGE_ALLOC_COSTLY_ORDER))
 
-``slub_min_objects``
+``slab_min_objects``
 	allows to specify how many objects must at least fit into one
 	slab in order for the allocation order to be acceptable. In
 	general slub will be able to perform this number of
 	allocations on a slab without consulting centralized resources
 	(list_lock) where contention may occur.
 
-``slub_min_order``
+``slab_min_order``
 	specifies a minimum order of slabs. A similar effect like
-	``slub_min_objects``.
+	``slab_min_objects``.
 
-``slub_max_order``
-	specified the order at which ``slub_min_objects`` should no
+``slab_max_order``
+	specified the order at which ``slab_min_objects`` should no
 	longer be checked. This is useful to avoid SLUB trying to
-	generate super large order pages to fit ``slub_min_objects``
+	generate super large order pages to fit ``slab_min_objects``
 	of a slab cache with large object sizes into one high order
 	page. Setting command line parameter
 	``debug_guardpage_minorder=N`` (N > 0), forces setting
-	``slub_max_order`` to 0, what cause minimum possible order of
+	``slab_max_order`` to 0, what cause minimum possible order of
 	slabs allocation.
 
 SLUB Debug output
@@ -219,7 +219,7 @@ Here is a sample of slub debug output::
 	FIX kmalloc-8: Restoring Redzone 0xc90f6d28-0xc90f6d2b=0xcc
 
 If SLUB encounters a corrupted object (full detection requires the kernel
-to be booted with slub_debug) then the following output will be dumped
+to be booted with slab_debug) then the following output will be dumped
 into the syslog:
 
 1. Description of the problem encountered
@@ -239,7 +239,7 @@ into the syslog:
 	pid=<pid of the process>
 
 (Object allocation / free information is only available if SLAB_STORE_USER is
-set for the slab. slub_debug sets that option)
+set for the slab. slab_debug sets that option)
 
 2. The object contents if an object was involved.
 
@@ -262,7 +262,7 @@ into the syslog:
 	the object boundary.
 
 (Redzone information is only available if SLAB_RED_ZONE is set.
-slub_debug sets that option)
+slab_debug sets that option)
 
 Padding <address> : <bytes>
 	Unused data to fill up the space in order to get the next object
@@ -296,7 +296,7 @@ Emergency operations
 
 Minimal debugging (sanity checks alone) can be enabled by booting with::
 
-	slub_debug=F
+	slab_debug=F
 
 This will be generally be enough to enable the resiliency features of slub
 which will keep the system running even if a bad kernel component will
@@ -311,13 +311,13 @@ and enabling debugging only for that cache
 
 I.e.::
 
-	slub_debug=F,dentry
+	slab_debug=F,dentry
 
 If the corruption occurs by writing after the end of the object then it
 may be advisable to enable a Redzone to avoid corrupting the beginning
 of other objects::
 
-	slub_debug=FZ,dentry
+	slab_debug=FZ,dentry
 
 Extended slabinfo mode and plotting
 ===================================
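
Tying the documentation above together, a hypothetical check after booting with slab_debug=FZ,dentry might look like this (example session; the sysfs file names follow the F/Z mapping the document lists):

    # cat /sys/kernel/slab/dentry/sanity_checks
    1
    # cat /sys/kernel/slab/dentry/red_zone
    1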

drivers/misc/lkdtm/heap.c (1 addition, 1 deletion)

@@ -48,7 +48,7 @@ static void lkdtm_VMALLOC_LINEAR_OVERFLOW(void)
  * correctly.
  *
  * This should get caught by either memory tagging, KASan, or by using
- * CONFIG_SLUB_DEBUG=y and slub_debug=ZF (or CONFIG_SLUB_DEBUG_ON=y).
+ * CONFIG_SLUB_DEBUG=y and slab_debug=ZF (or CONFIG_SLUB_DEBUG_ON=y).
  */
 static void lkdtm_SLAB_LINEAR_OVERFLOW(void)
 {
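
For context (not part of the patch): assuming the standard lkdtm debugfs interface, a test like this is exercised by writing its name to the provoke-crash trigger, after which the overflow should be flagged on a kernel booted with slab_debug=ZF:

    # echo SLAB_LINEAR_OVERFLOW > /sys/kernel/debug/provoke-crash/DIRECT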

include/linux/kasan.h (0 additions, 6 deletions)

@@ -429,7 +429,6 @@ struct kasan_cache {
 };
 
 size_t kasan_metadata_size(struct kmem_cache *cache, bool in_object);
-slab_flags_t kasan_never_merge(void);
 void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 			slab_flags_t *flags);
 
@@ -446,11 +445,6 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache,
 {
 	return 0;
 }
-/* And thus nothing prevents cache merging. */
-static inline slab_flags_t kasan_never_merge(void)
-{
-	return 0;
-}
 /* And no cache-related metadata initialization is required. */
 static inline void kasan_cache_create(struct kmem_cache *cache,
 				      unsigned int *size,
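
The header change above removes the KASAN-specific merge veto in favor of the generic SLAB_NO_MERGE cache flag. A minimal sketch of the resulting pattern, with a hypothetical cache name and record type (not the actual upstream call sites):

    #include <linux/init.h>
    #include <linux/slab.h>

    /* Hypothetical object type, for illustration only. */
    struct stack_record {
            unsigned long entries[16];
    };

    static struct kmem_cache *stack_cache;

    static int __init stack_cache_init(void)
    {
            /*
             * Instead of the merge logic asking kasan_never_merge()
             * which flags veto merging, a cache that must stay
             * unmerged simply passes SLAB_NO_MERGE at creation time.
             */
            stack_cache = kmem_cache_create("stack_record",
                                            sizeof(struct stack_record),
                                            0, SLAB_NO_MERGE, NULL);
            return stack_cache ? 0 : -ENOMEM;
    }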
