
Commit e8d780d

Merge tag 'slab-for-6.17' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab
Pull slab updates from Vlastimil Babka:

- Convert struct slab to its own flags instead of referencing page
  flags, which is another preparation step before separating it from
  struct page completely. Along with that, a bunch of documentation
  fixes and cleanups (Matthew Wilcox)

- Convert large kmalloc to use frozen pages in order to be consistent
  with non-large kmalloc slabs (Vlastimil Babka)

- MAINTAINERS updates (Matthew Wilcox, Lorenzo Stoakes)

- Restore NUMA policy support for large kmalloc, broken by mistake in
  v6.1 (Vlastimil Babka)

* tag 'slab-for-6.17' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab:
  MAINTAINERS: add missing files to slab section
  slab: Update MAINTAINERS entry
  memcg_slabinfo: Fix use of PG_slab
  kfence: Remove mention of PG_slab
  vmcoreinfo: Remove documentation of PG_slab and PG_hugetlb
  doc: Add slab internal kernel-doc
  slub: Fix a documentation build error for krealloc()
  slab: Add SL_pfmemalloc flag
  slab: Add SL_partial flag
  slab: Rename slab->__page_flags to slab->flags
  doc: Move SLUB documentation to the admin guide
  mm, slab: use frozen pages for large kmalloc
  mm, slab: restore NUMA policy support for large kmalloc
2 parents 2db4df0 + 8185696 commit e8d780d
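The SL_pfmemalloc and SL_partial commits in this pull replace struct slab's reuse of generic page flags with bits that are private to the slab allocator. A minimal userspace sketch of that idea, assuming illustrative bit values and helper names (the real definitions live in mm/slab.h and differ in detail):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the dedicated slab flag bits; the kernel
 * defines these as an enum in mm/slab.h and operates on slab->flags
 * instead of the shared page flags. */
enum slab_flags {
	SL_partial    = 1u << 0,  /* slab is on a per-node partial list */
	SL_pfmemalloc = 1u << 1,  /* slab was allocated from memory reserves */
};

struct slab {
	unsigned int flags;  /* private to the slab allocator */
};

static bool slab_test_pfmemalloc(const struct slab *s)
{
	return (s->flags & SL_pfmemalloc) != 0;
}

static void slab_set_pfmemalloc(struct slab *s)
{
	s->flags |= SL_pfmemalloc;
}

static void slab_clear_pfmemalloc(struct slab *s)
{
	s->flags &= ~SL_pfmemalloc;
}
```

The point of the conversion is that once struct slab stops aliasing struct page's flag bits, the two structures can be separated completely in a later series.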

File tree

13 files changed: 110 additions, 80 deletions

Documentation/ABI/testing/sysfs-kernel-slab

Lines changed: 3 additions & 2 deletions
@@ -37,7 +37,8 @@ Description:
 		The alloc_calls file is read-only and lists the kernel code
 		locations from which allocations for this cache were performed.
 		The alloc_calls file only contains information if debugging is
-		enabled for that cache (see Documentation/mm/slub.rst).
+		enabled for that cache (see
+		Documentation/admin-guide/mm/slab.rst).

 What:		/sys/kernel/slab/<cache>/alloc_fastpath
 Date:		February 2008
@@ -219,7 +220,7 @@ Contact:	Pekka Enberg <[email protected]>,
 Description:
 		The free_calls file is read-only and lists the locations of
 		object frees if slab debugging is enabled (see
-		Documentation/mm/slub.rst).
+		Documentation/admin-guide/mm/slab.rst).

 What:		/sys/kernel/slab/<cache>/free_fastpath
 Date:		February 2008

Documentation/admin-guide/kdump/vmcoreinfo.rst

Lines changed: 4 additions & 4 deletions
@@ -325,14 +325,14 @@ NR_FREE_PAGES
 On linux-2.6.21 or later, the number of free pages is in
 vm_stat[NR_FREE_PAGES]. Used to get the number of free pages.

-PG_lru|PG_private|PG_swapcache|PG_swapbacked|PG_slab|PG_hwpoision|PG_head_mask|PG_hugetlb
------------------------------------------------------------------------------------------
+PG_lru|PG_private|PG_swapcache|PG_swapbacked|PG_hwpoison|PG_head_mask
+--------------------------------------------------------------------------

 Page attributes. These flags are used to filter various unnecessary for
 dumping pages.

-PAGE_BUDDY_MAPCOUNT_VALUE(~PG_buddy)|PAGE_OFFLINE_MAPCOUNT_VALUE(~PG_offline)|PAGE_OFFLINE_MAPCOUNT_VALUE(~PG_unaccepted)
--------------------------------------------------------------------------------------------------------------------------
+PAGE_SLAB_MAPCOUNT_VALUE|PAGE_BUDDY_MAPCOUNT_VALUE|PAGE_OFFLINE_MAPCOUNT_VALUE|PAGE_HUGETLB_MAPCOUNT_VALUE|PAGE_UNACCEPTED_MAPCOUNT_VALUE
+------------------------------------------------------------------------------------------------------------------------------------------

 More page attributes. These flags are used to filter various unnecessary for
 dumping pages.

Documentation/admin-guide/kernel-parameters.txt

Lines changed: 7 additions & 5 deletions
@@ -6587,14 +6587,14 @@
			slab_debug can create guard zones around objects and
			may poison objects when not in use. Also tracks the
			last alloc / free. For more information see
-			Documentation/mm/slub.rst.
+			Documentation/admin-guide/mm/slab.rst.
			(slub_debug legacy name also accepted for now)

	slab_max_order=	[MM]
			Determines the maximum allowed order for slabs.
			A high setting may cause OOMs due to memory
			fragmentation. For more information see
-			Documentation/mm/slub.rst.
+			Documentation/admin-guide/mm/slab.rst.
			(slub_max_order legacy name also accepted for now)

	slab_merge	[MM]
@@ -6609,13 +6609,14 @@
			the number of objects indicated. The higher the number
			of objects the smaller the overhead of tracking slabs
			and the less frequently locks need to be acquired.
-			For more information see Documentation/mm/slub.rst.
+			For more information see
+			Documentation/admin-guide/mm/slab.rst.
			(slub_min_objects legacy name also accepted for now)

	slab_min_order=	[MM]
			Determines the minimum page order for slabs. Must be
			lower or equal to slab_max_order. For more information see
-			Documentation/mm/slub.rst.
+			Documentation/admin-guide/mm/slab.rst.
			(slub_min_order legacy name also accepted for now)

	slab_nomerge	[MM]
@@ -6629,7 +6630,8 @@
			cache (risks via metadata attacks are mostly
			unchanged). Debug options disable merging on their
			own.
-			For more information see Documentation/mm/slub.rst.
+			For more information see
+			Documentation/admin-guide/mm/slab.rst.
			(slub_nomerge legacy name also accepted for now)

	slab_strict_numa	[MM]

Documentation/admin-guide/mm/index.rst

Lines changed: 1 addition & 0 deletions
@@ -37,6 +37,7 @@ the Linux memory management.
    numaperf
    pagemap
    shrinker_debugfs
+   slab
    soft-dirty
    swap_numa
    transhuge

Documentation/mm/slub.rst renamed to Documentation/admin-guide/mm/slab.rst

Lines changed: 9 additions & 10 deletions
@@ -1,13 +1,12 @@
-==========================
-Short users guide for SLUB
-==========================
-
-The basic philosophy of SLUB is very different from SLAB. SLAB
-requires rebuilding the kernel to activate debug options for all
-slab caches. SLUB always includes full debugging but it is off by default.
-SLUB can enable debugging only for selected slabs in order to avoid
-an impact on overall system performance which may make a bug more
-difficult to find.
+========================================
+Short users guide for the slab allocator
+========================================
+
+The slab allocator includes full debugging support (when built with
+CONFIG_SLUB_DEBUG=y) but it is off by default (unless built with
+CONFIG_SLUB_DEBUG_ON=y). You can enable debugging only for selected
+slabs in order to avoid an impact on overall system performance which
+may make a bug more difficult to find.

 In order to switch debugging on one can add an option ``slab_debug``
 to the kernel command line. That will enable full debugging for

Documentation/mm/index.rst

Lines changed: 0 additions & 1 deletion
@@ -56,7 +56,6 @@ documentation, or deleted if it has served its purpose.
    page_owner
    page_table_check
    remap_file_pages
-   slub
    split_page_table_lock
    transhuge
    unevictable-lru

Documentation/mm/slab.rst

Lines changed: 7 additions & 0 deletions
@@ -3,3 +3,10 @@
 ===============
 Slab Allocation
 ===============
+
+Functions and structures
+========================
+
+.. kernel-doc:: mm/slab.h
+.. kernel-doc:: mm/slub.c
+   :internal:

MAINTAINERS

Lines changed: 12 additions & 5 deletions
@@ -23015,17 +23015,24 @@ F: Documentation/devicetree/bindings/nvmem/layouts/kontron,sl28-vpd.yaml
 F:	drivers/nvmem/layouts/sl28vpd.c

 SLAB ALLOCATOR
-M:	Christoph Lameter <[email protected]>
-M:	David Rientjes <[email protected]>
-M:	Andrew Morton <[email protected]>
 M:	Vlastimil Babka <[email protected]>
+M:	Andrew Morton <[email protected]>
+R:	Christoph Lameter <[email protected]>
+R:	David Rientjes <[email protected]>
 R:	Roman Gushchin <[email protected]>
 R:	Harry Yoo <[email protected]>
 L:	[email protected]
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git
-F:	include/linux/sl?b*.h
-F:	mm/sl?b*
+F:	Documentation/admin-guide/mm/slab.rst
+F:	Documentation/mm/slab.rst
+F:	include/linux/mempool.h
+F:	include/linux/slab.h
+F:	mm/failslab.c
+F:	mm/mempool.c
+F:	mm/slab.h
+F:	mm/slab_common.c
+F:	mm/slub.c

 SLCAN CAN NETWORK DRIVER
 M:	Dario Binacchi <[email protected]>

include/linux/mm.h

Lines changed: 3 additions & 1 deletion
@@ -1325,6 +1325,8 @@ static inline void get_page(struct page *page)
	struct folio *folio = page_folio(page);
	if (WARN_ON_ONCE(folio_test_slab(folio)))
		return;
+	if (WARN_ON_ONCE(folio_test_large_kmalloc(folio)))
+		return;
	folio_get(folio);
 }

@@ -1419,7 +1421,7 @@ static inline void put_page(struct page *page)
 {
	struct folio *folio = page_folio(page);

-	if (folio_test_slab(folio))
+	if (folio_test_slab(folio) || folio_test_large_kmalloc(folio))
		return;

	folio_put(folio);

mm/kfence/core.c

Lines changed: 2 additions & 2 deletions
@@ -605,8 +605,8 @@ static unsigned long kfence_init_pool(void)
	pages = virt_to_page(__kfence_pool);

	/*
-	 * Set up object pages: they must have PG_slab set, to avoid freeing
-	 * these as real pages.
+	 * Set up object pages: they must have PGTY_slab set to avoid freeing
+	 * them as real pages.
	 *
	 * We also want to avoid inserting kfence_free() in the kfree()
	 * fast-path in SLUB, and therefore need to ensure kfree() correctly
