
Commit def9b71

ptesarik authored and torvalds committed
include/linux/mmzone.h: fix explanation of lower bits in the SPARSEMEM mem_map pointer
The comment is confusing.  On the one hand, it refers to 32-bit alignment (struct page alignment on 32-bit platforms), but this would only guarantee that the 2 lowest bits must be zero.  On the other hand, it claims that at least 3 bits are available, and 3 bits are actually used.

This is not broken, because there is a stronger alignment guarantee, just less obvious.  Let's fix the comment to make it clear how many bits are available and why.

Although memmap arrays are allocated in various places, the resulting pointer is encoded eventually, so I am adding a BUG_ON() here to enforce at runtime that all expected bits are indeed available.

I have also added a BUILD_BUG_ON to check that PFN_SECTION_SHIFT is sufficient, because this part of the calculation can be easily checked at build time.

[[email protected]: v2]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Petr Tesarik <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Kemi Wang <[email protected]>
Cc: YASUAKI ISHIMATSU <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 112d2d2

File tree

2 files changed (+15, -3 lines)

include/linux/mmzone.h

Lines changed: 10 additions & 2 deletions
@@ -1166,8 +1166,16 @@ extern unsigned long usemap_size(void);
 
 /*
  * We use the lower bits of the mem_map pointer to store
- * a little bit of information. There should be at least
- * 3 bits here due to 32-bit alignment.
+ * a little bit of information. The pointer is calculated
+ * as mem_map - section_nr_to_pfn(pnum). The result is
+ * aligned to the minimum alignment of the two values:
+ *   1. All mem_map arrays are page-aligned.
+ *   2. section_nr_to_pfn() always clears PFN_SECTION_SHIFT
+ *      lowest bits. PFN_SECTION_SHIFT is arch-specific
+ *      (equal SECTION_SIZE_BITS - PAGE_SHIFT), and the
+ *      worst combination is powerpc with 256k pages,
+ *      which results in PFN_SECTION_SHIFT equal 6.
+ * To sum it up, at least 6 bits are available.
  */
 #define SECTION_MARKED_PRESENT	(1UL<<0)
 #define SECTION_HAS_MEM_MAP	(1UL<<1)

mm/sparse.c

Lines changed: 5 additions & 1 deletion
@@ -264,7 +264,11 @@ unsigned long __init node_memmap_size_bytes(int nid, unsigned long start_pfn,
  */
 static unsigned long sparse_encode_mem_map(struct page *mem_map, unsigned long pnum)
 {
-	return (unsigned long)(mem_map - (section_nr_to_pfn(pnum)));
+	unsigned long coded_mem_map =
+		(unsigned long)(mem_map - (section_nr_to_pfn(pnum)));
+	BUILD_BUG_ON(SECTION_MAP_LAST_BIT > (1UL<<PFN_SECTION_SHIFT));
+	BUG_ON(coded_mem_map & ~SECTION_MAP_MASK);
+	return coded_mem_map;
 }
 
 /*
