
Commit 0909c71

leitao authored and willdeacon committed
arm64: Remove CONFIG_VMAP_STACK conditionals from THREAD_SHIFT and THREAD_ALIGN
Now that VMAP_STACK is always enabled on arm64, remove the CONFIG_VMAP_STACK
conditional logic from the definitions of THREAD_SHIFT and THREAD_ALIGN in
arch/arm64/include/asm/memory.h.

This simplifies the code by unconditionally setting THREAD_ALIGN to
(2 * THREAD_SIZE) and by making the THREAD_SHIFT definition depend only on
MIN_THREAD_SHIFT and PAGE_SHIFT. The change reflects the updated arm64 stack
model, in which all kernel threads use virtually mapped stacks with guard
pages, and ensures that stack sizing and alignment are handled consistently.

Signed-off-by: Breno Leitao <[email protected]>
Acked-by: Ard Biesheuvel <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
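To see how the simplified logic resolves in practice, the following stand-alone
sketch (not part of the patch) mirrors the header's macros with assumed values
of PAGE_SHIFT = 12 (4K pages) and MIN_THREAD_SHIFT = 14; the real values depend
on the kernel configuration, and THREAD_SIZE itself is defined elsewhere in
memory.h.

/* Illustrative sketch only: mirrors the now-unconditional THREAD_SHIFT and
 * THREAD_ALIGN logic with assumed values, not actual kernel code. */
#include <stdio.h>

#define PAGE_SHIFT       12      /* assumed: 4K pages */
#define MIN_THREAD_SHIFT 14      /* assumed minimum stack shift (16K) */

/* As in the patched header: no CONFIG_VMAP_STACK test any more. */
#if (MIN_THREAD_SHIFT < PAGE_SHIFT)
#define THREAD_SHIFT PAGE_SHIFT
#else
#define THREAD_SHIFT MIN_THREAD_SHIFT
#endif

#define THREAD_SIZE  (1UL << THREAD_SHIFT)   /* defined elsewhere in memory.h */
#define THREAD_ALIGN (2 * THREAD_SIZE)       /* now unconditional as well */

int main(void)
{
	printf("THREAD_SHIFT = %d\n", THREAD_SHIFT);
	printf("THREAD_SIZE  = %lu bytes\n", THREAD_SIZE);
	printf("THREAD_ALIGN = %lu bytes\n", (unsigned long)THREAD_ALIGN);
	return 0;
}

With these assumed values the sketch prints THREAD_SHIFT = 14, THREAD_SIZE =
16384 and THREAD_ALIGN = 32768, i.e. each kernel stack is a 16K region aligned
to 32K.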
1 parent 6382952 commit 0909c71

1 file changed: +1 -5 lines changed

arch/arm64/include/asm/memory.h

Lines changed: 1 addition & 5 deletions
@@ -118,7 +118,7 @@
  * VMAP'd stacks are allocated at page granularity, so we must ensure that such
  * stacks are a multiple of page size.
  */
-#if defined(CONFIG_VMAP_STACK) && (MIN_THREAD_SHIFT < PAGE_SHIFT)
+#if (MIN_THREAD_SHIFT < PAGE_SHIFT)
 #define THREAD_SHIFT PAGE_SHIFT
 #else
 #define THREAD_SHIFT MIN_THREAD_SHIFT
@@ -135,11 +135,7 @@
  * checking sp & (1 << THREAD_SHIFT), which we can do cheaply in the entry
  * assembly.
  */
-#ifdef CONFIG_VMAP_STACK
 #define THREAD_ALIGN (2 * THREAD_SIZE)
-#else
-#define THREAD_ALIGN THREAD_SIZE
-#endif
 
 #define IRQ_STACK_SIZE THREAD_SIZE
 
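The comment in the second hunk explains why THREAD_ALIGN is twice THREAD_SIZE:
with a THREAD_SIZE stack aligned to 2 * THREAD_SIZE, every stack pointer inside
the stack has bit THREAD_SHIFT clear, while a pointer that has slid below the
stack base (by up to THREAD_SIZE) has it set, so overflow can be caught with a
single bit test. The following user-space sketch models that property; it is an
illustration with an arbitrary example base address, not the arm64 entry
assembly that performs the real check.

/* Illustrative model of the cheap overflow test enabled by aligning stacks
 * to 2 * THREAD_SIZE. Assumed 16K stacks; not actual kernel code. */
#include <stdio.h>
#include <stdint.h>

#define THREAD_SHIFT 14
#define THREAD_SIZE  (1UL << THREAD_SHIFT)
#define THREAD_ALIGN (2 * THREAD_SIZE)

/* A valid SP in [base, base + THREAD_SIZE) has bit THREAD_SHIFT clear when
 * base is THREAD_ALIGN-aligned; an SP below the base has it set. */
static int sp_overflowed(uintptr_t sp)
{
	return (sp & (1UL << THREAD_SHIFT)) != 0;
}

int main(void)
{
	uintptr_t base = 0x40000;    /* hypothetical stack base, 32K-aligned */

	printf("sp inside the stack: overflowed = %d\n",
	       sp_overflowed(base + THREAD_SIZE - 256));
	printf("sp below the stack:  overflowed = %d\n",
	       sp_overflowed(base - 64));
	return 0;
}

Run as-is, the first test reports 0 and the second reports 1, mirroring the
sp & (1 << THREAD_SHIFT) check mentioned in the comment.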