Commit 980411a

powerpc/code-patching: Fix oops with DEBUG_VM enabled
Nathan reported that the new per-cpu mm patching oopses if DEBUG_VM is enabled:

  ------------[ cut here ]------------
  kernel BUG at arch/powerpc/mm/pgtable.c:333!
  Oops: Exception in kernel mode, sig: 5 [#1]
  LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA PowerNV
  Modules linked in:
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.1.0-rc2+ #1
  Hardware name: IBM PowerNV (emulated by qemu) POWER9 0x4e1200 opal:v7.0 PowerNV
  ...
  NIP assert_pte_locked+0x180/0x1a0
  LR assert_pte_locked+0x170/0x1a0
  Call Trace:
    0x60000000 (unreliable)
    patch_instruction+0x618/0x6d0
    arch_prepare_kprobe+0xfc/0x2d0
    register_kprobe+0x520/0x7c0
    arch_init_kprobes+0x28/0x3c
    init_kprobes+0x108/0x184
    do_one_initcall+0x60/0x2e0
    kernel_init_freeable+0x1f0/0x3e0
    kernel_init+0x34/0x1d0
    ret_from_kernel_thread+0x5c/0x64

It's caused by the assert_spin_locked() failing in assert_pte_locked(). The assert fails because the PTE was unlocked in text_area_cpu_up_mm(), and never relocked.

The PTE page shouldn't be freed: the patching_mm is only used for patching on this CPU, only that single PTE is ever mapped, and it's only unmapped at CPU offline.

In fact assert_pte_locked() has a special case to ignore init_mm entirely, and the patching_mm is more-or-less like init_mm, so possibly the check could be skipped for patching_mm too. But for now be conservative, and use the proper PTE accessors at patching time, so that the PTE lock is held while the PTE is used. That also avoids the warning in assert_pte_locked().

With that it's no longer necessary to save the PTE in cpu_patching_context for the mm_patch_enabled() case.

Fixes: c28c15b ("powerpc/code-patching: Use temporary mm for Radix MMU")
Reported-by: Nathan Chancellor <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
1 parent 1395937 commit 980411a

File tree

1 file changed: +7 −3 lines changed


arch/powerpc/lib/code-patching.c

Lines changed: 7 additions & 3 deletions
@@ -178,7 +178,6 @@ static int text_area_cpu_up_mm(unsigned int cpu)
 
 	this_cpu_write(cpu_patching_context.mm, mm);
 	this_cpu_write(cpu_patching_context.addr, addr);
-	this_cpu_write(cpu_patching_context.pte, pte);
 
 	return 0;
@@ -195,7 +194,6 @@ static int text_area_cpu_down_mm(unsigned int cpu)
 
 	this_cpu_write(cpu_patching_context.mm, NULL);
 	this_cpu_write(cpu_patching_context.addr, 0);
-	this_cpu_write(cpu_patching_context.pte, NULL);
 
 	return 0;
 }
@@ -289,12 +287,16 @@ static int __do_patch_instruction_mm(u32 *addr, ppc_inst_t instr)
 	unsigned long pfn = get_patch_pfn(addr);
 	struct mm_struct *patching_mm;
 	struct mm_struct *orig_mm;
+	spinlock_t *ptl;
 
 	patching_mm = __this_cpu_read(cpu_patching_context.mm);
-	pte = __this_cpu_read(cpu_patching_context.pte);
 	text_poke_addr = __this_cpu_read(cpu_patching_context.addr);
 	patch_addr = (u32 *)(text_poke_addr + offset_in_page(addr));
 
+	pte = get_locked_pte(patching_mm, text_poke_addr, &ptl);
+	if (!pte)
+		return -ENOMEM;
+
 	__set_pte_at(patching_mm, text_poke_addr, pte, pfn_pte(pfn, PAGE_KERNEL), 0);
 
 	/* order PTE update before use, also serves as the hwsync */
@@ -321,6 +323,8 @@ static int __do_patch_instruction_mm(u32 *addr, ppc_inst_t instr)
 	 */
 	local_flush_tlb_page_psize(patching_mm, text_poke_addr, mmu_virtual_psize);
 
+	pte_unmap_unlock(pte, ptl);
+
 	return err;
 }
