
Commit 3238465

torvalds authored and gregkh committed
xtensa: fix NOMMU build with lock_mm_and_find_vma() conversion
commit d85a143 upstream.

It turns out that xtensa has a really odd configuration situation: you
can do a no-MMU config, but still have the page fault code enabled.
Which doesn't sound all that sensible, but it turns out that xtensa can
have protection faults even without the MMU, and we have this:

    config PFAULT
        bool "Handle protection faults" if EXPERT && !MMU
        default y
        help
          Handle protection faults. MMU configurations must enable it.
          noMMU configurations may disable it if used memory map never
          generates protection faults or faults are always fatal.

          If unsure, say Y.

which completely violated my expectations of the page fault handling.

End result: Guenter reports that the xtensa no-MMU builds all fail with

    arch/xtensa/mm/fault.c: In function ‘do_page_fault’:
    arch/xtensa/mm/fault.c:133:8: error: implicit declaration of function ‘lock_mm_and_find_vma’

because I never exposed the new lock_mm_and_find_vma() function for the
no-MMU case. Doing so is simple enough, and fixes the problem.

Reported-and-tested-by: Guenter Roeck <[email protected]>
Fixes: a050ba1 ("mm/fault: convert remaining simple cases to lock_mm_and_find_vma()")
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
1 parent c2d8925 commit 3238465

File tree: 2 files changed, +14 -2 lines changed


include/linux/mm.h

Lines changed: 3 additions & 2 deletions
@@ -1921,6 +1921,9 @@ void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to);
 void truncate_pagecache_range(struct inode *inode, loff_t offset, loff_t end);
 int generic_error_remove_page(struct address_space *mapping, struct page *page);
 
+struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
+		unsigned long address, struct pt_regs *regs);
+
 #ifdef CONFIG_MMU
 extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
 		unsigned long address, unsigned int flags,
@@ -1932,8 +1935,6 @@ void unmap_mapping_pages(struct address_space *mapping,
 		pgoff_t start, pgoff_t nr, bool even_cows);
 void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
-struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
-		unsigned long address, struct pt_regs *regs);
 #else
 static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
 		unsigned long address, unsigned int flags,

mm/nommu.c

Lines changed: 11 additions & 0 deletions
@@ -681,6 +681,17 @@ struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
 }
 EXPORT_SYMBOL(find_vma);
 
+/*
+ * At least xtensa ends up having protection faults even with no
+ * MMU.. No stack expansion, at least.
+ */
+struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
+			unsigned long addr, struct pt_regs *regs)
+{
+	mmap_read_lock(mm);
+	return vma_lookup(mm, addr);
+}
+
 /*
  * expand a stack to a given address
  * - not supported under NOMMU conditions
