
Commit 246c320

kiryl authored and torvalds committed
mm/mmap.c: close race between munmap() and expand_upwards()/downwards()
VMA with VM_GROWSDOWN or VM_GROWSUP flag set can change their size under
mmap_read_lock().  It can lead to race with __do_munmap():

	Thread A			Thread B
__do_munmap()
  detach_vmas_to_be_unmapped()
  mmap_write_downgrade()
				expand_downwards()
				  vma->vm_start = address;
				  // The VMA now overlaps with
				  // VMAs detached by the Thread A
				// page fault populates expanded part
				// of the VMA
  unmap_region()
    // Zaps pagetables partly
    // populated by Thread B

Similar race exists for expand_upwards().

The fix is to avoid downgrading mmap_lock in __do_munmap() if detached
VMAs are next to VM_GROWSDOWN or VM_GROWSUP VMA.

[[email protected]: s/mmap_sem/mmap_lock/ in comment]

Fixes: dd2283f ("mm: mmap: zap pages with read mmap_sem in munmap")
Reported-by: Jann Horn <[email protected]>
Signed-off-by: Kirill A. Shutemov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: <[email protected]> [4.20+]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
1 parent f37e99a commit 246c320

File tree

1 file changed: +14 −2 lines changed

mm/mmap.c

Lines changed: 14 additions & 2 deletions
@@ -2620,7 +2620,7 @@ static void unmap_region(struct mm_struct *mm,
  * Create a list of vma's touched by the unmap, removing them from the mm's
  * vma list as we go..
  */
-static void
+static bool
 detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
 	struct vm_area_struct *prev, unsigned long end)
 {
@@ -2645,6 +2645,17 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	/* Kill the cache */
 	vmacache_invalidate(mm);
+
+	/*
+	 * Do not downgrade mmap_lock if we are next to VM_GROWSDOWN or
+	 * VM_GROWSUP VMA. Such VMAs can change their size under
+	 * down_read(mmap_lock) and collide with the VMA we are about to unmap.
+	 */
+	if (vma && (vma->vm_flags & VM_GROWSDOWN))
+		return false;
+	if (prev && (prev->vm_flags & VM_GROWSUP))
+		return false;
+	return true;
 }
 
 /*
@@ -2825,7 +2836,8 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	}
 
 	/* Detach vmas from rbtree */
-	detach_vmas_to_be_unmapped(mm, vma, prev, end);
+	if (!detach_vmas_to_be_unmapped(mm, vma, prev, end))
+		downgrade = false;
 
 	if (downgrade)
 		mmap_write_downgrade(mm);
