
Commit 12df140

rikvanriel authored and akpm00 committed
mm,hugetlb: take hugetlb_lock before decrementing h->resv_huge_pages
The h->*_huge_pages counters are protected by the hugetlb_lock, but alloc_huge_page has a corner case where it can decrement the counter outside of the lock. This could lead to a corrupted value of h->resv_huge_pages, which we have observed on our systems.

Take the hugetlb_lock before decrementing h->resv_huge_pages to avoid a potential race.

Link: https://lkml.kernel.org/r/[email protected]
Fixes: a88c769 ("mm: hugetlb: fix hugepage memory leak caused by wrong reserve count")
Signed-off-by: Rik van Riel <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Glen McCready <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent a57b705 commit 12df140
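
To see why the early decrement is dangerous, here is a minimal userspace sketch (illustrative only, not kernel code) of the lost-update race described in the commit message. The counter and lock names are hypothetical stand-ins for h->resv_huge_pages and hugetlb_lock, and pthreads stands in for the kernel spinlock.

/*
 * Two threads decrement a shared counter; one takes the lock, the
 * other does not. The decrement compiles to load/sub/store, so racing
 * threads can both load the same value and one update is lost.
 */
#include <pthread.h>
#include <stdio.h>

static long resv_huge_pages = 1000000;	/* stand-in for h->resv_huge_pages */
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

/* Correct pattern: the read-modify-write happens under the lock. */
static void *locked_decrement(void *arg)
{
	for (int i = 0; i < 500000; i++) {
		pthread_mutex_lock(&counter_lock);
		resv_huge_pages--;
		pthread_mutex_unlock(&counter_lock);
	}
	return NULL;
}

/* Buggy pattern: the same decrement outside the lock, analogous to
 * the pre-patch corner case in alloc_huge_page(). */
static void *unlocked_decrement(void *arg)
{
	for (int i = 0; i < 500000; i++)
		resv_huge_pages--;
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, locked_decrement, NULL);
	pthread_create(&b, NULL, unlocked_decrement, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* Expect 0; lost updates typically leave the counter above 0. */
	printf("resv_huge_pages = %ld (expected 0)\n", resv_huge_pages);
	return 0;
}

Build with "cc -pthread race.c" and run it a few times: the unlocked path corrupts the final count, which is exactly why the patch moves the spin_lock_irq(&hugetlb_lock) above the h->resv_huge_pages-- decrement.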

File tree

1 file changed (+1, -1)


mm/hugetlb.c

Lines changed: 1 addition & 1 deletion
@@ -2924,11 +2924,11 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 	page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
 	if (!page)
 		goto out_uncharge_cgroup;
+	spin_lock_irq(&hugetlb_lock);
 	if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
 		SetHPageRestoreReserve(page);
 		h->resv_huge_pages--;
 	}
-	spin_lock_irq(&hugetlb_lock);
 	list_add(&page->lru, &h->hugepage_activelist);
 	set_page_refcounted(page);
 	/* Fall through */
