
Commit 76ddaee

laoar authored and Kernel Patches Daemon committed
mm: thp: enable THP allocation exclusively through khugepaged
khugepaged_enter_vma() ultimately invokes any attached BPF function with the TVA_KHUGEPAGED flag set when determining whether to enable khugepaged THP for a freshly faulted-in VMA.

Currently, on fault, we invoke this in do_huge_pmd_anonymous_page(), as invoked by create_huge_pmd(), and only after we have already checked that an allowable TVA_PAGEFAULT order is specified. Since we might want to disallow THP on fault-in but allow it via khugepaged, we move things around so we always attempt to enter khugepaged upon fault.

This change is safe because:

- khugepaged operates at the MM level rather than per-VMA. Even if the THP allocation fails during the page fault due to transient conditions (e.g., memory pressure), it is safe to add this MM to khugepaged for subsequent defragmentation.

- If __thp_vma_allowable_orders(TVA_PAGEFAULT) returns 0, then __thp_vma_allowable_orders(TVA_KHUGEPAGED) will also return 0.

While we could also extend prctl() to utilize this new policy, such a change would require a uAPI modification to PR_SET_THP_DISABLE.

Signed-off-by: Yafang Shao <[email protected]>
Acked-by: Lance Yang <[email protected]>
Cc: Usama Arif <[email protected]>
1 parent f376e1c commit 76ddaee
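For context, here is a minimal sketch of the khugepaged entry gate this change relies on. It is paraphrased from mm/khugepaged.c; exact flag names and signatures vary across kernel versions, so treat it as an approximation rather than the verbatim implementation:

/* Sketch of khugepaged_enter_vma(): registering an MM with khugepaged
 * is idempotent, so calling this on every anonymous fault is cheap. */
void khugepaged_enter_vma(struct vm_area_struct *vma)
{
	/* MM already handed to khugepaged? Then this is a no-op. */
	if (test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags))
		return;

	/*
	 * TVA_KHUGEPAGED consults the THP policy (sysfs settings and,
	 * with this series, any attached BPF program) for collapse.
	 */
	if (thp_vma_allowable_order(vma, TVA_KHUGEPAGED, PMD_ORDER))
		__khugepaged_enter(vma->vm_mm);	/* register the whole MM */
}

Because the MMF_VM_HUGEPAGE test short-circuits the call, moving it into the fault path costs at most one bit test per fault once the MM is registered.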

File tree

2 files changed: +8 −6 lines changed


mm/huge_memory.c

Lines changed: 0 additions & 1 deletion

@@ -1346,7 +1346,6 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 	ret = vmf_anon_prepare(vmf);
 	if (ret)
 		return ret;
-	khugepaged_enter_vma(vma);
 
 	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
 	    !mm_forbids_zeropage(vma->vm_mm) &&

mm/memory.c

Lines changed: 8 additions & 5 deletions

@@ -6283,11 +6283,14 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	if (pud_trans_unstable(vmf.pud))
 		goto retry_pud;
 
-	if (pmd_none(*vmf.pmd) &&
-	    thp_vma_allowable_order(vma, TVA_PAGEFAULT, PMD_ORDER)) {
-		ret = create_huge_pmd(&vmf);
-		if (!(ret & VM_FAULT_FALLBACK))
-			return ret;
+	if (pmd_none(*vmf.pmd)) {
+		if (vma_is_anonymous(vma))
+			khugepaged_enter_vma(vma);
+		if (thp_vma_allowable_order(vma, TVA_PAGEFAULT, PMD_ORDER)) {
+			ret = create_huge_pmd(&vmf);
+			if (!(ret & VM_FAULT_FALLBACK))
+				return ret;
+		}
 	} else {
 		vmf.orig_pmd = pmdp_get_lockless(vmf.pmd);
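To see what the new placement enables, consider a purely illustrative policy that refuses THP at fault time but permits collapse by khugepaged. The helper name and the enum shape here are hypothetical, not the actual BPF interface from this series:

/* Hypothetical policy, for illustration only: deny THP on fault-in,
 * allow PMD-order collapse via khugepaged. Under the old code, the
 * khugepaged_enter_vma() call sat behind the TVA_PAGEFAULT check, so
 * an MM governed by this policy would never have been registered. */
static unsigned long example_thp_orders(struct vm_area_struct *vma,
					enum tva_type type)	/* hypothetical enum */
{
	switch (type) {
	case TVA_PAGEFAULT:
		return 0;			/* no THP allocation on fault */
	case TVA_KHUGEPAGED:
		return BIT(PMD_ORDER);		/* defer THP to khugepaged */
	default:
		return BIT(PMD_ORDER);
	}
}

With khugepaged_enter_vma() now called unconditionally for anonymous VMAs on a none PMD fault, an MM under such a policy still reaches khugepaged, which can collapse the range later.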

0 commit comments
