Commit 58f327f

ZhangPeng authored and akpm00 committed
filemap: avoid unnecessary major faults in filemap_fault()
A major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE) in an application, leading to an unexpected issue[1]. This is caused by the PTE being temporarily cleared during a read+clear/modify/write update of the PTE, e.g., in do_numa_page()/change_pte_range().

For the data segment of a user-mode program, the global variable area is a private mapping. After the pagecache is loaded, a private anonymous page is generated once COW is triggered. mlockall() can lock the COW pages (anonymous pages), but the original file pages cannot be locked and may be reclaimed. If the global variable (private anon page) is accessed while vmf->pte is cleared in a NUMA fault, a file page fault is triggered. At that point the original private file page may already have been reclaimed; if the pagecache is not available, a major fault is triggered and the file is read, causing additional overhead.

This issue affects our traffic analysis service, whose inbound traffic is heavy. When a major fault occurs, I/O scheduling is triggered and the original I/O is suspended. An I/O schedule typically takes 0.7 ms, but if other applications are operating on the disk, the system may have to wait for more than 10 ms. Because the inbound traffic is heavy and the NIC buffer is small, packet loss occurs, and the traffic analysis service cannot tolerate packet loss.

Fix this by holding the PTL and rechecking the PTE in filemap_fault() before triggering a major fault. We do this check only if the VMA is VM_LOCKED, to reduce the performance impact in common scenarios. In our production environment, there were 7 major faults every 12 hours; after the patch was applied, no major faults were triggered.

We tested file page read and write page fault performance in ext4 and on a ramdisk using will-it-scale[2] on an x86 physical machine. The data below is the average change compared with the mainline after the patch is applied; the results are within the range of fluctuation.
We do this check only if the VMA is VM_LOCKED, so no performance regression is caused for the most common cases. The test results are as follows:

                             processes  processes_idle  threads  threads_idle
ext4 private file write:       0.22%       0.26%         1.21%    -0.15%
ext4 private file read:        0.03%       1.00%         1.39%     0.34%
ext4 shared file write:       -0.50%      -0.02%        -0.14%    -0.02%
ramdisk private file write:    0.07%       0.02%         0.53%     0.04%
ramdisk private file read:     0.01%       1.60%        -0.32%    -0.02%

[1] https://lore.kernel.org/linux-mm/[email protected]/
[2] https://github.com/antonblanchard/will-it-scale/

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: ZhangPeng <[email protected]>
Signed-off-by: Kefeng Wang <[email protected]>
Suggested-by: "Huang, Ying" <[email protected]>
Suggested-by: David Hildenbrand <[email protected]>
Reviewed-by: "Huang, Ying" <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent 4839e79 commit 58f327f

File tree

1 file changed: +46, -0 lines changed


mm/filemap.c

Lines changed: 46 additions & 0 deletions
```diff
@@ -3181,6 +3181,48 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
 	return fpin;
 }
 
+static vm_fault_t filemap_fault_recheck_pte_none(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	vm_fault_t ret = 0;
+	pte_t *ptep;
+
+	/*
+	 * We might have COW'ed a pagecache folio and might now have an mlocked
+	 * anon folio mapped. The original pagecache folio is not mlocked and
+	 * might have been evicted. During a read+clear/modify/write update of
+	 * the PTE, such as done in do_numa_page()/change_pte_range(), we
+	 * temporarily clear the PTE under PT lock and might detect it here as
+	 * "none" when not holding the PT lock.
+	 *
+	 * Not rechecking the PTE under PT lock could result in an unexpected
+	 * major fault in an mlock'ed region. Recheck only for this special
+	 * scenario while holding the PT lock, to not degrade non-mlocked
+	 * scenarios. Recheck the PTE without PT lock firstly, thereby reducing
+	 * the number of times we hold PT lock.
+	 */
+	if (!(vma->vm_flags & VM_LOCKED))
+		return 0;
+
+	if (!(vmf->flags & FAULT_FLAG_ORIG_PTE_VALID))
+		return 0;
+
+	ptep = pte_offset_map_nolock(vma->vm_mm, vmf->pmd, vmf->address,
+				     &vmf->ptl);
+	if (unlikely(!ptep))
+		return VM_FAULT_NOPAGE;
+
+	if (unlikely(!pte_none(ptep_get_lockless(ptep)))) {
+		ret = VM_FAULT_NOPAGE;
+	} else {
+		spin_lock(vmf->ptl);
+		if (unlikely(!pte_none(ptep_get(ptep))))
+			ret = VM_FAULT_NOPAGE;
+		spin_unlock(vmf->ptl);
+	}
+	pte_unmap(ptep);
+	return ret;
+}
+
 /**
  * filemap_fault - read in file data for page fault handling
  * @vmf: struct vm_fault containing details of the fault
@@ -3236,6 +3278,10 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 			mapping_locked = true;
 		}
 	} else {
+		ret = filemap_fault_recheck_pte_none(vmf);
+		if (unlikely(ret))
+			return ret;
+
 		/* No page in the page cache at all */
 		count_vm_event(PGMAJFAULT);
 		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
```
