
Commit 93b3037

Peter Zijlstra authored and hansendc committed
mm: Update ptep_get_lockless()'s comment
Improve the comment.

Suggested-by: Matthew Wilcox <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/20221022114424.515572025%40infradead.org
1 parent 6046362 commit 93b3037

File tree

1 file changed: +6, -9 lines changed


include/linux/pgtable.h

Lines changed: 6 additions & 9 deletions
@@ -300,15 +300,12 @@ static inline pte_t ptep_get(pte_t *ptep)
 
 #ifdef CONFIG_GUP_GET_PTE_LOW_HIGH
 /*
- * WARNING: only to be used in the get_user_pages_fast() implementation.
- *
- * With get_user_pages_fast(), we walk down the pagetables without taking any
- * locks. For this we would like to load the pointers atomically, but sometimes
- * that is not possible (e.g. without expensive cmpxchg8b on x86_32 PAE). What
- * we do have is the guarantee that a PTE will only either go from not present
- * to present, or present to not present or both -- it will not switch to a
- * completely different present page without a TLB flush in between; something
- * that we are blocking by holding interrupts off.
+ * For walking the pagetables without holding any locks. Some architectures
+ * (eg x86-32 PAE) cannot load the entries atomically without using expensive
+ * instructions. We are guaranteed that a PTE will only either go from not
+ * present to present, or present to not present -- it will not switch to a
+ * completely different present page without a TLB flush inbetween; which we
+ * are blocking by holding interrupts off.
  *
  * Setting ptes from not present to present goes:
  *

0 commit comments