Commit a5ad574

Merge branch 'akpm' (patches from Andrew)
Merge even more updates from Andrew Morton:

 - a kernel-wide sweep of show_stack()
 - pagetable cleanups
 - abstract out accesses to mmap_sem - prep for mmap_sem scalability work
 - hch's user access work

Subsystems affected by this patch series: debug, mm/pagemap, mm/maccess,
mm/documentation.

* emailed patches from Andrew Morton <[email protected]>: (93 commits)
  include/linux/cache.h: expand documentation over __read_mostly
  maccess: return -ERANGE when probe_kernel_read() fails
  x86: use non-set_fs based maccess routines
  maccess: allow architectures to provide kernel probing directly
  maccess: move user access routines together
  maccess: always use strict semantics for probe_kernel_read
  maccess: remove strncpy_from_unsafe
  tracing/kprobes: handle mixed kernel/userspace probes better
  bpf: rework the compat kernel probe handling
  bpf: bpf_seq_printf(): handle potentially unsafe format string better
  bpf: handle the compat string in bpf_trace_copy_string better
  bpf: factor out a bpf_trace_copy_string helper
  maccess: unify the probe kernel arch hooks
  maccess: remove probe_read_common and probe_write_common
  maccess: rename strnlen_unsafe_user to strnlen_user_nofault
  maccess: rename strncpy_from_unsafe_strict to strncpy_from_kernel_nofault
  maccess: rename strncpy_from_unsafe_user to strncpy_from_user_nofault
  maccess: update the top of file comment
  maccess: clarify kerneldoc comments
  maccess: remove duplicate kerneldoc comments
  ...
2 parents 013b2de + 4fa7252

941 files changed (+2614, -3696 lines)


Documentation/admin-guide/mm/numa_memory_policy.rst

Lines changed: 5 additions & 5 deletions
@@ -364,19 +364,19 @@ follows:

 2) for querying the policy, we do not need to take an extra reference on the
    target task's task policy nor vma policies because we always acquire the
-   task's mm's mmap_sem for read during the query. The set_mempolicy() and
-   mbind() APIs [see below] always acquire the mmap_sem for write when
+   task's mm's mmap_lock for read during the query. The set_mempolicy() and
+   mbind() APIs [see below] always acquire the mmap_lock for write when
    installing or replacing task or vma policies. Thus, there is no possibility
    of a task or thread freeing a policy while another task or thread is
    querying it.

 3) Page allocation usage of task or vma policy occurs in the fault path where
-   we hold them mmap_sem for read. Again, because replacing the task or vma
-   policy requires that the mmap_sem be held for write, the policy can't be
+   we hold them mmap_lock for read. Again, because replacing the task or vma
+   policy requires that the mmap_lock be held for write, the policy can't be
    freed out from under us while we're using it for page allocation.

 4) Shared policies require special consideration. One task can replace a
-   shared memory policy while another task, with a distinct mmap_sem, is
+   shared memory policy while another task, with a distinct mmap_lock, is
    querying or allocating a page based on the policy. To resolve this
    potential race, the shared policy infrastructure adds an extra reference
    to the shared policy during lookup while holding a spin lock on the shared

Documentation/admin-guide/mm/userfaultfd.rst

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ memory ranges) provides two primary functionalities:
 The real advantage of userfaults if compared to regular virtual memory
 management of mremap/mprotect is that the userfaults in all their
 operations never involve heavyweight structures like vmas (in fact the
-``userfaultfd`` runtime load never takes the mmap_sem for writing).
+``userfaultfd`` runtime load never takes the mmap_lock for writing).

 Vmas are not suitable for page- (or hugepage) granular fault tracking
 when dealing with virtual address spaces that could span

Documentation/filesystems/locking.rst

Lines changed: 1 addition & 1 deletion
@@ -615,7 +615,7 @@ prototypes::
 locking rules:

 =============	========	===========================
-ops		mmap_sem	PageLocked(page)
+ops		mmap_lock	PageLocked(page)
 =============	========	===========================
 open:		yes
 close:		yes

Documentation/vm/hmm.rst

Lines changed: 3 additions & 3 deletions
@@ -191,15 +191,15 @@ The usage pattern is::

  again:
       range.notifier_seq = mmu_interval_read_begin(&interval_sub);
-      down_read(&mm->mmap_sem);
+      mmap_read_lock(mm);
       ret = hmm_range_fault(&range);
       if (ret) {
-          up_read(&mm->mmap_sem);
+          mmap_read_unlock(mm);
           if (ret == -EBUSY)
                  goto again;
           return ret;
       }
-      up_read(&mm->mmap_sem);
+      mmap_read_unlock(mm);

       take_lock(driver->update);
       if (mmu_interval_read_retry(&ni, range.notifier_seq) {

Documentation/vm/transhuge.rst

Lines changed: 2 additions & 2 deletions
@@ -98,9 +98,9 @@ split_huge_page() or split_huge_pmd() has a cost.

 To make pagetable walks huge pmd aware, all you need to do is to call
 pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
-mmap_sem in read (or write) mode to be sure a huge pmd cannot be
+mmap_lock in read (or write) mode to be sure a huge pmd cannot be
 created from under you by khugepaged (khugepaged collapse_huge_page
-takes the mmap_sem in write mode in addition to the anon_vma lock). If
+takes the mmap_lock in write mode in addition to the anon_vma lock). If
 pmd_trans_huge returns false, you just fallback in the old code
 paths. If instead pmd_trans_huge returns true, you have to take the
 page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the

arch/alpha/boot/bootp.c

Lines changed: 0 additions & 1 deletion
@@ -16,7 +16,6 @@

 #include <asm/console.h>
 #include <asm/hwrpb.h>
-#include <asm/pgtable.h>
 #include <asm/io.h>

 #include <stdarg.h>

arch/alpha/boot/bootpz.c

Lines changed: 0 additions & 1 deletion
@@ -18,7 +18,6 @@

 #include <asm/console.h>
 #include <asm/hwrpb.h>
-#include <asm/pgtable.h>
 #include <asm/io.h>

 #include <stdarg.h>

arch/alpha/boot/main.c

Lines changed: 0 additions & 1 deletion
@@ -14,7 +14,6 @@

 #include <asm/console.h>
 #include <asm/hwrpb.h>
-#include <asm/pgtable.h>

 #include <stdarg.h>

arch/alpha/include/asm/io.h

Lines changed: 0 additions & 1 deletion
@@ -7,7 +7,6 @@
 #include <linux/kernel.h>
 #include <linux/mm.h>
 #include <asm/compiler.h>
-#include <asm/pgtable.h>
 #include <asm/machvec.h>
 #include <asm/hwrpb.h>

arch/alpha/include/asm/pgtable.h

Lines changed: 2 additions & 14 deletions
@@ -276,15 +276,6 @@ extern inline pte_t pte_mkwrite(pte_t pte) { pte_val(pte) &= ~_PAGE_FOW; return
 extern inline pte_t pte_mkdirty(pte_t pte) { pte_val(pte) |= __DIRTY_BITS; return pte; }
 extern inline pte_t pte_mkyoung(pte_t pte) { pte_val(pte) |= __ACCESS_BITS; return pte; }

-#define PAGE_DIR_OFFSET(tsk,address) pgd_offset((tsk),(address))
-
-/* to find an entry in a kernel page-table-directory */
-#define pgd_offset_k(address) pgd_offset(&init_mm, (address))
-
-/* to find an entry in a page-table-directory. */
-#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
-#define pgd_offset(mm, address)	((mm)->pgd+pgd_index(address))
-
 /*
  * The smp_read_barrier_depends() in the following functions are required to
  * order the load of *dir (the pointer in the top level page table) with any
@@ -305,6 +296,7 @@ extern inline pmd_t * pmd_offset(pud_t * dir, unsigned long address)
 	smp_read_barrier_depends(); /* see above */
 	return ret;
 }
+#define pmd_offset pmd_offset

 /* Find an entry in the third-level page table.. */
 extern inline pte_t * pte_offset_kernel(pmd_t * dir, unsigned long address)
@@ -314,9 +306,7 @@ extern inline pte_t * pte_offset_kernel(pmd_t * dir, unsigned long address)
 	smp_read_barrier_depends(); /* see above */
 	return ret;
 }
-
-#define pte_offset_map(dir,addr) pte_offset_kernel((dir),(addr))
-#define pte_unmap(pte) do { } while (0)
+#define pte_offset_kernel pte_offset_kernel

 extern pgd_t swapper_pg_dir[1024];

@@ -355,8 +345,6 @@ extern inline pte_t mk_swap_pte(unsigned long type, unsigned long offset)

 extern void paging_init(void);

-#include <asm-generic/pgtable.h>
-
 /* We have our own get_unmapped_area to cope with ADDR_LIMIT_32BIT. */
 #define HAVE_ARCH_UNMAPPED_AREA
