
Commit 5168f6b

benzea authored and jmberg-intel committed
um: Do not flush MM in flush_thread
There should be no need to flush the memory in flush_thread. Doing this likely worked around some issue where memory was still incorrectly mapped when creating or cloning an MM. With the removal of the special clone path, that isn't relevant anymore. However, add the flush into MM initialization so that any new userspace MM is guaranteed to be clean.

Signed-off-by: Benjamin Berg <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Johannes Berg <[email protected]>
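In short, the change moves the address-space flush out of flush_thread() and into init_new_context(), so every newly created userspace MM starts out clean rather than being flushed on each exec. The sketch below is condensed from the diff that follows; error handling (the syscall_stub_flush() check and SIGKILL path) and surrounding context are omitted.

/* Before: arch/um/kernel/exec.c flushed the whole address space on exec */
void flush_thread(void)
{
	arch_flush_thread(&current->thread.arch);

	unmap(&current->mm->context.id, 0, TASK_SIZE);
	/* ... error handling and register setup ... */
}

/* After: arch/um/kernel/skas/mmu.c flushes once, when the MM is created */
int init_new_context(struct task_struct *task, struct mm_struct *mm)
{
	/* ... allocate and set up the new stub/MM context ... */
	unmap(new_id, 0, TASK_SIZE);

	return 0;
}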
1 parent 3c83170 commit 5168f6b

File tree

2 files changed (+24, -5 lines)

arch/um/kernel/exec.c

Lines changed: 0 additions & 5 deletions

@@ -24,11 +24,6 @@ void flush_thread(void)
 {
 	arch_flush_thread(&current->thread.arch);
 
-	unmap(&current->mm->context.id, 0, TASK_SIZE);
-	if (syscall_stub_flush(&current->mm->context.id) < 0) {
-		printk(KERN_ERR "%s - clearing address space failed", __func__);
-		force_sig(SIGKILL);
-	}
 	get_safe_registers(current_pt_regs()->regs.gp,
 			   current_pt_regs()->regs.fp);
 

arch/um/kernel/skas/mmu.c

Lines changed: 24 additions & 0 deletions

@@ -40,6 +40,30 @@ int init_new_context(struct task_struct *task, struct mm_struct *mm)
 		goto out_free;
 	}
 
+	/*
+	 * Ensure the new MM is clean and nothing unwanted is mapped.
+	 *
+	 * TODO: We should clear the memory up to STUB_START to ensure there is
+	 * nothing mapped there, i.e. we (currently) have:
+	 *
+	 * |- user memory -|- unused -|- stub -|- unused -|
+	 *                 ^ TASK_SIZE        ^ STUB_START
+	 *
+	 * Meaning we have two unused areas where we may still have valid
+	 * mappings from our internal clone(). That isn't really a problem as
+	 * userspace is not going to access them, but it is definitely not
+	 * correct.
+	 *
+	 * However, we are "lucky" and if rseq is configured, then on 32 bit
+	 * it will fall into the first empty range while on 64 bit it is going
+	 * to use an anonymous mapping in the second range. As such, things
+	 * continue to work for now as long as we don't start unmapping these
+	 * areas.
+	 *
+	 * Change this to STUB_START once we have a clean userspace.
+	 */
+	unmap(new_id, 0, TASK_SIZE);
+
 	return 0;
 
 out_free:
