Commit 922eea2

jbeulich authored and KAGA-KOKO committed
x86/xen/32: Simplify ring check in xen_iret_crit_fixup()

This can be had with two instead of six insns, by just checking the high CS.RPL bit.

Also adjust the comment - there would be no #GP in the mentioned cases, as there's no segment limit violation or alike. Instead there'd be #PF, but that one reports the target EIP of said branch, not the address of the branch insn itself.

Signed-off-by: Jan Beulich <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Juergen Gross <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
1 parent 29b810f commit 922eea2
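
For illustration only (not part of the commit): a minimal C sketch of why a single test of the high CS.RPL bit can replace the old mask-and-compare here. It assumes, as on 32-bit Xen PV, that the kernel runs in ring 1 (CS.RPL == 1) and userspace in ring 3 (CS.RPL == 3); the selector values below are made-up examples.

/*
 * Hypothetical sketch, not kernel code: the old sequence masked the RPL
 * field of the saved CS and compared it against USER_RPL; the new one
 * only tests the high RPL bit (bit 1), which is clear for RPL 0/1 and
 * set for RPL 3.
 */
#include <stdbool.h>
#include <stdio.h>

#define SEGMENT_RPL_MASK 0x3  /* low two bits of a selector: the RPL */
#define USER_RPL         0x3  /* userspace runs in ring 3 */

/* Old check: andl $SEGMENT_RPL_MASK, %ecx; cmpl $USER_RPL, %ecx; je 2f */
static bool is_user_old(unsigned int cs)
{
        return (cs & SEGMENT_RPL_MASK) == USER_RPL;
}

/* New check: testb $2, <saved CS>; jnz 2f */
static bool is_user_new(unsigned int cs)
{
        return (cs & 0x2) != 0;
}

int main(void)
{
        unsigned int kernel_cs = 0x60 | 1;  /* example selector, RPL 1 */
        unsigned int user_cs   = 0x70 | 3;  /* example selector, RPL 3 */

        printf("kernel CS: old=%d new=%d\n",
               is_user_old(kernel_cs), is_user_new(kernel_cs));
        printf("user   CS: old=%d new=%d\n",
               is_user_old(user_cs), is_user_new(user_cs));
        return 0;
}

The two checks only classify RPL 2 selectors differently, and those should not occur on this path. Since no scratch register is needed any more, the pushl/popl %ecx pair goes away as well, which is also why the saved CS is now read from 2*4(%esp) rather than 3*4(%esp).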

1 file changed: +4 −11 lines changed


arch/x86/xen/xen-asm_32.S

Lines changed: 4 additions & 11 deletions
@@ -153,22 +153,15 @@ hyper_iret:
  * it's still on stack), we need to restore its value here.
  */
 ENTRY(xen_iret_crit_fixup)
-        pushl %ecx
         /*
          * Paranoia: Make sure we're really coming from kernel space.
          * One could imagine a case where userspace jumps into the
          * critical range address, but just before the CPU delivers a
-         * GP, it decides to deliver an interrupt instead. Unlikely?
-         * Definitely. Easy to avoid? Yes. The Intel documents
-         * explicitly say that the reported EIP for a bad jump is the
-         * jump instruction itself, not the destination, but some
-         * virtual environments get this wrong.
+         * PF, it decides to deliver an interrupt instead. Unlikely?
+         * Definitely. Easy to avoid? Yes.
          */
-        movl 3*4(%esp), %ecx            /* nested CS */
-        andl $SEGMENT_RPL_MASK, %ecx
-        cmpl $USER_RPL, %ecx
-        popl %ecx
-        je 2f
+        testb $2, 2*4(%esp)             /* nested CS */
+        jnz 2f

         /*
          * If eip is before iret_restore_end then stack
