Commit 67de8dc

amluto authored and suryasaimadhu committed
x86/mmx: Use KFPU_387 for MMX string operations
The default kernel_fpu_begin() doesn't work on systems that support XMM but
haven't yet enabled CR4.OSFXSR. This causes crashes when _mmx_memcpy() is
called too early because LDMXCSR generates #UD when the aforementioned bit
is clear.

Fix it by using kernel_fpu_begin_mask(KFPU_387) explicitly.

Fixes: 7ad8167 ("x86/fpu: Reset MXCSR to default in kernel_fpu_begin()")
Reported-by: Krzysztof Mazur <[email protected]>
Signed-off-by: Andy Lutomirski <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Tested-by: Krzysztof Piotr Olędzki <[email protected]>
Tested-by: Krzysztof Mazur <[email protected]>
Cc: <[email protected]>
Link: https://lkml.kernel.org/r/e7bf21855fe99e5f3baa27446e32623358f69e8d.1611205691.git.luto@kernel.org
1 parent e451228 commit 67de8dc

File tree

1 file changed: +15 −5 lines changed


arch/x86/lib/mmx_32.c

Lines changed: 15 additions & 5 deletions
@@ -26,6 +26,16 @@
 #include <asm/fpu/api.h>
 #include <asm/asm.h>
 
+/*
+ * Use KFPU_387.  MMX instructions are not affected by MXCSR,
+ * but both AMD and Intel documentation states that even integer MMX
+ * operations will result in #MF if an exception is pending in FCW.
+ *
+ * EMMS is not needed afterwards because, after calling kernel_fpu_end(),
+ * any subsequent user of the 387 stack will reinitialize it using
+ * KFPU_387.
+ */
+
 void *_mmx_memcpy(void *to, const void *from, size_t len)
 {
 	void *p;
@@ -37,7 +47,7 @@ void *_mmx_memcpy(void *to, const void *from, size_t len)
 	p = to;
 	i = len >> 6; /* len/64 */
 
-	kernel_fpu_begin();
+	kernel_fpu_begin_mask(KFPU_387);
 
 	__asm__ __volatile__ (
 		"1: prefetch (%0)\n"	/* This set is 28 bytes */
@@ -127,7 +137,7 @@ static void fast_clear_page(void *page)
 {
 	int i;
 
-	kernel_fpu_begin();
+	kernel_fpu_begin_mask(KFPU_387);
 
 	__asm__ __volatile__ (
 		"  pxor %%mm0, %%mm0\n" : :
@@ -160,7 +170,7 @@ static void fast_copy_page(void *to, void *from)
 {
 	int i;
 
-	kernel_fpu_begin();
+	kernel_fpu_begin_mask(KFPU_387);
 
 	/*
 	 * maybe the prefetch stuff can go before the expensive fnsave...
@@ -247,7 +257,7 @@ static void fast_clear_page(void *page)
 {
 	int i;
 
-	kernel_fpu_begin();
+	kernel_fpu_begin_mask(KFPU_387);
 
 	__asm__ __volatile__ (
 		"  pxor %%mm0, %%mm0\n" : :
@@ -282,7 +292,7 @@ static void fast_copy_page(void *to, void *from)
 {
 	int i;
 
-	kernel_fpu_begin();
+	kernel_fpu_begin_mask(KFPU_387);
 
 	__asm__ __volatile__ (
 		"1: prefetch (%0)\n"
