Closed as not planned
Labels: llvm, missed-optimization, question

Description
When a "memory" clobber is used in inline assembly, loads through restrict pointers are incorrectly reordered to after the inline assembly. The GCC inline assembly documentation states:

"Further, the compiler does not assume that any values read from memory before an asm remain unchanged after that asm."

Therefore it is invalid to assume that the values pointed to by restrict pointers are unchanged if they are not passed as input operands.
Example C Code

#include <stdint.h>

void ipc_copy(uint64_t p, uint64_t *restrict a, uint64_t *restrict b) {
    uint64_t s0 = a[0];
    uint64_t s1 = a[1];
    uint64_t s2 = a[2];
    uint64_t s3 = a[3];
    __asm__ __volatile__("mov %0, %%cr3" :: "r"(p) : "memory");
    b[0] = s0;
    b[1] = s1;
    b[2] = s2;
    b[3] = s3;
}

Clang
ipc_copy:                               # @ipc_copy
        mov     cr3, rdi
        movups  xmm0, xmmword ptr [rsi]
        movups  xmmword ptr [rdx], xmm0
        movups  xmm0, xmmword ptr [rsi + 16]
        movups  xmmword ptr [rdx + 16], xmm0
        ret

Expected (Clang 14.0.0 / GCC)
ipc_copy:                               # @ipc_copy
        movups  xmm0, xmmword ptr [rsi]
        movups  xmm1, xmmword ptr [rsi + 16]
        mov     cr3, rdi
        movups  xmmword ptr [rdx], xmm0
        movups  xmmword ptr [rdx + 16], xmm1
        ret

Related: #15867
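
For reference, one possible workaround (a sketch, not part of the original report) is to make the dependency explicit by listing the source array as an in/out memory operand, so the compiler must assume the asm may modify a[0..3] and therefore cannot sink the loads past it. The function name ipc_copy_workaround and the array-of-4 cast are illustrative assumptions, not code from the report.

#include <stdint.h>

/* Sketch of a possible workaround: pass a[0..3] as a dummy in/out memory
 * operand so the compiler has to assume the asm may read or write it,
 * which keeps the loads of s0..s3 before the cr3 write. */
void ipc_copy_workaround(uint64_t p, uint64_t *restrict a, uint64_t *restrict b) {
    uint64_t s0 = a[0];
    uint64_t s1 = a[1];
    uint64_t s2 = a[2];
    uint64_t s3 = a[3];
    __asm__ __volatile__("mov %1, %%cr3"
                         : "+m"(*(uint64_t (*)[4])a) /* dummy operand covering a[0..3] */
                         : "r"(p)
                         : "memory");
    b[0] = s0;
    b[1] = s1;
    b[2] = s2;
    b[3] = s3;
}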