Missed optimization: store i1 %0, ptr @a, align 1 --> store i1 true, ptr @a, align 1
@a = external global i1
define i1 @src(ptr %G) {
BB:
%L = load i1, ptr %G, align 1
%0 = xor i1 %L, true
store i1 %0, ptr @a, align 1
call void @llvm.assume(i1 %0)
ret i1 %0
}

declare void @llvm.assume(i1 noundef)
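Since the assume is guaranteed to execute immediately after the store and nothing in between can change %0, the stored value must be true, so we would expect the store itself to be rewritten. A hand-written sketch of the expected result (the @tgt name is ours; the Alive2 proof below shows the transform is sound):

define i1 @tgt(ptr %G) {
BB:
  %L = load i1, ptr %G, align 1
  %0 = xor i1 %L, true
  store i1 true, ptr @a, align 1
  call void @llvm.assume(i1 %0)
  ret i1 %0
}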
clang-trunk -O3 generates:
define noundef i1 @src(ptr readonly captures(none) %G) local_unnamed_addr #0 {
%L = load i1, ptr %G, align 1
%0 = xor i1 %L, true
store i1 %0, ptr @a, align 1
tail call void @llvm.assume(i1 %0)
ret i1 true
}
Note that -O3 does use the assume to fold the return value to ret i1 true; only the store, which precedes the assume, is left untouched. We also found that ScalarEvolution makes use of the llvm.assume (it determines that the value of %L is false), but the store optimization is still missed:
Printing analysis 'Scalar Evolution Analysis' for function 'src':
Classifying expressions for: @src
%L = load i1, ptr %G, align 1
--> %L U: [0,-1) S: [0,-1)
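Spelling out that deduction on the reduced IR (the comments are ours; for an i1, the range [0,-1) contains only 0):

  %L = load i1, ptr %G, align 1    ; SCEV: %L can only be 0, i.e. false
  %0 = xor i1 %L, true             ; xor false, true == true
  store i1 %0, ptr @a, align 1     ; so this could become: store i1 true, ptr @a, align 1
  call void @llvm.assume(i1 %0)    ; the assume that establishes %L == false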
Godbolt: https://godbolt.org/z/bd9fEGe18
Alive2 proof: https://alive2.llvm.org/ce/z/AYhVHx
The reduced IR is derived from https://github.com/torvalds/linux/blob/f4d2ef48250ad057e4f00087967b5ff366da9f39/mm/page_alloc.c#L2201. We added the llvm.assume ourselves to restrict the variable's value and make debugging easier.