
Conversation

Contributor

@janagor janagor commented Mar 28, 2025

Fixes #131476

In EmitAtomicOp, for the AtomicLoad case, !range metadata is now added when a boolean type is encountered. This results in the generation of a trunc nuw instruction instead of a plain trunc.
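
As a minimal sketch (function name hypothetical), a relaxed atomic load of a bool now comes out roughly as follows, mirroring the clang/test/CodeGen/atomic-ops-load.c test added in this patch:

define i1 @load_bool(ptr %p) {
  ; The load of the underlying i8 now carries !range [0, 2) and !noundef,
  ; which lets the narrowing cast be emitted as trunc nuw instead of trunc.
  %v = load atomic i8, ptr %p monotonic, align 1, !range !0, !noundef !1
  %b = trunc nuw i8 %v to i1
  ret i1 %b
}

!0 = !{i8 0, i8 2}
!1 = !{}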

In RISCVTargetLowering::RISCVTargetLowering, sign- and zero-extending atomic loads were made legal.
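
For illustration, a sketch of the fold this enables (patterned after the llvm/test/CodeGen/RISCV/atomic-load-zext.ll test added in this patch; the function name is hypothetical):

define i64 @load_zext_i8_to_i64(ptr %p) {
  ; With ZEXTLOAD marked legal for atomic loads, DAGCombiner folds the zext
  ; into the load node, so riscv64 selection can use a single lbu (which
  ; zero-extends) rather than a load followed by a separate mask.
  %v = load atomic i8, ptr %p monotonic, align 1
  %z = zext i8 %v to i64
  ret i64 %z
}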

@github-actions

Thank you for submitting a Pull Request (PR) to the LLVM Project!

This PR will be automatically labeled and the relevant teams will be notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page.

If this is not working for you, it is probably because you do not have write permissions for the repository; in that case, you can instead tag reviewers by name in a comment, using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review by "pinging" it with a comment saying "Ping". The common courtesy ping rate is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide.

You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.

@llvmbot added the clang (Clang issues not falling into any other category), backend:RISC-V, and clang:codegen (IR generation bugs: mangling, exceptions, etc.) labels Mar 28, 2025
Member

llvmbot commented Mar 28, 2025

@llvm/pr-subscribers-clang

Author: Jan Górski (janagor)

Changes

Fixes #131476

In EmitAtomicOp, for the AtomicLoad case, !range metadata is now added when a boolean type is encountered. This results in the generation of a trunc nuw instruction instead of a plain trunc.

In RISCVTargetLowering::RISCVTargetLowering, sign- and zero-extending atomic loads were made legal.


Patch is 44.15 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/133406.diff

4 Files Affected:

  • (modified) clang/lib/CodeGen/CGAtomic.cpp (+23)
  • (added) clang/test/CodeGen/atomic-ops-load.c (+11)
  • (modified) llvm/lib/Target/RISCV/RISCVISelLowering.cpp (+9)
  • (added) llvm/test/CodeGen/RISCV/atomic-load-zext.ll (+1221)
diff --git a/clang/lib/CodeGen/CGAtomic.cpp b/clang/lib/CodeGen/CGAtomic.cpp
index 3adb2a7ad207f..70ae7180a9adc 100644
--- a/clang/lib/CodeGen/CGAtomic.cpp
+++ b/clang/lib/CodeGen/CGAtomic.cpp
@@ -590,6 +590,29 @@ static void EmitAtomicOp(CodeGenFunction &CGF, AtomicExpr *E, Address Dest,
     llvm::LoadInst *Load = CGF.Builder.CreateLoad(Ptr);
     Load->setAtomic(Order, Scope);
     Load->setVolatile(E->isVolatile());
+
+    if (CGF.CGM.getCodeGenOpts().OptimizationLevel == 0) {
+      CGF.Builder.CreateStore(Load, Dest);
+      return;
+    }
+
+    QualType Ty = E->getValueType();
+    if (!Ty->isBooleanType()) {
+      CGF.Builder.CreateStore(Load, Dest);
+      return;
+    }
+
+    llvm::MDBuilder MDHelper(CGF.getLLVMContext());
+    llvm::APInt BooleanMin = llvm::APInt(CGF.getContext().getTypeSize(Ty), 0);
+    llvm::APInt BooleanEnd = llvm::APInt(CGF.getContext().getTypeSize(Ty), 2);
+
+    if (llvm::MDNode *RangeInfo =
+        MDHelper.createRange(BooleanMin, BooleanEnd)) {
+      Load->setMetadata(llvm::LLVMContext::MD_range, RangeInfo);
+      Load->setMetadata(llvm::LLVMContext::MD_noundef,
+          llvm::MDNode::get(CGF.getLLVMContext(), {}));
+    }
+
     CGF.Builder.CreateStore(Load, Dest);
     return;
   }
diff --git a/clang/test/CodeGen/atomic-ops-load.c b/clang/test/CodeGen/atomic-ops-load.c
new file mode 100644
index 0000000000000..778a7ebdc2618
--- /dev/null
+++ b/clang/test/CodeGen/atomic-ops-load.c
@@ -0,0 +1,11 @@
+// RUN: %clang_cc1 -triple riscv64 -O1 -emit-llvm %s -o - | FileCheck %s
+#include <stdbool.h>
+
+extern bool t1;
+bool test1(void) {
+// CHECK-LABEL: define{{.*}} i1 @test1
+// CHECK: load atomic i8, ptr @t1 monotonic, align 1, !range ![[$WS_RANGE:[0-9]*]], !noundef !{{[0-9]+}}
+// CHECK-NEXT: trunc nuw i8 %{{.*}} to i1
+// CHECK-NEXT: ret i1 %{{.*}}
+  return __atomic_load_n(&t1, __ATOMIC_RELAXED);
+}
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 752fed3d479a2..21279c66e32e9 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -288,6 +288,15 @@ RISCVTargetLowering::RISCVTargetLowering(const TargetMachine &TM,
   setOperationAction(ISD::BRCOND, MVT::Other, Custom);
   setOperationAction(ISD::SELECT_CC, XLenVT, Expand);
 
+  // Mark sign/zero extending atomic loads as legal, which will make DAGCombiner
+  // fold extensions into atomic loads if possible.
+  setAtomicLoadExtAction({ISD::SEXTLOAD, ISD::ZEXTLOAD}, MVT::i64,
+                         {MVT::i8, MVT::i16, MVT::i32}, Legal);
+  setAtomicLoadExtAction({ISD::SEXTLOAD, ISD::ZEXTLOAD}, MVT::i32,
+                         {MVT::i8, MVT::i16}, Legal);
+  setAtomicLoadExtAction({ISD::SEXTLOAD, ISD::ZEXTLOAD}, MVT::i16, MVT::i8,
+                         Legal);
+
   setCondCodeAction(ISD::SETGT, XLenVT, Custom);
   setCondCodeAction(ISD::SETGE, XLenVT, Expand);
   setCondCodeAction(ISD::SETUGT, XLenVT, Custom);
diff --git a/llvm/test/CodeGen/RISCV/atomic-load-zext.ll b/llvm/test/CodeGen/RISCV/atomic-load-zext.ll
new file mode 100644
index 0000000000000..fa140cd26ffa0
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/atomic-load-zext.ll
@@ -0,0 +1,1221 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc -mtriple=riscv32 -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefix=RV32I %s
+; RUN: llc -mtriple=riscv32 -mattr=+a,+no-trailing-seq-cst-fence -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV32IA,RV32IA-WMO %s
+; RUN: llc -mtriple=riscv32 -mattr=+a,+ztso,+no-trailing-seq-cst-fence -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV32IA,RV32IA-TSO %s
+; RUN: llc -mtriple=riscv64 -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefix=RV64I %s
+; RUN: llc -mtriple=riscv64 -mattr=+a,+no-trailing-seq-cst-fence -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV64IA,RV64IA-WMO %s
+; RUN: llc -mtriple=riscv64 -mattr=+a,+ztso,+no-trailing-seq-cst-fence -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV64IA,RV64IA-TSO %s
+
+
+; RUN: llc -mtriple=riscv32 -mattr=+a -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV32IA,RV32IA-WMO-TRAILING-FENCE %s
+; RUN: llc -mtriple=riscv32 -mattr=+a,+ztso -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV32IA,RV32IA-TSO-TRAILING-FENCE %s
+
+; RUN: llc -mtriple=riscv64 -mattr=+a -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV64IA,RV64IA-WMO-TRAILING-FENCE %s
+; RUN: llc -mtriple=riscv64 -mattr=+a,+ztso -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV64IA,RV64IA-TSO-TRAILING-FENCE %s
+
+; RUN: llc -mtriple=riscv32 -mattr=+a,+experimental-zalasr -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV32IA,RV32IA-ZALASR,RV32IA-ZALASR-WMO %s
+; RUN: llc -mtriple=riscv32 -mattr=+a,+experimental-zalasr,+ztso -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV32IA,RV32IA-ZALASR,RV32IA-ZALASR-TSO %s
+
+; RUN: llc -mtriple=riscv64 -mattr=+a,+experimental-zalasr -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV64IA,RV64IA-ZALASR,RV64IA-ZALASR-WMO %s
+; RUN: llc -mtriple=riscv64 -mattr=+a,+experimental-zalasr,+ztso -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV64IA,RV64IA-ZALASR,RV64IA-ZALASR-TSO %s
+
+define zeroext i1 @atomic_load_i1_unordered(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i1_unordered:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 0
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 1
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-LABEL: atomic_load_i1_unordered:
+; RV32IA:       # %bb.0:
+; RV32IA-NEXT:    lbu a0, 0(a0)
+; RV32IA-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i1_unordered:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 0
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 1
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-LABEL: atomic_load_i1_unordered:
+; RV64IA:       # %bb.0:
+; RV64IA-NEXT:    lbu a0, 0(a0)
+; RV64IA-NEXT:    ret
+  %1 = load atomic i8, ptr %a unordered, align 1
+  %2 = trunc nuw i8 %1 to i1
+  ret i1 %2
+}
+
+define zeroext i1 @atomic_load_i1_monotonic(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i1_monotonic:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 0
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 1
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-LABEL: atomic_load_i1_monotonic:
+; RV32IA:       # %bb.0:
+; RV32IA-NEXT:    lbu a0, 0(a0)
+; RV32IA-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i1_monotonic:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 0
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 1
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-LABEL: atomic_load_i1_monotonic:
+; RV64IA:       # %bb.0:
+; RV64IA-NEXT:    lbu a0, 0(a0)
+; RV64IA-NEXT:    ret
+  %1 = load atomic i8, ptr %a monotonic, align 1
+  %2 = trunc nuw i8 %1 to i1
+  ret i1 %2
+}
+
+define zeroext i1 @atomic_load_i1_acquire(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i1_acquire:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 2
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 1
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-WMO-LABEL: atomic_load_i1_acquire:
+; RV32IA-WMO:       # %bb.0:
+; RV32IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-NEXT:    fence r, rw
+; RV32IA-WMO-NEXT:    ret
+;
+; RV32IA-TSO-LABEL: atomic_load_i1_acquire:
+; RV32IA-TSO:       # %bb.0:
+; RV32IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV32IA-TSO-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i1_acquire:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 2
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 1
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-WMO-LABEL: atomic_load_i1_acquire:
+; RV64IA-WMO:       # %bb.0:
+; RV64IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV64IA-WMO-NEXT:    fence r, rw
+; RV64IA-WMO-NEXT:    ret
+;
+; RV64IA-TSO-LABEL: atomic_load_i1_acquire:
+; RV64IA-TSO:       # %bb.0:
+; RV64IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV64IA-TSO-NEXT:    ret
+;
+; RV32IA-WMO-TRAILING-FENCE-LABEL: atomic_load_i1_acquire:
+; RV32IA-WMO-TRAILING-FENCE:       # %bb.0:
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    fence r, rw
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    ret
+;
+; RV32IA-TSO-TRAILING-FENCE-LABEL: atomic_load_i1_acquire:
+; RV32IA-TSO-TRAILING-FENCE:       # %bb.0:
+; RV32IA-TSO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV32IA-TSO-TRAILING-FENCE-NEXT:    ret
+;
+; RV64IA-WMO-TRAILING-FENCE-LABEL: atomic_load_i1_acquire:
+; RV64IA-WMO-TRAILING-FENCE:       # %bb.0:
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    fence r, rw
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    ret
+;
+; RV64IA-TSO-TRAILING-FENCE-LABEL: atomic_load_i1_acquire:
+; RV64IA-TSO-TRAILING-FENCE:       # %bb.0:
+; RV64IA-TSO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV64IA-TSO-TRAILING-FENCE-NEXT:    ret
+;
+; RV32IA-ZALASR-LABEL: atomic_load_i1_acquire:
+; RV32IA-ZALASR:       # %bb.0:
+; RV32IA-ZALASR-NEXT:    lbu a0, 0(a0)
+; RV32IA-ZALASR-NEXT:    ret
+;
+; RV64IA-ZALASR-LABEL: atomic_load_i1_acquire:
+; RV64IA-ZALASR:       # %bb.0:
+; RV64IA-ZALASR-NEXT:    lbu a0, 0(a0)
+; RV64IA-ZALASR-NEXT:    ret
+  %1 = load atomic i8, ptr %a acquire, align 1
+  %2 = trunc nuw i8 %1 to i1
+  ret i1 %2
+}
+
+define zeroext i1 @atomic_load_i1_seq_cst(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i1_seq_cst:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 5
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 1
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-WMO-LABEL: atomic_load_i1_seq_cst:
+; RV32IA-WMO:       # %bb.0:
+; RV32IA-WMO-NEXT:    fence rw, rw
+; RV32IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-NEXT:    fence r, rw
+; RV32IA-WMO-NEXT:    ret
+;
+; RV32IA-TSO-LABEL: atomic_load_i1_seq_cst:
+; RV32IA-TSO:       # %bb.0:
+; RV32IA-TSO-NEXT:    fence rw, rw
+; RV32IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV32IA-TSO-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i1_seq_cst:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 5
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 1
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-WMO-LABEL: atomic_load_i1_seq_cst:
+; RV64IA-WMO:       # %bb.0:
+; RV64IA-WMO-NEXT:    fence rw, rw
+; RV64IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV64IA-WMO-NEXT:    fence r, rw
+; RV64IA-WMO-NEXT:    ret
+;
+; RV64IA-TSO-LABEL: atomic_load_i1_seq_cst:
+; RV64IA-TSO:       # %bb.0:
+; RV64IA-TSO-NEXT:    fence rw, rw
+; RV64IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV64IA-TSO-NEXT:    ret
+;
+; RV32IA-WMO-TRAILING-FENCE-LABEL: atomic_load_i1_seq_cst:
+; RV32IA-WMO-TRAILING-FENCE:       # %bb.0:
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    fence rw, rw
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    fence r, rw
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    ret
+;
+; RV32IA-TSO-TRAILING-FENCE-LABEL: atomic_load_i1_seq_cst:
+; RV32IA-TSO-TRAILING-FENCE:       # %bb.0:
+; RV32IA-TSO-TRAILING-FENCE-NEXT:    fence rw, rw
+; RV32IA-TSO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV32IA-TSO-TRAILING-FENCE-NEXT:    ret
+;
+; RV64IA-WMO-TRAILING-FENCE-LABEL: atomic_load_i1_seq_cst:
+; RV64IA-WMO-TRAILING-FENCE:       # %bb.0:
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    fence rw, rw
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    fence r, rw
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    ret
+;
+; RV64IA-TSO-TRAILING-FENCE-LABEL: atomic_load_i1_seq_cst:
+; RV64IA-TSO-TRAILING-FENCE:       # %bb.0:
+; RV64IA-TSO-TRAILING-FENCE-NEXT:    fence rw, rw
+; RV64IA-TSO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV64IA-TSO-TRAILING-FENCE-NEXT:    ret
+;
+; RV32IA-ZALASR-LABEL: atomic_load_i1_seq_cst:
+; RV32IA-ZALASR:       # %bb.0:
+; RV32IA-ZALASR-NEXT:    lbu a0, 0(a0)
+; RV32IA-ZALASR-NEXT:    ret
+;
+; RV64IA-ZALASR-LABEL: atomic_load_i1_seq_cst:
+; RV64IA-ZALASR:       # %bb.0:
+; RV64IA-ZALASR-NEXT:    lbu a0, 0(a0)
+; RV64IA-ZALASR-NEXT:    ret
+  %1 = load atomic i8, ptr %a seq_cst, align 1
+  %2 = trunc nuw i8 %1 to i1
+  ret i1 %2
+}
+
+define zeroext i8 @atomic_load_i8_unordered(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i8_unordered:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 0
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 255
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-LABEL: atomic_load_i8_unordered:
+; RV32IA:       # %bb.0:
+; RV32IA-NEXT:    lbu a0, 0(a0)
+; RV32IA-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i8_unordered:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 0
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 255
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-LABEL: atomic_load_i8_unordered:
+; RV64IA:       # %bb.0:
+; RV64IA-NEXT:    lbu a0, 0(a0)
+; RV64IA-NEXT:    ret
+  %1 = load atomic i8, ptr %a unordered, align 1
+  ret i8 %1
+}
+
+define zeroext i8 @atomic_load_i8_monotonic(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i8_monotonic:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 0
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 255
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-LABEL: atomic_load_i8_monotonic:
+; RV32IA:       # %bb.0:
+; RV32IA-NEXT:    lbu a0, 0(a0)
+; RV32IA-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i8_monotonic:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 0
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 255
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-LABEL: atomic_load_i8_monotonic:
+; RV64IA:       # %bb.0:
+; RV64IA-NEXT:    lbu a0, 0(a0)
+; RV64IA-NEXT:    ret
+  %1 = load atomic i8, ptr %a monotonic, align 1
+  ret i8 %1
+}
+
+define zeroext i8 @atomic_load_i8_acquire(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i8_acquire:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 2
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 255
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-WMO-LABEL: atomic_load_i8_acquire:
+; RV32IA-WMO:       # %bb.0:
+; RV32IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-NEXT:    fence r, rw
+; RV32IA-WMO-NEXT:    ret
+;
+; RV32IA-TSO-LABEL: atomic_load_i8_acquire:
+; RV32IA-TSO:       # %bb.0:
+; RV32IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV32IA-TSO-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i8_acquire:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 2
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 255
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-WMO-LABEL: atomic_load_i8_acquire:
+; RV64IA-WMO:       # %bb.0:
+; RV64IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV64IA-WMO-NEXT:    fence r, rw
+; RV64IA-WMO-NEXT:    ret
+;
+; RV64IA-TSO-LABEL: atomic_load_i8_acquire:
+; RV64IA-TSO:       # %bb.0:
+; RV64IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV64IA-TSO-NEXT:    ret
+;
+; RV32IA-WMO-TRAILING-FENCE-LABEL: atomic_load_i8_acquire:
+; RV32IA-WMO-TRAILING-FENCE:       # %bb.0:
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    fence r, rw
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    ret
+;
+; RV32IA-TSO-TRAILING-FENCE-LABEL: atomic_load_i8_acquire:
+; RV32IA-TSO-TRAILING-FENCE:       # %bb.0:
+; RV32IA-TSO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV32IA-TSO-TRAILING-FENCE-NEXT:    ret
+;
+; RV64IA-WMO-TRAILING-FENCE-LABEL: atomic_load_i8_acquire:
+; RV64IA-WMO-TRAILING-FENCE:       # %bb.0:
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    fence r, rw
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    ret
+;
+; RV64IA-TSO-TRAILING-FENCE-LABEL: atomic_load_i8_acquire:
+; RV64IA-TSO-TRAILING-FENCE:       # %bb.0:
+; RV64IA-TSO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV64IA-TSO-TRAILING-FENCE-NEXT:    ret
+;
+; RV32IA-ZALASR-LABEL: atomic_load_i8_acquire:
+; RV32IA-ZALASR:       # %bb.0:
+; RV32IA-ZALASR-NEXT:    lbu a0, 0(a0)
+; RV32IA-ZALASR-NEXT:    ret
+;
+; RV64IA-ZALASR-LABEL: atomic_load_i8_acquire:
+; RV64IA-ZALASR:       # %bb.0:
+; RV64IA-ZALASR-NEXT:    lbu a0, 0(a0)
+; RV64IA-ZALASR-NEXT:    ret
+  %1 = load atomic i8, ptr %a acquire, align 1
+  ret i8 %1
+}
+
+define zeroext i8 @atomic_load_i8_seq_cst(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i8_seq_cst:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 5
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 255
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-WMO-LABEL: atomic_load_i8_seq_cst:
+; RV32IA-WMO:       # %bb.0:
+; RV32IA-WMO-NEXT:    fence rw, rw
+; RV32IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-NEXT:    fence r, rw
+; RV32IA-WMO-NEXT:    ret
+;
+; RV32IA-TSO-LABEL: atomic_load_i8_seq_cst:
+; RV32IA-TSO:       # %bb.0:
+; RV32IA-TSO-NEXT:    fence rw, rw
+; RV32IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV32IA-TSO-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i8_seq_cst:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 5
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 255
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-WMO-LABEL: atomic_load_i8_seq_cst:
+; RV64IA-WMO:       # %bb.0:
+; RV64IA-WMO-NEXT:    fence rw, rw
+; RV64IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV64IA-WMO-NEXT:    fence r, rw
+; RV64IA-WMO-NEXT:    ret
+;
+; RV64IA-TSO-LABEL: atomic_load_i8_seq_cst:
+; RV64IA-TSO:       # %bb.0:
+; RV64IA-TSO-NEXT:    fence rw, rw
+; RV64IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV64IA-TSO-NEXT:    ret
+;
+; RV32IA-WMO-TRAILING-FENCE-LABEL: atomic_load_i8_seq_cst:
+; RV32IA-WMO-TRAILING-FENCE:       # %bb.0:
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    fence rw, rw
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    fence r, rw
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    ret
+;
+; RV32IA-TSO-TRAILING-FENCE-LABEL: atomic_load_i8_seq_cst:
+; RV32IA-TSO-TRAILING-FENCE: ...
[truncated]

Member

llvmbot commented Mar 28, 2025

@llvm/pr-subscribers-backend-risc-v

Author: Jan Górski (janagor)


Member

llvmbot commented Mar 28, 2025

@llvm/pr-subscribers-clang-codegen

Author: Jan Górski (janagor)

Changes

Fixes #131476

In EmitAtomicOp for AtomicLoad case added !range metadata when encountered boolean type. This results in in generation trunc nuw instruction instead of trunc.

In RISCVTargetLowering::RISCVTargetLowering made extending of atomic loads legal.


Patch is 44.15 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/133406.diff

4 Files Affected:

  • (modified) clang/lib/CodeGen/CGAtomic.cpp (+23)
  • (added) clang/test/CodeGen/atomic-ops-load.c (+11)
  • (modified) llvm/lib/Target/RISCV/RISCVISelLowering.cpp (+9)
  • (added) llvm/test/CodeGen/RISCV/atomic-load-zext.ll (+1221)
diff --git a/clang/lib/CodeGen/CGAtomic.cpp b/clang/lib/CodeGen/CGAtomic.cpp
index 3adb2a7ad207f..70ae7180a9adc 100644
--- a/clang/lib/CodeGen/CGAtomic.cpp
+++ b/clang/lib/CodeGen/CGAtomic.cpp
@@ -590,6 +590,29 @@ static void EmitAtomicOp(CodeGenFunction &CGF, AtomicExpr *E, Address Dest,
     llvm::LoadInst *Load = CGF.Builder.CreateLoad(Ptr);
     Load->setAtomic(Order, Scope);
     Load->setVolatile(E->isVolatile());
+
+    if (CGF.CGM.getCodeGenOpts().OptimizationLevel == 0) {
+      CGF.Builder.CreateStore(Load, Dest);
+      return;
+    }
+
+    QualType Ty = E->getValueType();
+    if (!Ty->isBooleanType()) {
+      CGF.Builder.CreateStore(Load, Dest);
+      return;
+    }
+
+    llvm::MDBuilder MDHelper(CGF.getLLVMContext());
+    llvm::APInt BooleanMin = llvm::APInt(CGF.getContext().getTypeSize(Ty), 0);
+    llvm::APInt BooleanEnd = llvm::APInt(CGF.getContext().getTypeSize(Ty), 2);
+
+    if (llvm::MDNode *RangeInfo =
+        MDHelper.createRange(BooleanMin, BooleanEnd)) {
+      Load->setMetadata(llvm::LLVMContext::MD_range, RangeInfo);
+      Load->setMetadata(llvm::LLVMContext::MD_noundef,
+          llvm::MDNode::get(CGF.getLLVMContext(), {}));
+    }
+
     CGF.Builder.CreateStore(Load, Dest);
     return;
   }
diff --git a/clang/test/CodeGen/atomic-ops-load.c b/clang/test/CodeGen/atomic-ops-load.c
new file mode 100644
index 0000000000000..778a7ebdc2618
--- /dev/null
+++ b/clang/test/CodeGen/atomic-ops-load.c
@@ -0,0 +1,11 @@
+// RUN: %clang_cc1 -triple riscv64 -O1 -emit-llvm %s -o - | FileCheck %s
+#include <stdbool.h>
+
+extern bool t1;
+bool test1(void) {
+// CHECK-LABEL: define{{.*}} i1 @test1
+// CHECK: load atomic i8, ptr @t1 monotonic, align 1, !range ![[$WS_RANGE:[0-9]*]], !noundef !{{[0-9]+}}
+// CHECK-NEXT: trunc nuw i8 %{{.*}} to i1
+// CHECK-NEXT: ret i1 %{{.*}}
+  return __atomic_load_n(&t1, __ATOMIC_RELAXED);
+}
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 752fed3d479a2..21279c66e32e9 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -288,6 +288,15 @@ RISCVTargetLowering::RISCVTargetLowering(const TargetMachine &TM,
   setOperationAction(ISD::BRCOND, MVT::Other, Custom);
   setOperationAction(ISD::SELECT_CC, XLenVT, Expand);
 
+  // Mark sign/zero extending atomic loads as legal, which will make DAGCombiner
+  // fold extensions into atomic loads if possible.
+  setAtomicLoadExtAction({ISD::SEXTLOAD, ISD::ZEXTLOAD}, MVT::i64,
+                         {MVT::i8, MVT::i16, MVT::i32}, Legal);
+  setAtomicLoadExtAction({ISD::SEXTLOAD, ISD::ZEXTLOAD}, MVT::i32,
+                         {MVT::i8, MVT::i16}, Legal);
+  setAtomicLoadExtAction({ISD::SEXTLOAD, ISD::ZEXTLOAD}, MVT::i16, MVT::i8,
+                         Legal);
+
   setCondCodeAction(ISD::SETGT, XLenVT, Custom);
   setCondCodeAction(ISD::SETGE, XLenVT, Expand);
   setCondCodeAction(ISD::SETUGT, XLenVT, Custom);
diff --git a/llvm/test/CodeGen/RISCV/atomic-load-zext.ll b/llvm/test/CodeGen/RISCV/atomic-load-zext.ll
new file mode 100644
index 0000000000000..fa140cd26ffa0
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/atomic-load-zext.ll
@@ -0,0 +1,1221 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc -mtriple=riscv32 -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefix=RV32I %s
+; RUN: llc -mtriple=riscv32 -mattr=+a,+no-trailing-seq-cst-fence -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV32IA,RV32IA-WMO %s
+; RUN: llc -mtriple=riscv32 -mattr=+a,+ztso,+no-trailing-seq-cst-fence -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV32IA,RV32IA-TSO %s
+; RUN: llc -mtriple=riscv64 -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefix=RV64I %s
+; RUN: llc -mtriple=riscv64 -mattr=+a,+no-trailing-seq-cst-fence -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV64IA,RV64IA-WMO %s
+; RUN: llc -mtriple=riscv64 -mattr=+a,+ztso,+no-trailing-seq-cst-fence -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV64IA,RV64IA-TSO %s
+
+
+; RUN: llc -mtriple=riscv32 -mattr=+a -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV32IA,RV32IA-WMO-TRAILING-FENCE %s
+; RUN: llc -mtriple=riscv32 -mattr=+a,+ztso -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV32IA,RV32IA-TSO-TRAILING-FENCE %s
+
+; RUN: llc -mtriple=riscv64 -mattr=+a -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV64IA,RV64IA-WMO-TRAILING-FENCE %s
+; RUN: llc -mtriple=riscv64 -mattr=+a,+ztso -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV64IA,RV64IA-TSO-TRAILING-FENCE %s
+
+; RUN: llc -mtriple=riscv32 -mattr=+a,+experimental-zalasr -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV32IA,RV32IA-ZALASR,RV32IA-ZALASR-WMO %s
+; RUN: llc -mtriple=riscv32 -mattr=+a,+experimental-zalasr,+ztso -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV32IA,RV32IA-ZALASR,RV32IA-ZALASR-TSO %s
+
+; RUN: llc -mtriple=riscv64 -mattr=+a,+experimental-zalasr -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV64IA,RV64IA-ZALASR,RV64IA-ZALASR-WMO %s
+; RUN: llc -mtriple=riscv64 -mattr=+a,+experimental-zalasr,+ztso -verify-machineinstrs < %s \
+; RUN:   | FileCheck -check-prefixes=RV64IA,RV64IA-ZALASR,RV64IA-ZALASR-TSO %s
+
+define zeroext i1 @atomic_load_i1_unordered(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i1_unordered:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 0
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 1
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-LABEL: atomic_load_i1_unordered:
+; RV32IA:       # %bb.0:
+; RV32IA-NEXT:    lbu a0, 0(a0)
+; RV32IA-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i1_unordered:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 0
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 1
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-LABEL: atomic_load_i1_unordered:
+; RV64IA:       # %bb.0:
+; RV64IA-NEXT:    lbu a0, 0(a0)
+; RV64IA-NEXT:    ret
+  %1 = load atomic i8, ptr %a unordered, align 1
+  %2 = trunc nuw i8 %1 to i1
+  ret i1 %2
+}
+
+define zeroext i1 @atomic_load_i1_monotonic(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i1_monotonic:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 0
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 1
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-LABEL: atomic_load_i1_monotonic:
+; RV32IA:       # %bb.0:
+; RV32IA-NEXT:    lbu a0, 0(a0)
+; RV32IA-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i1_monotonic:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 0
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 1
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-LABEL: atomic_load_i1_monotonic:
+; RV64IA:       # %bb.0:
+; RV64IA-NEXT:    lbu a0, 0(a0)
+; RV64IA-NEXT:    ret
+  %1 = load atomic i8, ptr %a monotonic, align 1
+  %2 = trunc nuw i8 %1 to i1
+  ret i1 %2
+}
+
+define zeroext i1 @atomic_load_i1_acquire(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i1_acquire:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 2
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 1
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-WMO-LABEL: atomic_load_i1_acquire:
+; RV32IA-WMO:       # %bb.0:
+; RV32IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-NEXT:    fence r, rw
+; RV32IA-WMO-NEXT:    ret
+;
+; RV32IA-TSO-LABEL: atomic_load_i1_acquire:
+; RV32IA-TSO:       # %bb.0:
+; RV32IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV32IA-TSO-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i1_acquire:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 2
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 1
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-WMO-LABEL: atomic_load_i1_acquire:
+; RV64IA-WMO:       # %bb.0:
+; RV64IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV64IA-WMO-NEXT:    fence r, rw
+; RV64IA-WMO-NEXT:    ret
+;
+; RV64IA-TSO-LABEL: atomic_load_i1_acquire:
+; RV64IA-TSO:       # %bb.0:
+; RV64IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV64IA-TSO-NEXT:    ret
+;
+; RV32IA-WMO-TRAILING-FENCE-LABEL: atomic_load_i1_acquire:
+; RV32IA-WMO-TRAILING-FENCE:       # %bb.0:
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    fence r, rw
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    ret
+;
+; RV32IA-TSO-TRAILING-FENCE-LABEL: atomic_load_i1_acquire:
+; RV32IA-TSO-TRAILING-FENCE:       # %bb.0:
+; RV32IA-TSO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV32IA-TSO-TRAILING-FENCE-NEXT:    ret
+;
+; RV64IA-WMO-TRAILING-FENCE-LABEL: atomic_load_i1_acquire:
+; RV64IA-WMO-TRAILING-FENCE:       # %bb.0:
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    fence r, rw
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    ret
+;
+; RV64IA-TSO-TRAILING-FENCE-LABEL: atomic_load_i1_acquire:
+; RV64IA-TSO-TRAILING-FENCE:       # %bb.0:
+; RV64IA-TSO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV64IA-TSO-TRAILING-FENCE-NEXT:    ret
+;
+; RV32IA-ZALASR-LABEL: atomic_load_i1_acquire:
+; RV32IA-ZALASR:       # %bb.0:
+; RV32IA-ZALASR-NEXT:    lbu a0, 0(a0)
+; RV32IA-ZALASR-NEXT:    ret
+;
+; RV64IA-ZALASR-LABEL: atomic_load_i1_acquire:
+; RV64IA-ZALASR:       # %bb.0:
+; RV64IA-ZALASR-NEXT:    lbu a0, 0(a0)
+; RV64IA-ZALASR-NEXT:    ret
+  %1 = load atomic i8, ptr %a acquire, align 1
+  %2 = trunc nuw i8 %1 to i1
+  ret i1 %2
+}
+
+define zeroext i1 @atomic_load_i1_seq_cst(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i1_seq_cst:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 5
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 1
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-WMO-LABEL: atomic_load_i1_seq_cst:
+; RV32IA-WMO:       # %bb.0:
+; RV32IA-WMO-NEXT:    fence rw, rw
+; RV32IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-NEXT:    fence r, rw
+; RV32IA-WMO-NEXT:    ret
+;
+; RV32IA-TSO-LABEL: atomic_load_i1_seq_cst:
+; RV32IA-TSO:       # %bb.0:
+; RV32IA-TSO-NEXT:    fence rw, rw
+; RV32IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV32IA-TSO-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i1_seq_cst:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 5
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 1
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-WMO-LABEL: atomic_load_i1_seq_cst:
+; RV64IA-WMO:       # %bb.0:
+; RV64IA-WMO-NEXT:    fence rw, rw
+; RV64IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV64IA-WMO-NEXT:    fence r, rw
+; RV64IA-WMO-NEXT:    ret
+;
+; RV64IA-TSO-LABEL: atomic_load_i1_seq_cst:
+; RV64IA-TSO:       # %bb.0:
+; RV64IA-TSO-NEXT:    fence rw, rw
+; RV64IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV64IA-TSO-NEXT:    ret
+;
+; RV32IA-WMO-TRAILING-FENCE-LABEL: atomic_load_i1_seq_cst:
+; RV32IA-WMO-TRAILING-FENCE:       # %bb.0:
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    fence rw, rw
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    fence r, rw
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    ret
+;
+; RV32IA-TSO-TRAILING-FENCE-LABEL: atomic_load_i1_seq_cst:
+; RV32IA-TSO-TRAILING-FENCE:       # %bb.0:
+; RV32IA-TSO-TRAILING-FENCE-NEXT:    fence rw, rw
+; RV32IA-TSO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV32IA-TSO-TRAILING-FENCE-NEXT:    ret
+;
+; RV64IA-WMO-TRAILING-FENCE-LABEL: atomic_load_i1_seq_cst:
+; RV64IA-WMO-TRAILING-FENCE:       # %bb.0:
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    fence rw, rw
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    fence r, rw
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    ret
+;
+; RV64IA-TSO-TRAILING-FENCE-LABEL: atomic_load_i1_seq_cst:
+; RV64IA-TSO-TRAILING-FENCE:       # %bb.0:
+; RV64IA-TSO-TRAILING-FENCE-NEXT:    fence rw, rw
+; RV64IA-TSO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV64IA-TSO-TRAILING-FENCE-NEXT:    ret
+;
+; RV32IA-ZALASR-LABEL: atomic_load_i1_seq_cst:
+; RV32IA-ZALASR:       # %bb.0:
+; RV32IA-ZALASR-NEXT:    lbu a0, 0(a0)
+; RV32IA-ZALASR-NEXT:    ret
+;
+; RV64IA-ZALASR-LABEL: atomic_load_i1_seq_cst:
+; RV64IA-ZALASR:       # %bb.0:
+; RV64IA-ZALASR-NEXT:    lbu a0, 0(a0)
+; RV64IA-ZALASR-NEXT:    ret
+  %1 = load atomic i8, ptr %a seq_cst, align 1
+  %2 = trunc nuw i8 %1 to i1
+  ret i1 %2
+}
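+
+; Note: relative to acquire, the seq_cst test above adds a leading
+; "fence rw, rw" before the load on the WMO, TSO, and trailing-fence
+; configurations; the WMO variants also keep the trailing "fence r, rw".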
+
+define zeroext i8 @atomic_load_i8_unordered(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i8_unordered:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 0
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 255
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-LABEL: atomic_load_i8_unordered:
+; RV32IA:       # %bb.0:
+; RV32IA-NEXT:    lbu a0, 0(a0)
+; RV32IA-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i8_unordered:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 0
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 255
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-LABEL: atomic_load_i8_unordered:
+; RV64IA:       # %bb.0:
+; RV64IA-NEXT:    lbu a0, 0(a0)
+; RV64IA-NEXT:    ret
+  %1 = load atomic i8, ptr %a unordered, align 1
+  ret i8 %1
+}
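+
+; Note: for plain i8 loads the shape is analogous, with the libcall paths
+; masking via "andi a0, a0, 255" for the zeroext i8 ABI; "lbu" on the IA
+; paths already zero-extends the loaded byte, so no mask is needed.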
+
+define zeroext i8 @atomic_load_i8_monotonic(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i8_monotonic:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 0
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 255
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-LABEL: atomic_load_i8_monotonic:
+; RV32IA:       # %bb.0:
+; RV32IA-NEXT:    lbu a0, 0(a0)
+; RV32IA-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i8_monotonic:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 0
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 255
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-LABEL: atomic_load_i8_monotonic:
+; RV64IA:       # %bb.0:
+; RV64IA-NEXT:    lbu a0, 0(a0)
+; RV64IA-NEXT:    ret
+  %1 = load atomic i8, ptr %a monotonic, align 1
+  ret i8 %1
+}
+
+define zeroext i8 @atomic_load_i8_acquire(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i8_acquire:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 2
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 255
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-WMO-LABEL: atomic_load_i8_acquire:
+; RV32IA-WMO:       # %bb.0:
+; RV32IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-NEXT:    fence r, rw
+; RV32IA-WMO-NEXT:    ret
+;
+; RV32IA-TSO-LABEL: atomic_load_i8_acquire:
+; RV32IA-TSO:       # %bb.0:
+; RV32IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV32IA-TSO-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i8_acquire:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 2
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 255
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-WMO-LABEL: atomic_load_i8_acquire:
+; RV64IA-WMO:       # %bb.0:
+; RV64IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV64IA-WMO-NEXT:    fence r, rw
+; RV64IA-WMO-NEXT:    ret
+;
+; RV64IA-TSO-LABEL: atomic_load_i8_acquire:
+; RV64IA-TSO:       # %bb.0:
+; RV64IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV64IA-TSO-NEXT:    ret
+;
+; RV32IA-WMO-TRAILING-FENCE-LABEL: atomic_load_i8_acquire:
+; RV32IA-WMO-TRAILING-FENCE:       # %bb.0:
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    fence r, rw
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    ret
+;
+; RV32IA-TSO-TRAILING-FENCE-LABEL: atomic_load_i8_acquire:
+; RV32IA-TSO-TRAILING-FENCE:       # %bb.0:
+; RV32IA-TSO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV32IA-TSO-TRAILING-FENCE-NEXT:    ret
+;
+; RV64IA-WMO-TRAILING-FENCE-LABEL: atomic_load_i8_acquire:
+; RV64IA-WMO-TRAILING-FENCE:       # %bb.0:
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    fence r, rw
+; RV64IA-WMO-TRAILING-FENCE-NEXT:    ret
+;
+; RV64IA-TSO-TRAILING-FENCE-LABEL: atomic_load_i8_acquire:
+; RV64IA-TSO-TRAILING-FENCE:       # %bb.0:
+; RV64IA-TSO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV64IA-TSO-TRAILING-FENCE-NEXT:    ret
+;
+; RV32IA-ZALASR-LABEL: atomic_load_i8_acquire:
+; RV32IA-ZALASR:       # %bb.0:
+; RV32IA-ZALASR-NEXT:    lbu a0, 0(a0)
+; RV32IA-ZALASR-NEXT:    ret
+;
+; RV64IA-ZALASR-LABEL: atomic_load_i8_acquire:
+; RV64IA-ZALASR:       # %bb.0:
+; RV64IA-ZALASR-NEXT:    lbu a0, 0(a0)
+; RV64IA-ZALASR-NEXT:    ret
+  %1 = load atomic i8, ptr %a acquire, align 1
+  ret i8 %1
+}
+
+define zeroext i8 @atomic_load_i8_seq_cst(ptr %a) nounwind {
+; RV32I-LABEL: atomic_load_i8_seq_cst:
+; RV32I:       # %bb.0:
+; RV32I-NEXT:    addi sp, sp, -16
+; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
+; RV32I-NEXT:    li a1, 5
+; RV32I-NEXT:    call __atomic_load_1
+; RV32I-NEXT:    andi a0, a0, 255
+; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
+; RV32I-NEXT:    addi sp, sp, 16
+; RV32I-NEXT:    ret
+;
+; RV32IA-WMO-LABEL: atomic_load_i8_seq_cst:
+; RV32IA-WMO:       # %bb.0:
+; RV32IA-WMO-NEXT:    fence rw, rw
+; RV32IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-NEXT:    fence r, rw
+; RV32IA-WMO-NEXT:    ret
+;
+; RV32IA-TSO-LABEL: atomic_load_i8_seq_cst:
+; RV32IA-TSO:       # %bb.0:
+; RV32IA-TSO-NEXT:    fence rw, rw
+; RV32IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV32IA-TSO-NEXT:    ret
+;
+; RV64I-LABEL: atomic_load_i8_seq_cst:
+; RV64I:       # %bb.0:
+; RV64I-NEXT:    addi sp, sp, -16
+; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
+; RV64I-NEXT:    li a1, 5
+; RV64I-NEXT:    call __atomic_load_1
+; RV64I-NEXT:    andi a0, a0, 255
+; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
+; RV64I-NEXT:    addi sp, sp, 16
+; RV64I-NEXT:    ret
+;
+; RV64IA-WMO-LABEL: atomic_load_i8_seq_cst:
+; RV64IA-WMO:       # %bb.0:
+; RV64IA-WMO-NEXT:    fence rw, rw
+; RV64IA-WMO-NEXT:    lbu a0, 0(a0)
+; RV64IA-WMO-NEXT:    fence r, rw
+; RV64IA-WMO-NEXT:    ret
+;
+; RV64IA-TSO-LABEL: atomic_load_i8_seq_cst:
+; RV64IA-TSO:       # %bb.0:
+; RV64IA-TSO-NEXT:    fence rw, rw
+; RV64IA-TSO-NEXT:    lbu a0, 0(a0)
+; RV64IA-TSO-NEXT:    ret
+;
+; RV32IA-WMO-TRAILING-FENCE-LABEL: atomic_load_i8_seq_cst:
+; RV32IA-WMO-TRAILING-FENCE:       # %bb.0:
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    fence rw, rw
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    lbu a0, 0(a0)
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    fence r, rw
+; RV32IA-WMO-TRAILING-FENCE-NEXT:    ret
+;
+; RV32IA-TSO-TRAILING-FENCE-LABEL: atomic_load_i8_seq_cst:
+; RV32IA-TSO-TRAILING-FENCE: ...
[truncated]
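
For context, the IR shape these backend tests exercise is the one the Clang half of the patch emits: the atomic i8 load of a boolean is annotated with !range and !noundef metadata, which is what justifies the `trunc nuw` visible in the checks above. Before the patch the load carried no metadata, the trunc had no nuw flag, and the backend had to keep a masking instruction. A minimal hand-written sketch (the function name is hypothetical, not taken from the patch's test files):

define zeroext i1 @load_bool_acquire(ptr %p) {
  %v = load atomic i8, ptr %p acquire, align 1, !range !0, !noundef !1
  %b = trunc nuw i8 %v to i1
  ret i1 %b
}
!0 = !{i8 0, i8 2} ; loaded value is known to be 0 or 1
!1 = !{}

Together with the backend change that makes extending atomic loads legal, this lets the RISC-V lowering rely on "lbu" alone for the zero-extension, which is exactly what the RV*IA check lines above assert.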

@topperc
Collaborator

topperc commented Mar 28, 2025

This should be two different PRs: one for clang and one for the RISCV backend. The reviewers for each area are different.

@janagor
Copy link
Contributor Author

janagor commented Mar 28, 2025

Ah, I see, thanks for the info. I will close this PR and split it into two separate PRs.


Development

Successfully merging this pull request may close these issues:
[RISC-V] extra andi instruction when loading a bool atomically