
Conversation

@tschuett

Please continue padding merge values.

ag G_MERGE_VALUES llvm/test/CodeGen/AArch64/GlobalISel

Known bits for G_SBFX are unsound.
* KnownBits ShiftKnown = KnownBits::sub(ExtKnown, WidthKnown);
* WidthKnown.getBitWidth() is not guaranteed to be BitWidth.

```cpp
 getActionDefinitionsBuilder({G_SBFX, G_UBFX})
      .legalFor({{S32, S32}, {S64, S32}})
      .clampScalar(1, S32, S32)
      .clampScalar(0, S32, S64)
      .widenScalarToNextPow2(0)
      .scalarize(0);
```

Wait for it: known bits in the AMDGPUPostLegalizerCombiner crashed.
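
A minimal sketch of the failure mode, for illustration only (this is not code from GISelKnownBits.cpp, and the chosen widths are assumptions based on the AMDGPU rules above): with an s64 destination and an s32 width operand, the two KnownBits values have different bit widths, and the APInt math behind KnownBits asserts.

```cpp
#include "llvm/Support/KnownBits.h"
using namespace llvm;

// Mixing KnownBits of different widths, as can happen for G_SBFX on AMDGPU
// (s64 destination, s32 width operand), trips the APInt assertion
// "Bit widths must be the same".
void sbfxWidthMismatchSketch() {
  KnownBits ExtKnown(64);   // width taken from the s64 destination (assumed)
  KnownBits WidthKnown(32); // width taken from the s32 width operand (assumed)
  // Running the line quoted above with these operands aborts with
  // "Bit widths must be the same":
  KnownBits ShiftKnown = KnownBits::sub(ExtKnown, WidthKnown);
  (void)ShiftKnown;
}
```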

Fixes llvm#113067
@llvmbot
Member

llvmbot commented Oct 22, 2024

@llvm/pr-subscribers-llvm-globalisel

@llvm/pr-subscribers-backend-aarch64

Author: Thorsten Schütt (tschuett)

Changes

Please continue padding merge values.

ag G_MERGE_VALUES llvm/test/CodeGen/AArch64/GlobalISel

Known bits for G_SBFX are unsound.

  • KnownBits ShiftKnown = KnownBits::sub(ExtKnown, WidthKnown);
  • WidthKnown.getBitWidth() is not guaranteed to be BitWidth.

```cpp
 getActionDefinitionsBuilder({G_SBFX, G_UBFX})
      .legalFor({{S32, S32}, {S64, S32}})
      .clampScalar(1, S32, S32)
      .clampScalar(0, S32, S64)
      .widenScalarToNextPow2(0)
      .scalarize(0);
```

Wait for it: known bits in the AMDGPUPostLegalizerCombiner crashed.

Fixes #113067


Patch is 75.15 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/113381.diff

10 Files Affected:

  • (modified) llvm/include/llvm/Target/GlobalISel/Combine.td (+2-1)
  • (modified) llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp (+5-2)
  • (modified) llvm/lib/CodeGen/GlobalISel/GISelKnownBits.cpp (+10-2)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-unmerge.mir (+9-16)
  • (modified) llvm/test/CodeGen/AArch64/bswap.ll (+1-6)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/ashr.ll (+98-113)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/lshr.ll (+71-150)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/sext_inreg.ll (+57-95)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/shl.ll (+67-86)
  • (modified) llvm/test/CodeGen/AMDGPU/fptoi.i128.ll (+134-216)
diff --git a/llvm/include/llvm/Target/GlobalISel/Combine.td b/llvm/include/llvm/Target/GlobalISel/Combine.td
index ead4149fc11068..257322f985b530 100644
--- a/llvm/include/llvm/Target/GlobalISel/Combine.td
+++ b/llvm/include/llvm/Target/GlobalISel/Combine.td
@@ -420,7 +420,8 @@ def unary_undef_to_zero: GICombineRule<
 // replaced with undef.
 def propagate_undef_any_op: GICombineRule<
   (defs root:$root),
-  (match (wip_match_opcode G_ADD, G_FPTOSI, G_FPTOUI, G_SUB, G_XOR, G_TRUNC, G_BITCAST, G_ANYEXT):$root,
+  (match (wip_match_opcode G_ADD, G_FPTOSI, G_FPTOUI, G_SUB, G_XOR, G_TRUNC, G_BITCAST,
+                           G_ANYEXT, G_MERGE_VALUES):$root,
          [{ return Helper.matchAnyExplicitUseIsUndef(*${root}); }]),
   (apply [{ Helper.replaceInstWithUndef(*${root}); }])>;
 
diff --git a/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp b/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp
index b7ddf9f479ef8e..397023070aceea 100644
--- a/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp
@@ -2935,8 +2935,11 @@ void CombinerHelper::replaceInstWithFConstant(MachineInstr &MI,
 
 void CombinerHelper::replaceInstWithUndef(MachineInstr &MI) {
   assert(MI.getNumDefs() == 1 && "Expected only one def?");
-  Builder.buildUndef(MI.getOperand(0));
-  MI.eraseFromParent();
+  if (isLegalOrBeforeLegalizer({TargetOpcode::G_IMPLICIT_DEF,
+                                {MRI.getType(MI.getOperand(0).getReg())}})) {
+    Builder.buildUndef(MI.getOperand(0));
+    MI.eraseFromParent();
+  }
 }
 
 bool CombinerHelper::matchSimplifyAddToSub(
diff --git a/llvm/lib/CodeGen/GlobalISel/GISelKnownBits.cpp b/llvm/lib/CodeGen/GlobalISel/GISelKnownBits.cpp
index 2c98b129a1a892..7a8bd4d7912ed6 100644
--- a/llvm/lib/CodeGen/GlobalISel/GISelKnownBits.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/GISelKnownBits.cpp
@@ -159,16 +159,17 @@ void GISelKnownBits::computeKnownBitsImpl(Register R, KnownBits &Known,
   }
 #endif
 
+  unsigned BitWidth = DstTy.getScalarSizeInBits();
+
   // Handle the case where this is called on a register that does not have a
   // type constraint (i.e. it has a register class constraint instead). This is
   // unlikely to occur except by looking through copies but it is possible for
   // the initial register being queried to be in this state.
   if (!DstTy.isValid()) {
-    Known = KnownBits();
+    Known = KnownBits(BitWidth); // Don't know anything
     return;
   }
 
-  unsigned BitWidth = DstTy.getScalarSizeInBits();
   auto CacheEntry = ComputeKnownBitsCache.find(R);
   if (CacheEntry != ComputeKnownBitsCache.end()) {
     Known = CacheEntry->second;
@@ -200,6 +201,8 @@ void GISelKnownBits::computeKnownBitsImpl(Register R, KnownBits &Known,
     TL.computeKnownBitsForTargetInstr(*this, R, Known, DemandedElts, MRI,
                                       Depth);
     break;
+  case TargetOpcode::G_IMPLICIT_DEF:
+    break;
   case TargetOpcode::G_BUILD_VECTOR: {
     // Collect the known bits that are shared by every demanded vector element.
     Known.Zero.setAllBits(); Known.One.setAllBits();
@@ -579,6 +582,8 @@ void GISelKnownBits::computeKnownBitsImpl(Register R, KnownBits &Known,
     break;
   }
   case TargetOpcode::G_SBFX: {
+    // FIXME: the three parameters do not have the same types and bitwidths.
+    break;
     KnownBits SrcOpKnown, OffsetKnown, WidthKnown;
     computeKnownBitsImpl(MI.getOperand(1).getReg(), SrcOpKnown, DemandedElts,
                          Depth + 1);
@@ -586,6 +591,7 @@ void GISelKnownBits::computeKnownBitsImpl(Register R, KnownBits &Known,
                          Depth + 1);
     computeKnownBitsImpl(MI.getOperand(3).getReg(), WidthKnown, DemandedElts,
                          Depth + 1);
+
     Known = extractBits(BitWidth, SrcOpKnown, OffsetKnown, WidthKnown);
     // Sign extend the extracted value using shift left and arithmetic shift
     // right.
@@ -627,6 +633,8 @@ void GISelKnownBits::computeKnownBitsImpl(Register R, KnownBits &Known,
   }
   }
 
+  assert(Known.getBitWidth() == BitWidth && "Bit widths must be the same");
+
   LLVM_DEBUG(dumpResult(MI, Known, Depth));
 
   // Update the cache.
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-unmerge.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-unmerge.mir
index 7566d38e6c6cfa..b9d21890f855a6 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-unmerge.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-unmerge.mir
@@ -10,9 +10,8 @@ body:             |
   bb.1:
     ; CHECK-LABEL: name: test_combine_unmerge_merge
     ; CHECK: [[DEF:%[0-9]+]]:_(s32) = G_IMPLICIT_DEF
-    ; CHECK-NEXT: [[DEF1:%[0-9]+]]:_(s32) = G_IMPLICIT_DEF
     ; CHECK-NEXT: $w0 = COPY [[DEF]](s32)
-    ; CHECK-NEXT: $w1 = COPY [[DEF1]](s32)
+    ; CHECK-NEXT: $w1 = COPY [[DEF]](s32)
     %0:_(s32) = G_IMPLICIT_DEF
     %1:_(s32) = G_IMPLICIT_DEF
     %2:_(s64) = G_MERGE_VALUES %0(s32), %1(s32)
@@ -30,11 +29,9 @@ body:             |
   bb.1:
     ; CHECK-LABEL: name: test_combine_unmerge_merge_3ops
     ; CHECK: [[DEF:%[0-9]+]]:_(s32) = G_IMPLICIT_DEF
-    ; CHECK-NEXT: [[DEF1:%[0-9]+]]:_(s32) = G_IMPLICIT_DEF
-    ; CHECK-NEXT: [[DEF2:%[0-9]+]]:_(s32) = G_IMPLICIT_DEF
     ; CHECK-NEXT: $w0 = COPY [[DEF]](s32)
-    ; CHECK-NEXT: $w1 = COPY [[DEF1]](s32)
-    ; CHECK-NEXT: $w2 = COPY [[DEF2]](s32)
+    ; CHECK-NEXT: $w1 = COPY [[DEF]](s32)
+    ; CHECK-NEXT: $w2 = COPY [[DEF]](s32)
     %0:_(s32) = G_IMPLICIT_DEF
     %1:_(s32) = G_IMPLICIT_DEF
     %5:_(s32) = G_IMPLICIT_DEF
@@ -115,9 +112,8 @@ body:             |
   bb.1:
     ; CHECK-LABEL: name: test_combine_unmerge_bitcast_merge
     ; CHECK: [[DEF:%[0-9]+]]:_(s32) = G_IMPLICIT_DEF
-    ; CHECK-NEXT: [[DEF1:%[0-9]+]]:_(s32) = G_IMPLICIT_DEF
     ; CHECK-NEXT: $w0 = COPY [[DEF]](s32)
-    ; CHECK-NEXT: $w1 = COPY [[DEF1]](s32)
+    ; CHECK-NEXT: $w1 = COPY [[DEF]](s32)
     %0:_(s32) = G_IMPLICIT_DEF
     %1:_(s32) = G_IMPLICIT_DEF
     %2:_(s64) = G_MERGE_VALUES %0(s32), %1(s32)
@@ -135,14 +131,11 @@ name:            test_combine_unmerge_merge_incompatible_types
 body:             |
   bb.1:
     ; CHECK-LABEL: name: test_combine_unmerge_merge_incompatible_types
-    ; CHECK: [[DEF:%[0-9]+]]:_(s32) = G_IMPLICIT_DEF
-    ; CHECK-NEXT: [[DEF1:%[0-9]+]]:_(s32) = G_IMPLICIT_DEF
-    ; CHECK-NEXT: [[MV:%[0-9]+]]:_(s64) = G_MERGE_VALUES [[DEF]](s32), [[DEF1]](s32)
-    ; CHECK-NEXT: [[UV:%[0-9]+]]:_(s16), [[UV1:%[0-9]+]]:_(s16), [[UV2:%[0-9]+]]:_(s16), [[UV3:%[0-9]+]]:_(s16) = G_UNMERGE_VALUES [[MV]](s64)
-    ; CHECK-NEXT: $h0 = COPY [[UV]](s16)
-    ; CHECK-NEXT: $h1 = COPY [[UV1]](s16)
-    ; CHECK-NEXT: $h2 = COPY [[UV2]](s16)
-    ; CHECK-NEXT: $h3 = COPY [[UV3]](s16)
+    ; CHECK: [[DEF:%[0-9]+]]:_(s16) = G_IMPLICIT_DEF
+    ; CHECK-NEXT: $h0 = COPY [[DEF]](s16)
+    ; CHECK-NEXT: $h1 = COPY [[DEF]](s16)
+    ; CHECK-NEXT: $h2 = COPY [[DEF]](s16)
+    ; CHECK-NEXT: $h3 = COPY [[DEF]](s16)
     %0:_(s32) = G_IMPLICIT_DEF
     %1:_(s32) = G_IMPLICIT_DEF
     %2:_(s64) = G_MERGE_VALUES %0(s32), %1(s32)
diff --git a/llvm/test/CodeGen/AArch64/bswap.ll b/llvm/test/CodeGen/AArch64/bswap.ll
index 74e4a167ae14ca..f9bf326b61cff8 100644
--- a/llvm/test/CodeGen/AArch64/bswap.ll
+++ b/llvm/test/CodeGen/AArch64/bswap.ll
@@ -56,13 +56,8 @@ define i128 @bswap_i16_to_i128_anyext(i16 %a) {
 ;
 ; CHECK-GI-LABEL: bswap_i16_to_i128_anyext:
 ; CHECK-GI:       // %bb.0:
-; CHECK-GI-NEXT:    mov w8, w0
 ; CHECK-GI-NEXT:    mov x0, xzr
-; CHECK-GI-NEXT:    rev w8, w8
-; CHECK-GI-NEXT:    lsr w8, w8, #16
-; CHECK-GI-NEXT:    bfi x8, x8, #32, #32
-; CHECK-GI-NEXT:    and x8, x8, #0xffff
-; CHECK-GI-NEXT:    lsl x1, x8, #48
+; CHECK-GI-NEXT:    mov x1, xzr
 ; CHECK-GI-NEXT:    ret
     %3 = call i16 @llvm.bswap.i16(i16 %a)
     %4 = zext i16 %3 to i128
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/ashr.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/ashr.ll
index 63f5464371cc62..fb2ebc0d5efd2c 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/ashr.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/ashr.ll
@@ -1664,169 +1664,154 @@ define i65 @v_ashr_i65(i65 %value, i65 %amount) {
 ; GFX6-LABEL: v_ashr_i65:
 ; GFX6:       ; %bb.0:
 ; GFX6-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX6-NEXT:    v_bfe_i32 v4, v2, 0, 1
-; GFX6-NEXT:    v_ashrrev_i32_e32 v5, 31, v4
-; GFX6-NEXT:    v_sub_i32_e32 v8, vcc, 64, v3
-; GFX6-NEXT:    v_lshr_b64 v[6:7], v[0:1], v3
-; GFX6-NEXT:    v_lshl_b64 v[8:9], v[4:5], v8
+; GFX6-NEXT:    s_bfe_i64 s[4:5], s[4:5], 0x10000
+; GFX6-NEXT:    v_sub_i32_e32 v6, vcc, 64, v3
+; GFX6-NEXT:    v_lshr_b64 v[4:5], v[0:1], v3
+; GFX6-NEXT:    v_lshl_b64 v[6:7], s[4:5], v6
 ; GFX6-NEXT:    v_subrev_i32_e32 v2, vcc, 64, v3
-; GFX6-NEXT:    v_ashr_i64 v[10:11], v[4:5], v3
-; GFX6-NEXT:    v_or_b32_e32 v6, v6, v8
-; GFX6-NEXT:    v_ashrrev_i32_e32 v8, 31, v5
-; GFX6-NEXT:    v_ashr_i64 v[4:5], v[4:5], v2
-; GFX6-NEXT:    v_or_b32_e32 v7, v7, v9
+; GFX6-NEXT:    v_or_b32_e32 v6, v4, v6
+; GFX6-NEXT:    v_or_b32_e32 v7, v5, v7
+; GFX6-NEXT:    v_ashr_i64 v[4:5], s[4:5], v2
 ; GFX6-NEXT:    v_cmp_gt_u32_e32 vcc, 64, v3
+; GFX6-NEXT:    v_ashr_i64 v[8:9], s[4:5], v3
+; GFX6-NEXT:    s_ashr_i32 s6, s5, 31
 ; GFX6-NEXT:    v_cndmask_b32_e32 v2, v4, v6, vcc
-; GFX6-NEXT:    v_cndmask_b32_e32 v4, v5, v7, vcc
 ; GFX6-NEXT:    v_cmp_eq_u32_e64 s[4:5], 0, v3
+; GFX6-NEXT:    v_cndmask_b32_e32 v4, v5, v7, vcc
 ; GFX6-NEXT:    v_cndmask_b32_e64 v0, v2, v0, s[4:5]
+; GFX6-NEXT:    v_mov_b32_e32 v2, s6
 ; GFX6-NEXT:    v_cndmask_b32_e64 v1, v4, v1, s[4:5]
-; GFX6-NEXT:    v_cndmask_b32_e32 v2, v8, v10, vcc
+; GFX6-NEXT:    v_cndmask_b32_e32 v2, v2, v8, vcc
 ; GFX6-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX8-LABEL: v_ashr_i65:
 ; GFX8:       ; %bb.0:
 ; GFX8-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX8-NEXT:    v_bfe_i32 v4, v2, 0, 1
-; GFX8-NEXT:    v_ashrrev_i32_e32 v5, 31, v4
-; GFX8-NEXT:    v_sub_u32_e32 v8, vcc, 64, v3
-; GFX8-NEXT:    v_lshrrev_b64 v[6:7], v3, v[0:1]
-; GFX8-NEXT:    v_lshlrev_b64 v[8:9], v8, v[4:5]
+; GFX8-NEXT:    s_bfe_i64 s[4:5], s[4:5], 0x10000
+; GFX8-NEXT:    v_sub_u32_e32 v6, vcc, 64, v3
+; GFX8-NEXT:    v_lshrrev_b64 v[4:5], v3, v[0:1]
+; GFX8-NEXT:    v_lshlrev_b64 v[6:7], v6, s[4:5]
 ; GFX8-NEXT:    v_subrev_u32_e32 v2, vcc, 64, v3
-; GFX8-NEXT:    v_ashrrev_i64 v[10:11], v3, v[4:5]
-; GFX8-NEXT:    v_or_b32_e32 v6, v6, v8
-; GFX8-NEXT:    v_ashrrev_i32_e32 v8, 31, v5
-; GFX8-NEXT:    v_ashrrev_i64 v[4:5], v2, v[4:5]
-; GFX8-NEXT:    v_or_b32_e32 v7, v7, v9
+; GFX8-NEXT:    v_or_b32_e32 v6, v4, v6
+; GFX8-NEXT:    v_or_b32_e32 v7, v5, v7
+; GFX8-NEXT:    v_ashrrev_i64 v[4:5], v2, s[4:5]
 ; GFX8-NEXT:    v_cmp_gt_u32_e32 vcc, 64, v3
+; GFX8-NEXT:    v_ashrrev_i64 v[8:9], v3, s[4:5]
+; GFX8-NEXT:    s_ashr_i32 s6, s5, 31
 ; GFX8-NEXT:    v_cndmask_b32_e32 v2, v4, v6, vcc
-; GFX8-NEXT:    v_cndmask_b32_e32 v4, v5, v7, vcc
 ; GFX8-NEXT:    v_cmp_eq_u32_e64 s[4:5], 0, v3
+; GFX8-NEXT:    v_cndmask_b32_e32 v4, v5, v7, vcc
 ; GFX8-NEXT:    v_cndmask_b32_e64 v0, v2, v0, s[4:5]
+; GFX8-NEXT:    v_mov_b32_e32 v2, s6
 ; GFX8-NEXT:    v_cndmask_b32_e64 v1, v4, v1, s[4:5]
-; GFX8-NEXT:    v_cndmask_b32_e32 v2, v8, v10, vcc
+; GFX8-NEXT:    v_cndmask_b32_e32 v2, v2, v8, vcc
 ; GFX8-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX9-LABEL: v_ashr_i65:
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX9-NEXT:    v_bfe_i32 v4, v2, 0, 1
-; GFX9-NEXT:    v_ashrrev_i32_e32 v5, 31, v4
-; GFX9-NEXT:    v_sub_u32_e32 v8, 64, v3
-; GFX9-NEXT:    v_lshrrev_b64 v[6:7], v3, v[0:1]
-; GFX9-NEXT:    v_lshlrev_b64 v[8:9], v8, v[4:5]
+; GFX9-NEXT:    s_bfe_i64 s[4:5], s[4:5], 0x10000
+; GFX9-NEXT:    v_sub_u32_e32 v6, 64, v3
+; GFX9-NEXT:    v_lshrrev_b64 v[4:5], v3, v[0:1]
+; GFX9-NEXT:    v_lshlrev_b64 v[6:7], v6, s[4:5]
 ; GFX9-NEXT:    v_subrev_u32_e32 v2, 64, v3
-; GFX9-NEXT:    v_ashrrev_i64 v[10:11], v3, v[4:5]
-; GFX9-NEXT:    v_or_b32_e32 v6, v6, v8
-; GFX9-NEXT:    v_ashrrev_i32_e32 v8, 31, v5
-; GFX9-NEXT:    v_ashrrev_i64 v[4:5], v2, v[4:5]
-; GFX9-NEXT:    v_or_b32_e32 v7, v7, v9
+; GFX9-NEXT:    v_or_b32_e32 v6, v4, v6
+; GFX9-NEXT:    v_or_b32_e32 v7, v5, v7
+; GFX9-NEXT:    v_ashrrev_i64 v[4:5], v2, s[4:5]
 ; GFX9-NEXT:    v_cmp_gt_u32_e32 vcc, 64, v3
+; GFX9-NEXT:    v_ashrrev_i64 v[8:9], v3, s[4:5]
+; GFX9-NEXT:    s_ashr_i32 s6, s5, 31
 ; GFX9-NEXT:    v_cndmask_b32_e32 v2, v4, v6, vcc
-; GFX9-NEXT:    v_cndmask_b32_e32 v4, v5, v7, vcc
 ; GFX9-NEXT:    v_cmp_eq_u32_e64 s[4:5], 0, v3
+; GFX9-NEXT:    v_cndmask_b32_e32 v4, v5, v7, vcc
 ; GFX9-NEXT:    v_cndmask_b32_e64 v0, v2, v0, s[4:5]
+; GFX9-NEXT:    v_mov_b32_e32 v2, s6
 ; GFX9-NEXT:    v_cndmask_b32_e64 v1, v4, v1, s[4:5]
-; GFX9-NEXT:    v_cndmask_b32_e32 v2, v8, v10, vcc
+; GFX9-NEXT:    v_cndmask_b32_e32 v2, v2, v8, vcc
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: v_ashr_i65:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX10-NEXT:    v_bfe_i32 v4, v2, 0, 1
 ; GFX10-NEXT:    v_sub_nc_u32_e32 v2, 64, v3
-; GFX10-NEXT:    v_subrev_nc_u32_e32 v10, 64, v3
-; GFX10-NEXT:    v_lshrrev_b64 v[6:7], v3, v[0:1]
+; GFX10-NEXT:    s_bfe_i64 s[4:5], s[4:5], 0x10000
+; GFX10-NEXT:    v_subrev_nc_u32_e32 v8, 64, v3
+; GFX10-NEXT:    v_lshrrev_b64 v[4:5], v3, v[0:1]
 ; GFX10-NEXT:    v_cmp_gt_u32_e32 vcc_lo, 64, v3
-; GFX10-NEXT:    v_ashrrev_i32_e32 v5, 31, v4
+; GFX10-NEXT:    v_lshlrev_b64 v[6:7], v2, s[4:5]
+; GFX10-NEXT:    v_ashrrev_i64 v[8:9], v8, s[4:5]
+; GFX10-NEXT:    v_or_b32_e32 v2, v4, v6
+; GFX10-NEXT:    v_or_b32_e32 v6, v5, v7
+; GFX10-NEXT:    v_ashrrev_i64 v[4:5], v3, s[4:5]
 ; GFX10-NEXT:    v_cmp_eq_u32_e64 s4, 0, v3
-; GFX10-NEXT:    v_lshlrev_b64 v[8:9], v2, v[4:5]
-; GFX10-NEXT:    v_ashrrev_i64 v[10:11], v10, v[4:5]
-; GFX10-NEXT:    v_or_b32_e32 v2, v6, v8
-; GFX10-NEXT:    v_or_b32_e32 v8, v7, v9
-; GFX10-NEXT:    v_ashrrev_i64 v[6:7], v3, v[4:5]
-; GFX10-NEXT:    v_ashrrev_i32_e32 v3, 31, v5
-; GFX10-NEXT:    v_cndmask_b32_e32 v2, v10, v2, vcc_lo
-; GFX10-NEXT:    v_cndmask_b32_e32 v4, v11, v8, vcc_lo
+; GFX10-NEXT:    s_ashr_i32 s5, s5, 31
+; GFX10-NEXT:    v_cndmask_b32_e32 v2, v8, v2, vcc_lo
+; GFX10-NEXT:    v_cndmask_b32_e32 v5, v9, v6, vcc_lo
 ; GFX10-NEXT:    v_cndmask_b32_e64 v0, v2, v0, s4
-; GFX10-NEXT:    v_cndmask_b32_e64 v1, v4, v1, s4
-; GFX10-NEXT:    v_cndmask_b32_e32 v2, v3, v6, vcc_lo
+; GFX10-NEXT:    v_cndmask_b32_e64 v1, v5, v1, s4
+; GFX10-NEXT:    v_cndmask_b32_e32 v2, s5, v4, vcc_lo
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: v_ashr_i65:
 ; GFX11:       ; %bb.0:
 ; GFX11-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX11-NEXT:    v_bfe_i32 v4, v2, 0, 1
 ; GFX11-NEXT:    v_sub_nc_u32_e32 v2, 64, v3
-; GFX11-NEXT:    v_subrev_nc_u32_e32 v10, 64, v3
-; GFX11-NEXT:    v_lshrrev_b64 v[6:7], v3, v[0:1]
+; GFX11-NEXT:    s_bfe_i64 s[0:1], s[0:1], 0x10000
+; GFX11-NEXT:    v_subrev_nc_u32_e32 v8, 64, v3
+; GFX11-NEXT:    v_lshrrev_b64 v[4:5], v3, v[0:1]
 ; GFX11-NEXT:    v_cmp_gt_u32_e32 vcc_lo, 64, v3
-; GFX11-NEXT:    v_ashrrev_i32_e32 v5, 31, v4
+; GFX11-NEXT:    v_lshlrev_b64 v[6:7], v2, s[0:1]
+; GFX11-NEXT:    v_ashrrev_i64 v[8:9], v8, s[0:1]
+; GFX11-NEXT:    v_or_b32_e32 v2, v4, v6
+; GFX11-NEXT:    v_or_b32_e32 v6, v5, v7
+; GFX11-NEXT:    v_ashrrev_i64 v[4:5], v3, s[0:1]
 ; GFX11-NEXT:    v_cmp_eq_u32_e64 s0, 0, v3
-; GFX11-NEXT:    v_lshlrev_b64 v[8:9], v2, v[4:5]
-; GFX11-NEXT:    v_ashrrev_i64 v[10:11], v10, v[4:5]
-; GFX11-NEXT:    v_or_b32_e32 v2, v6, v8
-; GFX11-NEXT:    v_or_b32_e32 v8, v7, v9
-; GFX11-NEXT:    v_ashrrev_i64 v[6:7], v3, v[4:5]
-; GFX11-NEXT:    v_ashrrev_i32_e32 v3, 31, v5
-; GFX11-NEXT:    v_cndmask_b32_e32 v2, v10, v2, vcc_lo
-; GFX11-NEXT:    v_cndmask_b32_e32 v4, v11, v8, vcc_lo
+; GFX11-NEXT:    s_ashr_i32 s1, s1, 31
+; GFX11-NEXT:    v_cndmask_b32_e32 v2, v8, v2, vcc_lo
+; GFX11-NEXT:    v_cndmask_b32_e32 v5, v9, v6, vcc_lo
 ; GFX11-NEXT:    v_cndmask_b32_e64 v0, v2, v0, s0
-; GFX11-NEXT:    v_cndmask_b32_e64 v1, v4, v1, s0
-; GFX11-NEXT:    v_cndmask_b32_e32 v2, v3, v6, vcc_lo
+; GFX11-NEXT:    v_cndmask_b32_e64 v1, v5, v1, s0
+; GFX11-NEXT:    v_cndmask_b32_e32 v2, s1, v4, vcc_lo
 ; GFX11-NEXT:    s_setpc_b64 s[30:31]
   %result = ashr i65 %value, %amount
   ret i65 %result
 }
 
 define i65 @v_ashr_i65_33(i65 %value) {
-; GFX6-LABEL: v_ashr_i65_33:
-; GFX6:       ; %bb.0:
-; GFX6-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX6-NEXT:    v_mov_b32_e32 v3, v1
-; GFX6-NEXT:    v_bfe_i32 v1, v2, 0, 1
-; GFX6-NEXT:    v_ashrrev_i32_e32 v2, 31, v1
-; GFX6-NEXT:    v_lshl_b64 v[0:1], v[1:2], 31
-; GFX6-NEXT:    v_lshrrev_b32_e32 v3, 1, v3
-; GFX6-NEXT:    v_or_b32_e32 v0, v3, v0
-; GFX6-NEXT:    v_ashrrev_i32_e32 v2, 1, v2
-; GFX6-NEXT:    s_setpc_b64 s[30:31]
-;
-; GFX8-LABEL: v_ashr_i65_33:
-; GFX8:       ; %bb.0:
-; GFX8-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX8-NEXT:    v_mov_b32_e32 v3, v1
-; GFX8-NEXT:    v_bfe_i32 v1, v2, 0, 1
-; GFX8-NEXT:    v_ashrrev_i32_e32 v2, 31, v1
-; GFX8-NEXT:    v_lshlrev_b64 v[0:1], 31, v[1:2]
-; GFX8-NEXT:    v_lshrrev_b32_e32 v3, 1, v3
-; GFX8-NEXT:    v_or_b32_e32 v0, v3, v0
-; GFX8-NEXT:    v_ashrrev_i32_e32 v2, 1, v2
-; GFX8-NEXT:    s_setpc_b64 s[30:31]
+; GCN-LABEL: v_ashr_i65_33:
+; GCN:       ; %bb.0:
+; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT:    s_bfe_i64 s[4:5], s[4:5], 0x10000
+; GCN-NEXT:    v_lshrrev_b32_e32 v0, 1, v1
+; GCN-NEXT:    s_lshl_b64 s[6:7], s[4:5], 31
+; GCN-NEXT:    s_ashr_i32 s4, s5, 1
+; GCN-NEXT:    v_or_b32_e32 v0, s6, v0
+; GCN-NEXT:    v_mov_b32_e32 v1, s7
+; GCN-NEXT:    v_mov_b32_e32 v2, s4
+; GCN-NEXT:    s_setpc_b64 s[30:31]
 ;
-; GFX9-LABEL: v_ashr_i65_33:
-; GFX9:       ; %bb.0:
-; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v3, v1
-; GFX9-NEXT:    v_bfe_i32 v1, v2, 0, 1
-; GFX9-NEXT:    v_ashrrev_i32_e32 v2, 31, v1
-; GFX9-NEXT:    v_lshlrev_b64 v[0:1], 31, v[1:2]
-; GFX9-NEXT:    v_lshrrev_b32_e32 v3, 1, v3
-; GFX9-NEXT:    v_or_b32_e32 v0, v3, v0
-; GFX9-NEXT:    v_ashrrev_i32_e32 v2, 1, v2
-; GFX9-NEXT:    s_setpc_b64 s[30:31]
+; GFX10-LABEL: v_ashr_i65_33:
+; GFX10:       ; %bb.0:
+; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX10-NEXT:    v_lshrrev_b32_e32 v0, 1, v1
+; GFX10-NEXT:    s_bfe_i64 s[4:5], s[4:5], 0x10000
+; GFX10-NEXT:    s_lshl_b64 s[6:7], s[4:5], 31
+; GFX10-NEXT:    s_ashr_i32 s4, s5, 1
+; GFX10-NEXT:    v_mov_b32_e32 v1, s7
+; GFX10-NEXT:    v_or_b32_e32 v0, s6, v0
+; GFX10-NEXT:    v_mov_b32_e32 v2, s4
+; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
-; GFX10PLUS-LABEL: v_ashr_i65_33:
-; GFX10PLUS:       ; %bb.0:
-; GFX10PLUS-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX10PLUS-NEXT:    v_mov_b32_e32 v3, v1
-; GFX10PLUS-NEXT:    v_bfe_i32 v1, v2, 0, 1
-; GFX10PLUS-NEXT:    v_lshrrev_b32_e32 v3, 1, v3
-; GFX10PLUS-NEXT:    v_ashrrev_i32_e32 v2, 31, v1
-; GFX10PLUS-NEXT:    v_lshlrev_b64 v[0:1], 31, v[1:2]
-; GFX10PLUS-NEXT:    v_ashrrev_i32_e32 v2, 1, v2
-; GFX10PLUS-NEXT:    v_or_b32_e32 v0, v3, v0
-; GFX10PLUS-NEXT:    s_setpc_b64 s[30:31]
+; GFX11-LABEL: v_ashr_i65_33:
+; GFX11:       ; %bb.0:
+; GFX11-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX11-NEXT:    v_lshrrev_b32_e32 v0, 1, v1
+; GFX11-NEXT:    s_bfe_i64 s[0:1], s[0:1], 0x10000
+; GFX11-NEXT:    s_lshl_b64 s[2:3], s[0:1], 31
+; GFX11-NEXT:    s_ashr_i32 s0, s1, 1
+; GFX11-NEXT:    v_dual_mov_b32 v1, s3 :: v_dual_mov_b32 v2, s0
+; GFX11-NEXT:    v_or_b32_e32 v0, s2, v0
+; GFX11-NEXT:    s_setpc_b64 s[30:31]
   %result = ashr i65 %value, 33
   ret i65 %result
 }
@@ -1834,7 +1819,7 @@ define i65 @v_ashr_i65_33(i65 %value) {
 define amdgpu_ps i65 @s_ashr_i65(i65 inreg %value, i65 inreg %amount) {
 ; GCN-LABEL: s_ashr_i65:
 ; GCN:       ; %bb.0:
-; GCN-NEXT:    s_bfe_i64 s[4:5], s[2:3], 0x10000
+; GCN-NEXT:    s_bfe_i64 s[4:5], s[0:1], 0x10000
 ; GCN-NEXT:    s_sub_i32 s10, s3, 64
 ; GCN-NEXT:    s_sub_i32 s8, 64, s3
 ; GCN-NEXT:    s_cmp_lt_u32 s3, 64
@@ -1857,7 +1842,7 @@ define amdgpu_ps i65 @s_ashr_i65(i65 inreg %value, i65 inreg %amount) {
 ;
 ; GFX10PLUS-LABEL: s_ashr_i65:
 ; GFX10PLUS:       ; %bb.0:
-; GFX10PLUS-NEXT:    s_bfe_i64 s[4:5], s[2:3], 0x10000
+; GFX10PLUS-NEXT:    s_bfe_i64 s[4:5], s[0:1], 0x10000
 ; GFX10PLUS-NEXT:    s_sub_i32 s10, s3, 64
 ; GFX10PLUS-NEXT:    s_sub_i32 s2, 64, s3
 ; GFX10PLUS-NEXT:    s_cmp_lt_u32 s3, 64
@@ -1884,7 +1869,7 @@ define amdgpu_ps i65 @s_ashr_i65(i65 inreg %value, i65 inreg %amount) {
 define amdgpu_ps i65 @s_ashr_i65_33(i65 inreg %value) {
 ; GCN-LABEL: s_ashr_...
[truncated]

@llvmbot
Member

llvmbot commented Oct 22, 2024

@llvm/pr-subscribers-backend-amdgpu


Review comments on the propagate_undef_any_op change in llvm/include/llvm/Target/GlobalISel/Combine.td:

```
   (defs root:$root),
-  (match (wip_match_opcode G_ADD, G_FPTOSI, G_FPTOUI, G_SUB, G_XOR, G_TRUNC, G_BITCAST, G_ANYEXT):$root,
+  (match (wip_match_opcode G_ADD, G_FPTOSI, G_FPTOUI, G_SUB, G_XOR, G_TRUNC, G_BITCAST,
+                           G_ANYEXT, G_MERGE_VALUES):$root,
```
Collaborator


I feel this should be in propagate_undef_all_ops.

Author


Assuming G_MERGE_VALUES (5, undef), I would expect the result to be undef.

Collaborator


It should be <5, undef> if the output were a vector type. If it is merging scalars, then I'm not sure how undef propagation in GISel is expected to work. As far as I understand, it is not poison, and this case sounds more like an any-extend.
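
As an aside, here is one way to read that any-extend observation in code; this is a hypothetical rewrite sketch, not something this patch or the thread proposes concretely, and the function name is made up (buildAnyExt is the existing MachineIRBuilder API):

```cpp
#include "llvm/CodeGen/GlobalISel/MachineIRBuilder.h"
using namespace llvm;

// Hypothetical rewrite: given
//   %dst:_(s64) = G_MERGE_VALUES %lo:_(s32), %undef:_(s32)
// a combine could build
//   %dst:_(s64) = G_ANYEXT %lo:_(s32)
// instead of replacing %dst with G_IMPLICIT_DEF, since the undef high half
// carries the same (lack of) information as any-extended high bits.
void rewriteMergeWithUndefHighPart(MachineIRBuilder &B, Register Dst,
                                   Register Lo) {
  B.buildAnyExt(Dst, Lo);
}
```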

Collaborator


bswap_i16_to_i128_anyext for example looks like it should end up with something from the input in the top bits, not just zeros.

Author

@tschuett tschuett Oct 23, 2024


// G_MERGE_VALUES should only be used to merge scalars into a larger scalar,

It merges several scalars into a larger scalar. This is the reason why I argue for propagate_undef_any_op.

Author


```llvm
; NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
; RUN: llc %s -stop-after=legalizer -verify-machineinstrs -mtriple aarch64-apple-darwin -global-isel -o - 2>&1 | FileCheck %s

; The zext here is optimised to an any_extend during isel.
define i128 @bswap_i16_to_i128_anyext(i16 %a) {
  ; CHECK-LABEL: name: bswap_i16_to_i128_anyext
  ; CHECK: bb.1 (%ir-block.0):
  ; CHECK-NEXT:   liveins: $w0
  ; CHECK-NEXT: {{  $}}
  ; CHECK-NEXT:   [[COPY:%[0-9]+]]:_(s32) = COPY $w0
  ; CHECK-NEXT:   [[BSWAP:%[0-9]+]]:_(s32) = G_BSWAP [[COPY]]
  ; CHECK-NEXT:   [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 16
  ; CHECK-NEXT:   [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[BSWAP]], [[C]](s64)
  ; CHECK-NEXT:   [[DEF:%[0-9]+]]:_(s32) = G_IMPLICIT_DEF
  ; CHECK-NEXT:   [[MV:%[0-9]+]]:_(s64) = G_MERGE_VALUES [[LSHR]](s32), [[DEF]](s32)
  ; CHECK-NEXT:   [[C1:%[0-9]+]]:_(s64) = G_CONSTANT i64 65535
  ; CHECK-NEXT:   [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 0
  ; CHECK-NEXT:   [[AND:%[0-9]+]]:_(s64) = G_AND [[MV]], [[C1]]
  ; CHECK-NEXT:   [[C3:%[0-9]+]]:_(s64) = G_CONSTANT i64 48
  ; CHECK-NEXT:   [[SHL:%[0-9]+]]:_(s64) = G_SHL [[AND]], [[C3]](s64)
  ; CHECK-NEXT:   $x0 = COPY [[C2]](s64)
  ; CHECK-NEXT:   $x1 = COPY [[SHL]](s64)
  ; CHECK-NEXT:   RET_ReallyLR implicit $x0, implicit $x1
  %3 = call i16 @llvm.bswap.i16(i16 %a)
  %4 = zext i16 %3 to i128
  %5 = shl i128 %4, 112
  ret i128 %5
}
```
  • We have a G_MERGE_VALUES of undef.
  • The new undef feeds the G_AND.
  • The now-undef result of the G_AND feeds the G_SHL.
  • The G_SHL with an undef left-hand operand becomes zero, via binop_left_undef_to_zero:

```
def binop_left_undef_to_zero: GICombineRule<
  (defs root:$root),
  (match (wip_match_opcode G_SHL, G_UDIV, G_UREM):$root,
         [{ return Helper.matchOperandIsUndef(*${root}, 1); }]),
  (apply [{ Helper.replaceInstWithConstant(*${root}, 0); }])>;
```

The result is zero.

Member


Right now with this optimisation, the function returns 128-bit zero, but if we try to validate the equivalent optimisation just with LLVM IR (i.e., replacing the whole function with ret i128 0), alive2 really complains and shows an input where the return values don't actually match according to the IR: https://alive2.llvm.org/ce/z/8gdDkw

I believe @regehr also has a tool for LLVM IR to AArch64 assembly translation validation, which would presumably also agree that this optimisation, as implemented, is wrong.

Author


Independent of whether the optimization is valid, the GMIR is legal for AArch64. The bswap is on 32 bits, not on 16 bits. There are operations in the GMIR that are not in the LLVM IR. Another question is why and where we are creating G_MERGE_VALUES of undef.

Author


If you try the ag command in the heading, the pattern occurs several times in existing tests.

The other question is how to interpret a G_MERGE_VALUES of undef.

```
%0:(s32) = G_MERGE_VALUES %bits_0_7:(s8), %bits_8_15:(s8), %bits_16_23:(s8), %bits_24_31:(s8)
```

What does it mean if %bits_24_31 is undef? Is a subrange of the output invalid, or is the complete output invalid? What happens if the output is used by other operations? In the example above, MV is used by the G_AND. Is a subrange of MV invalid?

Member


I have no problems with how this has been legalized.

This patch proposes an optimisation that relies on specific semantics of G_MERGE_VALUES and undef: that if any input is undef, then the output must be undef. This optimisation produces miscompiles, as the alive2 link shows, so the optimisation must not be correct.

I think that David is right, and the output of G_MERGE_VALUES should only be undef if all inputs are undef (not if any are undef). This reasoning makes sense to me: if you are assembling a wide value by concatenating some defined bits and some undefined bits, then that wide value still has some defined bits (it is not all undefined bits). If you are assembling a wide value by concatenating only undefined bits, then it stands to reason that the whole output is undefined.
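
For reference, a sketch of the distinction the thread keeps returning to: propagate_undef_any_op fires when any explicit use is undef, while propagate_undef_all_ops (the rule suggested for G_MERGE_VALUES here) requires every explicit use to be undef. These are standalone free functions written for illustration; they are not the actual CombinerHelper members.

```cpp
#include "llvm/ADT/STLExtras.h"
#include "llvm/CodeGen/GlobalISel/Utils.h"
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "llvm/CodeGen/TargetOpcodes.h"
using namespace llvm;

// True if the operand is a register defined (possibly through copies) by
// G_IMPLICIT_DEF.
static bool isUndefOperand(const MachineOperand &MO,
                           const MachineRegisterInfo &MRI) {
  return MO.isReg() &&
         getOpcodeDef(TargetOpcode::G_IMPLICIT_DEF, MO.getReg(), MRI);
}

// Semantics of propagate_undef_any_op: any undef use makes the match fire.
bool anyExplicitUseIsUndef(MachineInstr &MI, const MachineRegisterInfo &MRI) {
  return any_of(MI.explicit_uses(), [&](const MachineOperand &MO) {
    return isUndefOperand(MO, MRI);
  });
}

// Semantics of propagate_undef_all_ops: every use must be undef.
bool allExplicitUsesAreUndef(MachineInstr &MI, const MachineRegisterInfo &MRI) {
  return all_of(MI.explicit_uses(), [&](const MachineOperand &MO) {
    return isUndefOperand(MO, MRI);
  });
}
```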
