Commit a4bb8e0

Drop manual bf16 handling (currently just in LLVMGPU) (#20313)

Get rid of the manual bfloat expansion passes in LLVMGPU, because LLVM has a native bfloat16 type and the AMDGPU backend supports it. The passes remain in the LLVMCPU pipeline (removing them there causes linker errors that aren't worth fixing) and in the SPIR-V backend, where there's no official bf16 type. This PR didn't appear to cause compilation failures on CUDA.
1 parent 17f2915 commit a4bb8e0

File tree

  • compiler/src/iree/compiler/Codegen/LLVMGPU

1 file changed: +0 −3 lines changed

compiler/src/iree/compiler/Codegen/LLVMGPU/Passes.cpp

Lines changed: 0 additions & 3 deletions

@@ -1107,9 +1107,6 @@ static void addLowerToLLVMGPUPasses(OpPassManager &modulePassManager,
       .addPass(createCSEPass)
       // Handle complex operation conversion.
       .addPass(createConvertComplexToStandardPass)
-      // Convert BF16 operations to occur as F32.
-      .addPass(createConvertBf16ArithToF32Pass)
-      .addPass(createConvertBf16ToUInt16BuffersPass)
       // Math dialect ops rewrites, approximations, casts.
       .addPass(createMathTransformPass)
       .addPass(memref::createExpandOpsPass)
