
Commit 5efeb0f

cthi authored and facebook-github-bot committed
Remove CK BF16 gemm (#4851)
Summary:

Pull Request resolved: #4851

X-link: facebookresearch/FBGEMM#1877

This kernel [isn't used anywhere](https://www.internalfb.com/code/search?q=repo%3Afbcode%20torch.ops.fbgemm.bf16_gemm), and since we rely on rocBLAS for BF16, we probably don't need to keep this one now. It would be one less kernel to migrate over, and we plan to move the current `quantize` namespace to `gemm`.

Reviewed By: jwfromm

Differential Revision: D82114437

fbshipit-source-id: 7ec70a529613b2d69e278a6441ba66d9f57087de
1 parent 60fc073 commit 5efeb0f
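As a hedged illustration of the "we rely on rocBLAS for BF16" point in the summary (this snippet is not part of the commit; the shapes and device handling are assumptions): a plain BF16 matmul written with stock PyTorch ops already dispatches to the vendor BLAS on a ROCm build, so this path does not need the removed CK op.

```python
# Illustrative sketch only (not from this PR): a BF16 GEMM via standard PyTorch ops.
# On a ROCm build of PyTorch this dispatches to rocBLAS/hipBLAS, which is the path
# the summary says is relied on instead of the removed torch.ops.fbgemm.bf16_gemm.
import torch

if torch.cuda.is_available():  # ROCm builds also expose the accelerator as "cuda"
    a = torch.randn(512, 1024, device="cuda", dtype=torch.bfloat16)
    b = torch.randn(1024, 2048, device="cuda", dtype=torch.bfloat16)
    c = torch.matmul(a, b)  # BF16 GEMM handled by the vendor BLAS backend
    print(c.shape, c.dtype)  # torch.Size([512, 2048]) torch.bfloat16
```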

File tree

4 files changed: 0 additions, 446 deletions


fbgemm_gpu/experimental/gen_ai/bench/ck_bf16_bench.py

Lines changed: 0 additions & 168 deletions
This file was deleted.

fbgemm_gpu/experimental/gen_ai/gen_ai/__init__.py

Lines changed: 0 additions & 3 deletions
```diff
@@ -37,9 +37,6 @@
     torch.ops.load_library(
         "//deeplearning/fbgemm/fbgemm_gpu/experimental/gen_ai:comm_ops"
     )
-    torch.ops.load_library(
-        "//deeplearning/fbgemm/fbgemm_gpu/experimental/gen_ai:gemm_ops"
-    )
     torch.ops.load_library(
         "//deeplearning/fbgemm/fbgemm_gpu/experimental/gen_ai:quantize_ops"
     )
```
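A small, hedged sanity check (not part of the diff) that downstream code could use to confirm the op is gone after this change; the OSS import path and the exception types caught are assumptions, since failed `torch.ops` lookups have raised different types across PyTorch versions.

```python
# Hypothetical check, not from this PR: verify that fbgemm::bf16_gemm is no longer
# registered after the CK kernel removal.
import torch
# Assumed OSS import path for loading the remaining gen_ai op libraries; adjust to your build.
import fbgemm_gpu.experimental.gen_ai  # noqa: F401

def has_bf16_gemm_op() -> bool:
    try:
        torch.ops.fbgemm.bf16_gemm  # attribute lookup triggers op resolution
        return True
    except (AttributeError, RuntimeError):
        # Broad on purpose: the error type on a missing op varies by PyTorch version.
        return False

print(has_bf16_gemm_op())  # expected: False after this commit
```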
