
Commit c414f75

bbeckca authored and pytorchmergebot committed
[WOQ][Inductor] Enable CUDA coverage for _weight_int8pack_mm (pytorch#163461)
Summary:
What: Unskip the CUDA path for test_int8_weight_only_quant in test_torchinductor.py, as the kernel was added by pytorch#159325.
Why: Confirm the CUDA backend for _weight_int8pack_mm is registered.

Test Plan:
```
buck2 test 'fbcode//mode/opt' fbcode//caffe2/test/inductor:test_inductor_cuda
```
https://www.internalfb.com/intern/testinfra/testrun/2533275104869494

Differential Revision: D82926440

Pull Request resolved: pytorch#163461

Approved by: https://github.com/jerryzh168
1 parent 768361e commit c414f75

File tree: 1 file changed (+0 −1)

test/inductor/test_torchinductor.py

Lines changed: 0 additions & 1 deletion
@@ -2453,7 -2453,6 @@ def fn(a):
             self.common(fn, [packed])
 
         @xfail_if_mps_unimplemented
-        @skipCUDAIf(True, "No _weight_int8pack_mm implementation on CUDA")
         @skipIfXpu(msg="No _weight_int8pack_mm implementation on XPU")
         def test_int8_weight_only_quant(self):
             def convert_weight_to_int8pack(b):
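
For context, the op the now-unskipped test exercises is _weight_int8pack_mm, which multiplies a floating-point activation of shape [M, K] by an int8 weight of shape [N, K] with per-output-channel scales of shape [N]. Below is a minimal, hedged sketch of such a call on CUDA; the shapes, dtypes, and tolerance are illustrative assumptions, not taken from the test, and the authoritative coverage remains test_int8_weight_only_quant in test/inductor/test_torchinductor.py.

```python
# Hedged sketch of a torch._weight_int8pack_mm call on CUDA.
# Shapes, dtypes, and tolerances are assumptions for illustration only.
import torch

if torch.cuda.is_available():
    M, K, N = 8, 16, 32
    a = torch.randn(M, K, dtype=torch.bfloat16, device="cuda")                  # activation [M, K]
    w_int8 = torch.randint(-128, 127, (N, K), dtype=torch.int8, device="cuda")  # int8 weight [N, K]
    scales = torch.rand(N, dtype=torch.bfloat16, device="cuda")                 # per-channel scales [N]

    out = torch._weight_int8pack_mm(a, w_int8, scales)                          # result [M, N]

    # Reference: dequantize the weight row-wise and do a plain matmul.
    ref = (a.float() @ (w_int8.float() * scales.float().unsqueeze(1)).t()).to(a.dtype)
    torch.testing.assert_close(out, ref, rtol=1e-2, atol=1e-1)
```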
