Commit c414f75
[WOQ][Inductor] Enable CUDA coverage for _weight_int8pack_mm (pytorch#163461)
Summary:
What: Unskip the CUDA path for test_int8_weight_only_quant in test_torchinductor.py, since the CUDA kernel for _weight_int8pack_mm was added in pytorch#159325.
Why: Confirm that the CUDA backend for _weight_int8pack_mm is registered and covered by the Inductor test suite.
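Context: the change itself is a one-line test unskip, so no kernel code appears in this diff. For illustration only, the pattern the test exercises is an int8 weight-only-quantized (WOQ) matmul lowered through torch.compile/Inductor on CUDA. The sketch below is not the actual test body; it assumes the private op signature torch._weight_int8pack_mm(input, weight_int8, scales) with an [M, K] bf16 input, an [N, K] int8 weight, and per-output-channel scales of shape [N], and the CUDA kernel may impose stricter shape or dtype constraints.
```
# Minimal sketch (not the actual test body) of the pattern the test covers:
# int8 weight-only-quantized matmul compiled by Inductor on CUDA.
# Assumed signature: torch._weight_int8pack_mm(input, weight_int8, scales)
# with input [M, K] (bf16), weight [N, K] (int8), per-output-channel scales [N].
import torch

def int8_woq_mm(x, w_int8, scales):
    # Private ATen op; the CUDA kernel was added in pytorch#159325.
    return torch._weight_int8pack_mm(x, w_int8, scales)

if torch.cuda.is_available():
    M, K, N = 32, 64, 48
    x = torch.randn(M, K, dtype=torch.bfloat16, device="cuda")
    w = torch.randn(N, K, dtype=torch.bfloat16, device="cuda")

    # Simple symmetric per-output-channel quantization of the weight.
    scales = w.abs().amax(dim=1) / 127.0
    w_int8 = torch.round(w / scales[:, None]).to(torch.int8)

    compiled = torch.compile(int8_woq_mm)   # lowered through Inductor
    out = compiled(x, w_int8, scales)

    # Reference path: dequantize the weight and run a plain matmul.
    ref = x @ (w_int8.to(torch.bfloat16) * scales[:, None]).t()
    print("max abs diff vs dequantized reference:",
          (out.float() - ref.float()).abs().max().item())
```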
Test Plan:
```
buck2 test 'fbcode//mode/opt' fbcode//caffe2/test/inductor:test_inductor_cuda
```
https://www.internalfb.com/intern/testinfra/testrun/2533275104869494
Differential Revision: D82926440
Pull Request resolved: pytorch#163461
Approved by: https://github.com/jerryzh168
1 file changed: 0 additions, 1 deletion (one line removed at original line 2456 of test_torchinductor.py).