
Commit 39dd847

For the Intel XPU case, use MatMul8bitFp even when IPEX is not in use (#1728)
* For the Intel XPU case, use MatMul8bitFp even when IPEX is not in use
* Fix lint issue

Signed-off-by: Liu, Kaixuan <[email protected]>
1 parent a09d05a commit 39dd847

File tree

1 file changed (+2, -2 lines)


bitsandbytes/autograd/_functions.py

Lines changed: 2 additions & 2 deletions
@@ -8,7 +8,7 @@
 from typing_extensions import deprecated

 import bitsandbytes.functional as F
-from bitsandbytes.functional import ipex_cpu, ipex_xpu
+from bitsandbytes.functional import ipex_cpu

 # The inverse transformation for the colTuring and colAmpere format were contributed by Alex Borzunov:
 # https://github.com/bigscience-workshop/petals/blob/main/src/petals/utils/linear8bitlt_patch.py
@@ -426,7 +426,7 @@ def matmul(
     state.threshold = threshold
     # MatMul8bitLt is slower because no fast kernel for quant/dequant 8bit in CPU/XPU
     if state.is_training:
-        if (A.device.type == "cpu" and ipex_cpu) or (A.device.type == "xpu" and ipex_xpu):
+        if (A.device.type == "cpu" and ipex_cpu) or (A.device.type == "xpu"):
             return MatMul8bitFp.apply(A, B, out, bias, state)
     return MatMul8bitLt.apply(A, B, out, bias, state)
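The net effect of the change: during training, any tensor on an "xpu" device is now routed to MatMul8bitFp whether or not ipex_xpu is available, while the CPU path still requires ipex_cpu. Below is a minimal, runnable sketch of the resulting dispatch logic. Note that pick_8bit_kernel is a hypothetical helper written for illustration, and plain strings stand in for the real torch.autograd.Function subclasses in bitsandbytes/autograd/_functions.py.

def pick_8bit_kernel(device_type: str, is_training: bool, ipex_cpu: bool) -> str:
    """Illustrative only: return which 8-bit matmul path this commit selects."""
    if is_training:
        # After this commit, "xpu" no longer also requires ipex_xpu.
        if (device_type == "cpu" and ipex_cpu) or device_type == "xpu":
            return "MatMul8bitFp"  # fp path: skips the slow 8-bit quant/dequant on CPU/XPU
    return "MatMul8bitLt"

if __name__ == "__main__":
    # XPU now takes the Fp path even without IPEX installed:
    assert pick_8bit_kernel("xpu", is_training=True, ipex_cpu=False) == "MatMul8bitFp"
    # CPU still needs ipex_cpu to take the Fp path:
    assert pick_8bit_kernel("cpu", is_training=True, ipex_cpu=False) == "MatMul8bitLt"
    assert pick_8bit_kernel("cpu", is_training=True, ipex_cpu=True) == "MatMul8bitFp"
    # Inference always falls through to MatMul8bitLt:
    assert pick_8bit_kernel("xpu", is_training=False, ipex_cpu=False) == "MatMul8bitLt"

The rationale matches the in-code comment: on CPU/XPU there is no fast 8-bit quant/dequant kernel, so MatMul8bitLt is slower there and the floating-point MatMul8bitFp path is preferred during training.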
