Commit fa3b05d

yucai-intel and amathewc authored and committed
[Intel GPU] Allow XPU backend in Quantize operators (pytorch#150288)
This change allows the XPU backend in the quantize operators so that torch.quantize_per_channel() works on XPU; without it, the call caused a segmentation fault.

Pull Request resolved: pytorch#150288
Approved by: https://github.com/jerryzh168, https://github.com/guangyey

1 parent 1437cab · commit fa3b05d

File tree

1 file changed: +2 −0 lines

aten/src/ATen/native/quantized/AffineQuantizer.cpp

Lines changed: 2 additions & 0 deletions
@@ -151,6 +151,7 @@ Tensor& quantize_tensor_per_channel_affine(
   AT_DISPATCH_QINT_TYPES(qtensor.scalar_type(), fn_name, [&]() {
     checkQuantizedTensor<scalar_t>(fn_name, qtensor);
     if (qtensor.device().type() != c10::DeviceType::CUDA &&
+        qtensor.device().type() != c10::DeviceType::XPU &&
         qtensor.device().type() != c10::DeviceType::PrivateUse1) {
       checkZeroPoints<underlying_t>(fn_name, zero_points);
     } // for cuda and privateuse1, this check will occur in the actual device function
@@ -242,6 +243,7 @@ Tensor& dequantize_tensor_per_channel_affine(
   AT_DISPATCH_QINT_TYPES(qtensor.scalar_type(), fn_name, [&]() {
     checkQuantizedTensor<scalar_t>(fn_name, qtensor);
     if(qtensor.device().type() != c10::DeviceType::CUDA &&
+       qtensor.device().type() != c10::DeviceType::XPU &&
        qtensor.device().type() != c10::DeviceType::PrivateUse1){
      checkZeroPoints<underlying_t>(fn_name, zero_points);
    } // for cuda and privateuse1, this check will occur in the actual device function
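For context, a minimal usage sketch of the call this commit fixes. It runs on CPU here; on an Intel GPU build of PyTorch, the same call is expected to work after moving the input with x.to("xpu") (previously it segfaulted there). The tensor shapes and quantization parameters below are illustrative, not from the commit.

```python
import torch

# Per-channel quantization: one (scale, zero_point) pair per channel.
x = torch.randn(3, 4)
scales = torch.full((3,), 0.1, dtype=torch.double)
zero_points = torch.zeros(3, dtype=torch.long)

# axis=0 means each row of x is a channel with its own scale/zero_point.
q = torch.quantize_per_channel(x, scales, zero_points, axis=0,
                               dtype=torch.qint8)

# Dequantizing recovers x to within half a quantization step (0.05 here).
xr = q.dequantize()
```

The checkZeroPoints guard in the diff above is a host-side validation; CUDA, PrivateUse1, and now XPU skip it because their device kernels perform the equivalent check, which is why routing XPU through the host path crashed.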
