
Commit 5e18bc3

Tiwari-Avanish authored and pytorchmergebot committed
[PowerPC] Fixed build issue for vsx vec256 complexfloat and scaled_mm_out_cpu (pytorch#155255)
The PyTorch build has been failing on Power systems since commit ec24f8f.

**Build failure logs**

Error related to mkldnn:

```
pytorch/aten/src/ATen/native/Blas.cpp:302:26: error: ‘cpuinfo_has_x86_amx_int8’ was not declared in this scope
  302 |   if ((!mixed_dtype && cpuinfo_has_x86_amx_int8()) ||
      |                        ^~~~~~~~~~~~~~~~~~~~~~~~
pytorch/aten/src/ATen/native/Blas.cpp:303:25: error: ‘cpuinfo_has_x86_amx_fp16’ was not declared in this scope
  303 |       (mixed_dtype && cpuinfo_has_x86_amx_fp16())) {
      |                       ^~~~~~~~~~~~~~~~~~~~~~~~
```

Error related to vec256 complex float redefinition:

```
aten/src/ATen/cpu/vec/vec256/vsx/vec256_complex_float_vsx.h:19:7: error: specialization of ‘at::vec::DEFAULT::Vectorized<c10::complex<float> >’ after instantiation
   19 | class Vectorized<ComplexFlt> {
      |       ^~~~~~~~~~~~~~~~~~~~~~
aten/src/ATen/cpu/vec/vec256/vsx/vec256_complex_float_vsx.h:19:7: error: redefinition of ‘class at::vec::DEFAULT::Vectorized<c10::complex<float> >’
aten/src/ATen/cpu/vec/vec256/vsx/vec256_complex_float_vsx.h:633:18: error: ‘const class at::vec::DEFAULT::Vectorized<c10::complex<float> >’ has no member named ‘abs_2_’
  633 |   auto abs_a = a.abs_2_();
      |                  ^~~~~~
aten/src/ATen/cpu/vec/vec256/vsx/vec256_complex_float_vsx.h:634:18: error: ‘const class at::vec::DEFAULT::Vectorized<c10::complex<float> >’ has no member named ‘abs_2_’
  634 |   auto abs_b = b.abs_2_();
      |                  ^~~~~~
aten/src/ATen/cpu/vec/vec256/vsx/vec256_complex_float_vsx.h:666:17: error: ‘const class at::vec::DEFAULT::Vectorized<c10::complex<float> >’ has no member named ‘vec0’
  666 |       vec_add(a.vec0(), b.vec0()), vec_add(a.vec1(), b.vec1())};
aten/src/ATen/cpu/vec/vec256/vsx/vec256_complex_float_vsx.h:673:17: error: ‘const class at::vec::DEFAULT::Vectorized<c10::complex<float> >’ has no member named ‘vec0’
  673 |       vec_sub(a.vec0(), b.vec0()), vec_sub(a.vec1(), b.vec1())};
      |                 ^~~~
aten/src/ATen/cpu/vec/vec256/vsx/vec256_complex_float_vsx.h:680:27: error: ‘const class at::vec::DEFAULT::Vectorized<c10::complex<float> >’ has no member named ‘vec0’
  680 |       vec_and(a.vec0(), b.vec0()), vec_and(a.vec1(), b.vec1())};
```

**Build logs with this change**

```
Building wheel torch-2.8.0a0+gita3098a7
-- Building version 2.8.0a0+gita3098a7
-- Checkout nccl release tag: v2.26.5-1
cmake -GNinja -DBLAS=OpenBLAS -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/home/avanish/OfficeWork2025/JuneWork/pytorch_5Jun/pack/torch_night_5Jun/pytorch/torch -DCMAKE_PREFIX_PATH=/home/avanish/OfficeWork2025/JuneWork/pyenv/pytorch_5Jun/lib/python3.12/site-packages -DPython_EXECUTABLE=/home/avanish/OfficeWork2025/JuneWork/pyenv/pytorch_5Jun/bin/python -DTORCH_BUILD_VERSION=2.8.0a0+gita3098a7 -DUSE_MKLDNN=ON -DUSE_MKLDNN_CBLAS=ON -DUSE_NUMPY=True -DUSE_OPENMP=ON /home/avanish/OfficeWork2025/JuneWork/pytorch_5Jun/pack/torch_night_5Jun/pytorch
cmake --build . --target install --config Release
running build_ext
-- Building with NumPy bindings
-- Not using cuDNN
-- Not using CUDA
-- Not using XPU
-- Using MKLDNN
-- Not using Compute Library for the Arm architecture with MKLDNN
-- Using CBLAS in MKLDNN
-- Not using NCCL
-- Building with distributed package:
--   USE_TENSORPIPE=True
--   USE_GLOO=True
--   USE_MPI=False
-- Building Executorch
-- Not using ITT
Copying functorch._C from functorch/functorch.so to /home/avanish/OfficeWork2025/JuneWork/pytorch_5Jun/pack/torch_night_5Jun/pytorch/build/lib.linux-ppc64le-cpython-312/functorch/_C.cpython-312-powerpc64le-linux-gnu.so
copying functorch/functorch.so -> /home/avanish/OfficeWork2025/JuneWork/pytorch_5Jun/pack/torch_night_5Jun/pytorch/build/lib.linux-ppc64le-cpython-312/functorch/_C.cpython-312-powerpc64le-linux-gnu.so
building 'torch._C' extension
creating build/temp.linux-ppc64le-cpython-312/torch/csrc
```

This patch fixes the PyTorch build issue on Power; with it applied, I am able to build successfully.

Hi @malfet @albanD, please review this PR for the PyTorch build issue we are observing on Power.

Pull Request resolved: pytorch#155255
Approved by: https://github.com/albanD, https://github.com/malfet
1 parent: 2815eea · commit: 5e18bc3


3 files changed: 8 additions, 4 deletions


aten/src/ATen/cpu/vec/vec256/vsx/vec256_complex_double_vsx.h

Lines changed: 1 addition & 1 deletion
```diff
@@ -478,7 +478,7 @@ class Vectorized<ComplexDbl> {
     this->store(tmp1);
     b.store(tmp2);

-    for (const auto i : c10::irange(Vectorized<c10::complex<float>>::size())) {
+    for (const auto i : c10::irange(Vectorized<c10::complex<double>>::size())) {
       out[i] = tmp1[i] / tmp2[i];
     }
     return loadu(out);
```

aten/src/ATen/native/Blas.cpp

Lines changed: 1 addition & 1 deletion
```diff
@@ -296,7 +296,7 @@ _scaled_mm_out_cpu(const Tensor& mat1, const Tensor& mat2,
     std::optional<c10::ScalarType> out_dtype,
     bool use_fast_accum,
     Tensor& out) {
-#if AT_MKLDNN_ENABLED()
+#if AT_MKLDNN_ENABLED() && !defined(__powerpc__)
   if (at::globalContext().userEnabledMkldnn()) {
     bool mixed_dtype = mat1.scalar_type() != mat2.scalar_type();
     if ((!mixed_dtype && cpuinfo_has_x86_amx_int8()) ||
```

cmake/Modules/FindMKLDNN.cmake

Lines changed: 6 additions & 2 deletions
```diff
@@ -85,8 +85,12 @@ IF(NOT MKLDNN_FOUND)
   ENDIF(NOT APPLE AND NOT WIN32 AND NOT BUILD_LITE_INTERPRETER)

   IF(EXISTS "${MKLDNN_ROOT}/include/oneapi/dnnl/dnnl_ukernel.hpp")
-    MESSAGE("-- Will build oneDNN UKERNEL")
-    SET(DNNL_EXPERIMENTAL_UKERNEL ON CACHE BOOL "" FORCE)
+    IF(CPU_POWER)
+      SET(DNNL_EXPERIMENTAL_UKERNEL OFF CACHE BOOL "" FORCE)
+    ELSE()
+      MESSAGE("-- Will build oneDNN UKERNEL")
+      SET(DNNL_EXPERIMENTAL_UKERNEL ON CACHE BOOL "" FORCE)
+    ENDIF()
   ENDIF(EXISTS "${MKLDNN_ROOT}/include/oneapi/dnnl/dnnl_ukernel.hpp")

   FIND_PACKAGE(BLAS)
```
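The change above keys off a `CPU_POWER` flag that PyTorch's build defines elsewhere. As a hedged sketch only (not PyTorch's actual detection logic), such a flag could be derived from the target processor string:

```
# Hypothetical sketch: deriving a Power-detection flag from
# CMAKE_SYSTEM_PROCESSOR before the oneDNN ukernel decision.
IF(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64")
  SET(CPU_POWER ON)
ENDIF()

IF(CPU_POWER)
  # oneDNN's experimental ukernel path assumes x86, so keep it off on Power.
  SET(DNNL_EXPERIMENTAL_UKERNEL OFF CACHE BOOL "" FORCE)
ENDIF()
```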
