[XPU] Fix precision for paddle.Tensor.__sub__ complex64/complex128#78942
Open
YqGe585 wants to merge 1 commit into
Conversation
…s — add cast and subtract support for complex types Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Your PR has been submitted successfully. Thank you for contributing to the open-source project!
PR Category
Operator Mechanism
PR Types
Bug fixes
Description
The XPU kernel for paddle.Tensor.__sub__ failed with a PaddleError when operating on complex64/complex128 types (e.g. complex64 - float64 or float64 - complex64). The root cause was two missing capabilities:

1. The XPU cast kernel lacked complex128 support: CastKernel on XPU only handled DataType::COMPLEX64 (under PADDLE_WITH_XPU_FFT) but not DataType::COMPLEX128. When type promotion yielded complex128 from complex64 + float64, the cast kernel threw "Not supported cast float64 -> complex128".
2. The XPU subtract kernel had no complex type support: the SubtractKernel and SubtractGradKernel registrations on XPU only included float, float16, bfloat16, int, and int64_t, with no complex64 or complex128. The xdnn library also lacks broadcast_sub<double> and broadcast_sub_grad<double>, so complex128 (whose real/imag parts are double) requires a float-cast workaround.

Fix for cast kernel
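As a rough sketch of the Real/Imag decomposition pattern, here is a plain-Python model (the helpers `to_f32` and `cast_complex64_to_complex128` are hypothetical stand-ins for the phi kernels, not Paddle APIs; `struct` round-trips emulate float32 storage):

```python
import struct

def to_f32(x: float) -> float:
    # Round a Python float (64-bit) to float32 precision, modeling a
    # float tensor element on the device. Hypothetical helper.
    return struct.unpack('f', struct.pack('f', x))[0]

def cast_complex64_to_complex128(z: complex) -> complex:
    # Real/Imag decomposition: split the complex value into its real and
    # imaginary parts, widen each part (float -> double), then recombine.
    re = to_f32(z.real)  # source parts carry only float32 precision
    im = to_f32(z.imag)
    return complex(re, im)  # Python complex is double precision (complex128)

print(cast_complex64_to_complex128(complex(1.5, -2.25)))  # (1.5-2.25j)
```

Widening float32 parts to float64 is exact, which is why the decomposition pattern is safe for this cast direction.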
Added a DataType::COMPLEX128 case to the switch statement (under PADDLE_WITH_XPU_FFT), following the same Real/Imag decomposition pattern used for COMPLEX64. Added a CastKernel<phi::complex128, XPUContext> specialization and registered phi::complex128 in the kernel registration.

Fix for subtract kernel
Added SubtractKernel<phi::complex64> and SubtractKernel<phi::complex128> specializations under PADDLE_WITH_XPU_FFT, using Real/Imag decomposition. For complex128, since xdnn lacks broadcast_sub<double>, the fix casts the real/imag double parts to float, performs the subtraction in float, then casts back to double and recombines the parts via ComplexKernel<double>. The same float-cast workaround applies to SubtractGradKernel<phi::complex128>. Both phi::complex64 and phi::complex128 are registered in the forward and grad kernel registrations.

Modified files
paddle/phi/kernels/xpu/cast_kernel.cc: added the COMPLEX128 case, the complex128 specialization, and registration
paddle/phi/kernels/xpu/elementwise_subtract_kernel.cc: added complex64/complex128 forward kernel specializations and registrations
paddle/phi/kernels/xpu/elementwise_subtract_grad_kernel.cc: added complex64/complex128 grad kernel specializations and registrations

Does this PR introduce a precision change?
Yes — XPU precision is corrected to align with the GPU for complex subtraction operations. Previously these cases threw kernel errors; now they produce correct results matching GPU output.
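For background, the type promotion that exposed the missing complex128 cast can be illustrated with a toy model (the `promote` helper below is hypothetical and greatly simplified; Paddle's actual promotion table is implemented in C++ and covers many more dtypes):

```python
def promote(a: str, b: str) -> str:
    # Simplified promotion rule: complex beats real, and any 64-bit
    # operand widens the result. Hypothetical model, not a Paddle API.
    is_complex = a.startswith("complex") or b.startswith("complex")
    wide = any(t in (a, b) for t in ("float64", "complex128"))
    if is_complex:
        return "complex128" if wide else "complex64"
    return "float64" if wide else "float32"

# complex64 - float64 promotes to complex128, which is why the missing
# complex128 cast on XPU was hit in the first place:
print(promote("complex64", "float64"))  # complex128
```

Under this rule, an expression like complex64 - float64 forces a cast of both operands to complex128 before the subtraction, so both the cast kernel and the subtract kernel needed complex128 support.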