Commit 1f8834a

Fix the NaN bug when passing all-zero values into clip_by_norm_op. (#30777) (#32038)

If all input gradients are zero, the output of clip_by_norm is inf or NaN. This PR fixes that bug.

1 parent b02c818 commit 1f8834a

File tree

1 file changed: +6 −1 lines changed


paddle/fluid/operators/clip_by_norm_op.h (6 additions, 1 deletion)

@@ -81,7 +81,12 @@ class ClipByNormKernel : public framework::OpKernel<T> {
         *context.template device_context<DeviceContext>().eigen_device();

     auto temp = (x_norm <= max_norm).template cast<T>();
-    auto scaling = temp + (static_cast<T>(1) - temp) * max_norm / x_norm;
+    auto epsilon =
+        ((x_norm <= static_cast<T>(1e-30)).all().template cast<T>()) *
+        static_cast<T>(1e-6);
+
+    auto scaling =
+        temp + (static_cast<T>(1) - temp) * max_norm / (x_norm + epsilon);
     Eigen::array<int, 1> one_dim{{1}};
     Eigen::DSizes<int, 1> m_dsize(input->numel());
     if (context.GetPlace() == platform::CPUPlace()) {
