
Commit 71f226c

fix: skip when gradient is sparse
1 parent 5af762d

2 files changed: +2 additions, -2 deletions

pytorch_optimizer/optimizer/experimental/ranger25.py

Lines changed: 1 addition & 1 deletion
@@ -103,7 +103,7 @@ def schedule_beta3(t_alpha_beta3: Optional[float], step: int, beta1: float, beta
     @torch.no_grad()
     def orthogonalize_gradients(self, params, eps: float = 1e-16) -> None:
         for p in params:
-            if p.grad is None:
+            if p.grad is None or p.grad.is_sparse:
                 continue

                w = p.view(-1)

pytorch_optimizer/optimizer/orthograd.py

Lines changed: 1 addition & 1 deletion
@@ -52,7 +52,7 @@ def reset(self):
    @torch.no_grad()
    def orthogonalize_gradients(self, params) -> None:
        for p in params:
-           if p.grad is None:
+           if p.grad is None or p.grad.is_sparse:
                continue

               w = p.view(-1)
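The guard can be seen in context with a minimal sketch of the gradient-orthogonalization step. The projection-and-rescale logic below is an illustrative reconstruction, not a verbatim copy of `orthograd.py`; the `eps` value and the norm-rescaling step are assumptions. The point of the commit is the `is_sparse` check: sparse gradients (e.g. from `nn.Embedding(..., sparse=True)`) do not support the dense `view()`/`dot()` operations that follow, so they are skipped instead of raising.

```python
import torch


@torch.no_grad()
def orthogonalize_gradients(params, eps: float = 1e-16) -> None:
    """Project each dense gradient onto the component orthogonal to its weights.

    Illustrative sketch of the patched method; eps and the rescaling are assumptions.
    """
    for p in params:
        # The commit's guard: skip missing gradients and sparse gradients,
        # since sparse tensors do not support the dense ops below.
        if p.grad is None or p.grad.is_sparse:
            continue
        w = p.view(-1)
        g = p.grad.view(-1)
        # Remove the component of g parallel to w.
        g_orth = g - (torch.dot(w, g) / (torch.dot(w, w) + eps)) * w
        # Rescale so the orthogonalized gradient keeps g's original norm.
        g_orth = g_orth * (g.norm(2) / (g_orth.norm(2) + eps))
        p.grad.copy_(g_orth.view_as(p.grad))


# Dense parameter: its gradient becomes orthogonal to the weight vector.
dense = torch.nn.Parameter(torch.tensor([1.0, 0.0]))
dense.grad = torch.tensor([1.0, 1.0])

# Sparse gradient: before the fix this path would fail on view()/dot();
# with the guard it is simply left untouched.
emb = torch.nn.Embedding(4, 2, sparse=True)
emb(torch.tensor([0])).sum().backward()

orthogonalize_gradients([dense, emb.weight])
```

After the call, `dense.grad` is orthogonal to `dense` while `emb.weight.grad` is still the original sparse tensor.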

0 commit comments
