Hi! It's a pleasure to reach out! We're currently working on replicating the logic behind this paper, and we were wondering about the gradient setup in the topological attribute module, specifically in the `get_attrb` function. Before calling `autograd.grad`, `grad_outputs` is set to 1 at the positions selected by the soft labels, and we couldn't quite grasp the reason behind this. Could you explain this part, or point us to another paper that would help us understand it better?
Thanks in advance!
AmalgamateGNN.PyTorch/topological_attrib_s.py, lines 38 to 40 in f99a60b:

```python
# set the gradients of the corresponding output activations to one
output_grad = torch.zeros_like(output)
output_grad[labels] = 1
```
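For context on the question, here is a minimal standalone sketch (not the repo's code; the tensor shapes and names are hypothetical) of what passing such a mask as `grad_outputs` does: `torch.autograd.grad` computes a vector-Jacobian product, so a 0/1 mask as the vector makes the result equal to the gradient of the *sum* of the selected output activations.

```python
import torch

# Toy setup: a differentiable "activation" output = x**2,
# with two positions selected by integer labels.
x = torch.randn(5, requires_grad=True)
output = x ** 2
labels = torch.tensor([1, 3])

# Same pattern as in the snippet above: ones at the selected
# positions, zeros elsewhere.
output_grad = torch.zeros_like(output)
output_grad[labels] = 1

# autograd.grad(output, x, grad_outputs=v) computes v^T @ J,
# where J is the Jacobian d(output)/d(x).
(g,) = torch.autograd.grad(output, x, grad_outputs=output_grad)

# Equivalent formulation: differentiate the masked sum directly.
(g_ref,) = torch.autograd.grad((x ** 2)[labels].sum(), x)
assert torch.allclose(g, g_ref)
```

So (if we understand correctly) the mask is just a way of asking for the gradient of the selected output activations only, without building a scalar loss first.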