Regarding Topological Attribute Gradient Setting #1

@alex200420

Description

Hi! It's a pleasure to reach out. We are currently working on replicating the logic behind this paper, and we have a question about the gradient setup in the topological attribute module, specifically in the get_attrb function. Before autograd.grad is called, grad_outputs is set to 1 at the positions selected by the soft labels, and we couldn't quite grasp the reason behind this. Could you explain this part, or point us to another paper that would help us understand it better?

Thanks in advance!

# set the gradients of the corresponding output activations to one
output_grad = torch.zeros_like(output)
output_grad[labels] = 1
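
To make sure we understand the mechanics, here is a minimal, self-contained sketch we put together (a toy linear model with made-up shapes and variable names, not the repository's actual code) of what a one-hot grad_outputs does: torch.autograd.grad computes a vector-Jacobian product, so a vector that is 1 at the label positions and 0 everywhere else extracts exactly d(output[label]) / d(input), i.e. the attribution of the target class, while the gradients of all other logits are ignored.

import torch

torch.manual_seed(0)

x = torch.randn(4, 8, requires_grad=True)   # toy node features (4 nodes, 8 dims)
w = torch.randn(8, 3)                        # toy linear "classifier" with 3 classes
output = x @ w                               # logits, shape (4, 3)
labels = torch.tensor([0, 2, 1, 0])          # one class index per node (assumed format)

# One-hot mask at the label positions, analogous to the snippet above.
output_grad = torch.zeros_like(output)
output_grad[torch.arange(output.size(0)), labels] = 1

# autograd.grad computes the vector-Jacobian product grad_outputs^T * J,
# so this returns d(output[i, labels[i]]) / d(x) for every node i.
attr = torch.autograd.grad(output, x, grad_outputs=output_grad,
                           retain_graph=True)[0]

# Sanity check: identical to summing the selected logits into a scalar
# and differentiating that scalar with respect to the input.
selected = output[torch.arange(output.size(0)), labels].sum()
attr_check = torch.autograd.grad(selected, x)[0]
print(torch.allclose(attr, attr_check))      # True

Is this one-hot selection of the target-class gradient the intended reading, or is there an additional reason tied to the soft labels?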
