This repository was archived by the owner on Feb 6, 2020. It is now read-only.

NaN in Sparse Training #95

@william-silversmith


When training on labels that may not contain any positive classes in the field of view, normalization causes a NaN to propagate through the network. Would it be possible to add an epsilon to the denominator to prevent this gift that keeps on giving?
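To illustrate the failure mode and the proposed fix, here is a minimal sketch (the function name, arguments, and epsilon value are hypothetical, not taken from the repository's code): when a sample has zero positive voxels, dividing the accumulated loss by the positive count produces 0/0 = NaN, while a small epsilon in the denominator keeps the result finite.

```python
import numpy as np

def normalized_loss(loss_sum, positive_count, eps=1e-8):
    # Without eps: a field of view with no positive class gives
    # positive_count == 0 and loss_sum == 0, so 0/0 -> NaN, which then
    # propagates through every subsequent gradient update.
    # With eps: the division is always well-defined.
    return loss_sum / (positive_count + eps)

# Empty field of view: finite (0.0) instead of NaN.
print(normalized_loss(0.0, 0))

# Normal sample: epsilon perturbs the result negligibly.
print(normalized_loss(2.0, 4))
```

The epsilon only matters in the degenerate case; for any sample with at least one positive voxel, the perturbation is on the order of `eps / positive_count` and is negligible.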

Sachin and Nick work around this (for this and other reasons) by restricting their samples to those containing at least one positive class.
