Bug: Adapter lacks evaluation mode for stateful transforms #233

@vpratz

Description

The .standardize() transform currently breaks net.sample in the starter notebook: it tries to update its parameters at sampling time, and the conditions there have zero variance.

More generally, it is undesirable for the transforms to keep changing after training, as this would lead to different results on repeated evaluations.

What would be the best design to implement this? Ideally, the transform would only update its state when stage="training", similar to batch norm and other stateful layers. @LarsKue, do you already have any thoughts/plans for this?
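One possible design, sketched below, is a transform that updates its running statistics only when stage="training" and otherwise applies the frozen statistics, mirroring batch norm's train/eval split. This is an illustrative sketch only; the class name, the stage keyword, and the method signature are assumptions and do not match any actual BayesFlow API.

```python
import numpy as np


class Standardize:
    """Stateful standardization transform (hypothetical sketch).

    Statistics are updated only during training; at inference the
    frozen mean/std are reused, so repeated evaluations after
    training produce identical results.
    """

    def __init__(self, epsilon=1e-8):
        self.mean = None
        self.std = None
        self.epsilon = epsilon  # guards against zero-variance conditions

    def __call__(self, data, stage="inference"):
        # Only mutate state while training, like batch norm.
        if stage == "training":
            self.mean = np.mean(data, axis=0)
            self.std = np.std(data, axis=0)
        if self.mean is None:
            raise RuntimeError(
                "Standardize has no fitted statistics; "
                "call it with stage='training' first."
            )
        return (data - self.mean) / (self.std + self.epsilon)
```

With this split, calling the transform during sampling (stage="inference") cannot break even when the conditions have zero variance across the batch, because the training-time statistics are applied unchanged.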

Status: Done