Hello community,
The default design of the variational autoencoder in PyOD uses a ReLU activation at the end of the decoder network. Using ReLU to produce the final reconstruction output has critical implications, since the objective of the AE is to reconstruct the inputs. These inputs are not constrained to the range of ReLU-transformed values, [0, ∞), so the model may never achieve truly low reconstruction error.
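For concreteness, here is a minimal PyTorch sketch with synthetic data (it does not use PyOD's actual layers) of the error floor that a ReLU output imposes on standardized inputs:

```python
import torch
import torch.nn.functional as F

# Standardized inputs (zero mean): roughly half the entries are negative.
x = torch.randn(1000, 10)

# A ReLU output layer can never emit negative values, so even an *optimal*
# ReLU-headed decoder can do no better than outputting clamp(x, min=0).
# That puts a hard floor on the reconstruction MSE whenever x has negative
# entries; a linear (identity) output layer has no such floor.
best_relu_reconstruction = torch.clamp(x, min=0.0)
floor = F.mse_loss(best_relu_reconstruction, x)
print(f"irreducible MSE floor with a ReLU output: {floor:.4f}")  # ~0.5 for N(0,1) data
```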
Why was this design choice made, and is it one that needs to be reconsidered in the PyOD repository?
Thanks!
Ivan