Replies: 1 comment 4 replies
-
I don't know much about how TI (Textual Inversion) or HN (hypernetworks) works, but maybe this could be interesting for you; I think it's related to what you asked:
-
I've noticed the loss function takes two 64×64×4 tensors. I guess those are the latent vectors produced by the autoencoder's encoder. To generate an actual image, a 64×64×4 latent is passed through the decoder half of the autoencoder to produce the final 512×512×3 image.
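To make the shape bookkeeping concrete, here is a minimal NumPy sketch. `toy_decode` is a hypothetical stand-in for the VAE decoder, not the real model: it only mimics the 8× spatial upsampling and the 4→3 channel mapping described above.

```python
import numpy as np

def toy_decode(latent: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the VAE decoder: 64x64x4 latent -> 512x512x3 image."""
    assert latent.shape == (64, 64, 4)
    # 8x nearest-neighbour spatial upsampling: (64, 64, 4) -> (512, 512, 4)
    up = latent.repeat(8, axis=0).repeat(8, axis=1)
    # 1x1 "convolution": a fixed 4 -> 3 channel projection (illustrative only)
    proj = np.random.default_rng(0).standard_normal((4, 3))
    return up @ proj  # (512, 512, 3)

latent = np.zeros((64, 64, 4))
image = toy_decode(latent)
print(image.shape)  # (512, 512, 3)
```

The real decoder is a learned convolutional network, of course; this only illustrates why a loss on the 64×64×4 latents is much cheaper than one on the 512×512×3 pixels.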
Now, in the context of hypernetworks and the task of face training, would it be beneficial to apply a loss directly between the input training image and the network's output image? That would require running the decoder during training, but it would allow certain loss functions, like cosine loss, to be applied in pixel space.
Thoughts?
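For reference, the pixel-space cosine loss suggested above could be sketched like this. This is a toy NumPy version operating on decoded images; `cosine_loss` is an illustrative helper, not code from the repository.

```python
import numpy as np

def cosine_loss(a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> float:
    """1 - cosine similarity between two images, flattened to vectors."""
    a, b = a.ravel(), b.ravel()
    sim = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return 1.0 - sim

rng = np.random.default_rng(1)
target = rng.random((512, 512, 3))   # ground-truth training image
output = rng.random((512, 512, 3))   # decoded network output

assert cosine_loss(target, target) < 1e-6  # identical images give ~0 loss
print(cosine_loss(target, output))         # mismatched images give a larger loss
```

In practice the decode step makes this noticeably more expensive per training step than a latent-space loss, which is presumably why the latent MSE is used by default.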