Hi,
I'm trying to implement the invertible network described in the paper in TensorFlow 2.
I'm having some difficulty matching the descriptions of the loss functions with the code.
In particular, I think there might be an inconsistency in this file:
If I've understood correctly, the function loss_reconstruction (which is barely described in the paper) seems to use the following layout for the values that are fed to the sampling process:

However, the train_epoch function seems to use a different layout:

Is this a mistake, or does the output of the forward process really have a different format than the input of the inverse process?
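
To make the layout question concrete, here is the round-trip property I would expect between the forward and inverse passes. This is only a minimal TensorFlow 2 sketch using an additive coupling layer as a stand-in; the class and method names are mine and not taken from the paper or the repository:

```python
import tensorflow as tf

# Hypothetical stand-in for the invertible network: an additive coupling layer.
# The point is that the inverse pass must assume the SAME layout that the
# forward pass produces, otherwise the round trip breaks.
class AdditiveCoupling(tf.keras.layers.Layer):
    def __init__(self, dim):
        super().__init__()
        self.shift = tf.keras.layers.Dense(dim // 2)

    def forward(self, x):
        x1, x2 = tf.split(x, 2, axis=-1)
        y2 = x2 + self.shift(x1)
        # Forward output layout: [x1 | y2] along the last axis.
        return tf.concat([x1, y2], axis=-1)

    def inverse(self, y):
        # Inverse input is assumed to use the same [x1 | y2] layout.
        y1, y2 = tf.split(y, 2, axis=-1)
        x2 = y2 - self.shift(y1)
        return tf.concat([y1, x2], axis=-1)


x = tf.random.normal([4, 8])
layer = AdditiveCoupling(8)
y = layer.forward(x)
x_rec = layer.inverse(y)
print(tf.reduce_max(tf.abs(x - x_rec)).numpy())  # ~0 only if the layouts agree
```

If the forward output in your code really does use a different layout than the inverse input expects, a check like the last line would fail, which is why I wanted to confirm before adapting it.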