The single encoder and the two decoders are trained roughly as in the following simplified code:
self.encoder = self.Encoder()
self.decoder_A = self.Decoder()
self.decoder_B = self.Decoder()
...
# both autoencoders are built on the same encoder instance
self.autoencoder_A = KerasModel(x, self.decoder_A(self.encoder(x)))
self.autoencoder_B = KerasModel(x, self.decoder_B(self.encoder(x)))
for epoch in range(epochs):
    self.autoencoder_A.train_on_batch(...)
    self.autoencoder_B.train_on_batch(...)  # doesn't this reset the encoder weights?
My understanding is that training autoencoder_A does not change the weights of autoencoder_B's decoder, but it does change the weights of the encoder, since the encoder is shared. Please correct me if I am wrong.
How does the loss get minimized if the two autoencoders alternately change the weights of the shared encoder?
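To make the weight sharing concrete, here is a minimal runnable sketch of what I think is going on (the flattened input shape, layer sizes, losses, and optimizers are my own placeholders, not the ones from the real model). It builds one shared encoder and two separate decoders, trains each autoencoder on one batch of random data, and checks the encoder weights after each step:

import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

IMG = 64 * 64 * 3  # assumed flattened image size, for illustration only

def make_encoder():
    x = Input(shape=(IMG,))
    z = Dense(128, activation="relu")(x)
    return Model(x, z, name="shared_encoder")

def make_decoder():
    z = Input(shape=(128,))
    y = Dense(IMG, activation="sigmoid")(z)
    return Model(z, y)

encoder = make_encoder()    # one encoder object, shared below
decoder_A = make_decoder()  # decoder for identity A
decoder_B = make_decoder()  # decoder for identity B

x = Input(shape=(IMG,))
autoencoder_A = Model(x, decoder_A(encoder(x)))
autoencoder_B = Model(x, decoder_B(encoder(x)))
autoencoder_A.compile(optimizer="adam", loss="mae")
autoencoder_B.compile(optimizer="adam", loss="mae")

batch_A = np.random.rand(8, IMG).astype("float32")
batch_B = np.random.rand(8, IMG).astype("float32")

w0 = encoder.get_weights()[0].copy()
autoencoder_A.train_on_batch(batch_A, batch_A)  # updates encoder + decoder_A
w1 = encoder.get_weights()[0].copy()
autoencoder_B.train_on_batch(batch_B, batch_B)  # updates the same encoder + decoder_B
w2 = encoder.get_weights()[0].copy()

print("encoder changed by A's step:", not np.allclose(w0, w1))
print("encoder changed again by B's step:", not np.allclose(w1, w2))

Running this shows that each train_on_batch call modifies the shared encoder's weights in place (a further gradient step) rather than resetting them, while each decoder only receives updates from its own autoencoder's step.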