Hi @bubbliiiing. I'm wondering whether there is a training-time validation function that generates images/videos using the latest (currently trained) weights.
I'm running the z-image-fun ControlNet 2.1 training, but it doesn't seem to generate any intermediate images/videos during training, not even in TensorBoard every N iterations.
1) Is that correct? If so, how can I do it? Do I need to implement it myself, i.e. offload the currently trained model, run inference with the latest weights (without the training-only parameters), and then load the model back for training? A rough sketch of what I have in mind is below.
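To make the question concrete, here is roughly what I imagine, just as a sketch under my own assumptions: `build_inference_pipeline`, the `validation/sample` tag, and `validation_steps` are placeholder names I made up, not things I found in this repo.

```python
import numpy as np
import torch

@torch.no_grad()
def log_validation(transformer, accelerator, writer, prompt, global_step):
    """Generate one sample with the weights currently being trained and log it."""
    transformer.eval()
    # Strip the accelerate/DDP wrapper so the raw module can be used for inference.
    unwrapped = accelerator.unwrap_model(transformer)
    # `build_inference_pipeline` is a placeholder for however this repo assembles
    # its sampling pipeline from the trained module.
    pipe = build_inference_pipeline(unwrapped, device=accelerator.device)
    frames = pipe(prompt, num_inference_steps=25)  # exact call/output depend on the pipeline
    # Log the first frame to TensorBoard as an HWC image in [0, 1].
    first = np.asarray(frames[0], dtype=np.float32) / 255.0
    writer.add_image("validation/sample", first, global_step=global_step, dataformats="HWC")
    transformer.train()
```

I would then call this every N steps from the training loop, on the main process only, e.g. `if global_step % validation_steps == 0 and accelerator.is_main_process: log_validation(...)`. If something like this already exists, or there is a better pattern for the ControlNet branch, please point me to it.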
-
2) By default it saves weights every 50 steps. Why so often? Is that because this is a post-training setup? Other frameworks like diffusers / DiffSynth-Studio usually save checkpoints every 1000 or 5000 steps. Do I need to keep it at 50? My understanding of the save logic is sketched below.
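Just to make sure I'm asking about the right thing, this is my mental model of the checkpointing condition; names like `args.checkpointing_steps` are placeholders I borrowed from diffusers-style scripts and may not match this repo.

```python
import os

# Assumed checkpointing condition (placeholder names):
if global_step % args.checkpointing_steps == 0:  # 50 by default here?
    save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
    accelerator.save_state(save_path)  # accelerate's standard state save
```

If that's all it is, I would simply raise the interval to 1000-5000 to save disk space, unless the small default is intentional for some reason.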
-
3) At step 0 it generates "sanity-check/cinematic.gif", and it already looks quite good. Is that a ground-truth sample read from disk, or is it actually generated by the model?
Thank you very much!