Thanks for the great work on this project! I've been digging into the code and I noticed that the model (CDiT) utilizes the pre-trained stabilityai/sd-vae-ft-ema for encoding images into the latent space.
Based on my understanding, sd-vae-ft-ema was primarily trained on standard perspective (natural) images. Do you think the domain gap between the VAE's training data and fisheye images would significantly degrade the generation quality of future frames? Would you recommend fine-tuning the VAE on a fisheye dataset first?
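For context on why I'm asking: one lightweight way I've considered to probe the domain gap before committing to fine-tuning is to compare the VAE's encode/decode round-trip quality on fisheye frames versus perspective frames, e.g. via PSNR. A rough sketch of what I mean (the `psnr` helper and the synthetic frames below are just illustrative, not from this repo; in practice the reconstruction would come from the actual sd-vae-ft-ema round trip):

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two images scaled to [0, data_range]."""
    mse = float(np.mean((a - b) ** 2))
    if mse == 0.0:
        return float("inf")
    return 10.0 * float(np.log10(data_range ** 2 / mse))

# In practice: reconstruct each frame through the VAE round trip
# (encode then decode, e.g. with diffusers' AutoencoderKL loaded from
# stabilityai/sd-vae-ft-ema) and compare the PSNR distribution on
# fisheye frames against the one on perspective frames.
# Toy illustration with a synthetic "reconstruction":
rng = np.random.default_rng(0)
frame = rng.random((64, 64, 3))                                   # stand-in input frame
recon = np.clip(frame + rng.normal(0, 0.05, frame.shape), 0.0, 1.0)  # stand-in round trip
print(f"round-trip PSNR: {psnr(frame, recon):.1f} dB")
```

If the fisheye PSNR distribution sat well below the perspective one, that would seem like evidence the domain gap matters; curious whether you observed anything like that.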
Any insights or suggestions would be greatly appreciated!
Thanks!