Hello, and thanks for the great work on OmniGen2!
I am running the OmniGen2 `example_t2i.sh` script with a custom prompt. However, I found that when the prompt is too long (e.g., around 2k characters), it triggers `CUDA error: device-side assert triggered` at this line.
This issue appears to be related to a maximum token limit for the input prompts. Could you please provide some guidance on this? Is there a recommended method to handle longer prompts, or is this an inherent limitation of the current model architecture?
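For reference, this is roughly how I am checking the prompt length and working around the crash on my side. The tokenizer id and the 1024-token limit below are just my guesses (I could not find the actual limit in the repo), so please correct me if they are wrong:

```python
# Sketch of my pre-check before calling the pipeline.
# Assumptions: the text encoder's tokenizer matches Qwen/Qwen2.5-VL-3B-Instruct,
# and 1024 is a safe token budget -- both are guesses, not taken from the repo.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")  # assumed

prompt = "a highly detailed scene description, " * 60  # stand-in for my ~2k-character prompt

ids = tokenizer(prompt, return_tensors="pt").input_ids
print(f"Prompt tokenizes to {ids.shape[-1]} tokens")

# Crude workaround: truncate to the guessed budget and decode back to text
# before passing the prompt to the pipeline.
MAX_TOKENS = 1024  # guessed value
if ids.shape[-1] > MAX_TOKENS:
    prompt = tokenizer.decode(ids[0, :MAX_TOKENS], skip_special_tokens=True)
```

Truncating like this avoids the assert for me, but it silently drops the tail of the prompt, so I would prefer to know the intended limit or the recommended way to handle this.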
Thanks a lot!