With the addition of Context Options allowing for longer t2v and i2v video generations, it would be nice to be able to prompt for different segments of the video, even if the timing isn't exact. I know there are ways to extract last frames and continue from where you left off, and that with a sufficiently spaghetti-like layout it could probably be done already.

I prefer using native nodes over the more experimental WanVideoWrapper -- although I appreciate the experimentation. I also realize context options are in beta at the moment. Kijai has added an option to his CLIP text encoder that uses the | (pipe) symbol to supply multiple prompts for a single generation. I hope native ComfyUI nodes can implement a similar feature in the future, as it would help make use of longer video generations. At present, most of my experiments with context options have produced longer videos that deviate only ever so slightly over the course of the generation.
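To illustrate what I mean, here is a minimal sketch of how pipe-delimited prompts might be scheduled across frame ranges. This is only my assumption of the idea, not Kijai's actual implementation -- the function name, the even frame split, and the remainder handling are all hypothetical:

```python
# Rough sketch: map a "|"-delimited prompt onto equal segments of a
# longer generation. Purely illustrative, not WanVideoWrapper's code.

def split_prompts(prompt: str, total_frames: int):
    """Split a '|'-delimited prompt and assign each part a frame range."""
    parts = [p.strip() for p in prompt.split("|") if p.strip()]
    if not parts:
        return []
    frames_per_part = total_frames // len(parts)
    schedule = []
    for i, part in enumerate(parts):
        start = i * frames_per_part
        # The last segment absorbs any remainder frames.
        end = total_frames if i == len(parts) - 1 else start + frames_per_part
        schedule.append((start, end, part))
    return schedule

# Example: three prompts over an 81-frame generation.
for start, end, text in split_prompts(
    "a cat sleeping | the cat wakes up | the cat walks away", 81
):
    print(f"frames {start}-{end - 1}: {text}")
```

Each segment's prompt would presumably be encoded separately and applied to the matching context window, so the conditioning changes as the video progresses.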
Thank you!