Hello!
I am particularly interested in generating images with dual guidance, i.e., providing both an image prompt and a text prompt as guidance. The prior model accepts both as input and produces images that are clearly guided by both. My question is about how to control and modulate this guidance: I would like a parameter that sets the relative strength of the two guidance signals. Does this exist in any capacity in the current implementation?
If not, how should I go about adding it?
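For context, here is a minimal sketch of the kind of thing I have in mind, assuming a classifier-free-guidance-style sampler with separate unconditional, text-conditional, and image-conditional predictions. All names here (`dual_guided_eps`, `eps_uncond`, `image_weight`, etc.) are hypothetical and not taken from the current codebase:

```python
def dual_guided_eps(eps_uncond, eps_text, eps_image,
                    guidance_scale=7.5, image_weight=0.5):
    """Hypothetical blend of text and image guidance directions.

    image_weight in [0, 1]: 0.0 -> pure text guidance,
                            1.0 -> pure image guidance.
    Inputs would be the model's noise predictions under each condition
    (tensors in practice; scalars work for illustration).
    """
    # Guidance direction contributed by each conditioning signal.
    text_dir = eps_text - eps_uncond
    image_dir = eps_image - eps_uncond
    # Interpolate between the two directions, then apply overall scale.
    combined = (1.0 - image_weight) * text_dir + image_weight * image_dir
    return eps_uncond + guidance_scale * combined
```

This is just one possible formulation (weighting the guidance directions before applying the overall scale); interpolating the conditioning embeddings themselves before they reach the prior would be another option. Any pointers on which approach fits the current architecture better would be appreciated.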