Got a question for you experts. I've been messing around and I know my way around, but I've hit a problem of scale, and I want to ask what the best solutions are today for reconciling two types of workflow that feed into each other. So far it seems like we have to manually manage chosen images and import them across separate workflows.
So in the course of generating images:
First, we want to use a high-speed workflow (SDXL Turbo, LCM SDXL, or SD 1.5 with an LCM LoRA) to generate many images at multiple images per second.
Then, I would like some sort of picker node where I can click any outputs I like in a gallery and spend more compute on them: detailer flows, or maybe an img2img pass through a non-LCM checkpoint, then optionally detailers again, optionally upscalers, etc.
Basically we have two separate workflows going on. There's the initial composition-bashing step (mode A: rapid generation), where we can generate hundreds of outputs per minute in large batches.
Then there are detailed, potentially extremely complex flows that work off the compositions we liked as starting points. These flows (mode B: detailed rendering) take much longer to run.
The conflict is that I have not seen an effective method to switch a comfy workflow from mode A into mode B.
We have to switch the seed from increment/randomize to fixed in order to safely click Generate without blasting out more generations at the front of the chain and invalidating everything downstream. This takes about 10s, since we have to hunt down that node and flip the setting.
We have to drag noodles around based on whatever plan we come up with for enhancing the image we chose. Say we got one LCM gen we like out of the most recent batch of 16. I assume a custom node exists where we can view all 16, click the specific image we want, and feed it downstream; a sketch of what I mean follows below. Then downstream, if I want to first do img2img through a checkpoint, then a detailer, then an upscaler, then a detailer again, and assuming those are all pre-prepared as modular node-group flows, I will still spend about a minute wiring up at least 4 noodles to connect that sequence of operations. That's optimistic, because many of these modules need multiple inputs, though I'll hand-wave that away since custom pipe nodes can collapse their connections into single ones.
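For concreteness, here is a minimal sketch of the picking half, written against the standard ComfyUI custom-node interface (INPUT_TYPES / RETURN_TYPES / FUNCTION). It only selects one image from a batch by widget index; a real gallery picker would also need frontend work to make the thumbnails clickable, and the node name and limits here are made up.

```python
# Hypothetical "pick one image from the batch" node, ComfyUI custom-node style.
class BatchImagePicker:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "images": ("IMAGE",),  # batch tensor, shape [B, H, W, C]
                "index": ("INT", {"default": 0, "min": 0, "max": 255}),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "pick"
    CATEGORY = "image/utility"

    def pick(self, images, index):
        # Clamp so an out-of-range index doesn't kill the queue.
        index = min(index, images.shape[0] - 1)
        # Slice rather than index to keep the batch dimension downstream nodes expect.
        return (images[index:index + 1],)

NODE_CLASS_MAPPINGS = {"BatchImagePicker": BatchImagePicker}
NODE_DISPLAY_NAME_MAPPINGS = {"BatchImagePicker": "Batch Image Picker (sketch)"}
```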
The flow from this point on probably involves rewiring the different modules, maybe heavy fiddling with detailer and upscaler settings, etc. That part is actually pretty efficient: since the seed is fixed, each Generate only recomputes what changed.
OK, say we got a good result after screwing around. Now to return from mode B to mode A:
Toggle the seed back to increment or randomize in the initial KSampler.
Sever the connection from the gallery output to the detailed rendering flows, and reconnect the output to Save Image or whatever.
Now we can begin generating again
Now that I have laid out how this would work in the ideal case, maybe I'm blowing the manual overhead out of proportion. But it really does feel tedious to switch back and forth, and I start to think the behavior of the Generate button, and how it manipulates the seed, needs some work. Basically, I think some nodes (such as the hypothetical gallery image picker described above) should be able to auto-trigger execution of their downstream nodes when manipulated. I am not sure, but I suspect this capability is not possible because the Generate button must be clicked for any node evaluation to proceed.
This way, we could keep the Generate button dedicated to compo bashing while the magic gallery image picker is the entry point for the detailer flows coming off of it. It would be a custom node that is an image picker and also has its own generate button that applies only to the flows downstream.
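On the feasibility question: as far as I can tell, execution is not strictly tied to the UI button, because the ComfyUI server exposes an HTTP endpoint (POST /prompt) that queues a workflow in API format. So a picker node's own button could, in principle, queue a prompt itself. A minimal sketch, assuming a local server on the default port and a workflow exported via "Save (API Format)" (the filename is made up):

```python
import json
import urllib.request

def queue_prompt(prompt: dict, server: str = "http://127.0.0.1:8188") -> dict:
    """Queue an API-format prompt dict on a running ComfyUI server."""
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(f"{server}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response includes a prompt_id for tracking

# Usage: queue the detail pass without touching the Generate button.
with open("mode_b_workflow_api.json") as f:  # hypothetical export of the mode B flow
    queue_prompt(json.load(f))
```

And because ComfyUI caches node outputs between runs, re-queuing a prompt whose upstream inputs are unchanged should only recompute the parts that differ, which is exactly the mode B behavior we want.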
The alternative to all of this would be to keep them as separate workflows instead of trying to do it all in one. I just don't know how to make that efficient, but the idea is to run larger compo-bashing batches, choose the ones we want to enhance using an out-of-band tool, and then come up with a way, by script or by hand, to load the chosen ones as inputs for a completely separate workflow; a script sketch follows below. It's probably the more straightforward approach for attaining efficiency, but I think it requires far more discipline than I'll have. I'm going to be too curious about what I can achieve with some gens as soon as I see them. Since it's such a creative process, the order of operations matters too much here to dismiss out of hand.
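If someone did have the discipline, the scripted version might look roughly like this. It reuses queue_prompt() from the sketch above and assumes the mode B workflow contains a LoadImage node whose id in the API export is known (the "12" here is made up, as are the paths):

```python
import json
import shutil
from pathlib import Path

COMFY_INPUT = Path("ComfyUI/input")  # adjust to your install
LOAD_IMAGE_NODE_ID = "12"            # hypothetical node id in the API export

def enhance_picks(picks: list[Path], workflow_path: str = "mode_b_workflow_api.json"):
    with open(workflow_path) as f:
        template = json.load(f)
    for pick in picks:
        # LoadImage reads from ComfyUI's input folder, so copy each pick there.
        shutil.copy(pick, COMFY_INPUT / pick.name)
        prompt = json.loads(json.dumps(template))  # cheap deep copy of the template
        prompt[LOAD_IMAGE_NODE_ID]["inputs"]["image"] = pick.name
        queue_prompt(prompt)  # one detail pass per picked image

enhance_picks([Path("out/batch16_007.png"), Path("out/batch16_011.png")])
```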
Another alternative is automations or custom nodes that collapse all those mode-switching steps into one trigger somehow. So we could have special noodles, or special pairs of nodes, that effectively implement "automated noodles".
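The seed-toggle step, at least, looks automatable this way: instead of flipping "control after generate" in the UI, decide the seed at queue time. A sketch, again reusing queue_prompt() from above and assuming a hypothetical KSampler node id of "3" in the API export:

```python
import random

KSAMPLER_NODE_ID = "3"  # hypothetical id of the front-of-chain KSampler

def queue_mode_a(prompt: dict):
    """Rapid generation: fresh random seed on every trigger."""
    prompt[KSAMPLER_NODE_ID]["inputs"]["seed"] = random.randrange(2**32)
    queue_prompt(prompt)

def queue_mode_b(prompt: dict, seed: int):
    """Detailed rendering: pin the seed so cached upstream results are reused."""
    prompt[KSAMPLER_NODE_ID]["inputs"]["seed"] = seed
    queue_prompt(prompt)
```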