Using SD webui dynamically via command line? #3639
Replies: 3 comments 2 replies
-
This is what the work towards a stable API is all for: https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/master/modules/api. You can start the API up by either doing
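A minimal sketch of calling the API from a script, assuming the webui was launched with the --api (or --nowebui) flag and is reachable on the default port 7860 (with --nowebui the API defaults to port 7861 instead). The /sdapi/v1/txt2img endpoint and its JSON fields come from the API module linked above; the exact port and default parameters depend on your launch flags:

```python
import base64
import json
import urllib.request

# Assumed base URL; with --nowebui the API defaults to port 7861 instead.
API_URL = "http://127.0.0.1:7860"

def build_txt2img_payload(prompt, width=512, height=512, steps=20):
    """Build the JSON body for the /sdapi/v1/txt2img endpoint."""
    return {"prompt": prompt, "width": width, "height": height, "steps": steps}

def txt2img(prompt, **kwargs):
    """POST a prompt and save the first returned image (base64-encoded PNG)."""
    body = json.dumps(build_txt2img_payload(prompt, **kwargs)).encode()
    req = urllib.request.Request(
        API_URL + "/sdapi/v1/txt2img",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    with open("out.png", "wb") as f:
        f.write(base64.b64decode(result["images"][0]))

# Example (requires a running server):
# txt2img("a lighthouse at dawn", width=1024, height=512)
```

Once the server is running, the interactive API docs at /docs list all available endpoints and their schemas.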
-
@simcop2387 Wow! That sounds wonderful! I will check this out today!
-
@simcop2387 I am afraid I need some more help. I started the program with python webui.py --xformers --nowebui, then started a .py file containing
Then I get, in the (first) command box, INFO: Started server process [2684]. What am I doing wrong?
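One common pitfall at this point (an assumption, since the script's contents are not shown above) is sending requests before the model has finished loading, or to the wrong port: with --nowebui the API listens on port 7861 by default rather than 7860. A small sketch that polls the server until it responds before sending any job:

```python
import time
import urllib.error
import urllib.request

def wait_until_ready(probe, attempts=60, delay=1.0):
    """Call probe() up to `attempts` times, sleeping `delay` seconds between
    tries; return True as soon as probe() succeeds, else False."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False

def server_up(url="http://127.0.0.1:7861/docs"):
    """Probe the FastAPI docs page; port 7861 is the assumed --nowebui default."""
    try:
        with urllib.request.urlopen(url, timeout=2):
            return True
    except (urllib.error.URLError, OSError):
        return False

# Example: block until the server answers, then start sending requests.
# if wait_until_ready(server_up):
#     ...send txt2img requests...
```

Separating the retry logic from the probe keeps the waiting loop testable without a running server.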
-
Hello!
I cannot really believe that I switched last night from Manjaro Linux to Windows to get a huge performance improvement, but indeed, this wonderful project made my day! (and my week, and my month...) Ironically, I actually did not want a GUI for working with SD. I need a tool to feed SD dynamically with prompts.
In the last weeks I had a Python script running a loop that checked a MySQL database several times per second for a new job. If there was one, the script grabbed the new prompt (+ width, height), did its job, and waited for the next one. I had two graphics cards and both were working full time (so they did not really "wait" for the next job; there were tens of thousands in the database). But the only way to get my new 4090s to run fast was the @AUTOMATIC1111 magic code.
I hoped that I could use it the way I described: instead of entering a prompt and clicking the "generate" button, I want to inject these commands in some way from outside. The "listen" parameter does not seem to be the way to do this, if I am not wrong. So my question is (not very surprisingly, for sure): does this work, and if so, how?
Best regards
Marc
P.S.: When working with the 3090s I noticed a disproportionate increase in processing time when I enlarged the image size, e.g. 1024x512 took not double but triple the time. Since I have been working with the SD webui and the 4090s, it's the other way round: 1024x512 takes only about 30% more time than 512x512. Fascinating! With 512x512 I count about 21 it/s. I know some guys here reported about 25 it/s; I hope I can also find a way to reach that performance. No idea how, because everything seems rather standardized (same CUDA, same cuDNN, same Python, PyTorch, etc.), but I will test around a bit. It's such a pleasure to play around with it!