Replies: 1 comment
-
Same problem.
-
When I post to the API txt2img endpoint and a job is started, the connection is killed after 90 seconds.
I know the problem stems from not having CUDA support, so it takes over 90 seconds to generate an image. That isn't something people typically have to worry about (I have searched far and wide online for similar issues), but I DO need to worry about it, and it has been a massive pain.
I have spent two and a half days of suffering trying to find workarounds.
If you would like to replicate this, you can use the handy /docs API information page and post to the /txt2img endpoint from there with any settings that would take over 90 seconds to complete on your machine; it will end up timing out before completion.
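For reference, this is roughly how I'm calling it. Just a sketch: the path and payload fields are whatever your /docs page shows (on my build it's /sdapi/v1/txt2img), and the settings here are only placeholders to make generation slow enough to hit the cutoff:

```python
import requests

# Placeholder payload; raise steps/size until generation takes over 90 s on CPU.
payload = {
    "prompt": "a photo of an astronaut riding a horse",
    "steps": 150,
    "width": 512,
    "height": 512,
}

# timeout=None disables the client-side timeout entirely, so the 90-second
# cutoff I'm seeing is not coming from the requests library itself.
resp = requests.post(
    "http://127.0.0.1:7860/sdapi/v1/txt2img",
    json=payload,
    timeout=None,
)
print(resp.status_code)
```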
I don't know what to do about it. I've tried to dig around to find how the API is created and pass it a custom timeout argument, but have had no luck. I've also seen that enabling --gradio-queue should allow longer inference, but that looks to be a different issue, as it did not help.
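For what it's worth, the kind of change I was attempting looked something like this. This is guesswork on my part: it assumes the API ends up being served by uvicorn, and I'm not even sure timeout_keep_alive is the right knob, since it governs idle keep-alive connections rather than a single long-running request:

```python
import uvicorn
from fastapi import FastAPI

app = FastAPI()  # stand-in for however the webui actually builds its FastAPI app

# Try to pass a longer timeout when the server is launched. timeout_keep_alive
# is a real uvicorn option, but it may simply not apply to this situation.
uvicorn.run(app, host="127.0.0.1", port=7860, timeout_keep_alive=300)
```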
I was originally running on 43bb5190fc9e7ae479a5dc6640be202c9a71e464, but after splicing in the changes to add gradio-queue support and it not working, I upgraded to 8a34671fe91e142bce9e5556cca2258b3be9dd6e (I had heard the newest version was breaking things, so I picked one a little older), hoping the problem was just something broken in my older copy. But I'm still having the same issue and don't really know what else to try. I don't believe this is a bug, since you normally would not be waiting 90 seconds for a response, but it's a heavy request for my system and I need to think of a way around it.
I don't know how the API backend communicates with everything else. Would it be possible to customize how the requests are handled, or add a special key to them when started so I can look them up later with a different request?
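Something along these lines is what I have in mind, as a rough sketch only. This is not the webui's actual code; the /sdapi/v1/txt2img URL and the wrapper endpoints are my own invention, just a small proxy that hands back a job id right away and runs the slow call in the background:

```python
# Hypothetical proxy: wraps the webui txt2img endpoint with job IDs so slow
# generations can be collected later instead of holding one long connection.
# Assumes the webui API is reachable at http://127.0.0.1:7860 and exposes
# /sdapi/v1/txt2img (adjust to whatever your /docs page shows).
import threading
import uuid

import requests
from fastapi import FastAPI, HTTPException

WEBUI_TXT2IMG = "http://127.0.0.1:7860/sdapi/v1/txt2img"

app = FastAPI()
jobs = {}          # job_id -> {"status": ..., "result": ...}
jobs_lock = threading.Lock()


def run_job(job_id: str, payload: dict) -> None:
    """Call the webui from a worker thread with no client-side timeout."""
    try:
        resp = requests.post(WEBUI_TXT2IMG, json=payload, timeout=None)
        resp.raise_for_status()
        with jobs_lock:
            jobs[job_id] = {"status": "done", "result": resp.json()}
    except Exception as exc:  # keep the error so it can be inspected later
        with jobs_lock:
            jobs[job_id] = {"status": "error", "result": str(exc)}


@app.post("/txt2img_async")
def txt2img_async(payload: dict):
    """Return a job id immediately; the real request runs in the background."""
    job_id = uuid.uuid4().hex
    with jobs_lock:
        jobs[job_id] = {"status": "running", "result": None}
    threading.Thread(target=run_job, args=(job_id, payload), daemon=True).start()
    return {"job_id": job_id}


@app.get("/txt2img_async/{job_id}")
def txt2img_result(job_id: str):
    """Poll with the job id from /txt2img_async until status is 'done'."""
    with jobs_lock:
        job = jobs.get(job_id)
    if job is None:
        raise HTTPException(status_code=404, detail="unknown job id")
    return job
```

The idea would be to POST the same payload to /txt2img_async, get a job id back immediately, and then GET /txt2img_async/{job_id} later, so no single connection ever has to stay open for 90 seconds. If there's a better way to do this inside the webui itself, I'd love to hear it.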