[8.x] Http requests in jobs are not executed when using a daemon worker #34353
Unanswered
mattkingshott asked this question in General
-
If it helps your research: I found that many low-level libraries hook their cleanup into the `__destruct` magic method of their classes. I ran into this while implementing a project similar to yours, but using SFTP as my communication layer. The library (phpseclib) disconnects from the server when its class is destructed, which happens naturally in a synchronous process. Batch/async work normally runs on a queue, so the class is never unloaded from memory and therefore never destructed. This manifests as a strange issue, which may crop up as a failed connection in one place and a connection that is never closed in another. If it's not related, then oh well.
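As a minimal sketch of that point (the class below is hypothetical, loosely modelled on phpseclib's behaviour rather than its actual API): a class that relies on `__destruct` for cleanup only releases its connection when the object is destroyed, which a long-running worker process may never do between jobs.

```php
<?php

// Hypothetical connection wrapper that relies on __destruct for cleanup,
// loosely modelled on how phpseclib closes its SFTP connection.
class SftpConnection
{
    private $handle;

    public function __construct(string $host)
    {
        // Imagine this opens a connection to the SFTP server.
        $this->handle = fopen('php://temp', 'r+'); // stand-in for a real socket
    }

    public function __destruct()
    {
        // In a normal synchronous request this runs when the object goes out
        // of scope or the script ends. In a daemon queue worker, an instance
        // kept alive between jobs (e.g. bound as a container singleton) is
        // never destructed, so the connection is never closed here.
        if (is_resource($this->handle)) {
            fclose($this->handle);
        }
    }
}
```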
-
Description:
I have long-running jobs (using a Redis queue driver) that send dozens of HTTP requests in sequence to a third-party API.
When these jobs are executed one at a time, they complete without issue. However, when these jobs are running in parallel, at some point in the process (sometimes at random, sometimes after the first job finishes), the remaining jobs simply freeze when preparing to execute the next HTTP request.
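For context, the jobs look roughly like the sketch below (the class name, traits, loop, and endpoint are illustrative placeholders, not the actual code from this report):

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Http;

// Illustrative long-running job: sends dozens of HTTP requests in
// sequence to a third-party API while running on the Redis queue.
class SyncWithThirdPartyApi implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function handle()
    {
        foreach (range(1, 50) as $page) {
            // The reported freeze happens somewhere in this loop,
            // just before the next request is sent.
            Http::get("https://api.example.com/records?page={$page}");
        }
    }
}
```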
I have been able to confirm this by using `info()` and logging the action right before the request is made. The request does not time out, and it does not fail with a `ConnectException` or a `RequestException` despite using the `throw()` method. I've also tried supplying `['debug' => true]` via the `withOptions()` method in order to get Guzzle to print out what it is doing. Unfortunately, when the freezing occurs, nothing is echoed.

After discovering that the jobs worked correctly when run one at a time, I set the Horizon config to use only a single process, and the jobs completed successfully. My next step was to manually launch a queue worker with two processes using `queue:work`, but I encountered the same problem. When that failed, I tried using `queue:listen` with two processes, and the jobs completed successfully.

Thinking that maybe something wasn't getting cleaned up, I modified the jobs to ensure that they used no `static` properties or static methods in other classes. Sadly, this did not address the issue.

Unfortunately, it's not entirely clear where the issue lies. It could be with the Laravel HTTP client, it could be with Guzzle, or maybe cURL, or it could be elsewhere. Regardless, my guess is that since the problem only occurs in daemon processes, something isn't being released or something is being used up, e.g. a connection limit of some kind.
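Concretely, the logging and debug options described above look roughly like this inside the job's request loop (the log message and endpoint are placeholders):

```php
use Illuminate\Support\Facades\Http;

// Logged right before the request; when the freeze occurs this line
// appears in the log, but no response, timeout, or exception follows.
info('Sending request to third-party API', ['page' => $page]);

$response = Http::withOptions(['debug' => true]) // ask Guzzle to echo verbose transfer info
    ->get("https://api.example.com/records?page={$page}")
    ->throw(); // raise an exception on a client or server error response
```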
I realise that this isn't likely to be an easy issue to fix, but after trying everything else, I'm not sure what else to do. If you require more details, please let me know.
Thanks!
Steps To Reproduce:
Dispatch several long-running jobs that send sequential HTTP requests and process them in parallel with `queue:work` (at least two processes are needed).