Added correct graceful shutdown. #381
Conversation
Codecov Report

Attention: Patch coverage is …

```diff
@@            Coverage Diff             @@
##           master     #381      +/-   ##
==========================================
+ Coverage   73.16%   77.40%   +4.23%
==========================================
  Files          62       62
  Lines        1871     1890      +19
==========================================
+ Hits         1369     1463      +94
+ Misses        502      427      -75
```
@remixer-dec, can you please clarify your situation a bit? It's not clear to me how you're trying to track exits.
@s3rius I had signal handling logic implemented in my worker script that saves the state when the worker exits; it no longer works after I upgraded taskiq.

```python
import signal
import logging

logger = logging.getLogger(__name__)
logger.info("Worker active")

def on_worker_shutdown():
    logger.info("Stop signal received")
    # other custom logic

signal.signal(signal.SIGINT, lambda *args, **kwargs: on_worker_shutdown())
signal.signal(signal.SIGTERM, lambda *args, **kwargs: on_worker_shutdown())
```

I'm trying to find another way to somehow catch the fact that the worker is being shut down:

```python
@broker.on_event(TaskiqEvents.WORKER_SHUTDOWN)
async def on_worker_shutdown(state: TaskiqState) -> None:
    logger.info("Stop signal received")
    # other custom logic

# or like this
broker.add_event_handler(TaskiqEvents.WORKER_SHUTDOWN, on_worker_shutdown)
```

Neither seems to work.
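For context, here is a self-contained sketch of the event-based variant (assuming taskiq's built-in `InMemoryBroker`; the module name and the `ping` task are made up for illustration):

```python
# worker_async.py -- minimal sketch; start it with: taskiq worker worker_async:broker
import logging

from taskiq import InMemoryBroker, TaskiqEvents, TaskiqState

logger = logging.getLogger(__name__)
broker = InMemoryBroker()


@broker.on_event(TaskiqEvents.WORKER_SHUTDOWN)
async def on_worker_shutdown(state: TaskiqState) -> None:
    # Runs inside the worker process during graceful shutdown.
    logger.info("Stop signal received")


@broker.task
async def ping() -> str:
    return "pong"
```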
I downgraded and I'm still not getting the intended behavior; maybe something else changed in the environment. I'll investigate further and give more details once I find the reason. Sorry for bothering you too soon.
Don't worry. Please write me once you figure out what was happening.
@s3rius I found the issue. The signals still work fine in the latest version of taskiq; the problem was with docker+bash signal delivery when a background process is started in the same command. So this works:

```yaml
worker-service:
  # ...
  command: >
    bash -c 'taskiq worker worker-async:broker -w 1'
```

but this doesn't:

```yaml
worker-service:
  # ...
  command: >
    bash -c 'taskiq scheduler worker-async:scheduler & taskiq worker worker-async:broker -w 1'
```

and signals are not getting delivered to the worker. The solution is quite simple: use exec before running the worker, so the shell is replaced by the worker process and the container's stop signals reach it directly:

```yaml
worker-service:
  # ...
  command: >
    bash -c 'taskiq scheduler worker-async:scheduler & exec taskiq worker worker-async:broker -w 1'
```
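To see which process actually receives the container's signals, a small standalone probe like the sketch below can help (a hypothetical diagnostic, not part of taskiq; Unix-only):

```python
# probe.py -- run it both ways inside the container and compare:
#   bash -c 'python probe.py'          (shell stays as PID 1, signal may be lost)
#   bash -c 'exec python probe.py'     (probe replaces the shell)
# then `docker stop` the container and watch whether the handler fires.
import os
import signal

print(f"running as PID {os.getpid()}", flush=True)  # PID 1 means we replaced the shell

def handler(signum: int, frame) -> None:
    print(f"received signal {signum}", flush=True)
    raise SystemExit(0)

signal.signal(signal.SIGTERM, handler)
signal.signal(signal.SIGINT, handler)
signal.pause()  # block until a signal arrives
```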
Glad that the issue was resolved. I would also suggest checking https://github.com/taskiq-python/examples/blob/master/fastapi-app/docker-compose.yml. That config works flawlessly.
@s3rius Oh, you have the scheduler and worker in separate containers; maybe that's more stable in the long run. Thanks!
Yes! It also helps with scaling, because you can scale workers separately. Btw, our company has open-sourced Helm charts for py-application, and we have different deployments for taskiq-scheduler.yaml and taskiq-worker.yaml.
@remixer-dec, please upgrade to version 0.11.14. We found where the issue was. It was really hard to reproduce, but it was there. Thanks for pointing it out. You can check what the issue was in the Pull Request.
Fixes #380
Fixes #325
This PR completely rewrites graceful shutdown: the worker should now gracefully wait for all in-flight tasks to complete, and reloading now works properly.
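For readers curious about the general shape of such a shutdown, here is a minimal illustrative sketch of the usual asyncio pattern (not taskiq's actual implementation): trap the stop signal, stop picking up new work, then await everything still in flight.

```python
# Illustrative only: the generic graceful-shutdown pattern, not taskiq's code.
import asyncio
import signal

async def main() -> None:
    stop = asyncio.Event()
    loop = asyncio.get_running_loop()
    # add_signal_handler is Unix-only; Windows needs a different mechanism.
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.add_signal_handler(sig, stop.set)

    in_flight: set[asyncio.Task] = set()

    async def handle(message: str) -> None:
        await asyncio.sleep(1.0)  # stand-in for real task execution

    n = 0
    while not stop.is_set():
        task = asyncio.create_task(handle(f"msg-{n}"))
        in_flight.add(task)
        task.add_done_callback(in_flight.discard)
        n += 1
        await asyncio.sleep(0.5)  # stand-in for polling the broker

    # Graceful part: let everything that already started run to completion.
    if in_flight:
        await asyncio.gather(*in_flight)

asyncio.run(main())
```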