Replies: 1 comment
-
Hello, thanks for your question.

It totally depends on your use case. If those batch jobs affect the database in any way, you can still have them outside of it.

Hasura provides a cron triggers feature which can be managed from the Hasura console; the cron event data is stored in Hasura's metadata tables. Further, if you feel that writing the cron jobs in Go is a better fit for your use case, you can go ahead with that. A Hasura cron trigger hits an endpoint every time it fires, so you can also use your Go endpoints (as you mentioned) as the trigger targets. It's up to you how flexible you want your structure to be. Also, if you could elaborate on "is it better to simply trigger them directly from their code", it would be easier to answer that part too. Thanks again for dropping into the discussion.
-
For the business logic I'm working on, I need to implement two types of batch jobs:
Type 1.
I'm thinking of using pg_cron, partly to avoid depending on another external service besides the database.
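For reference, scheduling a job with pg_cron is a single SQL call once the extension is installed; the job name and the cleanup statement below are made-up examples, not anything from this thread:

```sql
-- Run a hypothetical cleanup every night at 03:00 (server time).
SELECT cron.schedule(
  'nightly-cleanup',
  '0 3 * * *',
  $$DELETE FROM events WHERE created_at < now() - interval '30 days'$$
);

-- Remove the job later by name.
SELECT cron.unschedule('nightly-cleanup');
```

One CI/CD advantage of this approach is that the schedule lives in the database and can be changed with a plain SQL migration, with no container rebuild.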
Type 2.
Since I'm already exposing the PostgreSQL database with Hasura GraphQL engine, I'm evaluating Hasura Cron Triggers to implement those custom triggers. This could be fine for my structure, because I already have some basic dockerized Gin Go endpoints running in parallel with Hasura, hence it makes sense to also add other endpoints triggered by Hasura Cron Triggers.
However, I'm also evaluating writing the cron jobs directly in Go with the help of the go-quartz scheduling library, or scheduling them in the container using Linux cron directly.
What are your suggestions on the overall structure?
I'm thinking also from a CI/CD perspective (changing the scheduling on the fly, etc.).