Description
During an update yesterday, the Tobira worker crashed. The search-index schema was bumped, which meant that the worker rebuilt the search index on startup. So far, so expected. But that operation failed (as `tobira search-index rebuild` would, too), since Tobira loads all events from the DB and then sends them to Meilisearch in a single request. For the production instance yesterday, Meilisearch complained that the payload size was too large. Luckily, we could just increase the limit to make everything work, but we are talking about gigabytes of JSON.
So yes, Tobira should use batching here, sending only one batch at a time. The batches might as well be very large: the fewer updates to the index, the faster Meilisearch can rebuild it. But we should make sure that each intermediate JSON payload does not cause memory problems and ideally stays below Meilisearch's default payload-size limit.
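A minimal sketch of the batching idea, not Tobira's actual code: assuming events are already serialized to JSON document strings, a hypothetical `batch_by_payload_size` function groups them so that each combined JSON array stays under a configurable byte limit (which could be set safely below Meilisearch's payload limit):

```rust
/// Hypothetical helper: split pre-serialized JSON documents into batches
/// whose combined payload (`[doc,doc,...]`) stays under `max_bytes`, so no
/// single request to Meilisearch exceeds its payload-size limit.
fn batch_by_payload_size(docs: &[String], max_bytes: usize) -> Vec<Vec<String>> {
    let mut batches = Vec::new();
    let mut current: Vec<String> = Vec::new();
    let mut current_size = 2; // account for the surrounding `[` and `]`

    for doc in docs {
        let extra = doc.len() + 1; // document plus a separating comma
        // Start a new batch if adding this document would exceed the limit.
        // (A single oversized document still becomes its own batch; callers
        // would need to handle that case separately.)
        if !current.is_empty() && current_size + extra > max_bytes {
            batches.push(std::mem::take(&mut current));
            current_size = 2;
        }
        current_size += extra;
        current.push(doc.clone());
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}

fn main() {
    let docs: Vec<String> = (0..10).map(|i| format!("{{\"id\":{i}}}")).collect();
    for batch in batch_by_payload_size(&docs, 30) {
        let payload = format!("[{}]", batch.join(","));
        assert!(payload.len() <= 30);
        println!("batch of {} docs, {} bytes", batch.len(), payload.len());
    }
}
```

Each batch would then be sent as its own add-documents request, so memory usage is bounded by one batch rather than by the whole event table.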