Hello, I would like to ask for some advice. We have a 5-node Riak KV ring installed in a Kubernetes cluster. About 500 GB of data is stored on each node, but there is also a bunch of corrupted keys. We use leveldb as the backend and have enabled TictacAAE:
```
storage_backend = leveldb
anti_entropy = passive
tictacaae_active = active
```
- The first problem is memory consumption in the K8s environment. Given the corrupted keys, we believe Riak is trying to repair them, but in doing so it uses all of the requested memory, exceeds the pod's memory limit, and the pod gets OOM-killed. Is it possible to set memory limits for the Riak application? We tried several workarounds with the leveldb.maximum_memory.percent and leveldb.maximum_memory settings, but without any result (see the config sketch after this list).
- Is there a scenario for some kind of "forced" key recovery, and how do we track its progress? Riak appears to be doing something, but we have no visibility into it and no way to influence it (a read-repair sketch follows below).
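For context, here is a minimal riak.conf sketch of the kind of memory settings we experimented with, assuming Riak KV 2.9+ with the eleveldb backend; the concrete values are illustrative only, not a recommendation:

```
## Cap leveldb block-cache/memtable memory at an absolute size...
leveldb.maximum_memory = 2GB

## ...or at a percentage of total system memory (default is 70).
## Caveat (assumption): inside a container, "total system memory" may be
## detected as the host's RAM rather than the pod limit, which can make
## the percentage setting ineffective in K8s.
leveldb.maximum_memory.percent = 40
```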
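On the "forced" recovery question: one technique we could script ourselves is to trigger Riak's read repair by GET-ing every key over the HTTP API, since a plain read causes Riak to compare replicas and repair divergent ones. A rough Python sketch follows; it assumes the default HTTP listener on localhost:8098, and my_bucket is a hypothetical bucket name. Listing keys is expensive on buckets this size, so this is for illustration only:

```python
import requests

RIAK = "http://localhost:8098"   # assumption: default HTTP listener
BUCKET = "my_bucket"             # hypothetical bucket name

# Listing keys walks the whole bucket; on a 500 GB node this is costly
# and should not be run casually against production.
resp = requests.get(f"{RIAK}/buckets/{BUCKET}/keys", params={"keys": "true"})
resp.raise_for_status()
keys = resp.json()["keys"]

for i, key in enumerate(keys, 1):
    # A plain GET makes Riak compare the replicas of this key and issue
    # read repair for any that are missing or divergent.
    r = requests.get(f"{RIAK}/buckets/{BUCKET}/keys/{key}")
    if r.status_code not in (200, 300, 404):
        print(f"unexpected status {r.status_code} for key {key}")
    if i % 1000 == 0:
        print(f"touched {i}/{len(keys)} keys")  # crude progress tracking
```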
Thanks in advance; I am ready to provide any additional information.