Riak_kv K8s memory consumption #1889

@azowarg

Description

Hello, I would like to ask for some advice. We have a 5-node Riak KV ring installed in a Kubernetes cluster. About 500 GB of data is stored on each node, but there is also a batch of corrupted keys. We use "leveldb" as the backend and have tictacaae enabled.

storage_backend = leveldb
anti_entropy = passive
tictacaae_active = active
  1. The first problem is memory consumption in the K8s environment. Because of the corrupted keys, we believe Riak is trying to repair them, but in doing so it uses all of the requested memory and goes beyond the pod memory limit, so the pod gets OOM-killed. Is it possible to set a memory limit for the Riak application itself? We tried several workarounds with the leveldb.maximum_memory.percent and leveldb.maximum_memory settings, but without any result (a sketch of what we tried is shown after this list).
  2. Is there a way to force some kind of key recovery, and how can we track the progress of that recovery? It looks like Riak is doing something, but we have no visibility into it and no way to influence it (the tictacaae settings we have been looking at are sketched after this list as well).
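
For reference, a minimal sketch of the riak.conf changes we tried for point 1 is below. The setting names are from the eleveldb schema as we understand it, and the values are only illustrative for our pods; our understanding is that these bound the leveldb caches rather than the whole Erlang VM, which may be why they did not help.

## Illustrative values only, sized to sit well below the pod memory limit
## Cap leveldb at a percentage of available memory
leveldb.maximum_memory.percent = 40
## or as an absolute cap in bytes (we also tried this form, here 4 GiB)
## leveldb.maximum_memory = 4294967296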
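For point 2, the only knobs we have found so far are the tictacaae pacing settings below. The names and units are taken from the riak_kv schema as we understand it, and the values are again only illustrative; please correct us if this is the wrong way to influence the repair work.

## Illustrative: spread the AAE work out so repairs are less aggressive
## Interval between AAE exchange prompts per vnode (milliseconds, as we understand it)
tictacaae_exchangetick = 480000
## Hours to wait between full rebuilds of the AAE trees (as we understand it)
tictacaae_rebuildwait = 336
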

Thanks in advance; I am ready to provide any additional information.
