Milestones

  • ## Fixed-partition

    - Redis, Kafka
    - Easiest approach
    - Lookup cost is O(1)
    - Most thoroughly examined

    ## Consistent hashing

    - Cassandra
    - Virtual nodes required
    - Fun to implement (harder)

    ### Mechanism

    Each replica set is assigned a random integer token (typically generated as a hash of a UUID). The client maps a key to a replica set as follows:

    - compute the hash of the key
    - take the sorted list of all available tokens and find the smallest token value that is higher than the hash of the key
    - treat the list as circular (a ring): any hash value greater than the last token in the list maps to the first token

    ### Diagram

    ```mermaid
    sequenceDiagram
        actor U as User
        box lightblue Library
            participant D as Client
            participant T as TopologyData
        end
        participant S as Server
        D-->S: Connection established
        D-->S: Observe topology change
        Note over S: TopologyMetadata<br>S1: 1234<br>S2: 2345<br>S3: 5340
        S--)D: Push topology change
        U->>D: set("key","value")
        D->>D: hash("key") -> 0x123
        D->>T: get_node_for(0x123)
        T->>T: sort_tokens: [1234, 2345, 5340]
        T->>D: return_node(0x123) -> 1234
        D->>S: set("key","value") -> 1234
        S->>D: Ok
        D->>U: Ok
    ```

    Due by June 27, 2025
    22/22 issues closed
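The token lookup described above amounts to a binary search over the sorted token list. A minimal sketch, assuming an MD5-based hash reduced to the diagram's 4-digit token space (the `Ring` class and its method names are illustrative, not the project's API):

```python
import bisect
import hashlib

# Illustrative sketch only: the Ring class, the MD5-based hash, and the
# 4-digit token space mirroring the diagram are assumptions.

class Ring:
    def __init__(self, token_map):
        self._nodes = dict(token_map)        # token -> replica-set name
        self._tokens = sorted(self._nodes)   # sorted, for binary search

    def node_for_hash(self, h: int) -> str:
        # Smallest token strictly greater than the hash...
        i = bisect.bisect_right(self._tokens, h)
        # ...wrapping around the ring when past the last token.
        return self._nodes[self._tokens[i % len(self._tokens)]]

    def node_for(self, key: str) -> str:
        h = int(hashlib.md5(key.encode()).hexdigest(), 16) % 10000
        return self.node_for_hash(h)
```

With the diagram's tokens (`{1234: "S1", 2345: "S2", 5340: "S3"}`), hash `0x123` maps to `S1`, and any hash above the last token (5340) wraps around to the first.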
  • **Feature Request: Implement Configurable Eviction Policies**

    Currently, the system does not have a mechanism to automatically manage memory when it reaches a certain limit. This can lead to:

    - out-of-memory errors under heavy load,
    - inefficient memory usage.

    When used as a cache or with a memory limit, an effective eviction strategy is crucial to ensure the system remains stable and performs optimally under memory pressure. Without one, hitting memory limits can cause failures or degrade performance.

    I propose implementing a set of configurable memory eviction policies, similar to those found in Redis. These policies would automatically remove keys from memory when a configured `maxmemory` limit is reached, based on different criteria. This would provide robust memory management and allow users to tune the system's caching behavior to their specific workload and data access patterns.

    The proposed policies could include, but are not limited to, the following, mirroring Redis's options:

    * **`noeviction`**: Do not evict anything. Return errors on write operations when the memory limit is reached.
    * **`allkeys-lru`**: Evict the least recently used (LRU) keys, regardless of whether they have an expire set.
    * **`volatile-lru`**: Evict the least recently used (LRU) keys among those that have an expire set.
    * **`allkeys-lfu`**: Evict the least frequently used (LFU) keys, regardless of whether they have an expire set.
    * **`volatile-lfu`**: Evict the least frequently used (LFU) keys among those that have an expire set.
    * **`allkeys-random`**: Evict random keys, regardless of whether they have an expire set.
    * **`volatile-random`**: Evict random keys among those that have an expire set.
    * **`volatile-ttl`**: Evict the keys with the shortest time-to-live (TTL) among those that have an expire set.

    Users should be able to configure both the desired eviction policy and the `maxmemory` limit.

    ## Acceptance criteria

    - Test code, covering policies up to LRU
    - (Optional) resource-availability validation logic on handshakes

    Looking forward to feedback and discussion on this proposal.

    Due by June 27, 2025
    2/2 issues closed
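As a starting point for the LRU acceptance criterion, an `allkeys-lru` policy can be sketched with an ordered map. This is a minimal, hypothetical sketch that approximates `maxmemory` by a maximum key count rather than byte usage:

```python
from collections import OrderedDict

# Hypothetical sketch of allkeys-lru: maxmemory is approximated here
# by a maximum number of keys; a real implementation would track bytes.

class LruCache:
    def __init__(self, max_keys: int):
        self.max_keys = max_keys
        self._data = OrderedDict()  # least recently used key first

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        while len(self._data) > self.max_keys:
            self._data.popitem(last=False)  # evict the LRU key
```

The `volatile-*` variants would apply the same bookkeeping but restrict the eviction candidates to keys with an expire set.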
  • Due by April 30, 2025
    7/7 issues closed
  • ## Description

    When a node in the cluster fails, the system should automatically adjust the topology to maintain availability and consistency. This includes redistributing responsibilities, updating membership information, and ensuring that data replication or leader election processes continue without manual intervention.

    ## Expected Behavior

    - Detect node failure promptly.
    - Update the cluster membership to reflect the change.
    - Redistribute data or responsibilities among remaining nodes.
    - Trigger any necessary rebalancing operations.
    - Ensure consistency and avoid split-brain scenarios.

    ## Impact

    - Improved fault tolerance and high availability.
    - Minimized downtime and manual intervention.
    - Ensured data integrity and consistency.

    Due by May 1, 2025
    14/14 issues closed
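The "detect node failure promptly" step is commonly built on heartbeat timeouts. A minimal sketch, assuming a timeout-based detector (the class and its timeout value are illustrative, not the project's design):

```python
import time

# Hypothetical heartbeat-based failure detector: a node is considered
# failed once no heartbeat has been received within `timeout` seconds.
# The membership view is then recomputed from the surviving nodes.

class FailureDetector:
    def __init__(self, nodes, timeout: float = 5.0, clock=time.monotonic):
        self.timeout = timeout
        self._clock = clock  # injectable clock, eases testing
        self._last_seen = {n: clock() for n in nodes}

    def heartbeat(self, node):
        self._last_seen[node] = self._clock()

    def alive_nodes(self):
        now = self._clock()
        # Current membership view: nodes heard from within the timeout.
        return {n for n, t in self._last_seen.items()
                if now - t <= self.timeout}
```

On a membership change, the rebalancing and redistribution steps above would be triggered from the shrinking of `alive_nodes()`.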
  • Due by April 6, 2025
    1/1 issues closed
  • Using the Raft consensus protocol, ensure availability by implementing the leader election process

    Due by March 23, 2025
    11/11 issues closed
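A core piece of Raft's leader election is the vote-granting rule: a node votes at most once per term, and only for candidates whose term is at least as new as its own. A minimal sketch of that rule, with names that are illustrative rather than the project's actual types (log-recency checks are omitted for brevity):

```python
# Highly simplified sketch of Raft's RequestVote handling:
# one vote per term, stale terms rejected, newer terms adopted.

class RaftNode:
    def __init__(self):
        self.current_term = 0
        self.voted_for = None

    def handle_request_vote(self, candidate_id: str, candidate_term: int) -> bool:
        if candidate_term < self.current_term:
            return False  # stale candidate: reject
        if candidate_term > self.current_term:
            # Newer term observed: adopt it and clear our vote.
            self.current_term = candidate_term
            self.voted_for = None
        if self.voted_for in (None, candidate_id):
            self.voted_for = candidate_id
            return True
        return False  # already voted for someone else this term
```

A candidate that collects votes from a majority of nodes for its term becomes leader; randomized election timeouts keep split votes rare.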
  • Due by May 6, 2025
    30/30 issues closed
  • Replication requires the following implementations:

    - node-to-node communication protocol setup
    - propagation strategy
    - failure detection algorithm

    Due by March 28, 2025
    39/39 issues closed
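The propagation strategy can be sketched as primary-to-replica fan-out. This is a minimal sketch under assumed names (`Primary`, `Replica`); it propagates synchronously for brevity, where a real implementation would do so asynchronously and integrate the failure detection algorithm above:

```python
# Hypothetical write-propagation sketch: the primary commits locally,
# then fans the write out to every replica.

class Replica:
    """Follower node that applies writes propagated by the primary."""
    def __init__(self):
        self.store = {}

    def apply(self, key, value):
        self.store[key] = value


class Primary:
    """Leader node: commits locally first, then propagates."""
    def __init__(self, replicas):
        self.store = {}
        self.replicas = replicas  # objects exposing apply(key, value)

    def set(self, key, value):
        self.store[key] = value   # commit locally first
        for r in self.replicas:   # then fan out (synchronous for brevity)
            r.apply(key, value)
```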