Commit a44219c

LRU Cache (Least Recently Used)
Signed-off-by: https://github.com/Someshdiwan <[email protected]>
1 parent caa4482 commit a44219c

4 files changed, +383 −0 lines changed
@@ -0,0 +1,117 @@
Cache and LRU Cache — Theory

1. What is a Cache?
-------------------
- A **cache** is a temporary storage layer that holds frequently accessed data so it can be retrieved faster in the future.
- Goal: **reduce latency** (the time to access data) and **improve performance** by avoiding repeated computation or slow I/O operations.

💡 Examples of caches:
- Browser cache → stores images, CSS, and JavaScript so websites load faster.
- CPU cache → keeps recently used instructions/data close to the processor.
- Database cache → stores query results for faster retrieval.

---

2. Why is Caching Needed?
-------------------------
- Accessing **RAM** is faster than accessing **disk**.
- Accessing the **CPU cache (L1/L2)** is faster than accessing **RAM**.
- A cache narrows the speed gap between **fast components** (CPU) and **slow components** (disk/network).

---

3. What is an LRU Cache?
------------------------
- **LRU = Least Recently Used**.
- A cache replacement policy that decides which entry to remove when the cache is full.
- Idea:
  - Keep the **most recently used** data in the cache.
  - Evict the **least recently used** data when space is needed.

---

4. Origin and History
---------------------
- LRU was first studied in **paging and memory-management research** in the 1960s–1970s.
- It became popular in **operating systems (virtual memory)** for deciding which memory page to replace when physical RAM was full.
- It was later adopted in **databases, web systems, and CPU design**.

---

5. Why Use an LRU Cache?
------------------------
- Real-world workloads show **temporal locality**:
  - If data is accessed now, it is likely to be accessed again soon.
- LRU captures this principle: recently used = likely to be reused.
- It balances **hit rate (cache effectiveness)** against **implementation simplicity**.

---

6. How LRU Works (Conceptual)
-----------------------------
- The cache has a limited size.
- When a new item is inserted:
  - If the cache is not full → add it.
  - If the cache is full → evict the **least recently accessed** item first.
- Each access moves an item to the "top" of the usage order.
- Old, unused items drift to the bottom and are eventually evicted.

---
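The bookkeeping described above can be simulated with nothing more than an ordered list of keys (an illustrative sketch only; the class and method names are ours, and a real cache would also store values):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class UsageOrderDemo {
    static final int CAPACITY = 3;
    // Front of the deque = least recently used, back = most recently used.
    static Deque<String> usageOrder = new ArrayDeque<>();

    static void access(String key) {
        if (usageOrder.remove(key)) {
            usageOrder.addLast(key);            // already cached: move to the back
        } else {
            if (usageOrder.size() == CAPACITY) {
                // Cache full: evict the least recently used key from the front.
                usageOrder.removeFirst();
            }
            usageOrder.addLast(key);
        }
    }

    public static void main(String[] args) {
        access("A"); access("B"); access("C"); // order: A, B, C
        access("A");                           // order: B, C, A
        access("D");                           // full → evicts B; order: C, A, D
        System.out.println(usageOrder);        // prints [C, A, D]
    }
}
```

Note that the linear-scan `remove(key)` here is O(n); practical implementations reach O(1) by pairing a hash map with a doubly linked list, as described later in these notes.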
7. Applications of LRU Cache
----------------------------

### 🖥️ Computer Systems:
- **CPU Caches (L1, L2, L3):**
  - Store recently used instructions and data.
- **Operating Systems (Paging):**
  - When RAM is full, the OS evicts the least recently used memory pages.

### 🌐 Web and Internet:
- **Web Browsers:**
  - Store recently visited pages, images, and stylesheets.
- **Content Delivery Networks (CDNs):**
  - Cache popular content closer to users.

### 🗄️ Databases:
- **Query Caching:**
  - Store the results of frequently executed SQL queries.
- **Index Caching:**
  - Speed up database lookups.

### 📱 Mobile Apps:
- **In-app caches:**
  - E.g., store profile pictures, news feeds, or videos for quick reload.

### 🚗 Real-World Analogies:
- Your **wallet** acts like a cache:
  - You keep frequently used items (cash, cards) handy.
  - Rarely used items stay in a bigger "storage" (a locker or bank).
- A **refrigerator**:
  - Keeps items you use often (milk, eggs) close at hand.
  - Rarely used groceries go in a deep freezer (slower to access).

---

8. Advantages of LRU
--------------------
✔ Simple and intuitive.
✔ Good performance for workloads with temporal locality.
✔ Widely used in both hardware and software systems.

---

9. Limitations of LRU
---------------------
❌ Not always optimal → performs poorly on "sequential scans" (where every access is new and nothing is reused).
❌ Needs extra data structures (a linked list plus a hash map) to implement efficiently.
❌ Can be memory-intensive for large caches.

---

🔑 Summary
----------
- **Cache** = fast temporary storage for frequently used data.
- **LRU Cache** = removes the least recently used item when full.
- A foundational concept in **computer science, operating systems, and modern applications**.
- Used everywhere → from **CPUs** to **web apps** to **mobile apps**.
@@ -0,0 +1,121 @@
LRU Cache (Least Recently Used)

1. What is an LRU Cache?
------------------------
- **LRU = Least Recently Used**.
- It is a cache mechanism where:
  - When the cache is **full** and a new item must be added,
  - The **least recently accessed (used) item** is evicted (removed).
- This ensures that **frequently used items stay in memory** while older, unused items are removed.

Use cases:
- Web browsers (back/forward page cache).
- Operating systems (page-replacement algorithms).
- Databases (query cache).
- In-memory caches (such as Redis or Guava Cache).

---

2. Core Principles
------------------
- Every time you **access (get/put)** an element:
  - That element becomes the **most recently used (MRU)**.
  - It is moved to the **end (tail)** of the internal list.
- The **head** of the list always holds the **least recently used (LRU)** element.
- On eviction → the head element is removed.

---

3. Internal Data Structures
---------------------------
Most implementations (such as Java's `LinkedHashMap`) combine:

- A **HashMap**: for O(1) access (key → value lookup).
- A **Doubly Linked List**: to maintain the usage order (head = LRU, tail = MRU).

So:
- The HashMap gives **fast access**.
- The doubly linked list gives **fast eviction and reordering**.

---
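The HashMap + doubly-linked-list design can be sketched as follows (a minimal hand-rolled illustration with sentinel nodes, not production code; the class and method names are ours). The head side holds the LRU entry and the tail side the MRU entry, matching the convention above:

```java
import java.util.HashMap;

// Minimal LRU cache: HashMap for O(1) lookup, doubly linked list
// for O(1) usage-order updates. Head side = LRU, tail side = MRU.
class SimpleLruCache {
    private static class Node {
        int key, value;
        Node prev, next;
        Node(int key, int value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final HashMap<Integer, Node> map = new HashMap<>();
    private final Node head = new Node(0, 0); // sentinel on the LRU side
    private final Node tail = new Node(0, 0); // sentinel on the MRU side

    SimpleLruCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    int get(int key) {
        Node node = map.get(key);
        if (node == null) return -1;
        unlink(node);
        appendBeforeTail(node); // mark as most recently used
        return node.value;
    }

    void put(int key, int value) {
        Node node = map.get(key);
        if (node != null) {            // existing key: update and refresh recency
            node.value = value;
            unlink(node);
            appendBeforeTail(node);
            return;
        }
        if (map.size() == capacity) {  // full: evict from the head (LRU)
            Node lru = head.next;
            unlink(lru);
            map.remove(lru.key);
        }
        node = new Node(key, value);
        map.put(key, node);
        appendBeforeTail(node);
    }

    private void unlink(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void appendBeforeTail(Node node) {
        node.prev = tail.prev;
        node.next = tail;
        tail.prev.next = node;
        tail.prev = node;
    }
}
```

The ASCII flow in the next section traces exactly this behavior for a capacity-3 cache.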
4. ASCII Flow Example
---------------------

Capacity = 3

```
Start: [ ] (empty)
```

👉 Insert(1:A)
```
Order: 1:A
```

👉 Insert(2:B)
```
Order: 1:A → 2:B
```

👉 Insert(3:C)
```
Order: 1:A → 2:B → 3:C
```

👉 Access(1)
```
Move 1:A to the end (MRU)
Order: 2:B → 3:C → 1:A
```

👉 Insert(4:D) → cache full → evict eldest (2:B)
```
Order: 3:C → 1:A → 4:D
```

👉 Access(3)
```
Move 3:C to the end
Order: 1:A → 4:D → 3:C
```

👉 Insert(5:E) → cache full → evict eldest (1:A)
```
Order: 4:D → 3:C → 5:E
```

---

5. ASCII Diagram of Structure
-----------------------------

```
HashMap (key → node reference):
  4 → Node4
  3 → Node3
  5 → Node5

Doubly Linked List (usage order):
  Head(LRU) → Node4:D ↔ Node3:C ↔ Node5:E ← Tail(MRU)
```

- **Head = LRU** (first to be removed).
- **Tail = MRU** (most recently accessed).

---

6. Why LRU is Efficient
-----------------------
- **O(1)** lookup (thanks to the HashMap).
- **O(1)** update of the usage order (thanks to the doubly linked list).
- Automatically evicts the oldest entry when full → ideal for caches.

---

🔑 Recap
--------
- LRU Cache = keep the most recently used items, evict the least recently used.
- Implemented with a **HashMap + Doubly Linked List** (or `LinkedHashMap` in Java).
- **Head → oldest**, **Tail → newest**.
- Used heavily in caching systems, databases, and memory management.
@@ -0,0 +1,124 @@
LinkedHashMap as an LRU Cache

1. Background
-------------
- **LinkedHashMap** is a subclass of `HashMap` that maintains **insertion order** or **access order**.
- By default, it keeps keys in the order they were inserted.
- But if you enable `accessOrder = true`, the map reorders entries every time they are accessed (get/put).
- This behavior makes it ideal for implementing an **LRU (Least Recently Used) Cache**.

---

2. The Two Key Ingredients
--------------------------

### (a) Access Order
- One LinkedHashMap constructor has this signature:

```java
LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder)
```

- If `accessOrder = true`:
  - Every time you call `get(key)` or `put(key, value)`, that entry is **moved to the end** of the linked list.
  - The "eldest" entry (least recently accessed) stays at the **front**.

---
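The reordering can be observed directly (a small sketch; the demo class name is ours):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // The third constructor argument enables access order
        // instead of the default insertion order.
        Map<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("A", 1);
        map.put("B", 2);
        map.put("C", 3);

        map.get("A"); // A becomes the most recently used → moves to the end

        System.out.println(map.keySet()); // prints [B, C, A]
    }
}
```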
27+
28+
### (b) removeEldestEntry()
29+
- LinkedHashMap provides a method you can override:
30+
31+
```java
32+
protected boolean removeEldestEntry(Map.Entry<K,V> eldest)
33+
```
34+
35+
- By default → always returns `false` (no auto-removal).
36+
- But if you override it to return `true` when size exceeds capacity, the **oldest (least recently used)** entry is automatically removed.
37+
38+
---
39+
40+
3. How LRU Works with LinkedHashMap
41+
-----------------------------------
42+
43+
Imagine a cache with capacity = 3, accessOrder = true:
44+
45+
```
46+
Put(A) → [A]
47+
Put(B) → [A, B]
48+
Put(C) → [A, B, C]
49+
Get(A) → moves A to end → [B, C, A]
50+
Put(D) → capacity exceeded → eldest B removed → [C, A, D]
51+
```
52+
53+
👉 The **eldest (front)** is always the least recently used.
54+
55+
---
56+
57+
4. ASCII Flow Diagram
58+
----------------------
59+
60+
### Initial State:
61+
```
62+
[ A ] → [ B ] → [ C ] (accessOrder = true)
63+
Front (eldest) End (most recent)
64+
```
65+
66+
### After get(A):
67+
```
68+
[ B ] → [ C ] → [ A ]
69+
```
70+
71+
### After put(D) (capacity 3):
72+
```
73+
[ C ] → [ A ] → [ D ]
74+
(B removed as eldest)
75+
```
76+
77+
---
78+
79+
5. Why is This Essentially an LRU Cache?
80+
----------------------------------------
81+
- LRU = "remove the least recently used item when adding new items beyond capacity."
82+
- LinkedHashMap does this automatically:
83+
1. **Tracks recency of use** with `accessOrder = true`.
84+
2. **Evicts oldest** entry using overridden `removeEldestEntry()`.
85+
86+
---
87+
88+
6. Typical Example (LRU using LinkedHashMap)
89+
--------------------------------------------
90+
91+
```java
92+
class LRUCache<K, V> extends LinkedHashMap<K, V> {
93+
private final int capacity;
94+
95+
public LRUCache(int capacity) {
96+
super(capacity, 0.75f, true); // accessOrder = true
97+
this.capacity = capacity;
98+
}
99+
100+
@Override
101+
protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {
102+
return size() > capacity; // remove eldest if size exceeds capacity
103+
}
104+
}
105+
```
106+
107+
👉 This class is a **ready-to-use LRU Cache**.
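It is used like any other `Map`. The sketch below repeats the class with explicit imports so it compiles on its own (the demo class name is ours):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRUCache(int capacity) {
        super(capacity, 0.75f, true); // accessOrder = true
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict eldest once over capacity
    }
}

public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache<String, Integer> cache = new LRUCache<>(3);
        cache.put("A", 1);
        cache.put("B", 2);
        cache.put("C", 3);
        cache.get("A");      // A becomes the most recently used
        cache.put("D", 4);   // B (eldest) is evicted automatically

        System.out.println(cache.keySet()); // prints [C, A, D]
    }
}
```

Note that, like `LinkedHashMap` itself, this cache is not thread-safe; wrap it with `Collections.synchronizedMap` for concurrent use.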
---

7. Real-world Applications
--------------------------
- Browser tab history (the least recently opened tab is closed first).
- Database query caching.
- File system caches.
- Memory-sensitive caches in web servers and frameworks.

---

🔑 Summary
----------
- **LinkedHashMap with `accessOrder = true`** = maintains recency of access.
- **removeEldestEntry()** = automatically removes the least recently used entry.
- Together, they provide a **built-in, easy LRU Cache implementation** with no custom data structures needed.
@@ -0,0 +1,21 @@
📊 LinkedHashMap vs. LinkedHashMap as an LRU Cache
==================================================

| Feature | Plain LinkedHashMap | LinkedHashMap as LRU Cache |
|--------------------------------|------------------------------------------------|--------------------------------------------------|
| **Ordering Mode** | Insertion order by default | Access order (`accessOrder = true`) |
| **Reordering on Access** | ❌ No reordering (keys stay in insertion order) | ✅ Keys move to the end when accessed/updated |
| **Eviction Policy** | ❌ None (entries remain until removed manually) | ✅ `removeEldestEntry()` can evict the least recent |
| **removeEldestEntry()** | Default → always `false` | Overridden → returns `true` if size > capacity |
| **Use Case** | When you just need predictable iteration order | When you need an **LRU cache** |
| **Example Behavior** | Put(A), Put(B), Put(C) → [A, B, C] | Put(A), Put(B), Put(C), Get(A), Put(D) → [C, A, D] |
| **Memory Management** | No auto-cleanup | Oldest (least recently used) entries auto-removed |
| **Performance** | O(1) for get/put | O(1) for get/put, plus an O(1) eviction check |

---

✅ **Key Insight:**
- **Plain LinkedHashMap** = maintains *insertion order*.
- **LinkedHashMap with `accessOrder=true` + `removeEldestEntry()`** = behaves like an **LRU Cache**.

👉 That’s why **LinkedHashMap is the backbone of many Java cache implementations**.
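The "Example Behavior" row can be checked side by side (a small sketch; the demo class name is ours):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderComparisonDemo {
    public static void main(String[] args) {
        // Plain LinkedHashMap: insertion order, no eviction.
        Map<String, Integer> plain = new LinkedHashMap<>();
        plain.put("A", 1);
        plain.put("B", 2);
        plain.put("C", 3);
        plain.get("A"); // no effect on ordering
        System.out.println(plain.keySet()); // prints [A, B, C]

        // Access-ordered map with eviction at capacity 3 → an LRU cache.
        final int capacity = 3;
        Map<String, Integer> lru =
            new LinkedHashMap<String, Integer>(capacity, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                    return size() > capacity;
                }
            };
        lru.put("A", 1);
        lru.put("B", 2);
        lru.put("C", 3);
        lru.get("A");    // A becomes the most recently used
        lru.put("D", 4); // evicts B, the eldest
        System.out.println(lru.keySet()); // prints [C, A, D]
    }
}
```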
