|
85 | 85 | * removed from the cache structure. Eviction only nulls the data payload. Loading |
86 | 86 | * restores the data into the existing container, preserving the original {@code MatrixIndexes}. |
87 | 87 | * |
88 | | - * <h3>5. Concurrency Model (Lock-Striping)</h3> |
| 88 | + * <h3>5. Concurrency Model (Fine-Grained Locking)</h3> |
89 | 89 | * <ul> |
90 | | - * <li><b>Global Lock:</b> A coarse-grained lock guards the cache structure |
91 | | - * (LinkedHashMap) for insertions and deletions.</li> |
92 | | - * <li><b>Per-Block Locks:</b> Each cache entry has an independent {@code ReentrantLock}. |
93 | | - * This allows a reader to load Block A from disk while the evictor writes |
94 | | - * Block B to disk simultaneously.</li> |
95 | | - * <li><b>Wait/Notify:</b> Readers attempting to access a block in the {@code EVICTING} |
96 | | - * state will automatically block until the state transitions to {@code COLD}, |
97 | | - * preventing race conditions.</li> |
| 90 | + * <li><b>Global Structure Lock:</b> A coarse-grained lock ({@code _cacheLock}) guards |
| 91 | + * the {@code LinkedHashMap} structure against concurrent insertions, deletions, |
| 92 | + * and iteration during eviction selection.</li> |
| 93 | + * |
| 94 | + * <li><b>Per-Block Locks:</b> Each {@code BlockEntry} owns an independent |
| 95 | + * {@code ReentrantLock}. This decouples I/O operations, allowing a reader to load |
| 96 | + * "Block A" from disk while the evictor simultaneously writes "Block B" to disk, |
| 97 | + * maximizing throughput.</li> |
| 98 | + * |
| 99 | + * <li><b>Condition Queues:</b> To handle read-write races, state transitions |
| 100 | + * are performed while holding the per-block lock. If a reader finds a block in |
| 101 | + * the {@code EVICTING} state, it waits on the entry's {@code Condition} variable |
| 102 | + * until the writer signals that the block is safely {@code COLD} (persisted).</li> |
98 | 103 | * </ul> |
99 | 104 | */ |
100 | 105 |
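The locking scheme described in the added comment can be sketched as below. This is a minimal illustration, not the actual implementation: the class name `BlockCacheSketch`, the members `stateChanged`, `put`, `loadFromDisk`, and `writeToDisk`, and the use of a plain `byte[]` payload are all assumptions; only `_cacheLock`, `BlockEntry`, the `ReentrantLock`/`Condition` pairing, and the `EVICTING`/`COLD` states come from the comment itself.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BlockCacheSketch {
    enum State { HOT, EVICTING, COLD }

    // Hypothetical stand-in for the real BlockEntry; each entry owns its
    // own lock and condition, decoupling I/O on different blocks.
    static final class BlockEntry {
        final ReentrantLock lock = new ReentrantLock();
        final Condition stateChanged = lock.newCondition();
        volatile State state = State.HOT;
        byte[] data; // payload only; eviction nulls this, never the entry
    }

    private final Object _cacheLock = new Object(); // global structure lock
    private final Map<Long, BlockEntry> _cache = new LinkedHashMap<>();

    void put(long id, BlockEntry e) {
        synchronized (_cacheLock) { _cache.put(id, e); } // structural change only
    }

    /** Reader path: wait out an in-flight eviction, then reload if COLD. */
    byte[] get(long id) throws InterruptedException {
        BlockEntry e;
        synchronized (_cacheLock) {          // guard the map structure only
            e = _cache.get(id);
        }
        if (e == null)
            return null;
        e.lock.lock();                       // per-block lock: independent of other blocks
        try {
            while (e.state == State.EVICTING)
                e.stateChanged.await();      // block until writer signals COLD
            if (e.state == State.COLD)
                e.data = loadFromDisk(id);   // restore payload into existing container
            e.state = State.HOT;
            return e.data;
        } finally {
            e.lock.unlock();
        }
    }

    /** Evictor path: persist the payload, then wake any waiting readers. */
    void evict(long id, BlockEntry e) {
        e.lock.lock();                       // transitions happen under the entry lock
        try {
            e.state = State.EVICTING;
            writeToDisk(id, e.data);
            e.data = null;                   // null the payload; keep the entry
            e.state = State.COLD;
            e.stateChanged.signalAll();      // readers in get() recheck the state
        } finally {
            e.lock.unlock();
        }
    }

    // Stub I/O; a real cache would serialize the block here.
    byte[] loadFromDisk(long id) { return new byte[] { 1 }; }
    void writeToDisk(long id, byte[] d) { }
}
```

The key point the sketch shows: `await()` and `signalAll()` both require holding the entry's own lock, so a reader can never observe a half-finished eviction, while eviction of one block never blocks loads of another.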
|
|