49 | 49 | * Eviction Manager for the Out-Of-Core (OOC) stream cache. |
50 | 50 | * <p> |
51 | 51 | * This manager implements a high-performance, thread-safe buffer pool designed |
52 | | - * to handle intermediate results that exceed available heap memory. It builds on |
| 52 | + * to handle intermediate results that exceed available heap memory. It employs |
53 | 53 | * a <b>partitioned eviction</b> strategy to maximize disk throughput and a |
54 | 54 | * <b>lock-striped</b> concurrency model to minimize thread contention. |
55 | 55 | * |
56 | | - * <h3>1. Purpose</h3> |
| 56 | + * <h2>1. Purpose</h2> |
57 | 57 | * Provides a bounded cache for {@code MatrixBlock}s produced and consumed by OOC |
58 | 58 | * streaming operators (e.g., {@code tsmm}, {@code ba+*}). When memory pressure |
59 | 59 | * exceeds a configured limit, blocks are transparently evicted to disk and restored |
60 | | - * on demand. |
| 60 | + * on demand, allowing execution of operations larger than RAM. |
61 | 61 | * |
62 | | - * <h3>2. Lifecycle Management</h3> |
| 62 | + * <h2>2. Lifecycle Management</h2> |
63 | 63 | * Blocks transition atomically through three states to ensure data consistency: |
64 | 64 | * <ul> |
65 | 65 | * <li><b>HOT:</b> The block is pinned in the JVM heap ({@code value != null}).</li> |
|
69 | 69 | * to free memory, but the container (metadata) remains in the cache map.</li> |
70 | 70 | * </ul> |
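The three-state lifecycle above can be sketched as a small state holder. This is a minimal illustration, not the real SystemDS implementation; the class, enum, and field names (`BlockEntry`, `BlockState`, `markEvicted`) are assumptions, and the middle transitional state (elided in the excerpt) is given a placeholder name here.

```java
import java.util.concurrent.locks.ReentrantLock;

/** Illustrative sketch of the block lifecycle (names are assumptions). */
enum BlockState { HOT, TRANSITIONING /* placeholder for the elided middle state */, EVICTED }

class BlockEntry {
    final ReentrantLock lock = new ReentrantLock(); // per-block lock (see Section 5)
    volatile BlockState state = BlockState.HOT;
    volatile Object value;   // data payload; non-null while HOT, nulled on eviction
    long fileOffset = -1;    // byte offset recorded once the block is spilled

    /** Eviction nulls the payload but keeps this container in the cache map. */
    void markEvicted(long offset) {
        fileOffset = offset;
        value = null;
        state = BlockState.EVICTED;
    }
}
```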
71 | 71 | * |
72 | | - * <h3>3. Eviction Strategy (Partitioned I/O)</h3> |
| 72 | + * <h2>3. Eviction Strategy (Partitioned I/O)</h2> |
73 | 73 | * To mitigate I/O thrashing caused by writing thousands of small blocks: |
74 | 74 | * <ul> |
75 | 75 | * <li>Eviction is <b>partition-based</b>: Groups of "HOT" blocks are gathered into |
|
79 | 79 | * evicted block, allowing random-access reloading.</li> |
80 | 80 | * </ul> |
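The partition-based write path described above can be sketched as follows: many small blocks are batched into one sequential file write, and each block's byte offset is recorded so it can later be reloaded with a single random-access read. This is a hedged illustration under assumed names (`PartitionWriter`, a simple length-prefixed layout), not the actual on-disk format.

```java
import java.io.*;
import java.util.*;

/** Sketch of partitioned eviction I/O: sequential batched writes,
 *  offset-indexed random-access reads. Names and layout are illustrative. */
class PartitionWriter {
    /** Writes all payloads back-to-back; returns the byte offset of each block. */
    static long[] writePartition(File f, List<byte[]> payloads) throws IOException {
        long[] offsets = new long[payloads.size()];
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(f)))) {
            long pos = 0;
            for (int i = 0; i < payloads.size(); i++) {
                offsets[i] = pos;
                byte[] b = payloads.get(i);
                out.writeInt(b.length); // length header enables random-access reads
                out.write(b);
                pos += 4 + b.length;
            }
        }
        return offsets;
    }

    /** Random-access reload of a single evicted block by its recorded offset. */
    static byte[] readBlock(File f, long offset) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
            raf.seek(offset);
            byte[] b = new byte[raf.readInt()];
            raf.readFully(b);
            return b;
        }
    }
}
```

The key point is that eviction pays one sequential write per partition instead of one seek per block, while reload still only touches the bytes of the requested block.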
81 | 81 | * |
82 | | - * <h3>4. Data Integrity (Re-hydration)</h3> |
| 82 | + * <h2>4. Data Integrity (Re-hydration)</h2> |
83 | 83 | * To prevent index corruption during serialization/deserialization cycles, this manager |
84 | 84 | * uses a "re-hydration" model. The {@code IndexedMatrixValue} container is <b>never</b> |
85 | 85 | * removed from the cache structure. Eviction only nulls the data payload. Loading |
86 | 86 | * restores the data into the existing container, preserving the original {@code MatrixIndexes}. |
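The re-hydration model can be shown with a minimal container sketch: the indexes live in a container object that is never replaced, and eviction/reload only swap the payload field. The class and field names below are illustrative stand-ins for `IndexedMatrixValue`/`MatrixIndexes`, not the real API.

```java
/** Sketch of re-hydration: the container (and its indexes) survives
 *  eviction; only the payload is nulled and later restored in place. */
class IndexedContainer {
    final long rowIndex, colIndex; // analogous to MatrixIndexes; never mutated
    Object payload;                // analogous to the MatrixBlock data

    IndexedContainer(long r, long c, Object p) {
        rowIndex = r; colIndex = c; payload = p;
    }

    void evict()             { payload = null; } // free heap, keep container
    void rehydrate(Object p) { payload = p; }    // restore into the SAME container
}
```

Because readers always hold a reference to the same container, the indexes can never go stale across a spill/reload cycle.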
87 | 87 | * |
88 | | - * <h3>5. Concurrency Model (Fine-Grained Locking)</h3> |
| 88 | + * <h2>5. Concurrency Model (Fine-Grained Locking)</h2> |
89 | 89 | * <ul> |
90 | 90 | * <li><b>Global Structure Lock:</b> A coarse-grained lock ({@code _cacheLock}) guards |
91 | 91 | * the {@code LinkedHashMap} structure against concurrent insertions, deletions, |
92 | 92 | * and iteration during eviction selection.</li> |
93 | 93 | * |
94 | 94 | * <li><b>Per-Block Locks:</b> Each {@code BlockEntry} owns an independent |
95 | 95 | * {@code ReentrantLock}. This decouples I/O operations, allowing a reader to load |
96 | | - * "Block A" from disk while the evictor simultaneously writes "Block B" to disk, |
| 96 | + * "Block A" from disk while the evictor writes "Block B" to disk simultaneously, |
97 | 97 | * maximizing throughput.</li> |
98 | 98 | * |
99 | 99 | * <li><b>Condition Queues:</b> To handle read-write races, the system uses atomic |
|
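The two-level locking scheme described in Section 5 can be sketched as follows: a short global critical section guards only the map structure, while long-running I/O runs under a per-entry `ReentrantLock`, so operations on different blocks proceed in parallel. This is a simplified stand-in (plain `HashMap`, assumed names), not the manager's actual code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

/** Sketch of the fine-grained locking model: global lock for structure,
 *  per-block locks for I/O. Names are illustrative. */
class StripedCache {
    static class Entry {
        final ReentrantLock lock = new ReentrantLock();
        Object value;
    }

    private final Object cacheLock = new Object();          // global structure lock
    private final Map<Long, Entry> cache = new HashMap<>(); // stand-in for LinkedHashMap

    Entry getOrCreate(long key) {
        synchronized (cacheLock) { // short critical section: map mutation only
            return cache.computeIfAbsent(key, k -> new Entry());
        }
    }

    /** Long-running I/O runs under the per-block lock, never the global one,
     *  so loading block A and evicting block B can overlap. */
    void withBlockLock(long key, Runnable io) {
        Entry e = getOrCreate(key);
        e.lock.lock();
        try { io.run(); } finally { e.lock.unlock(); }
    }
}
```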