@@ -51,7 +51,7 @@ The dual-store architecture allows you to use different store types for source a
such as a remote store for source data and a local store for persistent caching.

Performance Benefits
- -------------------
+ --------------------

The CacheStore provides significant performance improvements for repeated data access:

@@ -70,7 +70,8 @@ The CacheStore provides significant performance improvements for repeated data a
... _ = zarr_array_nocache[:]
>>> elapsed_nocache = time.time() - start
>>>
- >>> print(f"Speedup: {elapsed_nocache/elapsed_cache:.2f}x")
+ >>> # Cache provides speedup for repeated access
+ >>> speedup = elapsed_nocache / elapsed_cache  # doctest: +SKIP

Cache effectiveness is particularly pronounced with repeated access to the same data chunks.

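The read-through pattern behind that speedup can be sketched with plain dictionaries. This is a minimal illustration of the dual-store idea only; the class and attribute names here are assumptions for the sketch, not the actual CacheStore API.

```python
# Minimal sketch of the dual-store read path: check the cache store first,
# fall back to the source store on a miss, then populate the cache so the
# next read of the same key is served locally.
class ReadThroughCache:
    def __init__(self, source, cache):
        self.source = source  # authoritative store (e.g. remote)
        self.cache = cache    # fast local store
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:      # cache hit: served from the local store
            self.hits += 1
            return self.cache[key]
        self.misses += 1           # cache miss: fetch from the source
        value = self.source[key]
        self.cache[key] = value    # populate the cache for later reads
        return value

source = {"chunk/0.0": b"\x00" * 16}
store = ReadThroughCache(source, {})
store.get("chunk/0.0")  # first access: miss, fetched from source
store.get("chunk/0.0")  # second access: hit, served from cache
print(store.hits, store.misses)
```

Repeated reads of the same key touch the source store only once, which is why the benefit grows with repeated access to the same chunks.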
@@ -103,7 +104,7 @@ to the same chunk will be served from the local cache, providing dramatic speedu
The cache persists between sessions when using a LocalStore for the cache backend.

Cache Configuration
- ------------------
+ -------------------

The CacheStore can be configured with several parameters:

@@ -156,7 +157,7 @@ The CacheStore can be configured with several parameters:
... )

Cache Statistics
- ---------------
+ ----------------

The CacheStore provides statistics to monitor cache performance and state:

@@ -166,28 +167,38 @@ The CacheStore provides statistics to monitor cache performance and state:
>>>
>>> # Get comprehensive cache information
>>> info = cached_store.cache_info()
- >>> print(f"Cache store type: {info['cache_store_type']}")
- >>> print(f"Max age: {info['max_age_seconds']} seconds")
- >>> print(f"Max size: {info['max_size']} bytes")
- >>> print(f"Current size: {info['current_size']} bytes")
- >>> print(f"Tracked keys: {info['tracked_keys']}")
- >>> print(f"Cached keys: {info['cached_keys']}")
- >>> print(f"Cache set data: {info['cache_set_data']}")
+ >>> info['cache_store_type']  # doctest: +SKIP
+ 'MemoryStore'
+ >>> isinstance(info['max_age_seconds'], (int, str))
+ True
+ >>> isinstance(info['max_size'], (int, type(None)))
+ True
+ >>> info['current_size'] >= 0
+ True
+ >>> info['tracked_keys'] >= 0
+ True
+ >>> info['cached_keys'] >= 0
+ True
+ >>> isinstance(info['cache_set_data'], bool)
+ True

The `cache_info()` method returns a dictionary with detailed information about the cache state.

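The shape of such a statistics dictionary can be sketched as follows. The implementation is purely illustrative (the helper name and derivation of each value are assumptions); it only shows the kind of keys the doctest above checks for:

```python
# Illustrative sketch of a cache_info()-style statistics dictionary,
# computed here from a plain dict standing in for the cache store.
def cache_info(store, max_age_seconds="infinity", max_size=None):
    return {
        "cache_store_type": type(store).__name__,
        "max_age_seconds": max_age_seconds,             # int or "infinity"
        "max_size": max_size,                           # int or None (unbounded)
        "current_size": sum(len(v) for v in store.values()),
        "tracked_keys": len(store),
        "cached_keys": len(store),
        "cache_set_data": True,                         # whether writes are cached
    }

info = cache_info({"chunk/0.0": b"\x00" * 16})
print(info["current_size"], info["tracked_keys"])  # 16 1
```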
Cache Management
- ---------------
+ ----------------

The CacheStore provides methods for manual cache management:

>>> # Clear all cached data and tracking information
- >>> await cached_store.clear_cache()
+ >>> import asyncio
+ >>> asyncio.run(cached_store.clear_cache())  # doctest: +SKIP
>>>
- >>> # Check cache info after clearing
- >>> info = cached_store.cache_info()
- >>> print(f"Tracked keys after clear: {info['tracked_keys']}")  # Should be 0
- >>> print(f"Current size after clear: {info['current_size']}")  # Should be 0
+ >>> # Check cache info after clearing
+ >>> info = cached_store.cache_info()  # doctest: +SKIP
+ >>> info['tracked_keys'] == 0  # doctest: +SKIP
+ True
+ >>> info['current_size'] == 0  # doctest: +SKIP
+ True

The `clear_cache()` method is an async method that clears both the cache store
(if it supports the `clear` method) and all internal tracking data.
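Because `clear_cache()` is a coroutine, synchronous callers need an event loop to drive it; `asyncio.run` is the simplest way. A minimal sketch with a toy class (its names stand in for the CacheStore and are not the real API):

```python
import asyncio

# Toy stand-in for an async cache: clearing drops both the cached values
# and the internal tracking state, mirroring the behaviour described above.
class ToyCache:
    def __init__(self):
        self._data = {"a": b"1", "b": b"2"}   # cached values
        self._tracked = set(self._data)       # tracking metadata

    async def clear_cache(self):
        self._data.clear()
        self._tracked.clear()

cache = ToyCache()
asyncio.run(cache.clear_cache())  # drive the coroutine from sync code
print(len(cache._data), len(cache._tracked))  # 0 0
```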
@@ -249,7 +260,7 @@ The dual-store architecture provides flexibility in choosing the best combinatio
of source and cache stores for your specific use case.

Examples from Real Usage
- -----------------------
+ ------------------------

Here's a complete example demonstrating cache effectiveness:

@@ -270,24 +281,20 @@ Here's a complete example demonstrating cache effectiveness:
>>> zarr_array[:] = np.random.random((100, 100))
>>>
>>> # Demonstrate cache effectiveness with repeated access
- >>> print("First access (cache miss):")
>>> start = time.time()
- >>> data = zarr_array[20:30, 20:30]
+ >>> data = zarr_array[20:30, 20:30]  # First access (cache miss)
>>> first_access = time.time() - start
>>>
- >>> print("Second access (cache hit):")
>>> start = time.time()
- >>> data = zarr_array[20:30, 20:30]  # Same data should be cached
+ >>> data = zarr_array[20:30, 20:30]  # Second access (cache hit)
>>> second_access = time.time() - start
>>>
- >>> print(f"First access time: {first_access:.4f}s")
- >>> print(f"Second access time: {second_access:.4f}s")
- >>> print(f"Cache speedup: {first_access/second_access:.2f}x")
- >>>
>>> # Check cache statistics
>>> info = cached_store.cache_info()
- >>> print(f"Cached keys: {info['cached_keys']}")
- >>> print(f"Current cache size: {info['current_size']} bytes")
+ >>> info['cached_keys'] > 0  # Should have cached keys
+ True
+ >>> info['current_size'] > 0  # Should have cached data
+ True

This example shows how the CacheStore can significantly reduce access times for repeated
data reads, particularly important when working with remote data sources. The dual-store