Commit 351503f

DOC-5851 tutorial improved with TCE-style snippets and JSON calls
1 parent 544221a commit 351503f

12 files changed: +577 −789 lines

content/develop/use-cases/cache-aside/_index.md

Lines changed: 48 additions & 153 deletions
````diff
@@ -49,141 +49,67 @@ Cache-aside is ideal for:
 
 ## Basic Implementation
 
-Here's a simple cache-aside implementation in Python:
+Here's a simple cache-aside implementation in Python using Redis JSON:
 
-```python
-import redis
-import json
-
-# Connect to Redis
-r = redis.Redis(host='localhost', port=6379, decode_responses=True)
-
-def get_user(user_id):
-    """Get user data with cache-aside pattern."""
-    cache_key = f'user:{user_id}'
-
-    # Step 1: Check cache
-    cached_user = r.get(cache_key)
-    if cached_user:
-        print(f"Cache hit for {cache_key}")
-        return json.loads(cached_user)
-
-    # Step 2: Cache miss - fetch from database
-    print(f"Cache miss for {cache_key}")
-    user = fetch_from_database(user_id)  # Your database query
-
-    # Step 3: Store in cache with 1-hour TTL
-    r.setex(cache_key, 3600, json.dumps(user))
-
-    # Step 4: Return data
-    return user
-```
+{{< clients-example set="cache_aside_basic" step="connect" lang_filter="Python" />}}
+
+The `CacheAsideManager` class provides a convenient way to manage cache-aside operations:
+
+{{< clients-example set="cache_manager_class" step="init" lang_filter="Python" />}}
+
+**Key Differences with Redis JSON:**
+- No need for `json.dumps()` or `json.loads()` - Redis JSON handles serialization
+- Use `r.json().get()` and `r.json().set()` for native JSON operations
+- Cleaner, more efficient code with native JSON support
 
````
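The `clients-example` snippets referenced above live outside this diff, so as a rough sketch of the read path under discussion: a minimal in-memory stand-in replaces the Redis client so the sketch runs without a server; with redis-py you would call `r.json().set()` and `r.json().get()` on a real connection, and `fetch_from_database` is a hypothetical placeholder for your query.

```python
import time

# Minimal in-memory stand-in for a Redis JSON client, so this sketch runs
# without a server. With redis-py you would use r.json().set()/r.json().get().
class FakeJsonCache:
    def __init__(self):
        self._data = {}  # key -> (value, expiry timestamp or None)

    def set(self, key, value, ttl=None):
        expires = time.time() + ttl if ttl else None
        self._data[key] = (value, expires)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires = entry
        if expires is not None and time.time() > expires:
            del self._data[key]  # lazily drop expired entries
            return None
        return value

cache = FakeJsonCache()
db_calls = 0

def fetch_from_database(user_id):
    """Hypothetical data source standing in for your real database query."""
    global db_calls
    db_calls += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    """Cache-aside read: try the cache, fall back to the database on a miss."""
    cache_key = f"user:{user_id}"
    user = cache.get(cache_key)
    if user is not None:
        return user                       # cache hit: no manual serialization
    user = fetch_from_database(user_id)   # cache miss
    cache.set(cache_key, user, ttl=3600)  # store with a 1-hour TTL
    return user

get_user(42)
get_user(42)
print(db_calls)  # → 1: the second read is served from the cache
```

Storing the dict directly mirrors the Redis JSON point above: the client handles serialization, so no `json.dumps()`/`json.loads()` appears in the application code.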
````diff
 ## Cache Invalidation
 
 When data changes, you need to invalidate the cache to prevent stale data:
 
-```python
-def update_user(user_id, new_data):
-    """Update user and invalidate cache."""
-    # Update database
-    update_database(user_id, new_data)
-
-    # Invalidate cache
-    cache_key = f'user:{user_id}'
-    r.delete(cache_key)
-
-    # Next request will fetch fresh data
-```
+{{< clients-example set="cache_aside_basic" step="invalidate" lang_filter="Python" />}}
 
````
````diff
 ### Pattern-Based Invalidation
 
 Invalidate multiple related keys at once:
 
-```python
-def invalidate_user_cache(user_id):
-    """Invalidate all cache keys for a user."""
-    # Delete all keys matching pattern
-    pattern = f'user:{user_id}:*'
-    for key in r.scan_iter(match=pattern):
-        r.delete(key)
-```
+{{< clients-example set="cache_aside_utils" step="invalidate_pattern" lang_filter="Python" />}}
 
````
````diff
 ## TTL Management
 
-Set appropriate time-to-live values for different data types:
+Set appropriate time-to-live values for different data types using Redis JSON:
 
-```python
-# Short TTL for frequently changing data (5 minutes)
-r.setex('user:session:123', 300, session_data)
+{{< clients-example set="cache_aside_utils" step="set_ttl" lang_filter="Python" />}}
 
-# Medium TTL for user profiles (1 hour)
-r.setex('user:profile:456', 3600, profile_data)
+You can also retrieve and refresh TTL values:
 
-# Long TTL for reference data (24 hours)
-r.setex('product:catalog', 86400, catalog_data)
-```
+{{< clients-example set="cache_aside_utils" step="get_ttl" lang_filter="Python" />}}
+
+{{< clients-example set="cache_aside_utils" step="refresh_ttl" lang_filter="Python" />}}
 
````
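The three TTL tiers from the deleted example (5 minutes for sessions, 1 hour for profiles, 24 hours for reference data) can be centralized in a small policy table. This is an illustrative sketch with a hypothetical key scheme, not part of the tutorial's snippet set:

```python
# Illustrative TTL policy in seconds, keyed by the key's first segment.
# The tiers mirror the tutorial: sessions 5 min, profiles 1 h, catalog 24 h.
TTL_POLICY = {
    "session": 300,
    "profile": 3600,
    "catalog": 86400,
}
DEFAULT_TTL = 3600

def ttl_for(key):
    """Look up a TTL by key prefix, e.g. 'session:abc123' -> 300."""
    prefix = key.split(":", 1)[0]
    return TTL_POLICY.get(prefix, DEFAULT_TTL)

print(ttl_for("session:abc123"))   # → 300
print(ttl_for("profile:456"))      # → 3600
print(ttl_for("inventory:sku-9"))  # → 3600 (falls back to the default)
```

A single lookup function keeps TTL decisions consistent across every call site instead of scattering magic numbers through the code.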
````diff
 ## Error Handling
 
-Always handle cache failures gracefully:
+Always handle cache failures gracefully using Redis JSON. The `CacheAsideManager` class includes built-in error handling:
 
-```python
-def get_user_safe(user_id):
-    """Get user with fallback to database on cache failure."""
-    try:
-        cache_key = f'user:{user_id}'
-        cached_user = r.get(cache_key)
-        if cached_user:
-            return json.loads(cached_user)
-    except redis.ConnectionError:
-        print("Redis unavailable, falling back to database")
-    except Exception as e:
-        print(f"Cache error: {e}")
-
-    # Fallback: fetch from database
-    return fetch_from_database(user_id)
-```
+{{< clients-example set="cache_manager_class" step="get_method" lang_filter="Python" />}}
+
+This implementation automatically falls back to the data source if Redis is unavailable.
 
````
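The gist of that fallback behavior, sketched against a toy client whose `get()` can raise (the real `CacheAsideManager` lives in the snippet set; `FlakyCache` and `fetch_from_database` here are stand-ins for demonstration only):

```python
class FlakyCache:
    """Toy stand-in for a Redis client that may be unreachable."""
    def __init__(self, available=True):
        self.available = available
        self._data = {}

    def get(self, key):
        if not self.available:
            raise ConnectionError("Redis unavailable")
        return self._data.get(key)

def fetch_from_database(user_id):
    return {"id": user_id}  # hypothetical data source

def get_user_safe(cache, user_id):
    """Serve from cache when possible; degrade to the database on a cache error."""
    try:
        cached = cache.get(f"user:{user_id}")
        if cached is not None:
            return cached
    except ConnectionError:
        pass  # cache outage: fall through to the database
    return fetch_from_database(user_id)

# The request still succeeds even though the cache is down.
print(get_user_safe(FlakyCache(available=False), 7))  # → {'id': 7}
```

The key design point: a cache outage degrades performance but never availability, because the database remains the source of truth.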
````diff
 ## Performance Metrics
 
-Monitor cache effectiveness with hit/miss ratios:
+Monitor cache effectiveness with hit/miss ratios. The `CacheAsideManager` class tracks these metrics automatically:
 
 ```python
-class CacheMetrics:
-    def __init__(self):
-        self.hits = 0
-        self.misses = 0
-
-    def record_hit(self):
-        self.hits += 1
-
-    def record_miss(self):
-        self.misses += 1
-
-    def hit_ratio(self):
-        total = self.hits + self.misses
-        return self.hits / total if total > 0 else 0
-
-metrics = CacheMetrics()
-
-def get_user_tracked(user_id):
-    cache_key = f'user:{user_id}'
-    cached_user = r.get(cache_key)
-
-    if cached_user:
-        metrics.record_hit()
-        return json.loads(cached_user)
-
-    metrics.record_miss()
-    user = fetch_from_database(user_id)
-    r.setex(cache_key, 3600, json.dumps(user))
-    return user
-
-# Check performance
-print(f"Hit ratio: {metrics.hit_ratio():.2%}")
+# After using cache_manager.get() multiple times
+hit_ratio = cache_manager.get_hit_ratio()
+print(f"Hit ratio: {hit_ratio:.2%}")
+print(f"Hits: {cache_manager.hits}, Misses: {cache_manager.misses}")
 ```
 
+Aim for an 80%+ hit ratio for optimal performance. If your hit ratio is lower, consider:
+- Increasing TTL values for stable data
+- Pre-warming the cache with frequently accessed data
+- Analyzing access patterns to identify optimization opportunities
+
````
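Of those options, pre-warming can be as simple as looping over known hot keys at startup. A sketch under stated assumptions: plain dicts stand in for the cache and the database, and `prewarm` is a hypothetical helper, not part of the tutorial's snippet set.

```python
def prewarm(cache, db, hot_keys):
    """Load known-hot keys into the cache up front so first reads are hits."""
    loaded = 0
    for key in hot_keys:
        value = db.get(key)
        if value is not None:       # skip keys missing from the source
            cache[key] = value      # stand-in for a Redis JSON set with a TTL
            loaded += 1
    return loaded

db = {"user:1": {"id": 1}, "user:2": {"id": 2}}
cache = {}
print(prewarm(cache, db, ["user:1", "user:2", "user:404"]))  # → 2
```

Running this at deploy time (or on a schedule) converts the first wave of traffic from misses into hits, which directly lifts the measured hit ratio.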
````diff
 ## Best Practices
 
 1. **Use appropriate TTLs** - Balance freshness vs. cache efficiency
````
````diff
@@ -197,56 +123,25 @@ print(f"Hit ratio: {metrics.hit_ratio():.2%}")
 ## Common Pitfalls
 
 ### Cache Stampede
-When a popular cache entry expires, many concurrent requests hit the database:
+When a popular cache entry expires, many concurrent requests hit the database simultaneously, overwhelming it. Solutions include:
 
-```python
-# Problem: Multiple requests fetch same data simultaneously
-# Solution: Use locks or probabilistic early expiration
-def get_user_with_lock(user_id):
-    cache_key = f'user:{user_id}'
-    lock_key = f'{cache_key}:lock'
-
-    cached_user = r.get(cache_key)
-    if cached_user:
-        return json.loads(cached_user)
-
-    # Try to acquire lock
-    if r.set(lock_key, '1', nx=True, ex=10):
-        try:
-            user = fetch_from_database(user_id)
-            r.setex(cache_key, 3600, json.dumps(user))
-            return user
-        finally:
-            r.delete(lock_key)
-    else:
-        # Wait for lock holder to populate cache
-        time.sleep(0.1)
-        return get_user_with_lock(user_id)
-```
+- **Lock-based approach**: Use Redis `SET` with `NX` (only if not exists) to create a lock. Only the lock holder fetches from the database; others wait.
+- **Probabilistic early expiration**: Refresh cache entries before they expire based on a probability calculation.
+- **Stale-while-revalidate**: Serve stale data while refreshing in the background.
 
````
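Of those, probabilistic early expiration is the least obvious. One common formulation is the XFetch-style check (an assumption for illustration; the tutorial's snippet set does not necessarily implement it this way), which refreshes with a probability that rises as expiry approaches:

```python
import math
import random

def should_refresh_early(ttl_remaining, recompute_time=0.1, beta=1.0):
    """XFetch-style check: refresh with a probability that rises as expiry
    nears, so concurrent clients don't all miss at the same instant.
    ttl_remaining and recompute_time are in seconds; beta > 1 refreshes earlier."""
    # log of a uniform draw in (0, 1] is <= 0, so the right-hand side is >= 0.
    return ttl_remaining <= -recompute_time * beta * math.log(1.0 - random.random())

# Far from expiry the refresh almost never fires; at expiry it always does.
print(should_refresh_early(0.0))  # → True
```

Because each client rolls independently, only an occasional request pays the recompute cost early, and the herd never converges on a single expiry instant.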
````diff
 ### Null Value Caching
-Cache null values to prevent repeated database queries:
+When a key doesn't exist in the database, cache a null marker to prevent repeated database queries:
 
-```python
-def get_user_with_null_cache(user_id):
-    cache_key = f'user:{user_id}'
-    cached = r.get(cache_key)
-
-    if cached == 'NULL':
-        return None  # User doesn't exist
-
-    if cached:
-        return json.loads(cached)
-
-    user = fetch_from_database(user_id)
-
-    if user is None:
-        r.setex(cache_key, 300, 'NULL')  # Cache null for 5 minutes
-    else:
-        r.setex(cache_key, 3600, json.dumps(user))
-
-    return user
-```
+- Store a special marker (e.g., `"NULL"`) in Redis with a short TTL (e.g., 5 minutes)
+- Check for this marker before querying the database
+- This prevents "cache misses" from repeatedly hitting the database for non-existent keys
+
````
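Those three bullets in miniature; plain dicts stand in for Redis and the database so the sketch is runnable, and a real implementation would give the marker the short TTL noted above:

```python
NULL_MARKER = "NULL"            # sentinel meaning "absent from the database"

cache = {}                      # stand-in for Redis (no TTL in this sketch)
db = {"1": {"id": "1"}}         # hypothetical database contents
db_queries = 0

def get_user(user_id):
    global db_queries
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached == NULL_MARKER:
        return None             # known-missing: skip the database entirely
    if cached is not None:
        return cached
    db_queries += 1             # genuine miss: ask the database
    user = db.get(user_id)
    cache[key] = user if user is not None else NULL_MARKER
    return user

get_user("999")                 # first lookup hits the database...
get_user("999")                 # ...the second short-circuits on the marker
print(db_queries)  # → 1
```

The short TTL on the marker matters in practice: it bounds how long a newly created row can be masked by a stale "does not exist" entry.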
````diff
+### Other Common Issues
+
+- **Inconsistent TTLs**: Different data types should have different TTLs based on how frequently they change
+- **Missing error handling**: Always handle Redis connection failures gracefully
+- **Inefficient invalidation**: Use pattern-based invalidation for related keys instead of individual deletes
+- **No monitoring**: Track hit ratios and cache performance metrics to identify optimization opportunities
 
 ## Next Steps
````