Commit 32e2886

@GitHK review: doc and rename
1 parent 36e0ce5 commit 32e2886

4 files changed: 57 additions & 71 deletions

services/api-server/src/simcore_service_api_server/repository/_base.py

Lines changed: 52 additions & 1 deletion
@@ -3,7 +3,58 @@
 
 from sqlalchemy.ext.asyncio import AsyncEngine
 
-DB_CACHE_TTL_SECONDS: Final = 120  # 2 minutes
+## Memory Usage of aiocache for Session Authentication Caching
+AUTH_SESSION_TTL_SECONDS: Final = 120  # 2 minutes
+"""
+### Memory Usage Characteristics
+
+**aiocache** uses in-memory storage by default, which means:
+
+1. **Linear memory growth**: Each cached item consumes RAM proportional to the serialized size of the cached data
+2. **No automatic memory limits**: By default, there's no built-in maximum memory cap
+3. **TTL-based cleanup**: Items are only removed when they expire (TTL) or are explicitly deleted
+
+**Key limitations:**
+- **MEMORY backend**: No built-in memory limits or LRU eviction
+- **Maximum capacity**: Limited only by available system RAM
+- **Risk**: Memory leaks if TTL is too long or cache keys grow unbounded
+
+### Recommendations for Your Use Case
+
+**For authentication caching:**
+
+1. **Low memory impact**: User authentication data is typically small (user_id, email, product_name)
+2. **Short TTL**: Your 120s TTL helps prevent unbounded growth
+3. **Bounded key space**: API keys are finite, not user-generated
+
+**Memory estimation:**
+```
+Per cache entry ≈ 200-500 bytes (user data + overhead)
+1000 active users ≈ 500KB
+10000 active users ≈ 5MB
+```
+
+### Alternative Approaches
+
+**If memory becomes a concern:**
+
+1. **Redis backend**:
+```python
+cache = Cache(Cache.REDIS, endpoint="redis://localhost", ...)
+```
+
+2. **Custom eviction policy**: Implement LRU manually or use shorter TTL
+
+3. **Monitoring**: Track cache size in production:
+```python
+# Check cache statistics
+cache_stats = await cache.get_stats()
+```
+
+**Verdict**:
+For authentication use case with reasonable user counts (<10K active), memory impact should be minimal with your current TTL configuration.
+"""
 
 
 @dataclass
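The docstring moved into `_base.py` estimates a few hundred bytes per cached entry and recommends tracking cache size. A minimal sketch of how those numbers could be sanity-checked against aiocache's MEMORY backend follows; it is not part of this commit, the key and entry values are invented, and the `_cache` peek relies on a private attribute of the in-memory backend.

```python
# Editor's sketch, not part of this commit: rough cross-check of the per-entry
# memory estimate for the MEMORY backend. Key name and entry fields are invented.
import asyncio
import sys

from aiocache import Cache


async def estimate_auth_cache_footprint() -> None:
    cache = Cache(Cache.MEMORY, namespace="api_auth")

    # A cached auth record is a handful of small scalar fields.
    entry = {"user_id": 42, "email": "user@example.com", "product_name": "osparc"}
    await cache.set("api_auth:some-api-key", entry, ttl=120)  # same TTL as AUTH_SESSION_TTL_SECONDS

    assert await cache.get("api_auth:some-api-key") == entry

    # Shallow size of the payload only; the real per-entry cost also includes the
    # key string, dict overhead and the TTL handler, hence the ~200-500 byte figure.
    print("payload bytes:", sys.getsizeof(entry))

    # The in-memory backend keeps entries in a private dict, so its length is a
    # cheap proxy for "cached sessions" (private attribute, may change between versions).
    print("entries:", len(cache._cache))


asyncio.run(estimate_auth_cache_footprint())
```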

services/api-server/src/simcore_service_api_server/repository/api_keys.py

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@
 from simcore_postgres_database.utils_repos import pass_or_acquire_connection
 from sqlalchemy.ext.asyncio import AsyncConnection
 
-from ._base import DB_CACHE_TTL_SECONDS, BaseRepository
+from ._base import AUTH_SESSION_TTL_SECONDS, BaseRepository
 
 _logger = logging.getLogger(__name__)
 
@@ -23,7 +23,7 @@ class ApiKeysRepository(BaseRepository):
     """Auth access"""
 
     @cached(
-        ttl=DB_CACHE_TTL_SECONDS,
+        ttl=AUTH_SESSION_TTL_SECONDS,
         key_builder=lambda *_args, **kwargs: f"api_auth:{kwargs['api_key']}",
         namespace=__name__,
         noself=True,
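Because the decorator keys entries as `api_auth:{api_key}` (and `noself=True` keeps the repository instance out of the key), a revoked API key would keep authenticating from the cache for up to 120 s unless its entry is dropped explicitly. A hedged sketch of such an invalidation follows; the method name `get_user_from_api_key` is invented for illustration, and it assumes aiocache's `cached` decorator exposes the underlying cache as a `.cache` attribute on the wrapped function.

```python
# Editor's sketch, not part of this commit: evict a cached auth entry before the
# 120 s TTL runs out, e.g. right after an API key is revoked.
# Assumptions: the decorated repository method is named `get_user_from_api_key`
# (the real name is not visible in this diff) and aiocache's `cached` decorator
# attaches the underlying cache instance to the wrapper as `.cache`.
from simcore_service_api_server.repository.api_keys import ApiKeysRepository


async def forget_api_key(repo: ApiKeysRepository, api_key: str) -> None:
    cache = repo.get_user_from_api_key.cache  # underlying aiocache instance (assumed attribute)
    # The key must mirror the decorator's key_builder to hit the same entry.
    await cache.delete(f"api_auth:{api_key}")
```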

services/api-server/src/simcore_service_api_server/repository/users.py

Lines changed: 2 additions & 2 deletions
@@ -8,12 +8,12 @@
 from simcore_postgres_database.utils_repos import pass_or_acquire_connection
 from sqlalchemy.ext.asyncio import AsyncConnection
 
-from ._base import DB_CACHE_TTL_SECONDS, BaseRepository
+from ._base import AUTH_SESSION_TTL_SECONDS, BaseRepository
 
 
 class UsersRepository(BaseRepository):
     @cached(
-        ttl=DB_CACHE_TTL_SECONDS,
+        ttl=AUTH_SESSION_TTL_SECONDS,
         key_builder=lambda *_args, **kwargs: f"user_email:{kwargs['user_id']}",
         cache=Cache.MEMORY,
         namespace=__name__,
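One consequence of this `key_builder` worth noting: it reads `kwargs["user_id"]`, so the cached method has to be called with `user_id` as a keyword argument. The tiny sketch below (editor's illustration, not part of the commit) shows that behaviour of the key builder in isolation.

```python
# Editor's illustration: the key_builder above only looks at keyword arguments,
# so a positional user_id never reaches it and building the key raises KeyError.
key_builder = lambda *_args, **kwargs: f"user_email:{kwargs['user_id']}"

assert key_builder(object(), user_id=7) == "user_email:7"  # keyword call: key is built

try:
    key_builder(object(), 7)  # positional call: 'user_id' missing from kwargs
except KeyError:
    print("positional user_id is not picked up by the key_builder")
```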

services/api-server/tests/unit/_with_db/authentication/test_api_dependency_authentication.py

Lines changed: 1 addition & 66 deletions
@@ -43,72 +43,7 @@ async def test_cache_effectiveness_in_rest_authentication_dependencies(
     users_repo: UsersRepository,
     mocker: MockerFixture,
 ):
-    """Test that caching reduces database calls and improves performance.
-
-    ## Memory Implications of aiocache
-
-    ### Memory Usage Characteristics
-
-    **aiocache** uses in-memory storage by default, which means:
-
-    1. **Linear memory growth**: Each cached item consumes RAM proportional to the serialized size of the cached data
-    2. **No automatic memory limits**: By default, there's no built-in maximum memory cap
-    3. **TTL-based cleanup**: Items are only removed when they expire (TTL) or are explicitly deleted
-
-    ### Memory Limits & Configuration
-
-    **Available configuration options:**
-
-    ```python
-    # Memory backend configuration
-    cache = Cache(Cache.MEMORY, **{
-        'serializer': {
-            'class': 'aiocache.serializers.PickleSerializer'
-        },
-        # No built-in memory limit options for MEMORY backend
-    })
-    ```
-
-    **Key limitations:**
-    - **MEMORY backend**: No built-in memory limits or LRU eviction
-    - **Maximum capacity**: Limited only by available system RAM
-    - **Risk**: Memory leaks if TTL is too long or cache keys grow unbounded
-
-    ### Recommendations for Your Use Case
-
-    **For authentication caching:**
-
-    1. **Low memory impact**: User authentication data is typically small (user_id, email, product_name)
-    2. **Short TTL**: Your 120s TTL helps prevent unbounded growth
-    3. **Bounded key space**: API keys are finite, not user-generated
-
-    **Memory estimation:**
-    ```
-    Per cache entry ≈ 200-500 bytes (user data + overhead)
-    1000 active users ≈ 500KB
-    10000 active users ≈ 5MB
-    ```
-
-    ### Alternative Approaches
-
-    **If memory becomes a concern:**
-
-    1. **Redis backend**:
-    ```python
-    cache = Cache(Cache.REDIS, endpoint="redis://localhost", ...)
-    ```
-
-    2. **Custom eviction policy**: Implement LRU manually or use shorter TTL
-
-    3. **Monitoring**: Track cache size in production:
-    ```python
-    # Check cache statistics
-    cache_stats = await cache.get_stats()
-    ```
-
-    **Verdict**:
-    For authentication use case with reasonable user counts (<10K active), memory impact should be minimal with your current TTL configuration.
-    """
+    """Test that caching reduces database calls and improves performance."""
 
     # Generate a fake API key
     credentials = HTTPBasicCredentials(
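The retained docstring states what the test verifies: repeated authentication with the same credentials should reach the database only once within the TTL. A minimal sketch of that pattern with `pytest-mock` follows; it is an editor's illustration, and the helper names (`_fetch_user_row`, `authenticate`) are invented, not taken from this repository.

```python
# Editor's sketch, not the repository's actual test: assert that the second
# authentication with the same API key is answered from aiocache, not the database.
# `_fetch_user_row` and `authenticate` are invented names for illustration.
import pytest
from pytest_mock import MockerFixture


@pytest.mark.asyncio
async def test_second_auth_lookup_is_cached(api_keys_repo, mocker: MockerFixture):
    db_spy = mocker.spy(api_keys_repo, "_fetch_user_row")  # the (hypothetical) DB-touching helper

    first = await api_keys_repo.authenticate(api_key="key", api_secret="secret")
    second = await api_keys_repo.authenticate(api_key="key", api_secret="secret")

    assert first == second
    # Only the first call should reach the database; the second is served by the cache.
    assert db_spy.call_count == 1
```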
