Commit ec800d8
Optimize fetch_all_users
The optimization replaces sequential async execution with concurrent execution using `asyncio.gather()`.
**Key Change:**
- **Original**: Fetches users one-by-one in a loop with `await fetch_user(user_id)`, blocking until each completes
- **Optimized**: Uses `asyncio.gather(*(fetch_user(user_id) for user_id in user_ids))` to launch all fetch operations concurrently
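The change can be sketched as follows. `fetch_user` here is a hypothetical stand-in for the real database call (its body is assumed, simulating the ~0.0001-second latency described below); only the structure of `fetch_all_users` reflects the commit.

```python
import asyncio

# Hypothetical stand-in for the real database call:
# simulates a fast I/O operation (~0.0001 s).
async def fetch_user(user_id: int) -> dict:
    await asyncio.sleep(0.0001)
    return {"id": user_id}

# Original: sequential awaits -- each fetch blocks until the previous completes.
async def fetch_all_users_sequential(user_ids):
    return [await fetch_user(uid) for uid in user_ids]

# Optimized: all fetches are scheduled at once and awaited together.
async def fetch_all_users(user_ids):
    return await asyncio.gather(*(fetch_user(uid) for uid in user_ids))
```

`asyncio.gather` preserves input order in its result list, so the optimized version returns users in the same order as the original loop.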
**Why This Creates a Speedup:**
The original code suffers from "false serialization" - it waits for each 0.0001-second database call to complete before starting the next one. With N users, total time is roughly N × 0.0001 seconds. The optimized version launches all fetch operations simultaneously, so total time becomes approximately max(fetch_times) ≈ 0.0001 seconds regardless of list size.
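The N × latency vs. max(latency) argument can be demonstrated with a small timing sketch (the simulated `fetch_user` and the 500-user size are assumptions for illustration, not taken from the commit's benchmark):

```python
import asyncio
import time

# Simulated database call with ~0.0001 s latency (assumed for this demo).
async def fetch_user(user_id):
    await asyncio.sleep(0.0001)
    return user_id

async def sequential(user_ids):
    # Total time grows roughly linearly: N * per-call latency.
    return [await fetch_user(u) for u in user_ids]

async def concurrent(user_ids):
    # Total time stays near the single-call latency (plus scheduling overhead).
    return await asyncio.gather(*(fetch_user(u) for u in user_ids))

async def time_it(fn, user_ids):
    start = time.perf_counter()
    await fn(user_ids)
    return time.perf_counter() - start

async def main():
    ids = list(range(500))
    t_seq = await time_it(sequential, ids)
    t_con = await time_it(concurrent, ids)
    print(f"sequential: {t_seq:.4f}s  concurrent: {t_con:.4f}s")
    return t_seq, t_con

if __name__ == "__main__":
    asyncio.run(main())
```

Note that `asyncio.sleep` granularity varies by platform, so the absolute numbers will differ from the commit's benchmark, but the sequential time should clearly dominate the concurrent time at this size.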
**Performance Impact:**
- **Runtime improvement**: 785% speedup (302ms → 34.1ms)
- **Throughput improvement**: 558% increase (1,862 → 12,250 operations/second)
The line profiler shows where the gain comes from: the original `fetch_all_users` spent 96.3% of its time waiting on individual fetch calls inside the sequential loop, while the optimized version completes all fetches in a single concurrent `gather` call.
**Test Case Performance:**
The optimization excels with larger datasets - test cases with 100-500 users show the most dramatic improvements since they maximize the concurrency benefit. Small lists (1-10 users) still benefit but see smaller gains due to the fixed asyncio.sleep overhead.
This pattern is particularly valuable for I/O-bound operations like database queries, API calls, or file operations where the underlying operations can run independently.

1 parent 9692e75 · commit ec800d8
1 file changed: 1 addition, 5 deletions (original lines 26–30 replaced by a single new line 26).