
Commit 5a888fe

docs(rust): address PR review feedback — keep original defaults, fix retry table
Preserve existing JDBC-aligned defaults (max_retries=5, retry_delay=1500ms, max_chunks_in_memory=16) to avoid performance regressions, per reviewer feedback. Fix the design-spec retry table: 401/403/404 counts only against max_refresh_retries.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1 parent: 1d902d7

2 files changed: +10 −9 lines

rust/spec/cloudfetch-pipeline-redesign.md

Lines changed: 7 additions & 6 deletions
```diff
@@ -209,7 +209,7 @@ guaranteed 401/403 failure that would otherwise cost one retry.
 | Error type | Sleep before retry | Counts against `max_retries` | Counts against `max_refresh_retries` |
 |---|---|---|---|
 | Network / 5xx | Yes — `retry_delay * (attempt + 1)` | Yes | No |
-| 401 / 403 / 404 | No | Yes | Yes |
+| 401 / 403 / 404 | No | No | Yes |
 | Link proactively expired | No | No | Yes |
 
 ### Consumer (replaces `wait_for_chunk` + `next_batch`)
```
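The retry-budget rules in the corrected table can be sketched as two small helpers. This is an illustrative sketch only: `RetryClass`, `classify`, and `backoff` are hypothetical names, not identifiers from the spec; only the classification rules and the linear-backoff formula come from the table above.

```rust
use std::time::Duration;

/// Which retry budget a failed download attempt consumes (hypothetical type).
#[derive(Debug, PartialEq)]
enum RetryClass {
    /// Network error or 5xx: sleep `retry_delay * (attempt + 1)`,
    /// counted against `max_retries` only.
    Transient,
    /// 401 / 403 / 404 (or a proactively expired link): no sleep,
    /// refresh the URL, counted against `max_refresh_retries` only.
    UrlExpired,
}

/// Classify a failure by its HTTP status; `None` models a network error.
fn classify(status: Option<u16>) -> RetryClass {
    match status {
        Some(401) | Some(403) | Some(404) => RetryClass::UrlExpired,
        _ => RetryClass::Transient, // network errors and 5xx
    }
}

/// Linear backoff from the table: `retry_delay * (attempt + 1)`,
/// matching C# `RetryDelayMs * (retry + 1)`.
fn backoff(retry_delay: Duration, attempt: u32) -> Duration {
    retry_delay * (attempt + 1)
}
```

With the preserved 1500 ms default, successive transient failures sleep 1500 ms, 3000 ms, 4500 ms, and so on, while URL-expiry failures refresh immediately with no sleep at all.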
```diff
@@ -315,8 +315,9 @@ owned by exactly one worker at a time. Each `ChunkHandle` is owned by the consum
 
 ## Configuration
 
-Three fields are **added** to `CloudFetchConfig` (matching C# `CloudFetchConfiguration`), one
-field is **removed**, and two defaults are **corrected** to match C#.
+Three fields are **added** to `CloudFetchConfig` (matching C# `CloudFetchConfiguration`) and one
+field is **removed**. Existing defaults are **preserved** (aligned with OSS JDBC) to avoid
+performance regressions; C# equivalents are noted for reference only.
 
 ### Fields Added
 
```
```diff
@@ -339,9 +340,9 @@ field is **removed**, and two defaults are **corrected** to match C#.
 
 | Field | Old use | New use | Default change? |
 |---|---|---|---|
-| `max_chunks_in_memory` | Manual `AtomicUsize` counter | `async_channel::bounded` + `mpsc::channel` capacity for both channels | **Yes: → 10** (C# equivalent: 200 MB ÷ 20 MB max chunk size) |
-| `max_retries` | Per-download retry limit | Unchanged | **Yes: 5 → 3** (align with C# `MaxRetries = 3`) |
-| `retry_delay` | Constant sleep | Linear backoff: `retry_delay * (attempt + 1)` — matches C# `RetryDelayMs * (retry + 1)` | **Yes: 1500ms → 500ms** (align with C# `RetryDelayMs = 500`) |
+| `max_chunks_in_memory` | Manual `AtomicUsize` counter | `async_channel::bounded` + `mpsc::channel` capacity for both channels | No — stays **16** (matches JDBC `cloudFetchThreadPoolSize`; C# uses ~10) |
+| `max_retries` | Per-download retry limit | Unchanged | No — stays **5** (matches JDBC; C# uses 3) |
+| `retry_delay` | Constant sleep | Linear backoff: `retry_delay * (attempt + 1)` — matches C# `RetryDelayMs * (retry + 1)` | No — stays **1500ms** (matches JDBC; C# uses 500ms) |
 | `link_prefetch_window` | Background prefetch ahead of consumer | Unchanged — owned by `SeaChunkLinkFetcher` | No |
 | `speed_threshold_mbps` | Download speed warning | Unchanged | No |
 | `enabled` | CloudFetch enable flag | Unchanged | No |
```
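The net effect of the configuration changes can be sketched as a `Default` impl. This struct is hypothetical, not the real definition: field names follow the spec tables, but the types and `retry_delay_ms` spelling are assumptions, and fields whose defaults this commit does not state (`link_prefetch_window`, `speed_threshold_mbps`, `enabled`) are omitted.

```rust
/// Illustrative sketch of `CloudFetchConfig` after this commit (assumed
/// field types; unstated fields omitted).
#[derive(Debug)]
struct CloudFetchConfig {
    /// Bounded-channel capacity for both channels; stays 16
    /// (matches JDBC `cloudFetchThreadPoolSize`; C# uses ~10).
    max_chunks_in_memory: usize,
    /// Per-download retry limit; stays 5 (JDBC-aligned; C# uses 3).
    max_retries: u32,
    /// Base of the linear backoff; stays 1500 ms (JDBC-aligned; C# uses 500).
    retry_delay_ms: u64,
    /// New field, default 3 (matching C# `MaxUrlRefreshAttempts = 3`).
    max_refresh_retries: u32,
    /// New field, default 3 (matching C# `ParallelDownloads = 3`).
    num_download_workers: usize,
    /// New field, default 60 (matching C# `UrlExpirationBufferSeconds = 60`).
    url_expiration_buffer_secs: u32,
    // `chunk_ready_timeout` is removed: it only served the deleted
    // `wait_for_chunk` Notify loop.
}

impl Default for CloudFetchConfig {
    fn default() -> Self {
        Self {
            max_chunks_in_memory: 16,
            max_retries: 5,
            retry_delay_ms: 1500,
            max_refresh_retries: 3,
            num_download_workers: 3,
            url_expiration_buffer_secs: 60,
        }
    }
}
```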

rust/spec/sprint-plan-cloudfetch-redesign.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -38,8 +38,8 @@ Before implementation, the spec was audited against `csharp/src/Reader/CloudFetc
 | `url_expiration_buffer_secs` | Hardcoded constant `LINK_EXPIRY_BUFFER_SECS = 30` | Configurable field, default **60s** (matching C# `UrlExpirationBufferSeconds = 60`) |
 | `max_refresh_retries` | Referenced in spec but not defined | New `CloudFetchConfig` field, default **3** (matching C# `MaxUrlRefreshAttempts = 3`) |
 | `num_download_workers` | "N workers" with no source for N | New `CloudFetchConfig` field, default **3** (matching C# `ParallelDownloads = 3`) |
-| `max_retries` default | 5 | **3** (matching C# `MaxRetries = 3`) |
-| `retry_delay` default | 1500ms | **500ms** (matching C# `RetryDelayMs = 500`) |
+| `max_retries` default | 5 | **Keep 5** (aligned with OSS JDBC; C# uses 3 but we preserve existing default to avoid perf regression) |
+| `retry_delay` default | 1500ms | **Keep 1500ms** (aligned with OSS JDBC; C# uses 500ms but we preserve existing default to avoid perf regression) |
 | `chunk_ready_timeout` | Not mentioned for removal | **Removed** — only served the `wait_for_chunk` Notify loop (deleted); C# has no equivalent wait-layer timeout |
 | `max_refresh_retries` exhaustion behaviour | Continues sleeping through remaining `max_retries` budget before failing — exhaustion throws inside the `try` block, caught by `catch (Exception ex) when (retry < _maxRetries - 1 ...)`, so the worker sleeps `retryDelayMs * (retry + 1)` for each remaining retry before the exception finally escapes | **Immediately fails** — sends `Err` via `result_tx` without burning any remaining `max_retries` budget |
 | 404 retry behaviour | Not handled specially — `EnsureSuccessStatusCode()` throws, caught with sleep, counted against `max_retries` | Treated as a URL-expiry signal alongside 401/403: no sleep, refresh-and-retry, counted against `max_refresh_retries` only |
```
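The two-budget behaviour in the last rows can be sketched as a simplified, synchronous retry loop. Everything here is a hedged stand-in: `download_with_budgets`, `fetch`, and `refresh_url` are hypothetical names, the real worker is async and reports via `result_tx`, and the sleep is elided to a comment. The sketch only illustrates the rules above: 401/403/404 counts against `max_refresh_retries` with no sleep and fails immediately on exhaustion (rather than burning the remaining `max_retries` budget as C# does), while other failures back off linearly against `max_retries`.

```rust
/// Simplified sketch of the worker's two retry budgets (not the real API).
/// `fetch` yields the chunk bytes or an HTTP-ish status code on failure;
/// `refresh_url` models re-fetching a presigned link.
fn download_with_budgets(
    mut fetch: impl FnMut() -> Result<Vec<u8>, u16>,
    mut refresh_url: impl FnMut() -> Result<(), ()>,
    max_retries: u32,
    max_refresh_retries: u32,
) -> Result<Vec<u8>, String> {
    let mut attempt = 0; // consumed by network / 5xx failures only
    let mut refreshes = 0; // consumed by 401/403/404 only
    loop {
        match fetch() {
            Ok(bytes) => return Ok(bytes),
            // URL-expiry signal: no sleep, refresh and retry; on exhaustion,
            // fail immediately instead of sleeping through the remaining
            // `max_retries` budget.
            Err(401) | Err(403) | Err(404) => {
                refreshes += 1;
                if refreshes > max_refresh_retries || refresh_url().is_err() {
                    return Err("URL refresh retries exhausted".into());
                }
            }
            // Network / 5xx: counts against `max_retries`, linear backoff.
            Err(_) => {
                attempt += 1;
                if attempt >= max_retries {
                    return Err("download retries exhausted".into());
                }
                // Real code sleeps here: retry_delay * (attempt + 1).
            }
        }
    }
}
```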
```diff
@@ -89,7 +89,7 @@ struct ChunkHandle {
 - `num_download_workers: usize` — default `3`
 - `url_expiration_buffer_secs: u32` — default `60`
 - Remove `chunk_ready_timeout: Option<Duration>`
-- Correct defaults: `max_retries` 5→3, `retry_delay` 1500ms→500ms
+- Keep existing defaults: `max_retries` stays **5**, `retry_delay` stays **1500ms** (aligned with OSS JDBC; no regression risk)
 - Remove `LINK_EXPIRY_BUFFER_SECS` constant (replaced by `url_expiration_buffer_secs`)
 - Remove `ChunkEntry` and `ChunkState` types entirely
 - Update `CloudFetchLink::is_expired()` to accept a buffer parameter (in seconds) instead of using the hardcoded constant
```
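The last bullet, the parameterised `is_expired()`, can be sketched as follows. This is a minimal illustration: `CloudFetchLink` is reduced to the one field the check needs, and `now` is passed explicitly for testability where the real method would presumably read the current time itself.

```rust
use std::time::{Duration, SystemTime};

/// Reduced sketch of a CloudFetch link (real struct has more fields).
struct CloudFetchLink {
    expiry_time: SystemTime,
}

impl CloudFetchLink {
    /// A link counts as expired `buffer_secs` before its actual expiry, so a
    /// worker refreshes proactively instead of spending a retry on a
    /// guaranteed 401/403. The buffer comes from the new configurable
    /// `url_expiration_buffer_secs` (default 60) rather than the removed
    /// `LINK_EXPIRY_BUFFER_SECS` constant.
    fn is_expired(&self, now: SystemTime, buffer_secs: u32) -> bool {
        now + Duration::from_secs(buffer_secs as u64) >= self.expiry_time
    }
}
```

With the default 60 s buffer, a link whose presigned URL expires in 30 s is already treated as expired and refreshed before any download is attempted.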
