docs(rust): add cloudfetch pipeline redesign spec and sprint plan #300
eric-wang-1990 merged 4 commits into main
Conversation
| Error type | Sleep before retry | Counts against `max_retries` | Counts against `max_refresh_retries` |
|---|---|---|---|
| Network / 5xx | Yes, `retry_delay * (attempt + 1)` | Yes | No |
| 401 / 403 / 404 | No | Yes | Yes |
Why do 401/403/404 count against `max_retries`? They should only count toward `max_refresh_retries`. What is the C# ADBC behavior?
✅ Fixed: updated the retry table so 401/403/404 count only against `max_refresh_retries`, not `max_retries`.
C# behavior for reference: in the C# driver, 401/403 inadvertently increments the `max_retries` loop counter, because the `continue` statement advances the `for (int retry = 0; retry < _maxRetries; retry++)` variable; the explicit guard, however, is `RefreshAttempts >= _maxUrlRefreshAttempts`. 404 is not specially handled in C# and falls through to `EnsureSuccessStatusCode()`, which throws and is caught with a sleep delay.
For the Rust redesign we cleanly separate the two counters: 401/403/404 exhausting max_refresh_retries is a distinct failure mode from transient network errors exhausting max_retries. Added a note in the spec documenting the C# divergence.
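To make the separation concrete, here is a minimal sketch of how the two counters could be kept independent. This is not the driver's actual code; the names (`RetryState`, `next_backoff`, `classify`) and signatures are illustrative assumptions, but the classification and the `retry_delay * (attempt + 1)` backoff follow the table above.

```rust
// Sketch only: two independent retry budgets, as described in the spec.

#[derive(Debug, PartialEq)]
enum DownloadError {
    Transient,  // network error or HTTP 5xx
    ExpiredUrl, // HTTP 401 / 403 / 404: presigned URL needs refresh
}

struct RetryState {
    attempt: u32,          // counts only transient failures (max_retries)
    refresh_attempts: u32, // counts only 401/403/404 (max_refresh_retries)
}

fn classify(status: u16) -> DownloadError {
    match status {
        401 | 403 | 404 => DownloadError::ExpiredUrl,
        _ => DownloadError::Transient,
    }
}

/// Returns the sleep (ms) before the next retry, or None once the
/// relevant budget is exhausted.
fn next_backoff(
    state: &mut RetryState,
    err: DownloadError,
    retry_delay_ms: u64,
    max_retries: u32,
    max_refresh_retries: u32,
) -> Option<u64> {
    match err {
        DownloadError::Transient => {
            if state.attempt >= max_retries {
                return None; // transient budget exhausted
            }
            // Linear backoff: retry_delay * (attempt + 1)
            let sleep = retry_delay_ms * (state.attempt as u64 + 1);
            state.attempt += 1; // counts against max_retries only
            Some(sleep)
        }
        DownloadError::ExpiredUrl => {
            if state.refresh_attempts >= max_refresh_retries {
                return None; // refresh budget exhausted: distinct failure mode
            }
            state.refresh_attempts += 1; // counts against max_refresh_retries only
            Some(0) // no sleep: refresh the URL immediately
        }
    }
}
```

The key property is that a 403 never consumes the transient budget and a 503 never consumes the refresh budget, so exhausting one surfaces as a distinct error from exhausting the other.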
This comment was generated with GitHub MCP.
✅ Also updated the retry table in the design spec (cloudfetch-pipeline-redesign.md): 401/403/404 now shows No in the `max_retries` column, matching the sprint plan fix.
jadewang-db left a comment
The new bazel command does not need a planning parameter anymore; in fact, it reads the design doc for the list of PRs and commits.
vikrantpuppala left a comment
My biggest concern is with the change in defaults for parallel downloads, `max_chunks_in_memory`, etc. These were configured in line with OSS JDBC, which has generally been the benchmark in terms of performance. To make sure these new defaults do not cause latency regressions, we should also perform benchmarking: go/oss-odbc-perf
@vikrantpuppala Good call. Updated both spec files to keep the original defaults aligned with OSS JDBC; the C# equivalents are noted for reference only. This avoids any perf regression risk, and we can tune later with benchmarking data from the dashboard you linked.
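One way to pin the JDBC-aligned baseline is to bake it into the config's `Default` impl, so any tuning is an explicit opt-in. This is a hedged sketch only: the struct name, field names, and every numeric value below are illustrative placeholders, not the spec's actual defaults.

```rust
// Sketch: a CloudFetch config whose Default carries the baseline values.
// All names and numbers are placeholders for illustration.

#[derive(Debug, Clone, PartialEq)]
struct CloudFetchConfig {
    parallel_downloads: usize,   // concurrent chunk downloads
    max_chunks_in_memory: usize, // prefetched chunks held before consumption
    retry_delay_ms: u64,         // base delay for linear backoff
    max_retries: u32,            // transient-failure budget
    max_refresh_retries: u32,    // presigned-URL refresh budget
}

impl Default for CloudFetchConfig {
    fn default() -> Self {
        Self {
            // Placeholder values; the point is that Default pins the
            // OSS-JDBC-aligned baseline in one place.
            parallel_downloads: 3,
            max_chunks_in_memory: 5,
            retry_delay_ms: 1000,
            max_retries: 3,
            max_refresh_retries: 3,
        }
    }
}
```

Callers who want to experiment then override only what they measure, e.g. `CloudFetchConfig { parallel_downloads: 8, ..Default::default() }`, which keeps benchmark comparisons against the baseline easy to reason about.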
@jadewang-db Noted, thanks for the heads-up on the bazel command change.
Force-pushed from 5a888fe to c277090
…ign details Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Force-pushed from c277090 to f36c42d
Summary
This is a design-only PR with no code changes.
Stacked PRs
🤖 Generated with Claude Code