feat: max number of persistent operations per request #2477
Walkthrough
The client now publishes operations in batches of 100, with a CLI progress indicator, and aggregates the results. The server enforces a 100-operation payload limit (returning ResourceExhausted), and tests verify rejection when 101 operations are sent.
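The server-side limit described above can be sketched as a simple guard. This is a hypothetical illustration, not the PR's actual implementation; `validateOperationCount` and its return shape are invented for the example, with only the 100-operation limit and the ResourceExhausted semantics taken from the walkthrough.

```typescript
// Hypothetical guard mirroring the described server behavior: payloads with
// more than 100 operations are rejected with a ResourceExhausted-style error.
const MAX_OPERATIONS_PER_REQUEST = 100;

interface LimitError {
  code: string;
  message: string;
}

function validateOperationCount(count: number): LimitError | null {
  if (count > MAX_OPERATIONS_PER_REQUEST) {
    return {
      code: 'RESOURCE_EXHAUSTED',
      message: `too many operations in one request: ${count} > ${MAX_OPERATIONS_PER_REQUEST}`,
    };
  }
  return null; // within the limit; proceed with publishing
}
```

Under this shape, the test case from the walkthrough (sending 101 operations) would hit the rejection branch, while a full batch of exactly 100 passes.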
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Codecov Report
❌ Patch coverage is
Additional details and impacted files

```diff
@@ Coverage Diff @@
##             main    #2477       +/-   ##
==========================================
- Coverage   61.08%   55.91%    -5.17%
==========================================
  Files         229      423      +194
  Lines       23894    52898   +29004
  Branches        0     4642    +4642
==========================================
+ Hits        14596    29579   +14983
- Misses       8047    23293   +15246
+ Partials     1251       26    -1225
```
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@cli/src/commands/operations/commands/push.ts`:
- Around lines 167-199: Wrap the loop that calls opts.client.platform.publishPersistedOperations in a try/finally so the progress bar is always stopped on error. Move the existing `for (let start = 0; start < operations.length; start += OPERATION_BATCH_SIZE) { ... }` into a try block and call `if (bar) bar.stop();` in the finally. Keep the per-chunk error handling (the `result.response?.code` check and `command.error`) inside the loop, and rethrow or let the thrown RPC error propagate so the finally cleans up. Relevant symbols: bar, publishPersistedOperations, operations, OPERATION_BATCH_SIZE, processed, publishedOperations.
- Around lines 235-243: Replace the traditional index-based for loop with a for...of over entries() to get both the index and the element. Change the loop over publishedOperations to `for (const [ii, op] of publishedOperations.entries())` and set `returnedOperations[op.id] = { hash: op.hash, contents: operations[ii].contents, status: jsonOperationStatus(op.status), operationNames: op.operationNames ?? [] };`. This keeps access to both publishedOperations[ii] and operations[ii], satisfies the unicorn/no-for-loop rule, and preserves existing behavior. Relevant symbols: publishedOperations, operations, returnedOperations, jsonOperationStatus.
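A minimal sketch of the entries()-based loop, with the surrounding types stubbed out (the real `OperationOutput` type and `jsonOperationStatus` helper live in push.ts; here status is passed through as-is for illustration):

```typescript
// entries() yields [index, element] pairs, so the loop keeps index access to
// operations[ii] while satisfying the unicorn/no-for-loop lint rule.
interface Published {
  id: string;
  hash: string;
  status: string;
  operationNames?: string[];
}

interface OperationOutput {
  hash: string;
  contents: string;
  status: string;
  operationNames: string[];
}

function buildOutput(
  publishedOperations: Published[],
  operations: { contents: string }[],
): Record<string, OperationOutput> {
  const returnedOperations: Record<string, OperationOutput> = {};
  for (const [ii, op] of publishedOperations.entries()) {
    returnedOperations[op.id] = {
      hash: op.hash,
      contents: operations[ii].contents, // index-aligned lookup, as in the original
      status: op.status,
      operationNames: op.operationNames ?? [], // nullish fallback for missing names
    };
  }
  return returnedOperations;
}
```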
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@cli/src/commands/operations/commands/push.ts`:
- Around lines 192-195: The progress bar undercounts because it increments by result.operations.length, which can be smaller than the sent chunk when the backend deduplicates. Update the progress using the number of inputs attempted (e.g., the local chunk/batch length) instead. In the block where publishedOperations, processed, and bar.update are used, change `processed += result.operations.length` to `processed += chunk.length` (or whatever variable holds the sent items), then call `bar.update(processed)`. Still push the returned result.operations into publishedOperations, but derive progress from the attempted-send count.
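The dedup scenario behind this comment can be shown in a few lines. This is an illustrative sketch only; `nextProcessed` is a hypothetical helper, and the dedup behavior is simulated with plain arrays rather than real RPC responses.

```typescript
// Derive progress from what was sent, not from what the server returned:
// a deduplicating backend may return fewer operations than the chunk contained.
function nextProcessed(processed: number, sentChunk: string[]): number {
  // Counting sentChunk.length keeps the bar in step with the inputs attempted.
  return processed + sentChunk.length;
}

// Example: 3 operations sent, server dedupes to 2 returned.
const sentChunk = ['q1', 'q2', 'q2'];
const returnedOperations = ['q1', 'q2'];
const processed = nextProcessed(0, sentChunk);
// processed is 3; incrementing by returnedOperations.length would leave it at 2,
// and the bar would never reach 100%.
```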
- Around lines 213-216: The text output accesses op.operationNames directly, which can be undefined at runtime, unlike the JSON path, which uses a nullish guard. Update the text branch in push.ts (the block that builds the message, referencing humanReadableOperationStatus and op.operationNames) to guard operationNames with a nullish coalescing fallback (e.g., `op.operationNames ?? []`) before checking length and joining, so it matches the JSON handler's defensive pattern and avoids runtime errors.
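A small sketch of the guarded text branch, assuming a simplified operation shape (`describeOperation` is a hypothetical helper; the real code builds a larger message using humanReadableOperationStatus):

```typescript
// Guard operationNames before length checks and join, mirroring the JSON path.
interface PushedOp {
  hash: string;
  operationNames?: string[]; // may be absent on the wire
}

function describeOperation(op: PushedOp): string {
  const names = op.operationNames ?? []; // nullish fallback avoids a runtime TypeError
  return names.length > 0 ? `${op.hash} (${names.join(', ')})` : op.hash;
}
```

Without the `?? []`, an operation arriving without operationNames would crash on `.length` in the text branch even though the JSON branch handles it fine.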
🧹 Nitpick comments (1)
cli/src/commands/operations/commands/push.ts (1)
237-243: Avoid index-based access when combining batched API results; use ID-based mapping instead.
The code accumulates operations across multiple API batches but reconstructs the JSON output using index alignment. While the current API implementation preserves ordering, this assumes an undocumented contract. If the API behavior changes (reordering results, filtering operations, or querying in a different order), the index lookup will silently return incorrect contents without any validation.
♻️ Suggested refactor
```diff
-  const returnedOperations: Record<string, OperationOutput> = {};
-  for (const [ii, op] of publishedOperations.entries()) {
+  const contentsById = new Map(operations.map((op) => [op.id, op.contents]));
+  const returnedOperations: Record<string, OperationOutput> = {};
+  for (const op of publishedOperations) {
     returnedOperations[op.id] = {
       hash: op.hash,
-      contents: operations[ii].contents,
+      contents: contentsById.get(op.id) ?? '',
       status: jsonOperationStatus(op.status),
       operationNames: op.operationNames ?? [],
     };
   }
```
…ations-push' into ale/eng-8488-timeout-on-wgc-operations-push
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>