Conversation
1 issue found across 2 files
Confidence score: 4/5
- This PR is likely safe to merge, with only a small test-quality risk rather than a production behavior issue.
- In `server/tests/integration/billing/migrations/migrate-paid.test.ts`, hardcoding the invoice count to 2 can miss regressions in the "no new invoice created" logic if fixture/setup counts change, making the test brittle.
- Severity is moderate-low (4/10) and scoped to test robustness, so merge risk is minimal but worth tightening soon by comparing against a captured pre-migration count.
- Pay close attention to `server/tests/integration/billing/migrations/migrate-paid.test.ts`: the fixed expected count can hide migration regressions when baseline data changes.
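The capture-then-compare pattern the review suggests can be sketched as follows. This is a minimal, self-contained sketch: `listInvoices`, `runMigration`, and `Store` are hypothetical stand-ins, since the real test fetches the customer via the Autumn client and asserts with `expectCustomerInvoiceCorrect`.

```typescript
// Sketch of the capture-then-compare invariant check. listInvoices,
// runMigration, and Store are hypothetical stand-ins for the real
// test helpers, which are not shown in this review.
type Invoice = { id: string };
type Store = { invoices: Invoice[] };

async function listInvoices(store: Store): Promise<Invoice[]> {
  return [...store.invoices];
}

async function runMigration(_store: Store): Promise<void> {
  // A correct migration updates prices/entitlements but creates no invoices.
}

async function assertNoNewInvoices(store: Store): Promise<void> {
  const invoiceCountBefore = (await listInvoices(store)).length;
  await runMigration(store);
  const invoiceCountAfter = (await listInvoices(store)).length;
  if (invoiceCountAfter !== invoiceCountBefore) {
    throw new Error(
      `migration created ${invoiceCountAfter - invoiceCountBefore} invoice(s)`,
    );
  }
}
```

Because the expected count is derived from the same environment immediately before the migration, the assertion keeps verifying the invariant even if fixtures later produce a different baseline count.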
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="server/tests/integration/billing/migrations/migrate-paid.test.ts">
<violation number="1" location="server/tests/integration/billing/migrations/migrate-paid.test.ts:204">
P2: Hardcoding the invoice count to 2 no longer verifies “no new invoice created” and makes the test brittle if the initial invoice count changes. Capture the pre-migration invoice count and compare against it instead.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```diff
  await expectCustomerInvoiceCorrect({
    customer,
-   count: invoiceCountBefore,
+   count: 2,
```
P2: Hardcoding the invoice count to 2 no longer verifies “no new invoice created” and makes the test brittle if the initial invoice count changes. Capture the pre-migration invoice count and compare against it instead.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At server/tests/integration/billing/migrations/migrate-paid.test.ts, line 204:
<comment>Hardcoding the invoice count to 2 no longer verifies “no new invoice created” and makes the test brittle if the initial invoice count changes. Capture the pre-migration invoice count and compare against it instead.</comment>
<file context>
@@ -202,7 +201,7 @@ test.concurrent(`${chalk.yellowBright("migrate-paid-2: allocated seats with pric
await expectCustomerInvoiceCorrect({
customer,
- count: invoiceCountBefore,
+ count: 2,
});
</file context>
```ts
if (
  consecutiveZeroMessageIntervals >= IDLE_SELF_KILL_THRESHOLD &&
  totalMessagesProcessed > 0
) {
  console.log(
    `${logPrefix()} Idle self-kill: 0 messages for ${consecutiveZeroMessageIntervals} intervals after processing ${totalMessagesProcessed} total. Exiting for cluster respawn.`,
  );
  process.exit(0);
}
```
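For context, the counter this check reads has to be maintained by the periodic stats tick. A minimal sketch of that mechanism, assuming a 60s interval and reset-on-any-activity semantics; only `consecutiveZeroMessageIntervals`, `totalMessagesProcessed`, and `IDLE_SELF_KILL_THRESHOLD` appear in the snippet, the helper names are assumptions:

```typescript
// Sketch of the idle-tracking state. Only consecutiveZeroMessageIntervals,
// totalMessagesProcessed, and IDLE_SELF_KILL_THRESHOLD come from the PR;
// onMessageProcessed and shouldIdleSelfKill are assumed names.
const IDLE_SELF_KILL_THRESHOLD = 5;

let consecutiveZeroMessageIntervals = 0;
let totalMessagesProcessed = 0;
let messagesThisInterval = 0;

function onMessageProcessed(): void {
  messagesThisInterval++;
  totalMessagesProcessed++;
}

// Called once per stats interval (every 60s in the PR); returns true when
// the worker should self-kill: idle for 5 intervals after real work.
function shouldIdleSelfKill(): boolean {
  if (messagesThisInterval === 0) {
    consecutiveZeroMessageIntervals++;
  } else {
    consecutiveZeroMessageIntervals = 0; // any activity resets the streak
  }
  messagesThisInterval = 0;
  return (
    consecutiveZeroMessageIntervals >= IDLE_SELF_KILL_THRESHOLD &&
    totalMessagesProcessed > 0
  );
}
```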
Idle self-kill can trigger while fire-and-forget migration jobs are still executing.
Migration jobs are dispatched with .catch() without being awaited (line 275), and messagesProcessed only increments when handleSingleMessage completes. If a migration job runs for 5+ minutes and no regular messages are processed in the interim, consecutiveZeroMessageIntervals reaches 5, causing process.exit(0) while the migration is still mid-flight.
Consider tracking active migration job count:

```ts
let activeMigrationJobs = 0;

// When dispatching a migration job (line 275):
activeMigrationJobs++;
handleSingleMessage({ sqs, message, db })
  .catch((error) => {
    console.error(...);
    Sentry.captureException(error);
  })
  .finally(() => activeMigrationJobs--);

// In idle self-kill check (line 88):
if (
  consecutiveZeroMessageIntervals >= IDLE_SELF_KILL_THRESHOLD &&
  totalMessagesProcessed > 0 &&
  activeMigrationJobs === 0
) {
  // safe to exit
}
```

Prompt To Fix With AI
This is a comment left during a code review.
Path: server/src/queue/initWorkers.ts
Line: 88-96
Comment:
Idle self-kill can trigger while fire-and-forget migration jobs are still executing.
Migration jobs are dispatched with `.catch()` without being awaited (line 275), and `messagesProcessed` only increments when `handleSingleMessage` completes. If a migration job runs for 5+ minutes and no regular messages are processed in the interim, `consecutiveZeroMessageIntervals` reaches 5, causing `process.exit(0)` while the migration is still mid-flight.
Consider tracking active migration job count:
```ts
let activeMigrationJobs = 0;
// When dispatching a migration job (line 275):
activeMigrationJobs++;
handleSingleMessage({ sqs, message, db })
.catch((error) => {
console.error(...);
Sentry.captureException(error);
})
.finally(() => activeMigrationJobs--);
// In idle self-kill check (line 88):
if (
consecutiveZeroMessageIntervals >= IDLE_SELF_KILL_THRESHOLD &&
totalMessagesProcessed > 0 &&
activeMigrationJobs === 0
) {
// safe to exit
}
```
How can I resolve this? If you propose a fix, please make it concise.

```diff
  // CRITICAL: No new invoice created
  await expectCustomerInvoiceCorrect({
    customer,
-   count: invoiceCountBefore,
+   count: 2,
  });
```
Hardcoded invoice count weakens the migration invariant test.
Previously, this test captured invoiceCountBefore and asserted count: invoiceCountBefore, directly verifying "the migration does not create new invoices." By hardcoding count: 2, the test now only checks "there are exactly 2 invoices at this point" — a different guarantee that couples to test setup state.
This is inconsistent with migrate-paid-1 (line 110), migrate-paid-3 (line 298), and migrate-paid-4 (line 396), which all use the dynamic invoiceCountBefore pattern. If the test setup changes or produces an additional invoice earlier, the assertion will silently pass even if the migration itself created an extra invoice.
Consider reverting to the dynamic pattern to maintain the CRITICAL invariant:
```suggestion
// Verify initial state
let customer = await autumnV1.customers.get<ApiCustomerV3>(customerId);
const invoiceCountBefore = customer.invoices?.length ?? 0;
// ... later ...
await expectCustomerInvoiceCorrect({
  customer,
  count: invoiceCountBefore,
});
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: server/tests/integration/billing/migrations/migrate-paid.test.ts
Line: 201-205
Comment:
Hardcoded invoice count weakens the migration invariant test.
Previously, this test captured `invoiceCountBefore` and asserted `count: invoiceCountBefore`, directly verifying "the migration does not create new invoices." By hardcoding `count: 2`, the test now only checks "there are exactly 2 invoices at this point" — a different guarantee that couples to test setup state.
This is inconsistent with `migrate-paid-1` (line 110), `migrate-paid-3` (line 298), and `migrate-paid-4` (line 396), which all use the dynamic `invoiceCountBefore` pattern. If the test setup changes or produces an additional invoice earlier, the assertion will silently pass even if the migration itself created an extra invoice.
Consider reverting to the dynamic pattern to maintain the CRITICAL invariant:
```suggestion
// Verify initial state
let customer = await autumnV1.customers.get<ApiCustomerV3>(customerId);
const invoiceCountBefore = customer.invoices?.length ?? 0;
// ... later ...
await expectCustomerInvoiceCorrect({
customer,
count: invoiceCountBefore,
});
```
How can I resolve this? If you propose a fix, please make it concise.
Summary by cubic
Prevents SQS worker backlog by timing out stuck jobs, running migration jobs without blocking the poller, and recycling/respawning workers more aggressively. Also stabilizes a billing migration test with a fixed invoice count.
Written for commit 29b208b. Summary will update on new commits.
Greptile Summary
This PR addresses worker backlog issues by introducing idle self-kill, per-message timeouts, and fire-and-forget dispatch for migrations. The recycle threshold is also reduced from 500k to 50k messages.
Key changes:
- `MAX_MESSAGES_BEFORE_RECYCLE` from 500,000 → 50,000 to recycle workers more aggressively and prevent memory buildup.
- `IDLE_SELF_KILL_THRESHOLD` (5 intervals): workers exit cleanly after 5 minutes of zero messages, allowing cluster respawn.
- `MESSAGE_TIMEOUT_MS = 25_000` and `withTimeout` helper: non-migration messages that stall beyond 25s raise an error and release the polling loop rather than blocking indefinitely.
- Changed the `migrate-paid-2` test to hardcode `count: 2` instead of dynamically capturing `invoiceCountBefore`.

Issues identified:
Confidence Score: 2/5
Sequence Diagram
```mermaid
sequenceDiagram
    participant PL as Polling Loop
    participant Q as SQS Queue
    participant H as handleSingleMessage
    participant P as processMessage
    PL->>Q: ReceiveMessageCommand (up to 10 msgs)
    Q-->>PL: messages[]
    loop For each message
        PL->>PL: Parse job.name
        alt job.name === Migration
            PL-->>H: fire-and-forget (no await)
            H->>Q: DeleteMessageCommand (immediately)
            H->>P: processMessage (no timeout)
            Note over H,P: Long-running, already deleted from queue
        else Regular message
            PL->>PL: push to regularMessages[]
        end
    end
    PL->>H: Promise.allSettled(regularMessages)
    H->>P: withTimeout(processMessage, 25s)
    P-->>H: result
    H-->>PL: {id, receiptHandle} | null
    PL->>Q: DeleteMessageBatchCommand(toDelete)
    Note over PL: Every 60s: logStatsAndCheckZeroMessages()
    alt messagesProcessed == 0 for 5 intervals AND totalProcessed > 0
        PL->>PL: process.exit(0) [⚠ RACE: may kill while migrations running]
    end
    alt totalMessagesProcessed >= 50,000
        PL->>PL: process.exit(0) [recycle]
    end
```

Last reviewed commit: 29b208b
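The `withTimeout(processMessage, 25s)` step above is not shown in this excerpt; a plausible `Promise.race`-based sketch follows. Only the names `withTimeout` and `MESSAGE_TIMEOUT_MS` (and the 25s value) come from the PR summary; the signature and everything else here are assumptions.

```typescript
// Hypothetical sketch of the withTimeout helper; signature is assumed.
const MESSAGE_TIMEOUT_MS = 25_000;

function withTimeout<T>(
  promise: Promise<T>,
  ms: number = MESSAGE_TIMEOUT_MS,
  label = "message",
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`Timed out processing ${label} after ${ms}ms`)),
      ms,
    );
  });
  // Whichever settles first wins; clear the timer either way so a
  // dangling timeout cannot keep the worker process alive.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

A rejection from the race releases the polling loop (via the caller's error handling) instead of blocking the batch indefinitely, which matches the behavior described in the summary.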
Context used:
- dashboard: CLAUDE.md (source)
- dashboard: When generating the key changes section of the summary, please tag each change with one or many of t... (source)