
feat: queue setup requests to prevent double refresh #334

Merged
DanielRivers merged 8 commits into release/2.5 from feat/setup-queue
May 9, 2025

Conversation

@DanielRivers
Member

Explain your changes

When multiple client components use the browser client, this can cause multiple simultaneous calls to the /setup endpoint.

While middleware and a provider could solve this, some setups are not configured that way. To support those configurations, calls to /setup are now queued and executed one at a time. This prevents a double token refresh, which can result in unwanted application logouts.
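The queuing idea can be sketched as follows. This is a minimal illustration, not the SDK's actual implementation: each call chains onto the tail of a shared promise, so overlapping setup requests run strictly one at a time.

```typescript
// Hypothetical sketch: serialize overlapping setup() calls by chaining
// each task onto the tail of a shared promise.
let tail: Promise<unknown> = Promise.resolve();

function enqueue<T>(task: () => Promise<T>): Promise<T> {
  const result = tail.then(task, task); // start only after the previous task settles
  tail = result.catch(() => undefined); // keep the chain alive if a task rejects
  return result;
}

async function demo(): Promise<number[]> {
  const order: number[] = [];
  const job = (id: number, delayMs: number) => () =>
    new Promise<number>((resolve) =>
      setTimeout(() => {
        order.push(id);
        resolve(id);
      }, delayMs),
    );

  // Fired concurrently, but the slower first job still completes
  // before the second one starts.
  await Promise.all([enqueue(job(1, 20)), enqueue(job(2, 5))]);
  return order;
}
```

With this shape, two components mounting at the same time would trigger a single token refresh followed by a second call that sees the already-refreshed session, instead of two racing refreshes.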

Checklist

🛟 If you need help, consider asking for advice over in the Kinde community.

@coderabbitai
Contributor

coderabbitai bot commented Apr 23, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Walkthrough

The changes involve three main updates: removal of the ESLint configuration file (eslint.config.mjs), corrections to property names in the KindeOrganizationProperties type in src/types.ts, and the introduction of a request queue manager class (RequestQueueManager) to serialize asynchronous task execution. The setup handler in src/handlers/setup.ts was refactored to run its logic inside this queue, ensuring serialized processing of concurrent requests; the refactor also makes error handling around token decoding and refreshing more granular while preserving the original control flow.

Changes

| File(s) | Change Summary |
| --- | --- |
| eslint.config.mjs | Removed the ESLint configuration file, which included the project's standard linting and formatting rules. |
| src/types.ts | Corrected two misspelled property names in the KindeOrganizationProperties type. |
| src/utils/workQueue.ts | Added the RequestQueueManager class to manage asynchronous task execution in a serialized queue. |
| src/handlers/setup.ts | Refactored the setup function to run inside the RequestQueueManager queue, adding serialized execution and finer-grained error handling for token processing. |

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant SetupHandler
    participant RequestQueueManager
    participant KindeClient
    participant JwtDecoder

    Client->>SetupHandler: Request setup
    SetupHandler->>RequestQueueManager: enqueue setup task
    RequestQueueManager->>SetupHandler: execute queued setup task
    SetupHandler->>KindeClient: refreshTokens(session) [if expired]
    KindeClient-->>SetupHandler: refreshed tokens or error
    SetupHandler->>JwtDecoder: decode access token
    JwtDecoder-->>SetupHandler: decoded access token or error
    SetupHandler->>JwtDecoder: decode ID token
    JwtDecoder-->>SetupHandler: decoded ID token or error
    SetupHandler-->>Client: JSON response with user info or error



@DanielRivers DanielRivers changed the base branch from main to release/2.5 April 23, 2025 12:43
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
src/utils/workQueue.ts (1)

51-52: ⚠️ Potential issue

Fix file formatting to resolve pipeline failure

Ensure the file has a proper line ending to resolve the Prettier formatting issue flagged in the pipeline.

}
+
🧹 Nitpick comments (3)
src/utils/workQueue.ts (3)

21-31: Fix indentation issue in enqueue method

The debug log statement has inconsistent indentation compared to surrounding code.

  async enqueue<T>(task: () => Promise<T>): Promise<T> {
    return new Promise((resolve, reject) => {
-        console.debug("enqueue: task added to queue");
+      console.debug("enqueue: task added to queue");
      this.queue.push({
        execute: task,
        resolve,
        reject,
      });
      this.processQueue();
    });
  }

44-45: Consider structured error logging

Logging the error object directly may not format well in some environments.

-      console.debug("processQueue: task execution failed", error);
+      console.debug("processQueue: task execution failed", error instanceof Error ? error.message : String(error));

1-52: Add missing queue management methods

Consider adding methods to clear the queue or handle shutdown scenarios for more complete lifecycle management.

export class RequestQueueManager {
  private static instance: RequestQueueManager;
  private isProcessing: boolean = false;
  private queue: QueueItem[] = [];

  private constructor() {}

  static getInstance(): RequestQueueManager {
    if (!RequestQueueManager.instance) {
      RequestQueueManager.instance = new RequestQueueManager();
    }
    return RequestQueueManager.instance;
  }

+  /**
+   * Clears all pending tasks from the queue
+   * @returns The number of tasks that were cleared
+   */
+  clearQueue(): number {
+    const count = this.queue.length;
+    this.queue = [];
+    console.debug(`clearQueue: cleared ${count} pending tasks`);
+    return count;
+  }
+
+  /**
+   * Check if the queue has pending tasks
+   */
+  hasPendingTasks(): boolean {
+    return this.queue.length > 0 || this.isProcessing;
+  }

  async enqueue<T>(task: () => Promise<T>): Promise<T> {
    // existing implementation
  }

  private async processQueue() {
    // existing implementation
  }
}
🧰 Tools
🪛 GitHub Actions: Build and test

[warning] 1-1: Prettier formatting check failed. Run 'prettier --write' to fix code style issues.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9e971f4 and f4ce88d.

📒 Files selected for processing (2)
  • src/handlers/setup.ts (1 hunks)
  • src/utils/workQueue.ts (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: Build and test
src/utils/workQueue.ts

[warning] 1-1: Prettier formatting check failed. Run 'prettier --write' to fix code style issues.

🔇 Additional comments (9)
src/utils/workQueue.ts (3)

1-5: Well-structured QueueItem type definition

The QueueItem type correctly includes all necessary properties for managing asynchronous task execution with proper promise handling.

🧰 Tools
🪛 GitHub Actions: Build and test

[warning] 1-1: Prettier formatting check failed. Run 'prettier --write' to fix code style issues.


7-20: Effective singleton pattern implementation

The RequestQueueManager correctly implements the singleton pattern with private constructor and static getInstance method, ensuring only one queue manager exists across the application.


33-50: Robust queue processing implementation

The processQueue method properly handles sequential execution of queued tasks with appropriate state management and error handling.
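For readers without the diff open, the pattern described here can be sketched roughly like this (illustrative names, not the SDK's exact code): an explicit queue array plus an isProcessing flag, so that a single loop drains tasks sequentially while later enqueue calls return early.

```typescript
// Illustrative sketch of a sequential queue processor. QueueItem and
// SketchQueue are stand-in names; the real class is RequestQueueManager.
type QueueItem = {
  execute: () => Promise<unknown>;
  resolve: (value: unknown) => void;
  reject: (reason?: unknown) => void;
};

class SketchQueue {
  private queue: QueueItem[] = [];
  private isProcessing = false;

  enqueue<T>(task: () => Promise<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      this.queue.push({
        execute: task,
        resolve: resolve as (value: unknown) => void,
        reject,
      });
      void this.processQueue();
    });
  }

  private async processQueue(): Promise<void> {
    if (this.isProcessing) return; // another loop is already draining the queue
    this.isProcessing = true;
    while (this.queue.length > 0) {
      const item = this.queue.shift()!;
      try {
        item.resolve(await item.execute()); // settle the caller's promise
      } catch (error) {
        item.reject(error); // surface the failure to the original caller
      }
    }
    this.isProcessing = false;
  }
}
```

The key property is that a task's rejection settles only its own caller's promise; the loop continues with the next queued task rather than stalling the whole queue.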

src/handlers/setup.ts (6)

11-11: Good import addition for RequestQueueManager

Properly importing the new RequestQueueManager class from the correct location.


19-21: Good implementation of queue manager instantiation

Correctly obtaining the singleton instance and preparing for queue usage.


21-155: Effective implementation of request queuing for setup handler

Successfully wrapping the entire setup logic in a queue to prevent concurrent executions, which should solve the double token refresh issue as intended in the PR objectives.


62-106: Improved token validation and error handling

The refactored code now has more granular error handling for different token decoding scenarios, with separate try-catch blocks for access tokens and ID tokens, providing clearer error messages.
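The separate try/catch structure described here can be illustrated as follows. decodeJwt is a simplified stand-in (no signature verification) for whatever decoder the handler actually uses, and the function and error messages are hypothetical, not the SDK's exact ones.

```typescript
// Hypothetical sketch of per-token error handling: a failure decoding the
// access token and a failure decoding the ID token produce distinct errors.
function decodeJwt(token: string): Record<string, unknown> {
  const payload = token.split(".")[1];
  if (!payload) throw new Error("malformed JWT");
  // Decode the payload segment only; no signature verification here.
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}

function decodeTokens(accessToken: string, idToken: string) {
  let accessPayload: Record<string, unknown>;
  let idPayload: Record<string, unknown>;

  try {
    accessPayload = decodeJwt(accessToken);
  } catch {
    throw new Error("Error decoding access token");
  }

  try {
    idPayload = decodeJwt(idToken);
  } catch {
    throw new Error("Error decoding ID token");
  }

  return { accessPayload, idPayload };
}
```

Compared with a single try/catch around both decodes, this tells the caller which token was malformed, which is exactly the granularity the comment above is praising.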


107-141: Well-structured response object with complete user data

The response object includes all necessary token and user information, with well-organized nested objects for permissions, organization details, and feature flags.


21-155:


❓ Verification inconclusive

Verify queue behavior with multiple concurrent requests

It would be beneficial to verify that the queue correctly handles multiple concurrent requests in a real-world scenario.


🌐 Web query:

What strategies can be used to test concurrent request handling in Next.js applications?

💡 Result:

Strategies for Testing Concurrent Request Handling in Next.js Applications

Testing concurrent request handling in Next.js is critical for ensuring your application can reliably serve multiple users at once and maintain correct behavior under load. Here are effective strategies, tools, and considerations:

1. Load and Stress Testing with Tools like k6

  • Use load testing tools such as k6 to simulate multiple users making concurrent requests to your Next.js app. These tools allow you to script scenarios where hundreds or thousands of requests hit your endpoints simultaneously, helping you identify performance bottlenecks and concurrency issues.
  • For example, one real-world test used a basic k6 script to repeatedly request a single Next.js page, measuring how many requests per second the server could handle and how response times changed under load[8]. This type of automated load testing is crucial for evaluating server-side concurrency and resource limits.

2. End-to-End (E2E) Testing with Cypress or Playwright

  • E2E testing frameworks like Cypress and Playwright can be set up to simulate multiple parallel users navigating your app, triggering API requests, and interacting with UI components at the same time[1][3]. While primarily used for user flow verification, these frameworks can be extended to test concurrent interactions.
  • For instance, you can write scripts that open multiple browser contexts or tabs, initiate simultaneous actions (like submitting forms or fetching data), and verify the app’s correctness and resiliency during these parallel activities.

3. API Route and Backend Integration Tests

  • If you use Next.js API routes, integration testing should check how your endpoints behave when multiple requests arrive concurrently. Use tools such as Jest with supertest, msw, or nock to simulate rapid or simultaneous API calls[4][5].
  • You can manually fire several asynchronous requests in your tests and assert on shared resources, locks, or expected outcomes (e.g., data consistency, correct error handling when concurrent requests cause race conditions).

4. Parallel Data Fetching Strategies

  • Next.js supports parallel data fetching patterns, which reduce waterfalls and ensure multiple data sources are loaded concurrently on the server[7]. During testing, ensure your components and pages can handle overlapping data fetches without leaking state or causing deadlocks.
  • To test this, you can create custom test cases that trigger multiple parallel fetches and verify that each renders or resolves as expected.

5. Leveraging Parallel Test Runners

  • Use Jest’s built-in parallel test execution (configured via maxWorkers) to simulate multiple test scenarios running concurrently[5]. While this improves test speed, it also helps uncover hidden state leaks or concurrency bugs in your codebase.

6. Mocking and Stubbing for Isolation

  • Tools like sinon, jest.mock, or msw let you stub and mock dependencies, so you can focus tests on the concurrency handling of your own logic without interference from real external systems[4].
  • This is especially helpful for simulating concurrent access to APIs or databases, allowing you to test edge cases and race conditions deterministically.

7. Realistic Environment Simulation

  • Always run tests against your built (production-like) Next.js server, not just the development server, as concurrency and caching behaviors can differ between environments[3].
  • Ensure your environment variables and configuration match your deployment target to faithfully reproduce concurrent conditions.

Example Load Testing Script (k6)

import http from 'k6/http'
import { check } from 'k6'

export const options = {
  stages: [
    { duration: '10s', target: 200 }, // Ramp up to 200 users
    { duration: '1m', target: 200 },
  ],
}

export default function () {
  const res = http.get('https://your-nextjs-app.com/api/some-endpoint')
  check(res, { 'status was 200': (r) => r.status === 200 })
}

This script simulates 200 concurrent virtual users and checks for correct status codes[8].


Summary Table: Tools and Approaches

| Testing Need | Strategy/Tool | Description |
| --- | --- | --- |
| Load/concurrency testing | k6, Artillery, JMeter | Simulate simultaneous real-user requests |
| E2E user flows | Cypress, Playwright | Parallel browser sessions for user interaction |
| API concurrency | Jest, supertest, msw, nock | Simulate concurrent API calls and check data integrity |
| Component integration | React Testing Library | Test parallel data fetching & shared state management |
| Mocking/isolation | jest.mock, sinon, msw | Control dependencies for edge-case concurrency testing |

Key Considerations

  • Test both client-side and server-side concurrency, as Next.js apps often have code running in both contexts.
  • Use production-like environments for the most accurate results.
  • Pay attention to state management and shared resources to avoid race conditions and data leaks.

By combining load testing tools with E2E and integration testing, and by leveraging mocking for edge scenarios, you can build a robust strategy to validate your Next.js application’s concurrent request handling capabilities[1][3][4][5][8].


Verify queue behavior under concurrent load

Please add tests to ensure that queueManager.enqueue correctly handles multiple simultaneous requests to the setup handler without race conditions or deadlocks. Consider the following approaches:

  • Load‑testing with k6 or Artillery
    • Script hundreds of virtual users to hit /api/setup concurrently and assert correct status codes and response payloads.
  • Integration tests (Jest + supertest)
    • Fire several parallel requests to your Next.js API route and verify that queued tasks execute in order and return consistent results.
  • E2E tests with Cypress or Playwright
    • Launch multiple browser contexts/tabs triggering the setup flow at once, then assert on UI or network responses.
  • Mocking and stubbing (msw, nock, jest.mock)
    • Isolate the queue logic by faking external calls, focusing purely on enqueue/dequeue behavior under concurrent invocations.
  • Run against a production‑like build
    • Verify concurrency in an environment mirroring your deployment to catch caching or resource‑limit differences.

Incorporate these scenarios into your CI to catch any queue contention or ordering issues before release.
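As a minimal, framework-free illustration of the suggested check (the queue and fakeSetup below are stand-ins, not the real handler or RequestQueueManager): fire several concurrent calls at a queued handler and assert that executions never overlap.

```typescript
// Sketch of a concurrency test without Jest/supertest: track how many
// handler executions are active at once; with correct queuing it never
// exceeds one, and results come back in enqueue order.
let tail: Promise<unknown> = Promise.resolve();
const enqueue = <T>(task: () => Promise<T>): Promise<T> => {
  const result = tail.then(task, task);
  tail = result.catch(() => undefined);
  return result;
};

let active = 0;
let maxActive = 0;

async function fakeSetup(id: number): Promise<number> {
  active += 1;
  maxActive = Math.max(maxActive, active); // should never exceed 1
  await new Promise((r) => setTimeout(r, 5)); // simulate token refresh latency
  active -= 1;
  return id;
}

async function runConcurrencyCheck(): Promise<{
  results: number[];
  maxActive: number;
}> {
  const results = await Promise.all(
    [1, 2, 3, 4, 5].map((id) => enqueue(() => fakeSetup(id))),
  );
  return { results, maxActive };
}
```

In a real suite the same assertions would run in Jest against the actual /api/setup route via supertest, with the refresh call mocked so the overlap check stays deterministic.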

@DanielRivers DanielRivers merged commit 8267233 into release/2.5 May 9, 2025
2 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Jul 7, 2025
2 tasks
