
feat: buffer data plane events conditionally #2744

Draft
saikumarrs wants to merge 9 commits into develop from feature/sdk-2862-buffer-data-plane-events-until-ready

Conversation

@saikumarrs
Member

@saikumarrs saikumarrs commented Feb 5, 2026

PR Description

Buffers data plane events conditionally, per the load API options.

Linear task (optional)

https://linear.app/rudderstack/issue/SDK-2862/buffer-data-plane-events-until-ready

Cross Browser Tests

Please confirm you have tested for the following browsers:

  • Chrome
  • Firefox
  • IE11

Sanity Suite

  • All sanity suite test cases pass locally

Security

  • The code changed/added as part of this pull request won't create any security issues with how the software is being used.

Summary by CodeRabbit

  • New Features

    • Added event buffering for data-plane events with automatic replay when buffering is lifted or conditions are met.
  • Refactor

    • Centralized destination activation to reduce duplicated logic and improve reliability.
    • Adjusted GA4 integration loading behavior to better detect readiness and tolerate short delays before becoming active.

@saikumarrs saikumarrs self-assigned this Feb 5, 2026
Copilot AI review requested due to automatic review settings February 5, 2026 14:19
@coderabbitai
Contributor

coderabbitai bot commented Feb 5, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough


Refactors destination activation in Analytics, adds event buffering and replay logic to EventRepository, adjusts GA4 integration readiness checks, and modifies CI/CD workflows (deploy-beta gating disabled; deploy-npm publish command simplified).

Changes

Cohort / File(s) Summary
Core Analytics Refactor
packages/analytics-js/src/components/core/Analytics.ts
Added private setActiveDestinations() and replaced direct calls to nativeDestinations.setActiveDestinations with centralized helper invocations (onMounted, loadDestinations). No public API changes.
Event Buffering & Replay
packages/analytics-js/src/components/eventRepository/EventRepository.ts
Introduced eventsBuffer: RudderEvent[] and isEventBufferingActive: boolean; added startDpEventsQueue() to initialize dataplane queue and replay buffered events; changed enqueue(), init(), and resume flow to buffer events while buffering is active and to flush/replay when started.
GA4 Integration
packages/analytics-js-integrations/src/integrations/GA4/browser.js
Added gtagLoadedAt field, changed loadScript to support overriding client/session IDs via analytics getters, and updated isLoaded() to wait for either a sessionNumber or a 2s stabilization window after gtag readiness (references getLoadedAt/timing logic introduced).
CI/CD Workflows
.github/workflows/deploy-beta.yml, .github/workflows/deploy-npm.yml
deploy-beta.yml: PR eligibility and mergeability checks wrapped in if (false), effectively disabling draft/mergeable-state gating while still failing if no PR exists. deploy-npm.yml: publish step simplified to use fixed project spec --projects=@rudderstack/analytics-js (removed dynamic base/head publish range).

Sequence Diagram(s)

mermaid
sequenceDiagram
  participant Client as Client
  participant Repo as EventRepository
  participant Buffer as EventsBuffer
  participant DP as DataPlaneQueue
  participant Dest as Destinations
  Client->>Repo: emit event
  alt buffering active
    Repo->>Buffer: push event
  else buffering inactive
    Repo->>DP: enqueue to dataplane
    Repo->>Dest: enqueue to destinations
  end
  Repo->>Repo: startDpEventsQueue() (on start)
  Repo->>Buffer: replay buffered events
  Buffer->>DP: enqueue replayed events
  Buffer->>Dest: enqueue replayed events

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The title 'feat: buffer data plane events conditionally' accurately reflects the main objective of the changeset—implementing conditional buffering for data plane events based on load API options.
Description check ✅ Passed The PR description includes the required sections (PR Description, Linear task, Cross Browser Tests, Sanity Suite, Security) with substantive content in the PR Description and Linear task fields, though all test/verification checkboxes remain unchecked.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.




Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

Copilot AI left a comment


Pull request overview

This PR implements conditional buffering of data plane events based on the bufferDataPlaneEventsUntilReady load API option. When this option is enabled and hybrid mode destinations exist, events are buffered until either the client destinations are ready or a configurable timeout expires.

Changes:

  • Adds event buffering logic to the EventRepository class with new properties eventsBuffer and isEventBufferingActive
  • Extracts data plane queue initialization into a new startDpEventsQueue() method that manages buffering lifecycle
  • Modifies enqueue() to buffer events conditionally instead of immediately enqueueing them
  • Calls setActiveDestinations() earlier in the Analytics initialization flow to ensure destination information is available before event repository initialization

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.

File Description
packages/analytics-js/src/components/eventRepository/EventRepository.ts Implements event buffering mechanism with conditional queue startup, replay logic, and buffer management
packages/analytics-js/src/components/core/Analytics.ts Extracts setActiveDestinations() method and calls it during onPluginsReady() to ensure destinations are set before event processing begins
Comments suppressed due to low confidence (1)

packages/analytics-js/src/components/eventRepository/EventRepository.ts:198

  • When event buffering is active, the callback is invoked immediately on line 198, even though the event is only buffered and not yet enqueued to the data plane or destinations queues. This means callbacks are executed before events are actually processed and sent. From a user perspective, the callback should indicate that the event has been successfully enqueued for processing, but in this case it fires before the event is truly enqueued. Consider deferring callback invocation until events are actually enqueued (either immediately if buffering is inactive, or during replay if buffering is active).
    // Invoke the callback if it exists
    const apiName = `${event.type.charAt(0).toUpperCase()}${event.type.slice(1)}${API_SUFFIX}`;
    safelyInvokeCallback(callback, [dpQEvent], apiName, this.logger);


Comment on lines +173 to +194
if (this.isEventBufferingActive) {
  this.eventsBuffer.push(dpQEvent);
} else {
  this.pluginsManager.invokeSingle(
    `${DATA_PLANE_QUEUE_EXT_POINT_PREFIX}.enqueue`,
    state,
    this.dataplaneEventsQueue,
    dpQEvent,
    this.errorHandler,
    this.logger,
  );

  const dQEvent = clone(event);
  this.pluginsManager.invokeSingle(
    `${DESTINATIONS_QUEUE_EXT_POINT_PREFIX}.enqueue`,
    state,
    this.destinationsEventsQueue,
    dQEvent,
    this.errorHandler,
    this.logger,
  );
}

Copilot AI Feb 5, 2026


When event buffering is active, the destinations events queue is not being populated. This means that when buffering ends and events are replayed via this.enqueue(event), the destinations will receive events, but they will miss any events that were buffered. The current implementation only buffers events in eventsBuffer for the data plane queue, but doesn't handle the destinations queue during buffering. This could cause inconsistent behavior where device mode destinations don't receive all events that the data plane does.

Comment on lines +134 to 139
let timeoutId: number;
// Start the queue when no event buffering is required
// or when buffering is required and the client destinations are ready
effect(() => {
  if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
    (globalThis as typeof window).clearTimeout(timeoutId);

Copilot AI Feb 5, 2026


The timeoutId variable is declared but might be accessed before it's assigned. If the effect callback on line 137 executes synchronously (when shouldBufferEvents is false), it will try to clear timeoutId on line 139 before it has been assigned any value on line 147. While this won't cause an error (clearTimeout with undefined is safe), it's better to initialize timeoutId to a default value or restructure the code to make the intent clearer.

Comment on lines +121 to +125
const startDpQueueAndReplayEvents = () => {
  this.isEventBufferingActive = false;
  this.eventsBuffer.forEach(event => {
    this.enqueue(event);
  });

Copilot AI Feb 5, 2026


The buffer replay logic calls this.enqueue(event) for each buffered event. However, the events in the buffer are already the result of calling getFinalEvent(event, state) on line 171 when they were first buffered. When replayed, enqueue() will call getFinalEvent() again on these already-processed events on line 171. This results in unnecessary double cloning and processing of events. Additionally, since the original event parameter is not available during replay (only dpQEvent is stored), the destinations queue on lines 185-193 will receive the transformed event instead of the original event. Consider storing the original event in the buffer alongside the transformed one, or creating a separate replay method that directly enqueues to the queues without going through the full enqueue() flow.

Comment on lines +137 to 142
effect(() => {
  if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
    (globalThis as typeof window).clearTimeout(timeoutId);
    this.dataplaneEventsQueue?.start();
    startDpQueueAndReplayEvents();
  }
});

Copilot AI Feb 5, 2026


The effect() function on line 137 returns a dispose function that should be stored and called when appropriate to prevent memory leaks. Without proper cleanup, if startDpEventsQueue() is called multiple times (e.g., through the resume() method), multiple effect subscriptions will accumulate, all watching the same state changes and potentially causing the callback to execute multiple times. Consider storing the dispose function and calling it before the function exits or when the component is being cleaned up.
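The dispose pattern this comment asks for looks roughly like the sketch below, using a toy `subscribe()` in place of the signals library's `effect()` (which likewise returns a disposer). All names here are illustrative.

```typescript
type Listener = () => void;

// Toy pub/sub standing in for signal effects
const listeners = new Set<Listener>();
const subscribe = (fn: Listener): (() => void) => {
  listeners.add(fn);
  return () => { listeners.delete(fn); }; // the dispose function
};
const notify = (): void => listeners.forEach(fn => fn());

class Repo {
  private dispose?: () => void;
  starts = 0;

  startQueue(): void {
    this.dispose?.(); // tear down any previous subscription first
    this.dispose = subscribe(() => { this.starts += 1; });
  }
}
```

Without the `this.dispose?.()` call, each `startQueue()` would leave a live subscription behind and the callback would run once per accumulated subscription on every state change.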

Comment on lines 147 to 149
timeoutId = (globalThis as typeof window).setTimeout(() => {
  if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
    this.dataplaneEventsQueue?.start();
  }
  startDpQueueAndReplayEvents();
}, state.loadOptions.value.dataPlaneEventsBufferTimeout);

Copilot AI Feb 5, 2026


If state.loadOptions.value.dataPlaneEventsBufferTimeout is undefined, null, or 0, the timeout will fire immediately or use these invalid values. While the load options normalization should handle this, it's safer to add a guard or use a default value here to ensure the timeout is always a valid positive number. This prevents unexpected behavior if the normalization logic changes or fails.
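A defensive guard of the kind suggested could look like this; `DEFAULT_BUFFER_TIMEOUT_MS` is an assumed constant for illustration, not the SDK's actual default.

```typescript
const DEFAULT_BUFFER_TIMEOUT_MS = 10_000; // assumed fallback, not the SDK's value

// Return the configured timeout only if it is a finite positive number
const resolveBufferTimeout = (configured: unknown): number =>
  typeof configured === "number" && Number.isFinite(configured) && configured > 0
    ? configured
    : DEFAULT_BUFFER_TIMEOUT_MS;
```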

this.destinationsEventsQueue?.clear();
}

this.startDpEventsQueue();

Copilot AI Feb 5, 2026


Calling startDpEventsQueue() from the resume() method will create multiple active effect() subscriptions and timeouts each time resume() is called. This can lead to:

  1. Multiple concurrent effect subscriptions watching the same state changes
  2. Multiple active timeouts that will all fire
  3. The startDpQueueAndReplayEvents() function being called multiple times

This will cause the buffered events to be replayed multiple times and the data plane queue to be started multiple times. Consider adding a guard to prevent multiple calls to startDpEventsQueue(), or ensure that effects and timeouts from previous calls are properly cleaned up.

Suggested change
-    this.startDpEventsQueue();
+    if (!this.dataplaneEventsQueue) {
+      this.startDpEventsQueue();
+    }
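An alternative to the null-check above is a simple boolean latch that makes the start method idempotent; `GuardedRepo` and `queueStarted` are hypothetical names, not the PR's actual fix.

```typescript
class GuardedRepo {
  private queueStarted = false;
  startCount = 0; // tracks how often the wiring actually ran

  startDpEventsQueue(): void {
    if (this.queueStarted) return; // subsequent resume() calls become no-ops
    this.queueStarted = true;
    this.startCount += 1; // effect()/setTimeout wiring would happen here
  }
}
```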

Comment on lines +137 to 142
effect(() => {
if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
(globalThis as typeof window).clearTimeout(timeoutId);
this.dataplaneEventsQueue?.start();
startDpQueueAndReplayEvents();
}
});

Copilot AI Feb 5, 2026


The effect subscription created on line 137 will execute immediately when shouldBufferEvents is false (when there are no hybrid destinations or buffering is disabled). This will call startDpQueueAndReplayEvents() immediately, which attempts to replay the buffer and start the queue. However, since shouldBufferEvents is evaluated once at the time startDpEventsQueue() is called, if activeDestinations changes later (e.g., after destinations are loaded), the effect won't react to those changes. This could lead to inconsistent behavior if destinations are loaded after the initial check.

Comment on lines +154 to +157
if (
this.dataplaneEventsQueue?.scheduleTimeoutActive !== true &&
state.consents.postConsent.value.discardPreConsentEvents
) {

Copilot AI Feb 5, 2026


The condition on line 155 checks if scheduleTimeoutActive !== true, but then only clears queues if discardPreConsentEvents is true. This means the data plane queue is started by startDpEventsQueue() on line 162 regardless of whether events should be discarded. If discardPreConsentEvents is false, the queues are not cleared before starting, which is correct. However, if discardPreConsentEvents is true but scheduleTimeoutActive is also true, the queues won't be cleared, which could lead to pre-consent events being sent when they should be discarded. Consider restructuring the logic to make the intent clearer, or ensure that the queue clearing happens independently of the scheduleTimeoutActive check.

Suggested change
-    if (
-      this.dataplaneEventsQueue?.scheduleTimeoutActive !== true &&
-      state.consents.postConsent.value.discardPreConsentEvents
-    ) {
+    if (state.consents.postConsent.value.discardPreConsentEvents) {
Comment on lines +173 to +174
if (this.isEventBufferingActive) {
this.eventsBuffer.push(dpQEvent);

Copilot AI Feb 5, 2026


The event buffering and replay logic lacks test coverage. The existing tests verify that the data plane queue starts correctly based on the buffering configuration, but they don't test:

  1. Whether events are actually buffered when isEventBufferingActive is true
  2. Whether buffered events are replayed correctly when buffering ends
  3. Whether the destinations queue receives the correct events during replay
  4. Edge cases like multiple calls to resume() or startDpEventsQueue()

Consider adding tests to verify the complete buffering and replay flow, including edge cases.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@packages/analytics-js/src/components/eventRepository/EventRepository.ts`:
- Around line 154-163: When discardPreConsentEvents is true we clear
dataplaneEventsQueue and destinationsEventsQueue but forget to clear the
in-memory eventsBuffer, so pre-consent events still replay via
startDpEventsQueue; update the conditional branch that checks
state.consents.postConsent.value.discardPreConsentEvents inside EventRepository
to also clear this.eventsBuffer (and any related buffer structures) before
calling this.startDpEventsQueue(), referencing the eventsBuffer property and
startDpEventsQueue() method so all pre-consent data is cleared consistently.
- Around line 108-151: startDpEventsQueue currently enables buffering and arms
the timeout regardless of clientDestinationsReady, which can re-enable buffering
after immediate readiness; modify the logic so you only set
this.isEventBufferingActive = true and assign timeoutId (and call setTimeout)
when shouldBufferEvents is true AND
state.nativeDestinations.clientDestinationsReady.value is false, leaving
startDpQueueAndReplayEvents and the effect that clears the timeout to run as-is;
in other words, move/guard the block that sets isEventBufferingActive and starts
the timeout behind a check of
state.nativeDestinations.clientDestinationsReady.value inside startDpEventsQueue
so buffering is not re-enabled when destinations are already ready (refer to
startDpEventsQueue, startDpQueueAndReplayEvents, shouldBufferEvents, timeoutId,
isEventBufferingActive).
- Around line 173-193: The buffered path pushes dpQEvent (finalized via
getFinalEvent/merged integrations) into eventsBuffer while the non-buffered path
sends a cloned original event to destinations, causing inconsistent shapes; fix
by making destinations receive the same finalized event: either push the
original event into eventsBuffer and call getFinalEvent/merge integrations at
replay time, or in the non-buffered branch replace clone(event) with the same
finalized dpQEvent (or call getFinalEvent into dQEvent) before invoking
`${DESTINATIONS_QUEUE_EXT_POINT_PREFIX}.enqueue` so that
pluginsManager.invokeSingle always receives the merged/final event for
destinations; update references around isEventBufferingActive, eventsBuffer,
dpQEvent, clone(event), and the `${DESTINATIONS_QUEUE_EXT_POINT_PREFIX}.enqueue`
invocation accordingly.

Comment on lines +108 to 151
private startDpEventsQueue() {
  const bufferEventsUntilReady = state.loadOptions.value
    .bufferDataPlaneEventsUntilReady as boolean;

  const hybridDestExist = state.nativeDestinations.activeDestinations.value.some(
    (dest: Destination) => isHybridModeDestination(dest),
  );
  const shouldBufferEvents = bufferEventsUntilReady && hybridDestExist;

  // Start the data plane events queue and replay the events from the buffer
  // This function is called when the client destinations are ready
  // or when the timeout expires
  // or when no buffering is required
  const startDpQueueAndReplayEvents = () => {
    this.isEventBufferingActive = false;
    this.eventsBuffer.forEach(event => {
      this.enqueue(event);
    });

    if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
      this.dataplaneEventsQueue?.start();
    }

    this.eventsBuffer = [];
  };

  let timeoutId: number;
  // Start the queue when no event buffering is required
  // or when buffering is required and the client destinations are ready
  effect(() => {
    if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
      (globalThis as typeof window).clearTimeout(timeoutId);
      this.dataplaneEventsQueue?.start();
      startDpQueueAndReplayEvents();
    }
  });

  // Force start the data plane events queue processing after a timeout
  if (shouldBufferEvents) {
    this.isEventBufferingActive = true;
    timeoutId = (globalThis as typeof window).setTimeout(() => {
      if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
        this.dataplaneEventsQueue?.start();
      }
      startDpQueueAndReplayEvents();
    }, state.loadOptions.value.dataPlaneEventsBufferTimeout);
  }
Contributor


⚠️ Potential issue | 🟠 Major

Prevent buffering from being re-enabled after immediate readiness.
If startDpEventsQueue runs when clientDestinationsReady is already true, the effect (Line 137-Line 141) disables buffering, but the later if (shouldBufferEvents) block (Line 145-Line 149) sets it back to true and arms the timeout. That can unnecessarily delay events for the full timeout. Reorder the flag/timeout setup and only arm the timeout when destinations aren’t ready.

🛠️ Suggested fix
   private startDpEventsQueue() {
     const bufferEventsUntilReady = state.loadOptions.value
       .bufferDataPlaneEventsUntilReady as boolean;

     const hybridDestExist = state.nativeDestinations.activeDestinations.value.some(
       (dest: Destination) => isHybridModeDestination(dest),
     );
     const shouldBufferEvents = bufferEventsUntilReady && hybridDestExist;

     // Start the data plane events queue and replay the events from the buffer
     // This function is called when the client destinations are ready
     // or when the timeout expires
     // or when no buffering is required
     const startDpQueueAndReplayEvents = () => {
       this.isEventBufferingActive = false;
       this.eventsBuffer.forEach(event => {
         this.enqueue(event);
       });

       if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
         this.dataplaneEventsQueue?.start();
       }

       this.eventsBuffer = [];
     };

-    let timeoutId: number;
-    // Start the queue when no event buffering is required
-    // or when buffering is required and the client destinations are ready
-    effect(() => {
-      if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
-        (globalThis as typeof window).clearTimeout(timeoutId);
-        startDpQueueAndReplayEvents();
-      }
-    });
-
-    // Force start the data plane events queue processing after a timeout
-    if (shouldBufferEvents) {
-      this.isEventBufferingActive = true;
-      timeoutId = (globalThis as typeof window).setTimeout(() => {
-        startDpQueueAndReplayEvents();
-      }, state.loadOptions.value.dataPlaneEventsBufferTimeout);
-    }
+    this.isEventBufferingActive = shouldBufferEvents;
+    let timeoutId: number | undefined;
+
+    // Force start the data plane events queue processing after a timeout
+    if (shouldBufferEvents && !state.nativeDestinations.clientDestinationsReady.value) {
+      timeoutId = (globalThis as typeof window).setTimeout(() => {
+        startDpQueueAndReplayEvents();
+      }, state.loadOptions.value.dataPlaneEventsBufferTimeout);
+    }
+
+    // Start the queue when no event buffering is required
+    // or when buffering is required and the client destinations are ready
+    effect(() => {
+      if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
+        if (timeoutId !== undefined) {
+          (globalThis as typeof window).clearTimeout(timeoutId);
+        }
+        startDpQueueAndReplayEvents();
+      }
+    });
   }

Comment on lines +154 to 163
if (
  this.dataplaneEventsQueue?.scheduleTimeoutActive !== true &&
  state.consents.postConsent.value.discardPreConsentEvents
) {
  this.dataplaneEventsQueue?.clear();
  this.destinationsEventsQueue?.clear();
}

this.startDpEventsQueue();
}
Contributor


⚠️ Potential issue | 🟠 Major

Discarded pre-consent events still replay from eventsBuffer.
When discardPreConsentEvents is true (Line 156-Line 159), queues are cleared but eventsBuffer is not, so buffered pre-consent events will still replay in startDpEventsQueue. Clear the buffer in the same branch.

🛠️ Suggested fix
     if (
       this.dataplaneEventsQueue?.scheduleTimeoutActive !== true &&
       state.consents.postConsent.value.discardPreConsentEvents
     ) {
       this.dataplaneEventsQueue?.clear();
       this.destinationsEventsQueue?.clear();
+      this.eventsBuffer = [];
     }

     this.startDpEventsQueue();
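The fix above can be reduced to the following sketch: when pre-consent events are discarded, the in-memory replay buffer must be emptied along with the queues. `ConsentRepo` and the string-array queues are simplified stand-ins for the real class.

```typescript
class ConsentRepo {
  eventsBuffer: string[] = []; // events held for replay
  queue: string[] = [];        // stands in for the dataplane/destinations queues

  resume(discardPreConsentEvents: boolean): void {
    if (discardPreConsentEvents) {
      this.queue = [];        // queues cleared, as in the original code
      this.eventsBuffer = []; // ...and the replay buffer too, per the fix
    }
    // this.startDpEventsQueue() would run here and replay eventsBuffer
  }
}
```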

Comment on lines +173 to +193
if (this.isEventBufferingActive) {
  this.eventsBuffer.push(dpQEvent);
} else {
  this.pluginsManager.invokeSingle(
    `${DATA_PLANE_QUEUE_EXT_POINT_PREFIX}.enqueue`,
    state,
    this.dataplaneEventsQueue,
    dpQEvent,
    this.errorHandler,
    this.logger,
  );

  const dQEvent = clone(event);
  this.pluginsManager.invokeSingle(
    `${DESTINATIONS_QUEUE_EXT_POINT_PREFIX}.enqueue`,
    state,
    this.destinationsEventsQueue,
    dQEvent,
    this.errorHandler,
    this.logger,
  );
Contributor


⚠️ Potential issue | 🟡 Minor


Event shape inconsistency between buffered and non-buffered paths.

When buffering is active, dpQEvent (finalized with merged integrations config) is stored and replayed via enqueue, which sends the finalized event to destinations. In the non-buffered path, destinations receive the original event via clone(event) without the integrations config merge applied. This causes destinations to receive different event shapes depending on whether buffering was used. Consider either buffering the original event and applying getFinalEvent only during replay, or ensuring destinations consistently receive the same event shape in both paths.

🤖 Prompt for AI Agents
In `@packages/analytics-js/src/components/eventRepository/EventRepository.ts`
around lines 173 - 193, The buffered path pushes dpQEvent (finalized via
getFinalEvent/merged integrations) into eventsBuffer while the non-buffered path
sends a cloned original event to destinations, causing inconsistent shapes; fix
by making destinations receive the same finalized event: either push the
original event into eventsBuffer and call getFinalEvent/merge integrations at
replay time, or in the non-buffered branch replace clone(event) with the same
finalized dpQEvent (or call getFinalEvent into dQEvent) before invoking
`${DESTINATIONS_QUEUE_EXT_POINT_PREFIX}.enqueue` so that
pluginsManager.invokeSingle always receives the merged/final event for
destinations; update references around isEventBufferingActive, eventsBuffer,
dpQEvent, clone(event), and the `${DESTINATIONS_QUEUE_EXT_POINT_PREFIX}.enqueue`
invocation accordingly.
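To make the two paths concrete, here is a minimal, self-contained TypeScript sketch of the first option the comment suggests (buffer the original event and finalize only at replay). All names here (`EventBufferSketch`, `finalize`, `RudderEvent`) are hypothetical stand-ins for `EventRepository`'s `eventsBuffer`/`getFinalEvent`, not the SDK's actual API:

```typescript
// Hypothetical event shape; the real SDK event carries many more fields.
type RudderEvent = { event: string; integrations?: Record<string, boolean> };

// Stand-in for getFinalEvent: merges the integrations config overrides.
const finalize = (e: RudderEvent): RudderEvent => ({
  ...e,
  integrations: { All: true, ...e.integrations },
});

class EventBufferSketch {
  private buffer: RudderEvent[] = [];
  private buffering = true;
  readonly delivered: RudderEvent[] = [];

  enqueue(event: RudderEvent): void {
    if (this.buffering) {
      // Store the untouched event; finalization is deferred to replay.
      this.buffer.push(event);
      return;
    }
    // Buffered-then-replayed and directly enqueued events both pass
    // through the same finalize step here.
    this.delivered.push(finalize(event));
  }

  replay(): void {
    this.buffering = false;
    const pending = this.buffer;
    this.buffer = [];
    pending.forEach(e => this.enqueue(e));
  }
}
```

With this shape, destinations always receive the same finalized event regardless of whether buffering was in effect when the event arrived.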

@github-actions

github-actions bot commented Feb 5, 2026

size-limit report 📦

Name Size Delta Limit check
Plugins Module Federation Mapping - Legacy - CDN 330 B 0 B (0% 🟢) 512 B (✅)
Plugins Module Federation Mapping - Modern - CDN 330 B 0 B (0% 🟢) 512 B (✅)
Plugins - Legacy - CDN 13.6 KB 0 B (0% 🟢) 15 KB (✅)
Plugins - Modern - CDN 5.52 KB 0 B (0% 🟢) 6 KB (✅)
Core (v1.1) - NPM (ESM) 29.79 KB 0 B (0% 🟢) 32 KB (✅)
Core (v1.1) - NPM (CJS) 29.92 KB 0 B (0% 🟢) 32 KB (✅)
Core (v1.1) - NPM (UMD) 29.77 KB 0 B (0% 🟢) 32 KB (✅)
Core (Content Script - v1.1) - NPM (ESM) 29.34 KB 0 B (0% 🟢) 30.5 KB (✅)
Core (Content Script - v1.1) - NPM (CJS) 29.5 KB 0 B (0% 🟢) 30.5 KB (✅)
Core (Content Script - v1.1) - NPM (UMD) 29.34 KB 0 B (0% 🟢) 30.5 KB (✅)
Core (v1.1) - Legacy - CDN 31.24 KB 0 B (0% 🟢) 32.5 KB (✅)
Core (v1.1) - Modern - CDN 29.65 KB 0 B (0% 🟢) 32 KB (✅)
Load Snippet 780 B 0 B (0% 🟢) 1 KB (✅)
Core - Legacy - NPM (ESM) 48.71 KB -10 B (-0.03% 🟢) 50 KB (✅)
Core - Legacy - NPM (CJS) 48.9 KB 3 B (+0.01% 🔺) 50 KB (✅)
Core - Legacy - NPM (UMD) 48.76 KB 45 B (+0.1% 🔺) 50 KB (✅)
Core - Legacy - CDN 48.82 KB 4 B (+0.01% 🔺) 50 KB (✅)
Core - Modern - NPM (ESM) 28.19 KB 96 B (+0.34% 🔺) 28.5 KB (✅)
Core - Modern - NPM (CJS) 28.4 KB 78 B (+0.27% 🔺) 28.5 KB (✅)
Core - Modern - NPM (UMD) 28.2 KB 83 B (+0.29% 🔺) 28.5 KB (✅)
Core - Modern - CDN 28.44 KB 80 B (+0.28% 🔺) 28.5 KB (✅)
Core (Bundled) - Legacy - NPM (ESM) 48.77 KB 76 B (+0.16% 🔺) 50 KB (✅)
Core (Bundled) - Legacy - NPM (CJS) 48.99 KB 176 B (+0.36% 🔺) 50 KB (✅)
Core (Bundled) - Legacy - NPM (UMD) 48.72 KB 34 B (+0.07% 🔺) 50 KB (✅)
Core (Bundled) - Modern - NPM (ESM) 41.36 KB 29 B (+0.07% 🔺) 41.5 KB (✅)
Core (Bundled) - Modern - NPM (CJS) 41.58 KB 60 B (+0.15% 🔺) 42 KB (✅)
Core (Bundled) - Modern - NPM (UMD) 41.39 KB 31 B (+0.08% 🔺) 41.5 KB (✅)
Core (Content Script) - Legacy - NPM (ESM) 48.73 KB 95 B (+0.2% 🔺) 50 KB (✅)
Core (Content Script) - Legacy - NPM (CJS) 48.85 KB 39 B (+0.08% 🔺) 50 KB (✅)
Core (Content Script) - Legacy - NPM (UMD) 48.71 KB 40 B (+0.09% 🔺) 50 KB (✅)
Core (Content Script) - Modern - NPM (ESM) 41.37 KB 139 B (+0.33% 🔺) 41.5 KB (✅)
Core (Content Script) - Modern - NPM (CJS) 41.49 KB 33 B (+0.08% 🔺) 41.5 KB (✅)
Core (Content Script) - Modern - NPM (UMD) 41.37 KB 37 B (+0.09% 🔺) 41.5 KB (✅)
Core (Lite) - Legacy - NPM (ESM) 37.12 KB 105 B (+0.28% 🔺) 39 KB (✅)
Core (Lite) - Legacy - NPM (CJS) 37.31 KB 50 B (+0.14% 🔺) 39 KB (✅)
Core (Lite) - Legacy - NPM (UMD) 37.13 KB 129 B (+0.35% 🔺) 39 KB (✅)
Core (Lite) - Modern - NPM (ESM) 32.5 KB 72 B (+0.22% 🔺) 34 KB (✅)
Core (Lite) - Modern - NPM (CJS) 32.7 KB -2 B (-0.01% 🟢) 34 KB (✅)
Core (Lite) - Modern - NPM (UMD) 32.56 KB 74 B (+0.23% 🔺) 34 KB (✅)
All Integrations - Legacy - CDN 95.39 KB 63 B (+0.07% 🔺) 97 KB (✅)
All Integrations - Modern - CDN 87.99 KB 47 B (+0.06% 🔺) 93.5 KB (✅)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@scripts/make-package-json-publish-ready.sh`:
- Around line 12-19: The nx lookup currently silences stderr and ignores exit
codes causing empty AFFECTED_PROJECTS to be treated as "no changes"; run the
command without redirecting stderr (or capture both output and status) and check
its exit status immediately after invoking npx nx show projects --affected
--base="$BASE_REF" --head="$HEAD_REF"; if the command fails (non‑zero exit),
write an error message and exit with a non‑zero status instead of continuing, so
AFFECTED_PROJECTS is only used when the nx command actually succeeded.
🧹 Nitpick comments (1)
.github/workflows/deploy-npm.yml (1)

166-168: Quote version arguments when invoking the script.

It’s safer to quote these refs to avoid accidental word-splitting if the values are empty or contain unexpected whitespace.

🛠️ Suggested fix
-          ./scripts/make-package-json-publish-ready.sh ${{ env.last_monorepo_version }} ${{ env.current_monorepo_version }}
+          ./scripts/make-package-json-publish-ready.sh "${{ env.last_monorepo_version }}" "${{ env.current_monorepo_version }}"

Comment on lines +12 to +19
# Get affected projects using Nx
echo "Getting affected packages (base: $BASE_REF, head: $HEAD_REF)..."
AFFECTED_PROJECTS=$(npx nx show projects --affected --base="$BASE_REF" --head="$HEAD_REF" 2>/dev/null)

if [ -z "$AFFECTED_PROJECTS" ]; then
  echo "No affected packages found."
  exit 0
fi


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n scripts/make-package-json-publish-ready.sh

Repository: rudderlabs/rudder-sdk-js

Length of output: 3208


🏁 Script executed:

rg -l "npx nx show projects" --type sh

Repository: rudderlabs/rudder-sdk-js

Length of output: 110


🏁 Script executed:

rg "AFFECTED_PROJECTS" scripts/ --type sh -B 2 -A 2

Repository: rudderlabs/rudder-sdk-js

Length of output: 1320


Fail fast when the Nx affected lookup errors out.

The stderr of `npx nx show projects --affected` is suppressed and its exit code is ignored. If Nx fails (bad refs, missing project graph), AFFECTED_PROJECTS ends up empty and the script exits with status 0, skipping the cleanup loop and risking publication of uncleaned package.json files. Capture the exit status and abort on failure.

🛠️ Suggested fix
-AFFECTED_PROJECTS=$(npx nx show projects --affected --base="$BASE_REF" --head="$HEAD_REF" 2>/dev/null)
+AFFECTED_PROJECTS=$(npx nx show projects --affected --base="$BASE_REF" --head="$HEAD_REF")
+NX_STATUS=$?
+if [ $NX_STATUS -ne 0 ]; then
+  echo "Nx affected projects lookup failed (base: $BASE_REF, head: $HEAD_REF)."
+  exit 1
+fi
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Get affected projects using Nx
echo "Getting affected packages (base: $BASE_REF, head: $HEAD_REF)..."
AFFECTED_PROJECTS=$(npx nx show projects --affected --base="$BASE_REF" --head="$HEAD_REF" 2>/dev/null)
if [ -z "$AFFECTED_PROJECTS" ]; then
  echo "No affected packages found."
  exit 0
fi
# Get affected projects using Nx
echo "Getting affected packages (base: $BASE_REF, head: $HEAD_REF)..."
AFFECTED_PROJECTS=$(npx nx show projects --affected --base="$BASE_REF" --head="$HEAD_REF")
NX_STATUS=$?
if [ $NX_STATUS -ne 0 ]; then
  echo "Nx affected projects lookup failed (base: $BASE_REF, head: $HEAD_REF)."
  exit 1
fi
if [ -z "$AFFECTED_PROJECTS" ]; then
  echo "No affected packages found."
  exit 0
fi
🤖 Prompt for AI Agents
In `@scripts/make-package-json-publish-ready.sh` around lines 12 - 19, The nx
lookup currently silences stderr and ignores exit codes causing empty
AFFECTED_PROJECTS to be treated as "no changes"; run the command without
redirecting stderr (or capture both output and status) and check its exit status
immediately after invoking npx nx show projects --affected --base="$BASE_REF"
--head="$HEAD_REF"; if the command fails (non‑zero exit), write an error message
and exit with a non‑zero status instead of continuing, so AFFECTED_PROJECTS is
only used when the nx command actually succeeded.

Copilot AI review requested due to automatic review settings February 5, 2026 15:27

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In @.github/workflows/deploy-npm.yml:
- Around line 166-186: The AFFECTED_PROJECTS assignment and PROJECTS_CSV
conversion are incorrect: capture newline-separated output from npx nx show
projects --affected into AFFECTED_PROJECTS and fail the job if the nx command
returns a non-zero status or produces no projects; then convert newlines (not
spaces) into commas for PROJECTS_CSV (trim any trailing comma) before passing to
npx nx release publish; update the block that sets AFFECTED_PROJECTS, checks it,
computes PROJECTS_CSV, and handles TAG_OPTION so the workflow errors out on nx
failures and supplies a valid comma-separated --projects value.

In `@scripts/make-package-json-publish-ready.sh`:
- Around line 3-12: The script's no-argument check currently exits with status 0
which hides failures; in the if [ $# -eq 0 ] block in
make-package-json-publish-ready.sh replace exit 0 with a non-zero exit (e.g.,
exit 1) so the script fails fast in CI when no projects are provided and ensure
the echo messages remain to show usage context.
- Around line 23-57: The script currently continues when a mapped package
directory or package.json is missing which risks silent skips; modify the
for-loop handling around package_dir/package to fail fast: after computing
package="$PACKAGES_DIR/$package_dir" check if [ -d "$package" ] and if not echo
an explicit error including "$package" and exit 1; when
PACKAGE_JSON="$package/package.json" is set, if [ ! -f "$PACKAGE_JSON" ] then
echo an explicit error including "$PACKAGE_JSON" and exit 1; also ensure the jq
commands for cleaning (used in the rudder-sdk-js branch and the standard branch)
check their exit status and abort with a non-zero exit if jq fails so the script
never continues on error.

Comment on lines +166 to +186
+            # Get affected projects between versions
+            AFFECTED_PROJECTS=$(npx nx show projects --affected --base=${{ env.last_monorepo_version }} --head=${{ env.current_monorepo_version }})

             # Remove unnecessary fields from package.json before publishing
-            ./scripts/make-package-json-publish-ready.sh
+            if [ -n "$AFFECTED_PROJECTS" ]; then
+              echo "Cleaning package.json for affected projects..."
+              ./scripts/make-package-json-publish-ready.sh $AFFECTED_PROJECTS

-            TAG_OPTION=""
-            if [[ "${{ inputs.environment }}" == "beta" ]]; then
-              TAG_OPTION="--tag=beta"
-            fi
+              # Convert space-separated projects to comma-separated for Nx
+              PROJECTS_CSV=$(echo $AFFECTED_PROJECTS | tr ' ' ',')
+              echo "Projects to publish: $PROJECTS_CSV"
+
+              TAG_OPTION=""
+              if [[ "${{ inputs.environment }}" == "beta" ]]; then
+                TAG_OPTION="--tag=beta"
+              fi

-            npx nx release publish --verbose --base=${{ env.last_monorepo_version }} --head=${{ env.current_monorepo_version }} $TAG_OPTION
+              npx nx release publish --verbose --projects=$PROJECTS_CSV $TAG_OPTION
+            else
+              echo "No affected projects found, skipping publish"
+            fi


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, check the repo structure and Nx configuration
ls -la | head -20

Repository: rudderlabs/rudder-sdk-js

Length of output: 1246


🏁 Script executed:

# Check if Nx is available and examine nx.json
fd -t f "nx.json" -o "package.json" | head -5

Repository: rudderlabs/rudder-sdk-js

Length of output: 211


🏁 Script executed:

# Look at the actual workflow file to understand the context better
cat -n .github/workflows/deploy-npm.yml | head -100

Repository: rudderlabs/rudder-sdk-js

Length of output: 4363


🏁 Script executed:

# Read the section around lines 166-186
cat -n .github/workflows/deploy-npm.yml | sed -n '160,195p'

Repository: rudderlabs/rudder-sdk-js

Length of output: 1720


🏁 Script executed:

# Check the Nx configuration
cat nx.json 2>/dev/null | head -50

Repository: rudderlabs/rudder-sdk-js

Length of output: 1129


🏁 Script executed:

# Check if the script mentioned exists
ls -la scripts/make-package-json-publish-ready.sh

Repository: rudderlabs/rudder-sdk-js

Length of output: 159


🏁 Script executed:

# Check Nx projects output format - test what the actual command returns
npx nx show projects --help 2>&1 | head -30

Repository: rudderlabs/rudder-sdk-js

Length of output: 241


🏁 Script executed:

# Try to get actual output from nx show projects to see format
# Since we may not have full git history, let's just check what projects exist
npx nx show projects 2>&1 | head -20

Repository: rudderlabs/rudder-sdk-js

Length of output: 241


🏁 Script executed:

# Check the make-package-json-publish-ready.sh script to understand how it uses the arguments
cat scripts/make-package-json-publish-ready.sh

Repository: rudderlabs/rudder-sdk-js

Length of output: 2798


🏁 Script executed:

# Look for any Nx documentation or config that shows output format
fd -t f "\.npmrc|package.json" -path "root" | head -5

Repository: rudderlabs/rudder-sdk-js

Length of output: 236


🏁 Script executed:

# Check the main package.json for Nx version and any related scripts
cat package.json | grep -A 5 -B 5 "nx\|scripts" | head -40

Repository: rudderlabs/rudder-sdk-js

Length of output: 2743


🏁 Script executed:

# Search for other uses of "npx nx show projects" in the codebase
rg "nx show projects" -A 2 -B 2

Repository: rudderlabs/rudder-sdk-js

Length of output: 921


🏁 Script executed:

# Check GitHub Actions documentation or other workflows that use nx commands
fd -t f "\.yml|\.yaml" .github/workflows | xargs rg "nx show projects" -A 3 -B 1

Repository: rudderlabs/rudder-sdk-js

Length of output: 1644


🏁 Script executed:

# Verify the exact format by looking at the other workflows more closely
cat -n .github/workflows/unit-tests-and-lint.yml | sed -n '40,70p'

Repository: rudderlabs/rudder-sdk-js

Length of output: 1793


🏁 Script executed:

# Check what happens if we try to understand the difference between the workflows
cat -n .github/workflows/security-code-quality-and-bundle-size-checks.yml | sed -n '80,120p'

Repository: rudderlabs/rudder-sdk-js

Length of output: 1864


🏁 Script executed:

# Let's look at the full context of the security workflow to see the complete pattern
cat -n .github/workflows/security-code-quality-and-bundle-size-checks.yml | sed -n '20,60p'

Repository: rudderlabs/rudder-sdk-js

Length of output: 1821


🏁 Script executed:

# Search for the exact line where affected_projects is generated in security workflow
rg "AFFECTED_PROJECTS.*npx nx show" -B 2 -A 5 .github/workflows/

Repository: rudderlabs/rudder-sdk-js

Length of output: 2437


🏁 Script executed:

# Let's verify what Nx output actually looks like by checking the Nx documentation references
rg "tr.*\\\\n" .github/workflows/ -B 2 -A 2

Repository: rudderlabs/rudder-sdk-js

Length of output: 1821


🏁 Script executed:

# Final verification: Check if there are any recent changes or patterns showing how Nx output is handled
rg "paste.*-sd.*," .github/workflows/ -B 1 -A 1

Repository: rudderlabs/rudder-sdk-js

Length of output: 500


Fix Nx output parsing and add error handling for the affected projects lookup.

The npx nx show projects --affected command outputs newline-separated project names (not space-separated). The current code only converts spaces to commas, leaving newlines intact, which breaks the --projects parameter. Additionally, if the Nx command fails, the script silently continues with an empty list instead of failing the workflow.

🛠️ Suggested fix
           # Get affected projects between versions
           AFFECTED_PROJECTS=$(npx nx show projects --affected --base=${{ env.last_monorepo_version }} --head=${{ env.current_monorepo_version }})
+          NX_STATUS=$?
+          if [ $NX_STATUS -ne 0 ]; then
+            echo "Nx affected projects lookup failed (base: ${{ env.last_monorepo_version }}, head: ${{ env.current_monorepo_version }})."
+            exit 1
+          fi

           # Remove unnecessary fields from package.json before publishing
           if [ -n "$AFFECTED_PROJECTS" ]; then
             echo "Cleaning package.json for affected projects..."
             ./scripts/make-package-json-publish-ready.sh $AFFECTED_PROJECTS

             # Convert space-separated projects to comma-separated for Nx
-            PROJECTS_CSV=$(echo $AFFECTED_PROJECTS | tr ' ' ',')
+            PROJECTS_CSV=$(echo "$AFFECTED_PROJECTS" | tr '\n' ',' | tr ' ' ',' | sed 's/,$//')
             echo "Projects to publish: $PROJECTS_CSV"

             TAG_OPTION=""
             if [[ "${{ inputs.environment }}" == "beta" ]]; then
               TAG_OPTION="--tag=beta"
             fi

-            npx nx release publish --verbose --projects=$PROJECTS_CSV $TAG_OPTION
+            npx nx release publish --verbose --projects="$PROJECTS_CSV" $TAG_OPTION
           else
             echo "No affected projects found, skipping publish"
           fi

Other workflows in this repo (unit-tests-and-lint.yml, security-code-quality-and-bundle-size-checks.yml) already handle newline-separated output correctly using tr '\n' ',' or paste -sd ',' -.

🤖 Prompt for AI Agents
In @.github/workflows/deploy-npm.yml around lines 166 - 186, The
AFFECTED_PROJECTS assignment and PROJECTS_CSV conversion are incorrect: capture
newline-separated output from npx nx show projects --affected into
AFFECTED_PROJECTS and fail the job if the nx command returns a non-zero status
or produces no projects; then convert newlines (not spaces) into commas for
PROJECTS_CSV (trim any trailing comma) before passing to npx nx release publish;
update the block that sets AFFECTED_PROJECTS, checks it, computes PROJECTS_CSV,
and handles TAG_OPTION so the workflow errors out on nx failures and supplies a
valid comma-separated --projects value.
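For illustration, the newline-vs-space pitfall described above can be sketched as a small TypeScript helper (a hypothetical function, not part of the workflow itself):

```typescript
// Convert Nx's newline-separated project list into the comma-separated
// value that --projects expects. A plain space-to-comma substitution
// would leave the newlines intact and corrupt the argument.
const toProjectsCsv = (nxOutput: string): string =>
  nxOutput
    .split('\n')                      // Nx prints one project per line
    .map(line => line.trim())
    .filter(line => line.length > 0)  // drop blanks and the trailing newline
    .join(',');                       // no trailing comma by construction
```

For example, `toProjectsCsv('pkg-a\npkg-b\n')` yields `'pkg-a,pkg-b'`, which mirrors what the `tr '\n' ','` plus trailing-comma trim achieves in the shell fix.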

Comment on lines +3 to +12
# Accept affected projects as command line arguments
# Usage: ./make-package-json-publish-ready.sh project1 project2 project3 ...
# Example: ./make-package-json-publish-ready.sh $(npx nx show projects --affected)

if [ $# -eq 0 ]; then
  echo "No affected projects provided."
  echo "Usage: $0 <project1> <project2> ..."
  echo "Example: $0 \$(npx nx show projects --affected --base=origin/develop --head=HEAD)"
  exit 0
fi


⚠️ Potential issue | 🟡 Minor

Return non‑zero when no projects are provided.

Exiting with status 0 can mask an upstream failure or misuse in CI. Consider failing fast here.

🔧 Suggested tweak
-  exit 0
+  exit 1
📝 Committable suggestion


Suggested change
# Accept affected projects as command line arguments
# Usage: ./make-package-json-publish-ready.sh project1 project2 project3 ...
# Example: ./make-package-json-publish-ready.sh $(npx nx show projects --affected)
if [ $# -eq 0 ]; then
  echo "No affected projects provided."
  echo "Usage: $0 <project1> <project2> ..."
  echo "Example: $0 \$(npx nx show projects --affected --base=origin/develop --head=HEAD)"
  exit 0
fi
# Accept affected projects as command line arguments
# Usage: ./make-package-json-publish-ready.sh project1 project2 project3 ...
# Example: ./make-package-json-publish-ready.sh $(npx nx show projects --affected)
if [ $# -eq 0 ]; then
  echo "No affected projects provided."
  echo "Usage: $0 <project1> <project2> ..."
  echo "Example: $0 \$(npx nx show projects --affected --base=origin/develop --head=HEAD)"
  exit 1
fi
🤖 Prompt for AI Agents
In `@scripts/make-package-json-publish-ready.sh` around lines 3 - 12, The script's
no-argument check currently exits with status 0 which hides failures; in the if
[ $# -eq 0 ] block in make-package-json-publish-ready.sh replace exit 0 with a
non-zero exit (e.g., exit 1) so the script fails fast in CI when no projects are
provided and ensure the echo messages remain to show usage context.

Comment on lines 23 to 57
# Convert project names to package directories
for project in "$@"; do
  # Extract package name from @rudderstack/analytics-js format
  package_name=$(echo "$project" | sed 's/@rudderstack\///')

  # Map project names to their directory names
  case "$package_name" in
    "analytics-js-loading-scripts") package_dir="loading-scripts" ;;
    "analytics-js-sanity-suite") package_dir="sanity-suite" ;;
    "rudder-sdk-js") package_dir="analytics-v1.1" ;;
    *) package_dir="$package_name" ;;
  esac

  package="$PACKAGES_DIR/$package_dir"

  if [ -d "$package" ]; then
    PACKAGE_JSON="$package/package.json"
    if [ -f "$PACKAGE_JSON" ]; then
      echo "Cleaning $PACKAGE_JSON..."
      jq 'del(.dependencies, .devDependencies, .peerDependencies, .optionalDependencies, .overrides, .scripts, .browserslist)' "$PACKAGE_JSON" > "$package/package_cleaned.json" && mv "$package/package_cleaned.json" "$PACKAGE_JSON"

      # Special handling for legacy SDK (rudder-sdk-js)
      if [ "$package_name" = "rudder-sdk-js" ]; then
        # Clean and then add back the postinstall script
        jq 'del(.dependencies, .devDependencies, .peerDependencies, .optionalDependencies, .overrides, .scripts, .browserslist) | .scripts = {"postinstall": "echo '\''This package is deprecated and no longer maintained. While your events are still being tracked and delivered, we strongly recommend you to migrate to the latest @rudderstack/analytics-js (https://www.npmjs.com/package/@rudderstack/analytics-js) package for enhanced features, security updates, and ongoing support. For more details, visit the migration guide: https://www.rudderstack.com/docs/sources/event-streams/sdks/rudderstack-javascript-sdk/migration-guide/'\''" }' "$PACKAGE_JSON" > "$package/package_cleaned.json" && mv "$package/package_cleaned.json" "$PACKAGE_JSON"
      else
        # Standard cleaning for all other packages
        jq 'del(.dependencies, .devDependencies, .peerDependencies, .optionalDependencies, .overrides, .scripts, .browserslist)' "$PACKAGE_JSON" > "$package/package_cleaned.json" && mv "$package/package_cleaned.json" "$PACKAGE_JSON"
      fi
    else
      echo "No package.json found in $package"
    fi
  else
    echo "Package directory not found: $package"
  fi
done


⚠️ Potential issue | 🟠 Major

Fail fast when a mapped package directory or package.json is missing.

Continuing after a missing directory or package.json can silently skip cleaning and still allow a publish attempt, which is risky for release automation.

🛠️ Proposed fix
-    else
-      echo "No package.json found in $package"
+    else
+      echo "No package.json found in $package"
+      exit 1
     fi
   else
     echo "Package directory not found: $package"
+    exit 1
   fi
🤖 Prompt for AI Agents
In `@scripts/make-package-json-publish-ready.sh` around lines 23 - 57, The script
currently continues when a mapped package directory or package.json is missing
which risks silent skips; modify the for-loop handling around
package_dir/package to fail fast: after computing
package="$PACKAGES_DIR/$package_dir" check if [ -d "$package" ] and if not echo
an explicit error including "$package" and exit 1; when
PACKAGE_JSON="$package/package.json" is set, if [ ! -f "$PACKAGE_JSON" ] then
echo an explicit error including "$PACKAGE_JSON" and exit 1; also ensure the jq
commands for cleaning (used in the rudder-sdk-js branch and the standard branch)
check their exit status and abort with a non-zero exit if jq fails so the script
never continues on error.


Copilot AI left a comment


Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.



Comment on lines +108 to 151
private startDpEventsQueue() {
const bufferEventsUntilReady = state.loadOptions.value
.bufferDataPlaneEventsUntilReady as boolean;

const hybridDestExist = state.nativeDestinations.activeDestinations.value.some(
(dest: Destination) => isHybridModeDestination(dest),
);
const hybridDestExist = state.nativeDestinations.activeDestinations.value.some(
(dest: Destination) => isHybridModeDestination(dest),
);
const shouldBufferEvents = bufferEventsUntilReady && hybridDestExist;

// Start the data plane events queue and replay the events from the buffer
// This function is called when the client destinations are ready
// or when the timeout expires
// or when no buffering is required
const startDpQueueAndReplayEvents = () => {
this.isEventBufferingActive = false;
this.eventsBuffer.forEach(event => {
this.enqueue(event);
});

if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
this.dataplaneEventsQueue?.start();
}

this.eventsBuffer = [];
};

if (
(hybridDestExist === false || shouldBufferDpEvents === false) &&
!bufferEventsBeforeConsent &&
this.dataplaneEventsQueue?.scheduleTimeoutActive !== true
) {
let timeoutId: number;
// Start the queue when no event buffering is required
// or when buffering is required and the client destinations are ready
effect(() => {
if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
(globalThis as typeof window).clearTimeout(timeoutId);
this.dataplaneEventsQueue?.start();
startDpQueueAndReplayEvents();
}
});

// Force start the data plane events queue processing after a timeout
if (state.loadOptions.value.bufferDataPlaneEventsUntilReady === true) {
if (shouldBufferEvents) {
this.isEventBufferingActive = true;
timeoutId = (globalThis as typeof window).setTimeout(() => {
if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
this.dataplaneEventsQueue?.start();
}
startDpQueueAndReplayEvents();
}, state.loadOptions.value.dataPlaneEventsBufferTimeout);
}
}

Copilot AI Feb 5, 2026


The 'startDpEventsQueue' method creates a new effect and potentially a new timeout each time it's called. This method is called from both 'init' (line 104) and 'resume' (line 162). If 'resume' is called multiple times (e.g., through repeated consent updates), this could create multiple active effects listening to the same state changes and multiple timeouts trying to start the queue, leading to the queue being started multiple times and potential memory leaks from undisposed effects.

Consider adding guards to prevent multiple invocations, or disposing of previous effects/timeouts before creating new ones.

Copilot uses AI. Check for mistakes.
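A minimal sketch of the suggested guard, assuming `startDpEventsQueue` should be idempotent across `init()`/`resume()`; the class and member names here are hypothetical, not the SDK's actual code:

```typescript
// Hypothetical idempotency guard: repeated init()/resume() calls register
// the effect/timeout machinery at most once, avoiding duplicate effects,
// duplicate timers, and repeated queue starts.
class QueueStarterSketch {
  private dpQueueStarted = false;
  registrations = 0; // counts how many times the effect/timeout were set up

  startDpEventsQueue(): void {
    if (this.dpQueueStarted) return; // subsequent calls are no-ops
    this.dpQueueStarted = true;
    this.registrations += 1;
    // ...register the readiness effect and the buffer timeout exactly once here...
  }
}
```

The alternative mentioned in the comment, disposing of the previous effect/timeout before creating new ones, trades this simplicity for the ability to re-evaluate the buffering conditions on each resume.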
Comment on lines +121 to 150
const startDpQueueAndReplayEvents = () => {
this.isEventBufferingActive = false;
this.eventsBuffer.forEach(event => {
this.enqueue(event);
});

if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
this.dataplaneEventsQueue?.start();
}

this.eventsBuffer = [];
};

if (
(hybridDestExist === false || shouldBufferDpEvents === false) &&
!bufferEventsBeforeConsent &&
this.dataplaneEventsQueue?.scheduleTimeoutActive !== true
) {
let timeoutId: number;
// Start the queue when no event buffering is required
// or when buffering is required and the client destinations are ready
effect(() => {
if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
(globalThis as typeof window).clearTimeout(timeoutId);
this.dataplaneEventsQueue?.start();
startDpQueueAndReplayEvents();
}
});

// Force start the data plane events queue processing after a timeout
if (state.loadOptions.value.bufferDataPlaneEventsUntilReady === true) {
if (shouldBufferEvents) {
this.isEventBufferingActive = true;
timeoutId = (globalThis as typeof window).setTimeout(() => {
if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
this.dataplaneEventsQueue?.start();
}
startDpQueueAndReplayEvents();
}, state.loadOptions.value.dataPlaneEventsBufferTimeout);
}

Copilot AI Feb 5, 2026


The startDpQueueAndReplayEvents function could be invoked multiple times due to race conditions:

  1. When clientDestinationsReady becomes true (via the effect on line 138-141)
  2. When the timeout expires (line 147-149)

If both events occur nearly simultaneously, startDpQueueAndReplayEvents could be called twice. While the buffer is cleared after the first replay (line 131), setting isEventBufferingActive to false (line 122) happens before the buffer is cleared, creating a window where:

  • New events coming in won't be buffered (since isEventBufferingActive is false)
  • But they'll be enqueued directly to the queues
  • The queue might start twice (line 128)

Additionally, if the destinations become ready just as the timeout fires, the timeout callback would still execute even though clearTimeout was called, because the clearTimeout happens in an effect which may execute asynchronously relative to the timeout firing.

Consider adding a guard flag to ensure startDpQueueAndReplayEvents only executes once, or clearing/canceling the timeout more reliably before starting the replay.
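One way to close both races is a run-once wrapper that also owns the timeout, so the winner of the race clears the loser's trigger from inside the guard. A hedged sketch (names are hypothetical, not the SDK's code):

```typescript
// Run-once replay guard: whichever fires first (the readiness effect or the
// buffer timeout) performs the replay; the later trigger becomes a no-op,
// and the pending timeout is cleared from inside the guard itself.
const makeReplayOnce = (replay: () => void) => {
  let done = false;
  let timeoutId: ReturnType<typeof setTimeout> | undefined;

  const run = () => {
    if (done) return; // a second trigger is ignored
    done = true;
    if (timeoutId !== undefined) clearTimeout(timeoutId);
    replay();
  };

  return {
    run,              // call from the readiness effect
    arm(ms: number) { // schedule the forced start
      timeoutId = setTimeout(run, ms);
    },
  };
};
```

Because `done` is flipped and the timer cleared inside the same synchronous guard, the "clearTimeout in an asynchronous effect" window described above disappears.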

Comment on lines 75 to 114
if (false) {
  // Check if PR is in draft state
  if (pr.draft) {
    core.setFailed(`PR #${prNumber} is in draft state. Please mark it as ready for review before deploying.`);
    return;
  }

  // Get detailed PR information including mergeable state
  const { data: prDetails } = await github.rest.pulls.get({
    owner: context.repo.owner,
    repo: context.repo.repo,
    pull_number: prNumber
  });

  // Check if PR is mergeable and all requirements are satisfied
  if (prDetails.mergeable === false) {
    core.setFailed(`PR #${prNumber} is not in a mergeable state. Please resolve conflicts before deploying.`);
    return;
  }

  // The mergeable_state can be one of: clean, dirty, blocked, unstable, or unknown
  // Only 'clean' means all requirements are met (checks passed, approvals received, no conflicts)
  if (prDetails.mergeable_state !== 'clean') {
    // Get more details about why it's not clean
    let reason = '';

    if (prDetails.mergeable_state === 'blocked') {
      reason = 'required checks or approvals are missing';
    } else if (prDetails.mergeable_state === 'dirty') {
      reason = 'there are merge conflicts';
    } else if (prDetails.mergeable_state === 'unstable') {
      reason = 'required checks are failing';
    } else {
      reason = `the mergeable state is "${prDetails.mergeable_state}"`;
    }

    core.setFailed(`PR #${prNumber} is not ready to merge: ${reason}. Please resolve all issues before deploying.`);
    return;
  }
}

Copilot AI Feb 5, 2026


The PR mergeability validation logic has been wrapped in an 'if (false)' condition, making it dead code that will never execute. This bypasses important checks for PR draft state, merge conflicts, and required approvals before deploying to beta. While this might be intentional for testing purposes, it should either be removed entirely or controlled by a workflow input parameter rather than hardcoded.

If this is meant to be temporary, consider adding a TODO comment explaining why and when it should be re-enabled. Otherwise, remove the dead code entirely.

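The workflow-input alternative mentioned above could look roughly like this. `SKIP_MERGE_CHECKS` is a hypothetical input name surfaced as an environment variable, not something defined in this repository:

```javascript
// Sketch: gate the checks behind a configurable flag instead of a hardcoded `if (false)`.
// `SKIP_MERGE_CHECKS` is a hypothetical workflow input exposed via the environment.
const skipMergeChecks = process.env.SKIP_MERGE_CHECKS === 'true';

function runDeployGate(pr) {
  if (skipMergeChecks) {
    return 'skipped'; // an explicit, auditable opt-out instead of dead code
  }
  if (pr.draft) {
    return 'failed: PR is in draft state';
  }
  return 'ok';
}

console.log(runDeployGate({ draft: false })); // "ok" when checks are enabled
```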
Copilot AI review requested due to automatic review settings February 5, 2026 15:39
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.



Comment on lines 102 to +106
const bufferEventsBeforeConsent = shouldBufferEventsForPreConsent(state);
if (!bufferEventsBeforeConsent) {
this.startDpEventsQueue();
}
}

Copilot AI Feb 5, 2026


The isEventBufferingActive flag is initialized to true in the constructor (line 58), which means events will be buffered by default even before init() is called. If any events are enqueued before init() completes, they will be buffered.

However, if shouldBufferEventsForPreConsent is true (line 102), startDpEventsQueue() is never called during init(), which means isEventBufferingActive remains true indefinitely until resume() is called. This creates an inconsistent state where:

  1. Events are buffered (isEventBufferingActive = true)
  2. But startDpEventsQueue() was never called, so no effect listeners or timeouts are set up to ever stop the buffering
  3. The buffer will never be replayed unless resume() is called

Consider initializing isEventBufferingActive to false and only setting it to true within startDpEventsQueue when buffering is actually needed.

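The initialization suggested above could be sketched as follows. The class shape is inferred from this comment, not copied from the SDK's EventRepository:

```javascript
// Sketch of the suggested initialization: buffering is off by default and is
// enabled only inside startDpEventsQueue when it is actually needed.
class Repo {
  constructor() {
    this.isEventBufferingActive = false; // off until buffering is required
  }

  startDpEventsQueue(shouldBufferEvents) {
    if (shouldBufferEvents) {
      this.isEventBufferingActive = true; // set only here, when needed
    }
  }
}

const repo = new Repo();
console.log(repo.isEventBufferingActive); // false before init
repo.startDpEventsQueue(true);
console.log(repo.isEventBufferingActive); // true
```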
Comment on lines +306 to 309
this.setActiveDestinations();

// Initialize event manager
this.eventManager?.init();

Copilot AI Feb 5, 2026


The setActiveDestinations call at line 306 is executed before eventManager.init() at line 309. This ensures that state.nativeDestinations.activeDestinations is populated before EventRepository.init() tries to check for hybrid destinations (in startDpEventsQueue at line 112-114).

However, this creates a potential issue: if the configuration hasn't been loaded yet (i.e., the destinations config isn't available), setActiveDestinations may set an empty or incomplete list of destinations. This could cause EventRepository to incorrectly determine whether hybrid destinations exist, leading to improper buffering behavior.

Verify that the configuration is fully loaded and available before calling setActiveDestinations in onPluginsReady, or add defensive checks in startDpEventsQueue to handle cases where activeDestinations might not be fully populated yet.

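One possible defensive check looks like this. The state shape mirrors the path named in the comment above (nativeDestinations.activeDestinations.value) but uses a plain object here, not the SDK's signal store:

```javascript
// Sketch of a defensive check before deciding to buffer; a plain-object model,
// not the SDK's actual state store.
function shouldBufferEvents(state, bufferEventsUntilReady, isHybrid) {
  const active = state?.nativeDestinations?.activeDestinations?.value;
  if (!Array.isArray(active) || active.length === 0) {
    // Config not loaded yet: do not assume hybrid destinations exist
    return false;
  }
  return bufferEventsUntilReady && active.some(isHybrid);
}

const notLoaded = { nativeDestinations: { activeDestinations: { value: [] } } };
const loaded = { nativeDestinations: { activeDestinations: { value: [{ hybrid: true }] } } };
console.log(shouldBufferEvents(notLoaded, true, d => d.hybrid)); // false
console.log(shouldBufferEvents(loaded, true, d => d.hybrid)); // true
```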
Copilot AI review requested due to automatic review settings February 23, 2026 06:59
Contributor

Copilot AI left a comment


Copilot was unable to review this pull request because the user who requested the review is ineligible. To be eligible to request a review, you need a paid Copilot license, or your organization must enable Copilot code review.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
packages/analytics-js-integrations/src/integrations/GA4/browser.js (1)

132-145: Consider combining the duplicate conditional blocks.

Both blocks branch on the same this.overrideClientAndSessionId condition. They can be merged into a single if/else for readability.

♻️ Proposed refactor
-    if (!this.overrideClientAndSessionId) {
-      window.gtag('get', this.measurementId, 'session_id', sessionId => {
-        this.sessionId = sessionId;
-      });
-    } else {
-      this.sessionId = this.analytics.getSessionId();
-    }
-    if (!this.overrideClientAndSessionId) {
-      window.gtag('get', this.measurementId, 'client_id', clientId => {
-        this.clientId = clientId;
-      });
-    } else {
-      this.clientId = this.analytics.getAnonymousId();
-    }
+    if (!this.overrideClientAndSessionId) {
+      window.gtag('get', this.measurementId, 'session_id', sessionId => {
+        this.sessionId = sessionId;
+      });
+      window.gtag('get', this.measurementId, 'client_id', clientId => {
+        this.clientId = clientId;
+      });
+    } else {
+      this.sessionId = this.analytics.getSessionId();
+      this.clientId = this.analytics.getAnonymousId();
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/analytics-js-integrations/src/integrations/GA4/browser.js` around
lines 132 - 145, Both blocks duplicate the same if/else on
this.overrideClientAndSessionId to set sessionId and clientId; refactor by
merging into one conditional: if !this.overrideClientAndSessionId call
window.gtag('get', this.measurementId, 'session_id', ...) to set this.sessionId
and window.gtag('get', this.measurementId, 'client_id', ...) to set
this.clientId, else set this.sessionId = this.analytics.getSessionId() and
this.clientId = this.analytics.getAnonymousId(); keep the same measurementId and
callback behavior for gtag calls to preserve existing async assignment.

Comment on lines 161 to +170
isLoaded() {
return !!(this.sessionId && this.clientId);
const LOAD_DELAY_MS = 2000;
const gtagReady = !!(window.dataLayer && window.dataLayer.push !== Array.prototype.push);
if (gtagReady && this.getLoadedAt === null) {
this.gtagLoadedAt = Date.now();
}
const hasWaitedLongEnough =
this.gtagLoadedAt !== null && Date.now() - this.gtagLoadedAt >= LOAD_DELAY_MS;
const hasValidSession = this.sessionNumber !== null;
return gtagReady && (hasValidSession || hasWaitedLongEnough);
Contributor


⚠️ Potential issue | 🔴 Critical

Two bugs make the 2-second delay logic completely inert.

Bug 1 (Line 164): this.getLoadedAt is a typo — should be this.gtagLoadedAt. this.getLoadedAt is undefined, and undefined === null is false under strict equality, so this.gtagLoadedAt is never set to Date.now(). The timestamp remains null forever, and hasWaitedLongEnough is always false.

Bug 2 (Line 169): this.sessionNumber is initialized to '' (line 31), not null. Since '' !== null is true, hasValidSession is always true from the first invocation — before gtag returns any real session number.

Combined effect: isLoaded() reduces to just gtagReady, completely bypassing the intended 2-second readiness delay.

🐛 Proposed fix
   isLoaded() {
     const LOAD_DELAY_MS = 2000;
     const gtagReady = !!(window.dataLayer && window.dataLayer.push !== Array.prototype.push);
-    if (gtagReady && this.getLoadedAt === null) {
+    if (gtagReady && this.gtagLoadedAt === null) {
       this.gtagLoadedAt = Date.now();
     }
     const hasWaitedLongEnough =
       this.gtagLoadedAt !== null && Date.now() - this.gtagLoadedAt >= LOAD_DELAY_MS;
-    const hasValidSession = this.sessionNumber !== null;
+    const hasValidSession = !!this.sessionNumber;
     return gtagReady && (hasValidSession || hasWaitedLongEnough);
   }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
isLoaded() {
return !!(this.sessionId && this.clientId);
const LOAD_DELAY_MS = 2000;
const gtagReady = !!(window.dataLayer && window.dataLayer.push !== Array.prototype.push);
if (gtagReady && this.getLoadedAt === null) {
this.gtagLoadedAt = Date.now();
}
const hasWaitedLongEnough =
this.gtagLoadedAt !== null && Date.now() - this.gtagLoadedAt >= LOAD_DELAY_MS;
const hasValidSession = this.sessionNumber !== null;
return gtagReady && (hasValidSession || hasWaitedLongEnough);
isLoaded() {
const LOAD_DELAY_MS = 2000;
const gtagReady = !!(window.dataLayer && window.dataLayer.push !== Array.prototype.push);
if (gtagReady && this.gtagLoadedAt === null) {
this.gtagLoadedAt = Date.now();
}
const hasWaitedLongEnough =
this.gtagLoadedAt !== null && Date.now() - this.gtagLoadedAt >= LOAD_DELAY_MS;
const hasValidSession = !!this.sessionNumber;
return gtagReady && (hasValidSession || hasWaitedLongEnough);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/analytics-js-integrations/src/integrations/GA4/browser.js` around
lines 161 - 170, Fix two issues in isLoaded(): replace the typo this.getLoadedAt
with this.gtagLoadedAt so the timestamp is actually set when gtagReady first
becomes true, and change the session check so it treats an empty-string initial
session as invalid (e.g., compute hasValidSession as this.sessionNumber !== ''
&& this.sessionNumber !== null or use Boolean(this.sessionNumber) if 0 is not a
valid session). Update the code in the isLoaded() method (symbols: isLoaded,
gtagLoadedAt, sessionNumber, LOAD_DELAY_MS) so the 2-second wait logic
(LOAD_DELAY_MS) can take effect.

@saikumarrs saikumarrs force-pushed the feature/sdk-2862-buffer-data-plane-events-until-ready branch from 55eac77 to 4ae3336 Compare February 23, 2026 07:32
@sonarqubecloud

Quality Gate failed

Failed conditions
70.2% Coverage on New Code (required ≥ 85%)
C Reliability Rating on New Code (required ≥ A)

See analysis details on SonarQube Cloud


Copilot AI review requested due to automatic review settings February 23, 2026 11:17
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 4 out of 4 changed files in this pull request and generated 7 comments.



state.consents.postConsent.value.discardPreConsentEvents
) {
this.dataplaneEventsQueue?.clear();
this.destinationsEventsQueue?.clear();

Copilot AI Feb 23, 2026


If discardPreConsentEvents is true, resume() clears the underlying queues but does not clear eventsBuffer. Any pre-consent events stored in eventsBuffer will still be replayed when buffering is lifted, defeating the discard option. Clear eventsBuffer (and disable buffering) when discarding pre-consent events.

Suggested change
this.destinationsEventsQueue?.clear();
this.destinationsEventsQueue?.clear();
this.eventsBuffer = [];
this.isEventBufferingActive = false;

return !!(this.sessionId && this.clientId);
const LOAD_DELAY_MS = 2000;
const gtagReady = !!(window.dataLayer && window.dataLayer.push !== Array.prototype.push);
if (gtagReady && this.getLoadedAt === null) {

Copilot AI Feb 23, 2026


isLoaded() checks this.getLoadedAt === null, but the field introduced is gtagLoadedAt. As written, this.getLoadedAt is undefined, so gtagLoadedAt may never be set and the load-delay logic won’t behave as intended. Use the correct property name when initializing the timestamp.

Suggested change
if (gtagReady && this.getLoadedAt === null) {
if (gtagReady && this.gtagLoadedAt === null) {

Comment on lines +121 to +131
const startDpQueueAndReplayEvents = () => {
this.isEventBufferingActive = false;
this.eventsBuffer.forEach(event => {
this.enqueue(event);
});

if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
this.dataplaneEventsQueue?.start();
}

this.eventsBuffer = [];

Copilot AI Feb 23, 2026


This change introduces a new buffering/replay path (eventsBuffer, isEventBufferingActive, startDpQueueAndReplayEvents) plus new discard behavior interactions. Existing EventRepository tests don’t cover buffering an event, lifting buffering, and asserting (a) dataplane enqueue happens once, (b) destinations enqueue behavior is preserved, and (c) discardPreConsentEvents clears the buffer. Adding unit tests for these cases would prevent regressions.

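The behavior those tests should pin down can be modeled with a minimal stand-in. This is an illustrative sketch of the buffer/replay/discard semantics described in this thread, not the SDK's EventRepository:

```javascript
// Minimal model of the buffer/replay/discard semantics under test;
// not the SDK's actual EventRepository.
class BufferingRepo {
  constructor() {
    this.isEventBufferingActive = true;
    this.eventsBuffer = [];
    this.delivered = []; // stand-in for the dataplane queue
  }

  enqueue(event) {
    if (this.isEventBufferingActive) {
      this.eventsBuffer.push(event);
      return;
    }
    this.delivered.push(event);
  }

  startDpQueueAndReplayEvents() {
    this.isEventBufferingActive = false;
    this.eventsBuffer.forEach(e => this.enqueue(e));
    this.eventsBuffer = [];
  }

  discardPreConsentEvents() {
    this.eventsBuffer = [];
    this.isEventBufferingActive = false;
  }
}

const replayRepo = new BufferingRepo();
replayRepo.enqueue({ type: 'track' });
replayRepo.startDpQueueAndReplayEvents();
console.log(replayRepo.delivered.length); // 1 — buffered event replayed exactly once

const discardRepo = new BufferingRepo();
discardRepo.enqueue({ type: 'page' });
discardRepo.discardPreConsentEvents();
console.log(discardRepo.delivered.length); // 0 — discarded, never replayed
```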
Comment on lines 157 to 171
/**
* If the gtag is successfully initialized, client ID and session ID fields will have valid values for the given GA4 configuration
* If the gtag is successfully initialized, client ID and session ID fields will have valid values for the given GA4 configuration.
* Returns true only after 2 seconds from init() to allow gtag to fully initialize.
*/
isLoaded() {
return !!(this.sessionId && this.clientId);
const LOAD_DELAY_MS = 2000;
const gtagReady = !!(window.dataLayer && window.dataLayer.push !== Array.prototype.push);
if (gtagReady && this.getLoadedAt === null) {
this.gtagLoadedAt = Date.now();
}
const hasWaitedLongEnough =
this.gtagLoadedAt !== null && Date.now() - this.gtagLoadedAt >= LOAD_DELAY_MS;
const hasValidSession = this.sessionNumber !== null;
return gtagReady && (hasValidSession || hasWaitedLongEnough);
}

Copilot AI Feb 23, 2026


isLoaded() now has timing-based readiness logic (gtagLoadedAt + 2s delay) and different readiness criteria than before. There are GA4 integration tests, but none appear to assert the new isLoaded() behavior. Adding unit tests around the delay/sessionNumber conditions would help catch regressions in destination readiness detection.

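One way to make the delay testable is an injectable clock. The sketch below mirrors the reviewed isLoaded() shape (gtagLoadedAt, sessionNumber, LOAD_DELAY_MS) as a standalone function; it is an assumption-laden harness, not the integration's real code:

```javascript
// Standalone model of the delay logic with an injectable clock, so the
// 2-second window can be asserted without real waiting.
function makeIsLoaded(now) {
  const LOAD_DELAY_MS = 2000;
  const state = { gtagLoadedAt: null, sessionNumber: null, gtagReady: false };
  const isLoaded = () => {
    if (state.gtagReady && state.gtagLoadedAt === null) {
      state.gtagLoadedAt = now(); // record first moment gtag became ready
    }
    const hasWaitedLongEnough =
      state.gtagLoadedAt !== null && now() - state.gtagLoadedAt >= LOAD_DELAY_MS;
    const hasValidSession = !!state.sessionNumber;
    return state.gtagReady && (hasValidSession || hasWaitedLongEnough);
  };
  return { state, isLoaded };
}

let t = 0;
const { state, isLoaded } = makeIsLoaded(() => t);
console.log(isLoaded()); // false — gtag not ready
state.gtagReady = true;
console.log(isLoaded()); // false — no session yet, delay not elapsed
t = 2500;
console.log(isLoaded()); // true — waited past LOAD_DELAY_MS
```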
Comment on lines +112 to +116
const hybridDestExist = state.nativeDestinations.activeDestinations.value.some(
(dest: Destination) => isHybridModeDestination(dest),
);
const shouldBufferEvents = bufferEventsUntilReady && hybridDestExist;


Copilot AI Feb 23, 2026


shouldBufferEvents is computed once from the current activeDestinations snapshot. If activeDestinations changes later (e.g., consent updates / setActiveDestinations runs again), the buffering decision won’t update because the effect only watches clientDestinationsReady. Consider computing shouldBufferEvents inside the effect (reading activeDestinations.value) or making buffering activation reactive to destination list changes.

Comment on lines +162 to 163
this.startDpEventsQueue();
}

Copilot AI Feb 23, 2026


resume() calls startDpEventsQueue(), which registers a new effect() and (possibly) a new timeout on every resume. If resume() can be invoked multiple times (e.g., multiple consent updates), this can lead to duplicated effects/timeouts and repeated queue starts. Consider making startDpEventsQueue() idempotent (store/dispose the effect) or starting the queue directly in resume() when appropriate.

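The idempotency idea above could be sketched by keeping the effect's dispose handle. Names and shapes here are assumptions drawn from this comment, not the SDK's code:

```javascript
// Sketch of an idempotent startDpEventsQueue: keep the effect's dispose handle
// so repeated resume() calls don't register duplicate listeners.
class QueueStarter {
  constructor() {
    this.effectDispose = null;
    this.effectRegistrations = 0;
  }

  startDpEventsQueue() {
    if (this.effectDispose !== null) {
      return; // effect already registered; nothing to do
    }
    this.effectRegistrations += 1;
    this.effectDispose = () => {}; // stand-in for effect()'s returned dispose fn
  }

  resume() {
    this.startDpEventsQueue();
  }
}

const starter = new QueueStarter();
starter.resume();
starter.resume(); // second consent update: no duplicate effect
console.log(starter.effectRegistrations); // 1
```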
Comment on lines +167 to +170
const hasWaitedLongEnough =
this.gtagLoadedAt !== null && Date.now() - this.gtagLoadedAt >= LOAD_DELAY_MS;
const hasValidSession = this.sessionNumber !== null;
return gtagReady && (hasValidSession || hasWaitedLongEnough);

Copilot AI Feb 23, 2026


hasValidSession uses this.sessionNumber !== null, but sessionNumber is initialized to an empty string in the constructor, so this condition is always true. This makes isLoaded() return true as soon as gtagReady is true, bypassing the intended readiness/delay behavior. Consider initializing sessionNumber to null (or checking for a non-empty value) and aligning the validity check accordingly.

@github-actions

Hello! This PR has been open for 20 days without any activity. Therefore, it's considered as stale and is scheduled to be closed in 10 days. If you're still working on this, please remove the 'Stale' label or add a comment to keep it open. Thanks for your contribution!

@github-actions github-actions bot added the Stale label Mar 26, 2026
