feat: buffer data plane events conditionally #2744
saikumarrs wants to merge 9 commits into develop
Conversation
> **Note: Reviews paused.** It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior is configurable.
📝 Walkthrough

Refactors destination activation in Analytics, adds event buffering and replay logic to EventRepository, adjusts GA4 integration readiness checks, and modifies CI/CD workflows (deploy-beta gating disabled; deploy-npm publish command simplified).

Sequence diagram(s): (mermaid diagram omitted)

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Pre-merge checks: ✅ 3 passed
Pull request overview
This PR implements conditional buffering of data plane events based on the bufferDataPlaneEventsUntilReady load API option. When this option is enabled and hybrid mode destinations exist, events are buffered until either the client destinations are ready or a configurable timeout expires.
Changes:
- Adds event buffering logic to the `EventRepository` class with new properties `eventsBuffer` and `isEventBufferingActive`
- Extracts data plane queue initialization into a new `startDpEventsQueue()` method that manages the buffering lifecycle
- Modifies `enqueue()` to buffer events conditionally instead of immediately enqueueing them
- Calls `setActiveDestinations()` earlier in the Analytics initialization flow to ensure destination information is available before event repository initialization
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| packages/analytics-js/src/components/eventRepository/EventRepository.ts | Implements event buffering mechanism with conditional queue startup, replay logic, and buffer management |
| packages/analytics-js/src/components/core/Analytics.ts | Extracts setActiveDestinations() method and calls it during onPluginsReady() to ensure destinations are set before event processing begins |
Comments suppressed due to low confidence (1)
packages/analytics-js/src/components/eventRepository/EventRepository.ts:198
- When event buffering is active, the callback is invoked immediately on line 198, even though the event is only buffered and not yet enqueued to the data plane or destinations queues. This means callbacks are executed before events are actually processed and sent. From a user perspective, the callback should indicate that the event has been successfully enqueued for processing, but in this case it fires before the event is truly enqueued. Consider deferring callback invocation until events are actually enqueued (either immediately if buffering is inactive, or during replay if buffering is active).
```ts
// Invoke the callback if it exists
const apiName = `${event.type.charAt(0).toUpperCase()}${event.type.slice(1)}${API_SUFFIX}`;
safelyInvokeCallback(callback, [dpQEvent], apiName, this.logger);
```
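One way to defer callbacks until events are truly enqueued can be sketched as a standalone toy, outside the SDK (the class and type names here are illustrative, not from the codebase):

```typescript
// Illustrative sketch: keep the callback with the buffered event and invoke it
// only when the event is actually handed to a queue (immediately, or at replay).
type Callback = (eventName: string) => void;

interface BufferedItem {
  event: string;
  callback?: Callback;
}

class CallbackDeferringBuffer {
  delivered: string[] = [];
  private buffer: BufferedItem[] = [];

  constructor(private buffering: boolean) {}

  enqueue(event: string, callback?: Callback): void {
    if (this.buffering) {
      // Defer: store the callback instead of firing it now
      this.buffer.push({ event, callback });
    } else {
      this.deliver(event, callback);
    }
  }

  // Called when buffering ends (destinations ready or timeout expired)
  replay(): void {
    this.buffering = false;
    const pending = this.buffer;
    this.buffer = [];
    pending.forEach(({ event, callback }) => this.deliver(event, callback));
  }

  private deliver(event: string, callback?: Callback): void {
    this.delivered.push(event);
    callback?.(event);
  }
}
```

With this shape, a caller's callback fires exactly when its event reaches a queue, never while the event is still sitting in the buffer.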
```ts
if (this.isEventBufferingActive) {
  this.eventsBuffer.push(dpQEvent);
} else {
  this.pluginsManager.invokeSingle(
    `${DATA_PLANE_QUEUE_EXT_POINT_PREFIX}.enqueue`,
    state,
    this.dataplaneEventsQueue,
    dpQEvent,
    this.errorHandler,
    this.logger,
  );

  const dQEvent = clone(event);
  this.pluginsManager.invokeSingle(
    `${DESTINATIONS_QUEUE_EXT_POINT_PREFIX}.enqueue`,
    state,
    this.destinationsEventsQueue,
    dQEvent,
    this.errorHandler,
    this.logger,
  );
}
```
When event buffering is active, the destinations events queue is not being populated. This means that when buffering ends and events are replayed via this.enqueue(event), the destinations will receive events, but they will miss any events that were buffered. The current implementation only buffers events in eventsBuffer for the data plane queue, but doesn't handle the destinations queue during buffering. This could cause inconsistent behavior where device mode destinations don't receive all events that the data plane does.
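A standalone sketch of one possible fix is to buffer the event once and fan it out to both queues at replay time, so destinations see every event the data plane sees (all names below are illustrative, not the SDK's):

```typescript
// Illustrative sketch: buffer the original event and fan it out to both queues
// at replay, so destinations never miss events that were buffered.
class DualQueueBuffer {
  dataPlane: string[] = [];
  destinations: string[] = [];
  private buffer: string[] = [];

  constructor(private buffering: boolean) {}

  enqueue(event: string): void {
    if (this.buffering) {
      this.buffer.push(event);
      return;
    }
    // Both queues are fed from the same event in the non-buffered path
    this.dataPlane.push(event);
    this.destinations.push(event);
  }

  replay(): void {
    this.buffering = false;
    const pending = this.buffer;
    this.buffer = [];
    pending.forEach(e => this.enqueue(e));
  }
}
```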
```ts
let timeoutId: number;
// Start the queue when no event buffering is required
// or when buffering is required and the client destinations are ready
effect(() => {
  if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
    (globalThis as typeof window).clearTimeout(timeoutId);
```
The timeoutId variable is declared but might be accessed before it's assigned. If the effect callback on line 137 executes synchronously (when shouldBufferEvents is false), it will try to clear timeoutId on line 139 before it has been assigned any value on line 147. While this won't cause an error (clearTimeout with undefined is safe), it's better to initialize timeoutId to a default value or restructure the code to make the intent clearer.
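A minimal illustration of the suggested initialization, outside the SDK (`arm` and `disarm` are made-up helper names):

```typescript
// Illustrative sketch: model "not yet armed" explicitly with undefined so that
// clearing the timer is an intentional branch rather than an incidental no-op.
let timeoutId: ReturnType<typeof setTimeout> | undefined;

function arm(ms: number, cb: () => void): void {
  timeoutId = setTimeout(cb, ms);
}

function disarm(): boolean {
  if (timeoutId === undefined) {
    return false; // timer was never armed; nothing to clear
  }
  clearTimeout(timeoutId);
  timeoutId = undefined;
  return true;
}
```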
```ts
const startDpQueueAndReplayEvents = () => {
  this.isEventBufferingActive = false;
  this.eventsBuffer.forEach(event => {
    this.enqueue(event);
  });
```
The buffer replay logic calls this.enqueue(event) for each buffered event. However, the events in the buffer are already the result of calling getFinalEvent(event, state) on line 171 when they were first buffered. When replayed, enqueue() will call getFinalEvent() again on these already-processed events on line 171. This results in unnecessary double cloning and processing of events. Additionally, since the original event parameter is not available during replay (only dpQEvent is stored), the destinations queue on lines 185-193 will receive the transformed event instead of the original event. Consider storing the original event in the buffer alongside the transformed one, or creating a separate replay method that directly enqueues to the queues without going through the full enqueue() flow.
```ts
effect(() => {
  if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
    (globalThis as typeof window).clearTimeout(timeoutId);
    this.dataplaneEventsQueue?.start();
    startDpQueueAndReplayEvents();
  }
});
```
The effect() function on line 137 returns a dispose function that should be stored and called when appropriate to prevent memory leaks. Without proper cleanup, if startDpEventsQueue() is called multiple times (e.g., through the resume() method), multiple effect subscriptions will accumulate, all watching the same state changes and potentially causing the callback to execute multiple times. Consider storing the dispose function and calling it before the function exits or when the component is being cleaned up.
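A toy model of the dispose-before-resubscribe pattern (the real code would store the disposer returned by the signals library's `effect`; `TinySignal` below is a stand-in, not the actual implementation):

```typescript
// Illustrative sketch: dispose a previous subscription before creating a new
// one, so repeated start() calls never accumulate listeners.
type Listener = () => void;

class TinySignal {
  private listeners = new Set<Listener>();

  subscribe(fn: Listener): () => void {
    this.listeners.add(fn);
    return () => {
      this.listeners.delete(fn); // the disposer
    };
  }

  notify(): void {
    this.listeners.forEach(fn => fn());
  }

  get listenerCount(): number {
    return this.listeners.size;
  }
}

class QueueStarter {
  fired = 0;
  private dispose?: () => void;

  start(ready: TinySignal): void {
    this.dispose?.(); // tear down any subscription from a previous start()
    this.dispose = ready.subscribe(() => {
      this.fired += 1;
    });
  }
}
```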
```ts
timeoutId = (globalThis as typeof window).setTimeout(() => {
  if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
    this.dataplaneEventsQueue?.start();
  }
  startDpQueueAndReplayEvents();
}, state.loadOptions.value.dataPlaneEventsBufferTimeout);
```
If state.loadOptions.value.dataPlaneEventsBufferTimeout is undefined, null, or 0, the timeout will fire immediately or use these invalid values. While the load options normalization should handle this, it's safer to add a guard or use a default value here to ensure the timeout is always a valid positive number. This prevents unexpected behavior if the normalization logic changes or fails.
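A sketch of such a guard, with an assumed default constant (`DEFAULT_BUFFER_TIMEOUT_MS` is not a value taken from the SDK):

```typescript
// Illustrative sketch of a defensive timeout guard.
const DEFAULT_BUFFER_TIMEOUT_MS = 10_000; // assumed default, not from the SDK

function resolveBufferTimeout(configured: unknown): number {
  // Fall back to the default for undefined, null, 0, negative, or NaN values
  return typeof configured === 'number' && Number.isFinite(configured) && configured > 0
    ? configured
    : DEFAULT_BUFFER_TIMEOUT_MS;
}
```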
```ts
  this.destinationsEventsQueue?.clear();
}

this.startDpEventsQueue();
```
Calling startDpEventsQueue() from the resume() method will create multiple active effect() subscriptions and timeouts each time resume() is called. This can lead to:
- Multiple concurrent effect subscriptions watching the same state changes
- Multiple active timeouts that will all fire
- The `startDpQueueAndReplayEvents()` function being called multiple times
This will cause the buffered events to be replayed multiple times and the data plane queue to be started multiple times. Consider adding a guard to prevent multiple calls to startDpEventsQueue(), or ensure that effects and timeouts from previous calls are properly cleaned up.
```diff
-this.startDpEventsQueue();
+if (!this.dataplaneEventsQueue) {
+  this.startDpEventsQueue();
+}
```
```ts
effect(() => {
  if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
    (globalThis as typeof window).clearTimeout(timeoutId);
    this.dataplaneEventsQueue?.start();
    startDpQueueAndReplayEvents();
  }
});
```
The effect subscription created on line 137 will execute immediately when shouldBufferEvents is false (when there are no hybrid destinations or buffering is disabled). This will call startDpQueueAndReplayEvents() immediately, which attempts to replay the buffer and start the queue. However, since shouldBufferEvents is evaluated once at the time startDpEventsQueue() is called, if activeDestinations changes later (e.g., after destinations are loaded), the effect won't react to those changes. This could lead to inconsistent behavior if destinations are loaded after the initial check.
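The difference between freezing the condition and re-evaluating it can be sketched as follows (illustrative names only, not the SDK's state shape):

```typescript
// Illustrative sketch: a condition captured once at startup stays frozen, while
// a lazily evaluated one reflects later state changes.
interface SdkState {
  bufferUntilReady: boolean;
  hybridDestinations: number;
  clientDestinationsReady: boolean;
}

// Re-evaluated on every call, so it tracks the current state
const shouldKeepBuffering = (s: SdkState): boolean =>
  s.bufferUntilReady && s.hybridDestinations > 0 && !s.clientDestinationsReady;
```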
```ts
if (
  this.dataplaneEventsQueue?.scheduleTimeoutActive !== true &&
  state.consents.postConsent.value.discardPreConsentEvents
) {
```
The condition on line 155 checks if scheduleTimeoutActive !== true, but then only clears queues if discardPreConsentEvents is true. This means the data plane queue is started by startDpEventsQueue() on line 162 regardless of whether events should be discarded. If discardPreConsentEvents is false, the queues are not cleared before starting, which is correct. However, if discardPreConsentEvents is true but scheduleTimeoutActive is also true, the queues won't be cleared, which could lead to pre-consent events being sent when they should be discarded. Consider restructuring the logic to make the intent clearer, or ensure that the queue clearing happens independently of the scheduleTimeoutActive check.
```diff
-if (
-  this.dataplaneEventsQueue?.scheduleTimeoutActive !== true &&
-  state.consents.postConsent.value.discardPreConsentEvents
-) {
+if (state.consents.postConsent.value.discardPreConsentEvents) {
```
```ts
if (this.isEventBufferingActive) {
  this.eventsBuffer.push(dpQEvent);
```
The event buffering and replay logic lacks test coverage. The existing tests verify that the data plane queue starts correctly based on the buffering configuration, but they don't test:
- Whether events are actually buffered when `isEventBufferingActive` is true
- Whether buffered events are replayed correctly when buffering ends
- Whether the destinations queue receives the correct events during replay
- Edge cases like multiple calls to `resume()` or `startDpEventsQueue()`
Consider adding tests to verify the complete buffering and replay flow, including edge cases.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@packages/analytics-js/src/components/eventRepository/EventRepository.ts`:
- Around line 154-163: When discardPreConsentEvents is true we clear
dataplaneEventsQueue and destinationsEventsQueue but forget to clear the
in-memory eventsBuffer, so pre-consent events still replay via
startDpEventsQueue; update the conditional branch that checks
state.consents.postConsent.value.discardPreConsentEvents inside EventRepository
to also clear this.eventsBuffer (and any related buffer structures) before
calling this.startDpEventsQueue(), referencing the eventsBuffer property and
startDpEventsQueue() method so all pre-consent data is cleared consistently.
- Around line 108-151: startDpEventsQueue currently enables buffering and arms
the timeout regardless of clientDestinationsReady, which can re-enable buffering
after immediate readiness; modify the logic so you only set
this.isEventBufferingActive = true and assign timeoutId (and call setTimeout)
when shouldBufferEvents is true AND
state.nativeDestinations.clientDestinationsReady.value is false, leaving
startDpQueueAndReplayEvents and the effect that clears the timeout to run as-is;
in other words, move/guard the block that sets isEventBufferingActive and starts
the timeout behind a check of
state.nativeDestinations.clientDestinationsReady.value inside startDpEventsQueue
so buffering is not re-enabled when destinations are already ready (refer to
startDpEventsQueue, startDpQueueAndReplayEvents, shouldBufferEvents, timeoutId,
isEventBufferingActive).
- Around line 173-193: The buffered path pushes dpQEvent (finalized via
getFinalEvent/merged integrations) into eventsBuffer while the non-buffered path
sends a cloned original event to destinations, causing inconsistent shapes; fix
by making destinations receive the same finalized event: either push the
original event into eventsBuffer and call getFinalEvent/merge integrations at
replay time, or in the non-buffered branch replace clone(event) with the same
finalized dpQEvent (or call getFinalEvent into dQEvent) before invoking
`${DESTINATIONS_QUEUE_EXT_POINT_PREFIX}.enqueue` so that
pluginsManager.invokeSingle always receives the merged/final event for
destinations; update references around isEventBufferingActive, eventsBuffer,
dpQEvent, clone(event), and the `${DESTINATIONS_QUEUE_EXT_POINT_PREFIX}.enqueue`
invocation accordingly.
```diff
 private startDpEventsQueue() {
   const bufferEventsUntilReady = state.loadOptions.value
     .bufferDataPlaneEventsUntilReady as boolean;

   const hybridDestExist = state.nativeDestinations.activeDestinations.value.some(
     (dest: Destination) => isHybridModeDestination(dest),
   );
   const shouldBufferEvents = bufferEventsUntilReady && hybridDestExist;

   // Start the data plane events queue and replay the events from the buffer
   // This function is called when the client destinations are ready
   // or when the timeout expires
   // or when no buffering is required
   const startDpQueueAndReplayEvents = () => {
     this.isEventBufferingActive = false;
     this.eventsBuffer.forEach(event => {
       this.enqueue(event);
     });

     if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
       this.dataplaneEventsQueue?.start();
     }

     this.eventsBuffer = [];
   };

-  if (
-    (hybridDestExist === false || shouldBufferDpEvents === false) &&
-    !bufferEventsBeforeConsent &&
-    this.dataplaneEventsQueue?.scheduleTimeoutActive !== true
-  ) {
+  let timeoutId: number;
+  // Start the queue when no event buffering is required
+  // or when buffering is required and the client destinations are ready
+  effect(() => {
+    if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
+      (globalThis as typeof window).clearTimeout(timeoutId);
+      this.dataplaneEventsQueue?.start();
+      startDpQueueAndReplayEvents();
+    }
+  });

   // Force start the data plane events queue processing after a timeout
-  if (state.loadOptions.value.bufferDataPlaneEventsUntilReady === true) {
+  if (shouldBufferEvents) {
+    this.isEventBufferingActive = true;
+    timeoutId = (globalThis as typeof window).setTimeout(() => {
+      if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
+        this.dataplaneEventsQueue?.start();
+      }
+      startDpQueueAndReplayEvents();
+    }, state.loadOptions.value.dataPlaneEventsBufferTimeout);
+  }
 }
```
Prevent buffering from being re-enabled after immediate readiness.
If startDpEventsQueue runs when clientDestinationsReady is already true, the effect (Line 137-Line 141) disables buffering, but the later if (shouldBufferEvents) block (Line 145-Line 149) sets it back to true and arms the timeout. That can unnecessarily delay events for the full timeout. Reorder the flag/timeout setup and only arm the timeout when destinations aren’t ready.
🛠️ Suggested fix
```diff
 private startDpEventsQueue() {
   const bufferEventsUntilReady = state.loadOptions.value
     .bufferDataPlaneEventsUntilReady as boolean;
   const hybridDestExist = state.nativeDestinations.activeDestinations.value.some(
     (dest: Destination) => isHybridModeDestination(dest),
   );
   const shouldBufferEvents = bufferEventsUntilReady && hybridDestExist;
   // Start the data plane events queue and replay the events from the buffer
   // This function is called when the client destinations are ready
   // or when the timeout expires
   // or when no buffering is required
   const startDpQueueAndReplayEvents = () => {
     this.isEventBufferingActive = false;
     this.eventsBuffer.forEach(event => {
       this.enqueue(event);
     });
     if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
       this.dataplaneEventsQueue?.start();
     }
     this.eventsBuffer = [];
   };
-  let timeoutId: number;
-  // Start the queue when no event buffering is required
-  // or when buffering is required and the client destinations are ready
-  effect(() => {
-    if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
-      (globalThis as typeof window).clearTimeout(timeoutId);
-      startDpQueueAndReplayEvents();
-    }
-  });
-
-  // Force start the data plane events queue processing after a timeout
-  if (shouldBufferEvents) {
-    this.isEventBufferingActive = true;
-    timeoutId = (globalThis as typeof window).setTimeout(() => {
-      startDpQueueAndReplayEvents();
-    }, state.loadOptions.value.dataPlaneEventsBufferTimeout);
-  }
+  this.isEventBufferingActive = shouldBufferEvents;
+  let timeoutId: number | undefined;
+
+  // Force start the data plane events queue processing after a timeout
+  if (shouldBufferEvents && !state.nativeDestinations.clientDestinationsReady.value) {
+    timeoutId = (globalThis as typeof window).setTimeout(() => {
+      startDpQueueAndReplayEvents();
+    }, state.loadOptions.value.dataPlaneEventsBufferTimeout);
+  }
+
+  // Start the queue when no event buffering is required
+  // or when buffering is required and the client destinations are ready
+  effect(() => {
+    if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
+      if (timeoutId !== undefined) {
+        (globalThis as typeof window).clearTimeout(timeoutId);
+      }
+      startDpQueueAndReplayEvents();
+    }
+  });
 }
```

🤖 Prompt for AI Agents
In `@packages/analytics-js/src/components/eventRepository/EventRepository.ts`
around lines 108 - 151, startDpEventsQueue currently enables buffering and arms
the timeout regardless of clientDestinationsReady, which can re-enable buffering
after immediate readiness; modify the logic so you only set
this.isEventBufferingActive = true and assign timeoutId (and call setTimeout)
when shouldBufferEvents is true AND
state.nativeDestinations.clientDestinationsReady.value is false, leaving
startDpQueueAndReplayEvents and the effect that clears the timeout to run as-is;
in other words, move/guard the block that sets isEventBufferingActive and starts
the timeout behind a check of
state.nativeDestinations.clientDestinationsReady.value inside startDpEventsQueue
so buffering is not re-enabled when destinations are already ready (refer to
startDpEventsQueue, startDpQueueAndReplayEvents, shouldBufferEvents, timeoutId,
isEventBufferingActive).
```ts
if (
  this.dataplaneEventsQueue?.scheduleTimeoutActive !== true &&
  state.consents.postConsent.value.discardPreConsentEvents
) {
  this.dataplaneEventsQueue?.clear();
  this.destinationsEventsQueue?.clear();
}

this.startDpEventsQueue();
```
Discarded pre-consent events still replay from eventsBuffer.
When discardPreConsentEvents is true (Line 156-Line 159), queues are cleared but eventsBuffer is not, so buffered pre-consent events will still replay in startDpEventsQueue. Clear the buffer in the same branch.
🛠️ Suggested fix
```diff
 if (
   this.dataplaneEventsQueue?.scheduleTimeoutActive !== true &&
   state.consents.postConsent.value.discardPreConsentEvents
 ) {
   this.dataplaneEventsQueue?.clear();
   this.destinationsEventsQueue?.clear();
+  this.eventsBuffer = [];
 }

 this.startDpEventsQueue();
```
🤖 Prompt for AI Agents
In `@packages/analytics-js/src/components/eventRepository/EventRepository.ts`
around lines 154 - 163, When discardPreConsentEvents is true we clear
dataplaneEventsQueue and destinationsEventsQueue but forget to clear the
in-memory eventsBuffer, so pre-consent events still replay via
startDpEventsQueue; update the conditional branch that checks
state.consents.postConsent.value.discardPreConsentEvents inside EventRepository
to also clear this.eventsBuffer (and any related buffer structures) before
calling this.startDpEventsQueue(), referencing the eventsBuffer property and
startDpEventsQueue() method so all pre-consent data is cleared consistently.
```ts
if (this.isEventBufferingActive) {
  this.eventsBuffer.push(dpQEvent);
} else {
  this.pluginsManager.invokeSingle(
    `${DATA_PLANE_QUEUE_EXT_POINT_PREFIX}.enqueue`,
    state,
    this.dataplaneEventsQueue,
    dpQEvent,
    this.errorHandler,
    this.logger,
  );

  const dQEvent = clone(event);
  this.pluginsManager.invokeSingle(
    `${DESTINATIONS_QUEUE_EXT_POINT_PREFIX}.enqueue`,
    state,
    this.destinationsEventsQueue,
    dQEvent,
    this.errorHandler,
    this.logger,
  );
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# First, let's run the provided script to inspect getFinalEvent behavior
rg -n "getFinalEvent" packages/analytics-js/src/components/eventRepository -g '*.{ts,tsx}' -C3
```
Repository: rudderlabs/rudder-sdk-js
Length of output: 2891

🏁 Script executed:
```shell
#!/bin/bash
# Let's examine the EventRepository.ts file structure around the lines mentioned
cd packages/analytics-js && head -200 src/components/eventRepository/EventRepository.ts | tail -50
```
Repository: rudderlabs/rudder-sdk-js
Length of output: 1400

🏁 Script executed:
```shell
#!/bin/bash
# Search for where events are replayed from buffer to understand the full flow
rg -n "eventsBuffer" packages/analytics-js/src/components/eventRepository -g '*.{ts,tsx}' -A2 -B2
```
Repository: rudderlabs/rudder-sdk-js
Length of output: 2580

🏁 Script executed:
```shell
#!/bin/bash
# Look for the enqueue method implementations and how they handle events
rg -n "enqueue" packages/analytics-js/src/components/eventRepository -g '*.{ts,tsx}' -A3
```
Repository: rudderlabs/rudder-sdk-js
Length of output: 2148

🏁 Script executed:
```shell
#!/bin/bash
# Get the complete getFinalEvent function implementation
sed -n '31,45p' packages/analytics-js/src/components/eventRepository/utils.ts
```
Repository: rudderlabs/rudder-sdk-js
Length of output: 739

🏁 Script executed:
```shell
#!/bin/bash
# Look at the getOverriddenIntegrationOptions function to understand what getFinalEvent does
rg -n "getOverriddenIntegrationOptions" packages/analytics-js/src/components/eventRepository/utils.ts -A10
```
Repository: rudderlabs/rudder-sdk-js
Length of output: 1022

🏁 Script executed:
```shell
#!/bin/bash
# Check if there are any tests that verify the buffering behavior
find packages/analytics-js -name '*.test.ts' -o -name '*.spec.ts' | xargs rg -l "eventsBuffer\|EventRepository" 2>/dev/null
```
Repository: rudderlabs/rudder-sdk-js
Length of output: 52
Event shape inconsistency between buffered and non-buffered paths.
When buffering is active, dpQEvent (finalized with merged integrations config) is stored and replayed via enqueue, which sends the finalized event to destinations. In the non-buffered path, destinations receive the original event via clone(event) without the integrations config merge applied. This causes destinations to receive different event shapes depending on whether buffering was used. Consider either buffering the original event and applying getFinalEvent only during replay, or ensuring destinations consistently receive the same event shape in both paths.
🤖 Prompt for AI Agents
In `@packages/analytics-js/src/components/eventRepository/EventRepository.ts`
around lines 173 - 193, The buffered path pushes dpQEvent (finalized via
getFinalEvent/merged integrations) into eventsBuffer while the non-buffered path
sends a cloned original event to destinations, causing inconsistent shapes; fix
by making destinations receive the same finalized event: either push the
original event into eventsBuffer and call getFinalEvent/merge integrations at
replay time, or in the non-buffered branch replace clone(event) with the same
finalized dpQEvent (or call getFinalEvent into dQEvent) before invoking
`${DESTINATIONS_QUEUE_EXT_POINT_PREFIX}.enqueue` so that
pluginsManager.invokeSingle always receives the merged/final event for
destinations; update references around isEventBufferingActive, eventsBuffer,
dpQEvent, clone(event), and the `${DESTINATIONS_QUEUE_EXT_POINT_PREFIX}.enqueue`
invocation accordingly.
size-limit report 📦
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@scripts/make-package-json-publish-ready.sh`:
- Around line 12-19: The nx lookup currently silences stderr and ignores exit
codes causing empty AFFECTED_PROJECTS to be treated as "no changes"; run the
command without redirecting stderr (or capture both output and status) and check
its exit status immediately after invoking npx nx show projects --affected
--base="$BASE_REF" --head="$HEAD_REF"; if the command fails (non‑zero exit),
write an error message and exit with a non‑zero status instead of continuing, so
AFFECTED_PROJECTS is only used when the nx command actually succeeded.
🧹 Nitpick comments (1)
.github/workflows/deploy-npm.yml (1)
166-168: Quote version arguments when invoking the script. It's safer to quote these refs to avoid accidental word-splitting if the values are empty or contain unexpected whitespace.
🛠️ Suggested fix
```diff
-./scripts/make-package-json-publish-ready.sh ${{ env.last_monorepo_version }} ${{ env.current_monorepo_version }}
+./scripts/make-package-json-publish-ready.sh "${{ env.last_monorepo_version }}" "${{ env.current_monorepo_version }}"
```
```sh
# Get affected projects using Nx
echo "Getting affected packages (base: $BASE_REF, head: $HEAD_REF)..."
AFFECTED_PROJECTS=$(npx nx show projects --affected --base="$BASE_REF" --head="$HEAD_REF" 2>/dev/null)

if [ -z "$AFFECTED_PROJECTS" ]; then
  echo "No affected packages found."
  exit 0
fi
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n scripts/make-package-json-publish-ready.sh
```
Repository: rudderlabs/rudder-sdk-js
Length of output: 3208

🏁 Script executed:
```shell
rg -l "npx nx show projects" --type sh
```
Repository: rudderlabs/rudder-sdk-js
Length of output: 110

🏁 Script executed:
```shell
rg "AFFECTED_PROJECTS" scripts/ --type sh -B 2 -A 2
```
Repository: rudderlabs/rudder-sdk-js
Length of output: 1320
Fail fast when Nx affected lookup errors out.
npx nx show projects --affected stderr is suppressed and its exit code is ignored. If Nx fails (bad refs, missing project graph), AFFECTED_PROJECTS becomes empty and the script exits with status 0, skipping the cleanup loop and risking publication of uncleaned package.json files. Capture the exit status and abort on failure.
🛠️ Suggested fix
```diff
-AFFECTED_PROJECTS=$(npx nx show projects --affected --base="$BASE_REF" --head="$HEAD_REF" 2>/dev/null)
+AFFECTED_PROJECTS=$(npx nx show projects --affected --base="$BASE_REF" --head="$HEAD_REF")
+NX_STATUS=$?
+if [ $NX_STATUS -ne 0 ]; then
+  echo "Nx affected projects lookup failed (base: $BASE_REF, head: $HEAD_REF)."
+  exit 1
+fi
```
🤖 Prompt for AI Agents
In `@scripts/make-package-json-publish-ready.sh` around lines 12 - 19, The nx
lookup currently silences stderr and ignores exit codes causing empty
AFFECTED_PROJECTS to be treated as "no changes"; run the command without
redirecting stderr (or capture both output and status) and check its exit status
immediately after invoking npx nx show projects --affected --base="$BASE_REF"
--head="$HEAD_REF"; if the command fails (non‑zero exit), write an error message
and exit with a non‑zero status instead of continuing, so AFFECTED_PROJECTS is
only used when the nx command actually succeeded.
There was a problem hiding this comment.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @.github/workflows/deploy-npm.yml:
- Around line 166-186: The AFFECTED_PROJECTS assignment and PROJECTS_CSV
conversion are incorrect: capture newline-separated output from npx nx show
projects --affected into AFFECTED_PROJECTS and fail the job if the nx command
returns a non-zero status or produces no projects; then convert newlines (not
spaces) into commas for PROJECTS_CSV (trim any trailing comma) before passing to
npx nx release publish; update the block that sets AFFECTED_PROJECTS, checks it,
computes PROJECTS_CSV, and handles TAG_OPTION so the workflow errors out on nx
failures and supplies a valid comma-separated --projects value.
In `@scripts/make-package-json-publish-ready.sh`:
- Around line 3-12: The script's no-argument check currently exits with status 0
which hides failures; in the if [ $# -eq 0 ] block in
make-package-json-publish-ready.sh replace exit 0 with a non-zero exit (e.g.,
exit 1) so the script fails fast in CI when no projects are provided and ensure
the echo messages remain to show usage context.
- Around line 23-57: The script currently continues when a mapped package
directory or package.json is missing which risks silent skips; modify the
for-loop handling around package_dir/package to fail fast: after computing
package="$PACKAGES_DIR/$package_dir" check if [ -d "$package" ] and if not echo
an explicit error including "$package" and exit 1; when
PACKAGE_JSON="$package/package.json" is set, if [ ! -f "$PACKAGE_JSON" ] then
echo an explicit error including "$PACKAGE_JSON" and exit 1; also ensure the jq
commands for cleaning (used in the rudder-sdk-js branch and the standard branch)
check their exit status and abort with a non-zero exit if jq fails so the script
never continues on error.
.github/workflows/deploy-npm.yml
  # Get affected projects between versions
  AFFECTED_PROJECTS=$(npx nx show projects --affected --base=${{ env.last_monorepo_version }} --head=${{ env.current_monorepo_version }})

  # Remove unnecessary fields from package.json before publishing
- ./scripts/make-package-json-publish-ready.sh
+ if [ -n "$AFFECTED_PROJECTS" ]; then
+   echo "Cleaning package.json for affected projects..."
+   ./scripts/make-package-json-publish-ready.sh $AFFECTED_PROJECTS
+
+   # Convert space-separated projects to comma-separated for Nx
+   PROJECTS_CSV=$(echo $AFFECTED_PROJECTS | tr ' ' ',')
+   echo "Projects to publish: $PROJECTS_CSV"

    TAG_OPTION=""
    if [[ "${{ inputs.environment }}" == "beta" ]]; then
      TAG_OPTION="--tag=beta"
    fi

-   npx nx release publish --verbose --base=${{ env.last_monorepo_version }} --head=${{ env.current_monorepo_version }} $TAG_OPTION
+   npx nx release publish --verbose --projects=$PROJECTS_CSV $TAG_OPTION
+ else
+   echo "No affected projects found, skipping publish"
+ fi
🧩 Analysis chain
🏁 Script executed:
# First, check the repo structure and Nx configuration
ls -la | head -20
Repository: rudderlabs/rudder-sdk-js
Length of output: 1246
🏁 Script executed:
# Check if Nx is available and examine nx.json
fd -t f "nx.json" -o "package.json" | head -5
Repository: rudderlabs/rudder-sdk-js
Length of output: 211
🏁 Script executed:
# Look at the actual workflow file to understand the context better
cat -n .github/workflows/deploy-npm.yml | head -100
Repository: rudderlabs/rudder-sdk-js
Length of output: 4363
🏁 Script executed:
# Read the section around lines 166-186
cat -n .github/workflows/deploy-npm.yml | sed -n '160,195p'
Repository: rudderlabs/rudder-sdk-js
Length of output: 1720
🏁 Script executed:
# Check the Nx configuration
cat nx.json 2>/dev/null | head -50
Repository: rudderlabs/rudder-sdk-js
Length of output: 1129
🏁 Script executed:
# Check if the script mentioned exists
ls -la scripts/make-package-json-publish-ready.sh
Repository: rudderlabs/rudder-sdk-js
Length of output: 159
🏁 Script executed:
# Check Nx projects output format - test what the actual command returns
npx nx show projects --help 2>&1 | head -30
Repository: rudderlabs/rudder-sdk-js
Length of output: 241
🏁 Script executed:
# Try to get actual output from nx show projects to see format
# Since we may not have full git history, let's just check what projects exist
npx nx show projects 2>&1 | head -20
Repository: rudderlabs/rudder-sdk-js
Length of output: 241
🏁 Script executed:
# Check the make-package-json-publish-ready.sh script to understand how it uses the arguments
cat scripts/make-package-json-publish-ready.sh
Repository: rudderlabs/rudder-sdk-js
Length of output: 2798
🏁 Script executed:
# Look for any Nx documentation or config that shows output format
fd -t f "\.npmrc|package.json" -path "root" | head -5
Repository: rudderlabs/rudder-sdk-js
Length of output: 236
🏁 Script executed:
# Check the main package.json for Nx version and any related scripts
cat package.json | grep -A 5 -B 5 "nx\|scripts" | head -40
Repository: rudderlabs/rudder-sdk-js
Length of output: 2743
🏁 Script executed:
# Search for other uses of "npx nx show projects" in the codebase
rg "nx show projects" -A 2 -B 2
Repository: rudderlabs/rudder-sdk-js
Length of output: 921
🏁 Script executed:
# Check GitHub Actions documentation or other workflows that use nx commands
fd -t f "\.yml|\.yaml" .github/workflows | xargs rg "nx show projects" -A 3 -B 1
Repository: rudderlabs/rudder-sdk-js
Length of output: 1644
🏁 Script executed:
# Verify the exact format by looking at the other workflows more closely
cat -n .github/workflows/unit-tests-and-lint.yml | sed -n '40,70p'
Repository: rudderlabs/rudder-sdk-js
Length of output: 1793
🏁 Script executed:
# Check what happens if we try to understand the difference between the workflows
cat -n .github/workflows/security-code-quality-and-bundle-size-checks.yml | sed -n '80,120p'
Repository: rudderlabs/rudder-sdk-js
Length of output: 1864
🏁 Script executed:
# Let's look at the full context of the security workflow to see the complete pattern
cat -n .github/workflows/security-code-quality-and-bundle-size-checks.yml | sed -n '20,60p'
Repository: rudderlabs/rudder-sdk-js
Length of output: 1821
🏁 Script executed:
# Search for the exact line where affected_projects is generated in security workflow
rg "AFFECTED_PROJECTS.*npx nx show" -B 2 -A 5 .github/workflows/
Repository: rudderlabs/rudder-sdk-js
Length of output: 2437
🏁 Script executed:
# Let's verify what Nx output actually looks like by checking the Nx documentation references
rg "tr.*\\\\n" .github/workflows/ -B 2 -A 2
Repository: rudderlabs/rudder-sdk-js
Length of output: 1821
🏁 Script executed:
# Final verification: Check if there are any recent changes or patterns showing how Nx output is handled
rg "paste.*-sd.*," .github/workflows/ -B 1 -A 1
Repository: rudderlabs/rudder-sdk-js
Length of output: 500
Fix Nx output parsing and add error handling for the affected projects lookup.
The npx nx show projects --affected command outputs newline-separated project names (not space-separated). The current code only converts spaces to commas, leaving newlines intact, which breaks the --projects parameter. Additionally, if the Nx command fails, the script silently continues with an empty list instead of failing the workflow.
🛠️ Suggested fix
# Get affected projects between versions
AFFECTED_PROJECTS=$(npx nx show projects --affected --base=${{ env.last_monorepo_version }} --head=${{ env.current_monorepo_version }})
+ NX_STATUS=$?
+ if [ $NX_STATUS -ne 0 ]; then
+ echo "Nx affected projects lookup failed (base: ${{ env.last_monorepo_version }}, head: ${{ env.current_monorepo_version }})."
+ exit 1
+ fi
# Remove unnecessary fields from package.json before publishing
if [ -n "$AFFECTED_PROJECTS" ]; then
echo "Cleaning package.json for affected projects..."
./scripts/make-package-json-publish-ready.sh $AFFECTED_PROJECTS
# Convert space-separated projects to comma-separated for Nx
- PROJECTS_CSV=$(echo $AFFECTED_PROJECTS | tr ' ' ',')
+ PROJECTS_CSV=$(echo "$AFFECTED_PROJECTS" | tr '\n' ',' | tr ' ' ',' | sed 's/,$//')
echo "Projects to publish: $PROJECTS_CSV"
TAG_OPTION=""
if [[ "${{ inputs.environment }}" == "beta" ]]; then
TAG_OPTION="--tag=beta"
fi
- npx nx release publish --verbose --projects=$PROJECTS_CSV $TAG_OPTION
+ npx nx release publish --verbose --projects="$PROJECTS_CSV" $TAG_OPTION
else
echo "No affected projects found, skipping publish"
  fi

Other workflows in this repo (unit-tests-and-lint.yml, security-code-quality-and-bundle-size-checks.yml) already handle newline-separated output correctly using tr '\n' ',' or paste -sd ',' -.
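For illustration, the normalization the suggested fix performs can be sketched outside the workflow. This is only a sketch of the conversion logic; the function name and the sample project names are hypothetical, not part of the workflow:

```javascript
// Mirrors the suggested shell pipeline `tr '\n' ',' | tr ' ' ',' | sed 's/,$//'`:
// split on any whitespace (newlines or spaces), drop empty entries so no
// trailing or duplicate commas remain, then join with commas.
function toProjectsCsv(nxOutput) {
  return nxOutput.split(/\s+/).filter(Boolean).join(',');
}

// nx emits newline-separated project names:
const raw = '@rudderstack/analytics-js\n@rudderstack/analytics-js-common\n';
console.log(toProjectsCsv(raw));
// → "@rudderstack/analytics-js,@rudderstack/analytics-js-common"
```

The same function handles space-separated input too, which is why converting both separators in the shell version is harmless.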
🤖 Prompt for AI Agents
In @.github/workflows/deploy-npm.yml around lines 166 - 186, The
AFFECTED_PROJECTS assignment and PROJECTS_CSV conversion are incorrect: capture
newline-separated output from npx nx show projects --affected into
AFFECTED_PROJECTS and fail the job if the nx command returns a non-zero status
or produces no projects; then convert newlines (not spaces) into commas for
PROJECTS_CSV (trim any trailing comma) before passing to npx nx release publish;
update the block that sets AFFECTED_PROJECTS, checks it, computes PROJECTS_CSV,
and handles TAG_OPTION so the workflow errors out on nx failures and supplies a
valid comma-separated --projects value.
# Accept affected projects as command line arguments
# Usage: ./make-package-json-publish-ready.sh project1 project2 project3 ...
# Example: ./make-package-json-publish-ready.sh $(npx nx show projects --affected)

if [ $# -eq 0 ]; then
  echo "No affected projects provided."
  echo "Usage: $0 <project1> <project2> ..."
  echo "Example: $0 \$(npx nx show projects --affected --base=origin/develop --head=HEAD)"
  exit 0
fi
Return non‑zero when no projects are provided.
Exiting with status 0 can mask an upstream failure or misuse in CI. Consider failing fast here.
🔧 Suggested tweak
- exit 0
+ exit 1

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
 # Accept affected projects as command line arguments
 # Usage: ./make-package-json-publish-ready.sh project1 project2 project3 ...
 # Example: ./make-package-json-publish-ready.sh $(npx nx show projects --affected)
 if [ $# -eq 0 ]; then
   echo "No affected projects provided."
   echo "Usage: $0 <project1> <project2> ..."
   echo "Example: $0 \$(npx nx show projects --affected --base=origin/develop --head=HEAD)"
-  exit 0
+  exit 1
 fi
🤖 Prompt for AI Agents
In `@scripts/make-package-json-publish-ready.sh` around lines 3 - 12, The script's
no-argument check currently exits with status 0 which hides failures; in the if
[ $# -eq 0 ] block in make-package-json-publish-ready.sh replace exit 0 with a
non-zero exit (e.g., exit 1) so the script fails fast in CI when no projects are
provided and ensure the echo messages remain to show usage context.
# Convert project names to package directories
for project in "$@"; do
  # Extract package name from @rudderstack/analytics-js format
  package_name=$(echo "$project" | sed 's/@rudderstack\///')

  # Map project names to their directory names
  case "$package_name" in
    "analytics-js-loading-scripts") package_dir="loading-scripts" ;;
    "analytics-js-sanity-suite") package_dir="sanity-suite" ;;
    "rudder-sdk-js") package_dir="analytics-v1.1" ;;
    *) package_dir="$package_name" ;;
  esac

  package="$PACKAGES_DIR/$package_dir"

  if [ -d "$package" ]; then
    PACKAGE_JSON="$package/package.json"
    if [ -f "$PACKAGE_JSON" ]; then
      echo "Cleaning $PACKAGE_JSON..."

      # Special handling for legacy SDK (rudder-sdk-js)
      if [ "$package_name" = "rudder-sdk-js" ]; then
        # Clean and then add back the postinstall script
        jq 'del(.dependencies, .devDependencies, .peerDependencies, .optionalDependencies, .overrides, .scripts, .browserslist) | .scripts = {"postinstall": "echo '\''This package is deprecated and no longer maintained. While your events are still being tracked and delivered, we strongly recommend you to migrate to the latest @rudderstack/analytics-js (https://www.npmjs.com/package/@rudderstack/analytics-js) package for enhanced features, security updates, and ongoing support. For more details, visit the migration guide: https://www.rudderstack.com/docs/sources/event-streams/sdks/rudderstack-javascript-sdk/migration-guide/'\''" }' "$PACKAGE_JSON" > "$package/package_cleaned.json" && mv "$package/package_cleaned.json" "$PACKAGE_JSON"
      else
        # Standard cleaning for all other packages
        jq 'del(.dependencies, .devDependencies, .peerDependencies, .optionalDependencies, .overrides, .scripts, .browserslist)' "$PACKAGE_JSON" > "$package/package_cleaned.json" && mv "$package/package_cleaned.json" "$PACKAGE_JSON"
      fi
    else
      echo "No package.json found in $package"
    fi
  else
    echo "Package directory not found: $package"
  fi
done
Fail fast when a mapped package directory or package.json is missing.
Continuing after a missing directory or package.json can silently skip cleaning and still allow a publish attempt, which is risky for release automation.
🛠️ Proposed fix
- else
- echo "No package.json found in $package"
+ else
+ echo "No package.json found in $package"
+ exit 1
fi
else
echo "Package directory not found: $package"
+ exit 1
  fi

🤖 Prompt for AI Agents
In `@scripts/make-package-json-publish-ready.sh` around lines 23 - 57, The script
currently continues when a mapped package directory or package.json is missing
which risks silent skips; modify the for-loop handling around
package_dir/package to fail fast: after computing
package="$PACKAGES_DIR/$package_dir" check if [ -d "$package" ] and if not echo
an explicit error including "$package" and exit 1; when
PACKAGE_JSON="$package/package.json" is set, if [ ! -f "$PACKAGE_JSON" ] then
echo an explicit error including "$PACKAGE_JSON" and exit 1; also ensure the jq
commands for cleaning (used in the rudder-sdk-js branch and the standard branch)
check their exit status and abort with a non-zero exit if jq fails so the script
never continues on error.
Pull request overview
Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.
  private startDpEventsQueue() {
+   const bufferEventsUntilReady = state.loadOptions.value
+     .bufferDataPlaneEventsUntilReady as boolean;
+
    const hybridDestExist = state.nativeDestinations.activeDestinations.value.some(
      (dest: Destination) => isHybridModeDestination(dest),
    );
+   const shouldBufferEvents = bufferEventsUntilReady && hybridDestExist;
+
+   // Start the data plane events queue and replay the events from the buffer
+   // This function is called when the client destinations are ready
+   // or when the timeout expires
+   // or when no buffering is required
+   const startDpQueueAndReplayEvents = () => {
+     this.isEventBufferingActive = false;
+     this.eventsBuffer.forEach(event => {
+       this.enqueue(event);
+     });
+
+     if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
+       this.dataplaneEventsQueue?.start();
+     }
+
+     this.eventsBuffer = [];
+   };
+
-   if (
-     (hybridDestExist === false || shouldBufferDpEvents === false) &&
-     !bufferEventsBeforeConsent &&
-     this.dataplaneEventsQueue?.scheduleTimeoutActive !== true
-   ) {
+   let timeoutId: number;
    // Start the queue when no event buffering is required
    // or when buffering is required and the client destinations are ready
+   effect(() => {
+     if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
+       (globalThis as typeof window).clearTimeout(timeoutId);
-       this.dataplaneEventsQueue?.start();
+       startDpQueueAndReplayEvents();
+     }
+   });

    // Force start the data plane events queue processing after a timeout
-   if (state.loadOptions.value.bufferDataPlaneEventsUntilReady === true) {
+   if (shouldBufferEvents) {
+     this.isEventBufferingActive = true;
+     timeoutId = (globalThis as typeof window).setTimeout(() => {
-       if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
-         this.dataplaneEventsQueue?.start();
-       }
+       startDpQueueAndReplayEvents();
+     }, state.loadOptions.value.dataPlaneEventsBufferTimeout);
+   }
The 'startDpEventsQueue' method creates a new effect and potentially a new timeout each time it's called. This method is called from both 'init' (line 104) and 'resume' (line 162). If 'resume' is called multiple times (e.g., through repeated consent updates), this could create multiple active effects listening to the same state changes and multiple timeouts trying to start the queue, leading to the queue being started multiple times and potential memory leaks from undisposed effects.
Consider adding guards to prevent multiple invocations, or disposing of previous effects/timeouts before creating new ones.
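A minimal sketch of the disposal pattern this suggests (names are illustrative, not the SDK's actual internals; it assumes, as with Preact signals, that `effect()` returns a dispose function):

```javascript
// Tear down the previous effect and timeout before wiring up new ones, so
// repeated startDpEventsQueue() calls never stack listeners or timers.
function makeStarter(createEffect, onTimeout, delayMs) {
  let dispose = null;
  let timeoutId = null;
  return function startDpEventsQueue() {
    if (dispose) dispose(); // dispose the previously created effect
    if (timeoutId !== null) clearTimeout(timeoutId); // cancel the previous force-start timer
    dispose = createEffect(); // assumed to return an unsubscribe function
    timeoutId = setTimeout(onTimeout, delayMs);
  };
}

// Calling the starter three times leaves exactly one effect active:
let activeEffects = 0;
const start = makeStarter(
  () => {
    activeEffects += 1;
    return () => {
      activeEffects -= 1;
    };
  },
  () => {},
  10,
);
start();
start();
start();
console.log(activeEffects); // → 1
```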
+   const startDpQueueAndReplayEvents = () => {
+     this.isEventBufferingActive = false;
+     this.eventsBuffer.forEach(event => {
+       this.enqueue(event);
+     });
+
+     if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
+       this.dataplaneEventsQueue?.start();
+     }
+
+     this.eventsBuffer = [];
+   };
+
-   if (
-     (hybridDestExist === false || shouldBufferDpEvents === false) &&
-     !bufferEventsBeforeConsent &&
-     this.dataplaneEventsQueue?.scheduleTimeoutActive !== true
-   ) {
+   let timeoutId: number;
    // Start the queue when no event buffering is required
    // or when buffering is required and the client destinations are ready
+   effect(() => {
+     if (!shouldBufferEvents || state.nativeDestinations.clientDestinationsReady.value) {
+       (globalThis as typeof window).clearTimeout(timeoutId);
-       this.dataplaneEventsQueue?.start();
+       startDpQueueAndReplayEvents();
+     }
+   });

    // Force start the data plane events queue processing after a timeout
-   if (state.loadOptions.value.bufferDataPlaneEventsUntilReady === true) {
+   if (shouldBufferEvents) {
+     this.isEventBufferingActive = true;
+     timeoutId = (globalThis as typeof window).setTimeout(() => {
-       if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
-         this.dataplaneEventsQueue?.start();
-       }
+       startDpQueueAndReplayEvents();
+     }, state.loadOptions.value.dataPlaneEventsBufferTimeout);
+   }
The startDpQueueAndReplayEvents function could be invoked multiple times due to race conditions:
- When clientDestinationsReady becomes true (via the effect on line 138-141)
- When the timeout expires (line 147-149)
If both events occur nearly simultaneously, startDpQueueAndReplayEvents could be called twice. While the buffer is cleared after the first replay (line 131), setting isEventBufferingActive to false (line 122) happens before the buffer is cleared, creating a window where:
- New events coming in won't be buffered (since isEventBufferingActive is false)
- But they'll be enqueued directly to the queues
- The queue might start twice (line 128)
Additionally, if the destinations become ready just as the timeout fires, the timeout callback would still execute even though clearTimeout was called, because the clearTimeout happens in an effect which may execute asynchronously relative to the timeout firing.
Consider adding a guard flag to ensure startDpQueueAndReplayEvents only executes once, or clearing/canceling the timeout more reliably before starting the replay.
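One way to make the replay idempotent, as a sketch (the names and object shape below are illustrative, not the SDK's actual API):

```javascript
// Wrap the replay in a once-guard so the "destinations ready" effect and the
// buffer timeout racing each other cannot trigger a double replay or a double
// queue start. The buffer is swapped out BEFORE replaying, so events enqueued
// re-entrantly during the replay are not lost with the old array.
function createReplayOnce(repo) {
  let replayed = false;
  return function startDpQueueAndReplayEvents() {
    if (replayed) return; // the later trigger becomes a no-op
    replayed = true;
    repo.isEventBufferingActive = false;
    const buffered = repo.eventsBuffer;
    repo.eventsBuffer = [];
    buffered.forEach(event => repo.enqueue(event));
    if (!repo.started) repo.start();
  };
}

// Simulate both triggers firing back to back:
const repo = {
  eventsBuffer: ['e1', 'e2'],
  isEventBufferingActive: true,
  started: false,
  delivered: [],
  enqueue(e) {
    this.delivered.push(e);
  },
  start() {
    this.started = true;
  },
};
const replay = createReplayOnce(repo);
replay(); // e.g. the clientDestinationsReady effect fires
replay(); // e.g. the timeout fires right after — no double replay
console.log(repo.delivered); // → [ 'e1', 'e2' ]
```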
.github/workflows/deploy-beta.yml
if (false) {
  // Check if PR is in draft state
  if (pr.draft) {
    core.setFailed(`PR #${prNumber} is in draft state. Please mark it as ready for review before deploying.`);
    return;
  }

  // Get detailed PR information including mergeable state
  const { data: prDetails } = await github.rest.pulls.get({
    owner: context.repo.owner,
    repo: context.repo.repo,
    pull_number: prNumber
  });

  // Check if PR is mergeable and all requirements are satisfied
  if (prDetails.mergeable === false) {
    core.setFailed(`PR #${prNumber} is not in a mergeable state. Please resolve conflicts before deploying.`);
    return;
  }

  // The mergeable_state can be one of: clean, dirty, blocked, unstable, or unknown
  // Only 'clean' means all requirements are met (checks passed, approvals received, no conflicts)
  if (prDetails.mergeable_state !== 'clean') {
    // Get more details about why it's not clean
    let reason = '';

    if (prDetails.mergeable_state === 'blocked') {
      reason = 'required checks or approvals are missing';
    } else if (prDetails.mergeable_state === 'dirty') {
      reason = 'there are merge conflicts';
    } else if (prDetails.mergeable_state === 'unstable') {
      reason = 'required checks are failing';
    } else {
      reason = `the mergeable state is "${prDetails.mergeable_state}"`;
    }

    core.setFailed(`PR #${prNumber} is not ready to merge: ${reason}. Please resolve all issues before deploying.`);
    return;
  }
}
The PR mergeability validation logic has been wrapped in an 'if (false)' condition, making it dead code that will never execute. This bypasses important checks for PR draft state, merge conflicts, and required approvals before deploying to beta. While this might be intentional for testing purposes, it should either be removed entirely or controlled by a workflow input parameter rather than hardcoded.
If this is meant to be temporary, consider adding a TODO comment explaining why and when it should be re-enabled. Otherwise, remove the dead code entirely.
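One hedged alternative, as a sketch only (`skip_pr_checks` is a hypothetical workflow input, not one that exists in this repo): branch on an explicit input inside the github-script step instead of a literal `false`.

```javascript
// Decide whether PR validation should run, based on an explicit opt-out flag
// rather than dead code. Workflow input values arrive as strings.
function shouldRunPrValidation(skipInput) {
  return skipInput !== 'true'; // only an explicit "true" skips validation
}

console.log(shouldRunPrValidation('false')); // → true (validation runs)
console.log(shouldRunPrValidation('true')); // → false (explicitly skipped)
console.log(shouldRunPrValidation(undefined)); // → true (missing input defaults to validating)
```

This keeps the validation code live and makes the bypass visible and auditable in the workflow run.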
Pull request overview
Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.
    const bufferEventsBeforeConsent = shouldBufferEventsForPreConsent(state);
    if (!bufferEventsBeforeConsent) {
      this.startDpEventsQueue();
    }
  }
The isEventBufferingActive flag is initialized to true in the constructor (line 58), which means events will be buffered by default even before init() is called. If any events are enqueued before init() completes, they will be buffered.
However, if shouldBufferEventsForPreConsent is true (line 102), startDpEventsQueue() is never called during init(), which means isEventBufferingActive remains true indefinitely until resume() is called. This creates an inconsistent state where:
- Events are buffered (isEventBufferingActive = true)
- But startDpEventsQueue() was never called, so no effect listeners or timeouts are set up to ever stop the buffering
- The buffer will never be replayed unless resume() is called
Consider initializing isEventBufferingActive to false and only setting it to true within startDpEventsQueue when buffering is actually needed.
    this.setActiveDestinations();

    // Initialize event manager
    this.eventManager?.init();
The setActiveDestinations call at line 306 is executed before eventManager.init() at line 309. This ensures that state.nativeDestinations.activeDestinations is populated before EventRepository.init() tries to check for hybrid destinations (in startDpEventsQueue at line 112-114).
However, this creates a potential issue: if the configuration hasn't been loaded yet (i.e., the destinations config isn't available), setActiveDestinations may set an empty or incomplete list of destinations. This could cause EventRepository to incorrectly determine whether hybrid destinations exist, leading to improper buffering behavior.
Verify that the configuration is fully loaded and available before calling setActiveDestinations in onPluginsReady, or add defensive checks in startDpEventsQueue to handle cases where activeDestinations might not be fully populated yet.
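A defensive variant of that decision could look like the following sketch (the `configLoaded` flag and the `connectionMode` field are illustrative stand-ins for the SDK's real lifecycle state and `isHybridModeDestination` helper):

```javascript
// Only trust the hybrid-destination lookup once destinations are known;
// before that, err on the side of buffering when buffering was requested,
// rather than concluding that no hybrid destinations exist.
function shouldBufferEvents(bufferUntilReady, activeDestinations, configLoaded) {
  if (!bufferUntilReady) return false;
  if (!configLoaded) return true; // destinations not populated yet — keep buffering
  return activeDestinations.some(dest => dest.config?.connectionMode === 'hybrid');
}

console.log(shouldBufferEvents(true, [], false)); // → true (config still pending)
console.log(shouldBufferEvents(true, [], true)); // → false (no hybrid destinations)
console.log(shouldBufferEvents(true, [{ config: { connectionMode: 'hybrid' } }], true)); // → true
```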
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/analytics-js-integrations/src/integrations/GA4/browser.js (1)

132-145: Consider combining the duplicate conditional blocks. Both blocks branch on the same this.overrideClientAndSessionId condition. They can be merged into a single if/else for readability.

♻️ Proposed refactor

-    if (!this.overrideClientAndSessionId) {
-      window.gtag('get', this.measurementId, 'session_id', sessionId => {
-        this.sessionId = sessionId;
-      });
-    } else {
-      this.sessionId = this.analytics.getSessionId();
-    }
-    if (!this.overrideClientAndSessionId) {
-      window.gtag('get', this.measurementId, 'client_id', clientId => {
-        this.clientId = clientId;
-      });
-    } else {
-      this.clientId = this.analytics.getAnonymousId();
-    }
+    if (!this.overrideClientAndSessionId) {
+      window.gtag('get', this.measurementId, 'session_id', sessionId => {
+        this.sessionId = sessionId;
+      });
+      window.gtag('get', this.measurementId, 'client_id', clientId => {
+        this.clientId = clientId;
+      });
+    } else {
+      this.sessionId = this.analytics.getSessionId();
+      this.clientId = this.analytics.getAnonymousId();
+    }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/analytics-js-integrations/src/integrations/GA4/browser.js` around lines 132 - 145, Both blocks duplicate the same if/else on this.overrideClientAndSessionId to set sessionId and clientId; refactor by merging into one conditional: if !this.overrideClientAndSessionId call window.gtag('get', this.measurementId, 'session_id', ...) to set this.sessionId and window.gtag('get', this.measurementId, 'client_id', ...) to set this.clientId, else set this.sessionId = this.analytics.getSessionId() and this.clientId = this.analytics.getAnonymousId(); keep the same measurementId and callback behavior for gtag calls to preserve existing async assignment.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/analytics-js-integrations/src/integrations/GA4/browser.js`:
- Around line 161-170: Fix two issues in isLoaded(): replace the typo
this.getLoadedAt with this.gtagLoadedAt so the timestamp is actually set when
gtagReady first becomes true, and change the session check so it treats an
empty-string initial session as invalid (e.g., compute hasValidSession as
this.sessionNumber !== '' && this.sessionNumber !== null or use
Boolean(this.sessionNumber) if 0 is not a valid session). Update the code in the
isLoaded() method (symbols: isLoaded, gtagLoadedAt, sessionNumber,
LOAD_DELAY_MS) so the 2-second wait logic (LOAD_DELAY_MS) can take effect.
---
Nitpick comments:
In `@packages/analytics-js-integrations/src/integrations/GA4/browser.js`:
- Around line 132-145: Both blocks duplicate the same if/else on
this.overrideClientAndSessionId to set sessionId and clientId; refactor by
merging into one conditional: if !this.overrideClientAndSessionId call
window.gtag('get', this.measurementId, 'session_id', ...) to set this.sessionId
and window.gtag('get', this.measurementId, 'client_id', ...) to set
this.clientId, else set this.sessionId = this.analytics.getSessionId() and
this.clientId = this.analytics.getAnonymousId(); keep the same measurementId and
callback behavior for gtag calls to preserve existing async assignment.
  isLoaded() {
-   return !!(this.sessionId && this.clientId);
+   const LOAD_DELAY_MS = 2000;
+   const gtagReady = !!(window.dataLayer && window.dataLayer.push !== Array.prototype.push);
+   if (gtagReady && this.getLoadedAt === null) {
+     this.gtagLoadedAt = Date.now();
+   }
+   const hasWaitedLongEnough =
+     this.gtagLoadedAt !== null && Date.now() - this.gtagLoadedAt >= LOAD_DELAY_MS;
+   const hasValidSession = this.sessionNumber !== null;
+   return gtagReady && (hasValidSession || hasWaitedLongEnough);
Two bugs make the 2-second delay logic completely inert.
Bug 1 (Line 164): this.getLoadedAt is a typo — should be this.gtagLoadedAt. this.getLoadedAt is undefined, and undefined === null is false under strict equality, so this.gtagLoadedAt is never set to Date.now(). The timestamp remains null forever, and hasWaitedLongEnough is always false.
Bug 2 (Line 169): this.sessionNumber is initialized to '' (line 31), not null. Since '' !== null is true, hasValidSession is always true from the first invocation — before gtag returns any real session number.
Combined effect: isLoaded() reduces to just gtagReady, completely bypassing the intended 2-second readiness delay.
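Both comparisons can be demonstrated in isolation:

```javascript
// Bug 1: the typo'd property reads as undefined, and undefined === null is
// false under strict equality, so the timestamp branch never runs.
const instance = { gtagLoadedAt: null };
console.log(instance.getLoadedAt === null); // → false (typo: value is undefined)
console.log(instance.gtagLoadedAt === null); // → true (what was intended)

// Bug 2: sessionNumber starts as '', and '' !== null is true, so the
// "valid session" check passes before any real session number arrives.
const sessionNumber = '';
console.log(sessionNumber !== null); // → true (always treated as valid)
console.log(Boolean(sessionNumber)); // → false (the stricter check proposed below)
```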
🐛 Proposed fix
isLoaded() {
const LOAD_DELAY_MS = 2000;
const gtagReady = !!(window.dataLayer && window.dataLayer.push !== Array.prototype.push);
- if (gtagReady && this.getLoadedAt === null) {
+ if (gtagReady && this.gtagLoadedAt === null) {
this.gtagLoadedAt = Date.now();
}
const hasWaitedLongEnough =
this.gtagLoadedAt !== null && Date.now() - this.gtagLoadedAt >= LOAD_DELAY_MS;
- const hasValidSession = this.sessionNumber !== null;
+ const hasValidSession = !!this.sessionNumber;
return gtagReady && (hasValidSession || hasWaitedLongEnough);
  }

📝 Committable suggestion
 isLoaded() {
-  return !!(this.sessionId && this.clientId);
   const LOAD_DELAY_MS = 2000;
   const gtagReady = !!(window.dataLayer && window.dataLayer.push !== Array.prototype.push);
-  if (gtagReady && this.getLoadedAt === null) {
+  if (gtagReady && this.gtagLoadedAt === null) {
     this.gtagLoadedAt = Date.now();
   }
   const hasWaitedLongEnough =
     this.gtagLoadedAt !== null && Date.now() - this.gtagLoadedAt >= LOAD_DELAY_MS;
-  const hasValidSession = this.sessionNumber !== null;
+  const hasValidSession = !!this.sessionNumber;
   return gtagReady && (hasValidSession || hasWaitedLongEnough);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/analytics-js-integrations/src/integrations/GA4/browser.js` around
lines 161 - 170, Fix two issues in isLoaded(): replace the typo this.getLoadedAt
with this.gtagLoadedAt so the timestamp is actually set when gtagReady first
becomes true, and change the session check so it treats an empty-string initial
session as invalid (e.g., compute hasValidSession as this.sessionNumber !== ''
&& this.sessionNumber !== null or use Boolean(this.sessionNumber) if 0 is not a
valid session). Update the code in the isLoaded() method (symbols: isLoaded,
gtagLoadedAt, sessionNumber, LOAD_DELAY_MS) so the 2-second wait logic
(LOAD_DELAY_MS) can take effect.
…talayer (#2785)
* feat: update loading function to wait for 2 sec and then check for datalayer
* feat: update loading function
* feat: update loading function to wait for sessionNumber or 2 sec after gtag loading
55eac77 to 4ae3336
Pull request overview
Copilot reviewed 4 out of 4 changed files in this pull request and generated 7 comments.
  state.consents.postConsent.value.discardPreConsentEvents
) {
  this.dataplaneEventsQueue?.clear();
  this.destinationsEventsQueue?.clear();
If discardPreConsentEvents is true, resume() clears the underlying queues but does not clear eventsBuffer. Any pre-consent events stored in eventsBuffer will still be replayed when buffering is lifted, defeating the discard option. Clear eventsBuffer (and disable buffering) when discarding pre-consent events.
  this.destinationsEventsQueue?.clear();
+ this.eventsBuffer = [];
+ this.isEventBufferingActive = false;
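A minimal, self-contained sketch of this suggestion. The class and queue objects below are stand-ins, not the real EventRepository; only the eventsBuffer handling mirrors the proposed fix:

```javascript
// Sketch: resume() with discardPreConsentEvents must also drop buffered
// events, otherwise they would be replayed once buffering is lifted.
class EventRepositorySketch {
  constructor() {
    this.eventsBuffer = [];
    this.isEventBufferingActive = true;
    this.dataplaneEventsQueue = { cleared: false, clear() { this.cleared = true; } };
    this.destinationsEventsQueue = { cleared: false, clear() { this.cleared = true; } };
  }

  resume(discardPreConsentEvents) {
    if (discardPreConsentEvents) {
      this.dataplaneEventsQueue?.clear();
      this.destinationsEventsQueue?.clear();
      // Also drop anything held back by the buffering path.
      this.eventsBuffer = [];
      this.isEventBufferingActive = false;
    }
  }
}

const repo = new EventRepositorySketch();
repo.eventsBuffer.push({ type: 'track' }); // a pre-consent event held in the buffer
repo.resume(true);
console.log(repo.eventsBuffer.length); // 0 — nothing left to replay
```

Without the last two lines in resume(), the buffered track event would survive the discard and still reach the data plane later.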
-  return !!(this.sessionId && this.clientId);
+  const LOAD_DELAY_MS = 2000;
+  const gtagReady = !!(window.dataLayer && window.dataLayer.push !== Array.prototype.push);
+  if (gtagReady && this.getLoadedAt === null) {
isLoaded() checks this.getLoadedAt === null, but the field introduced is gtagLoadedAt. As written, this.getLoadedAt is undefined, so gtagLoadedAt may never be set and the load-delay logic won’t behave as intended. Use the correct property name when initializing the timestamp.
- if (gtagReady && this.getLoadedAt === null) {
+ if (gtagReady && this.gtagLoadedAt === null) {
const startDpQueueAndReplayEvents = () => {
  this.isEventBufferingActive = false;
  this.eventsBuffer.forEach(event => {
    this.enqueue(event);
  });

  if (this.dataplaneEventsQueue?.scheduleTimeoutActive !== true) {
    this.dataplaneEventsQueue?.start();
  }

  this.eventsBuffer = [];
This change introduces a new buffering/replay path (eventsBuffer, isEventBufferingActive, startDpQueueAndReplayEvents) plus new discard behavior interactions. Existing EventRepository tests don’t cover buffering an event, lifting buffering, and asserting (a) dataplane enqueue happens once, (b) destinations enqueue behavior is preserved, and (c) discardPreConsentEvents clears the buffer. Adding unit tests for these cases would prevent regressions.
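The buffering/replay scenario such a test would cover can be sketched with a self-contained stand-in (this is not the real EventRepository class; names and structure are simplified for illustration):

```javascript
// Sketch of the behavior a unit test should assert: events buffered
// while buffering is active, replayed exactly once when it is lifted.
class BufferingRepo {
  constructor() {
    this.isEventBufferingActive = true;
    this.eventsBuffer = [];
    this.enqueued = []; // stands in for the dataplane events queue
  }

  enqueue(event) {
    if (this.isEventBufferingActive) {
      this.eventsBuffer.push(event);
      return;
    }
    this.enqueued.push(event);
  }

  // Mirrors startDpQueueAndReplayEvents: lift buffering, replay, clear.
  liftBuffering() {
    this.isEventBufferingActive = false;
    this.eventsBuffer.forEach(event => this.enqueue(event));
    this.eventsBuffer = [];
  }
}

const bufRepo = new BufferingRepo();
bufRepo.enqueue({ id: 1 }); // buffered, not enqueued yet
console.log(bufRepo.enqueued.length); // 0

bufRepo.liftBuffering();
console.log(bufRepo.enqueued.length); // 1 — replayed exactly once
```

A real test against EventRepository would additionally mock the queues and the consent state to cover the discardPreConsentEvents interaction.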
/**
- * If the gtag is successfully initialized, client ID and session ID fields will have valid values for the given GA4 configuration
+ * If the gtag is successfully initialized, client ID and session ID fields will have valid values for the given GA4 configuration.
+ * Returns true only after 2 seconds from init() to allow gtag to fully initialize.
 */
isLoaded() {
-  return !!(this.sessionId && this.clientId);
+  const LOAD_DELAY_MS = 2000;
+  const gtagReady = !!(window.dataLayer && window.dataLayer.push !== Array.prototype.push);
+  if (gtagReady && this.getLoadedAt === null) {
+    this.gtagLoadedAt = Date.now();
+  }
+  const hasWaitedLongEnough =
+    this.gtagLoadedAt !== null && Date.now() - this.gtagLoadedAt >= LOAD_DELAY_MS;
+  const hasValidSession = this.sessionNumber !== null;
+  return gtagReady && (hasValidSession || hasWaitedLongEnough);
}
isLoaded() now has timing-based readiness logic (gtagLoadedAt + 2s delay) and different readiness criteria than before. There are GA4 integration tests, but none appear to assert the new isLoaded() behavior. Adding unit tests around the delay/sessionNumber conditions would help catch regressions in destination readiness detection.
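One way to make the delay testable without a test framework's fake timers is an injectable clock. The class below only models the fields isLoaded() reads; it is a hypothetical harness, not the real GA4 integration:

```javascript
// Sketch: testing the 2-second readiness delay deterministically
// with an injectable clock instead of Date.now().
const LOAD_DELAY_MS = 2000;

class Ga4Like {
  constructor(now) {
    this.now = now; // injectable clock for deterministic tests
    this.gtagLoadedAt = null;
    this.sessionNumber = '';
  }

  isLoaded(gtagReady) {
    if (gtagReady && this.gtagLoadedAt === null) {
      this.gtagLoadedAt = this.now();
    }
    const hasWaitedLongEnough =
      this.gtagLoadedAt !== null && this.now() - this.gtagLoadedAt >= LOAD_DELAY_MS;
    const hasValidSession = Boolean(this.sessionNumber);
    return gtagReady && (hasValidSession || hasWaitedLongEnough);
  }
}

let fakeTime = 0;
const ga4 = new Ga4Like(() => fakeTime);

console.log(ga4.isLoaded(true)); // false — gtag ready, but no session and no delay yet
fakeTime = 2000;
console.log(ga4.isLoaded(true)); // true — 2s elapsed
```

With jest, the equivalent would typically use jest.useFakeTimers() plus a mocked Date.now; the assertions would be the same three cases: not ready before the delay, ready after it, and ready immediately once sessionNumber is set.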
const hybridDestExist = state.nativeDestinations.activeDestinations.value.some(
  (dest: Destination) => isHybridModeDestination(dest),
);
const shouldBufferEvents = bufferEventsUntilReady && hybridDestExist;
shouldBufferEvents is computed once from the current activeDestinations snapshot. If activeDestinations changes later (e.g., consent updates / setActiveDestinations runs again), the buffering decision won’t update because the effect only watches clientDestinationsReady. Consider computing shouldBufferEvents inside the effect (reading activeDestinations.value) or making buffering activation reactive to destination list changes.
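The suggested shape — re-reading activeDestinations.value each time instead of capturing a snapshot — can be sketched without the signals library (state and isHybridModeDestination below are simplified stand-ins for the real reactive state):

```javascript
// Sketch: compute the buffering decision lazily so it reflects the
// latest activeDestinations, rather than a one-time snapshot.
const state = {
  nativeDestinations: { activeDestinations: { value: [] } },
};
const isHybridModeDestination = dest => dest.mode === 'hybrid';

// Evaluated at call time (e.g. inside the effect body), so every
// re-run of the effect sees the current destination list.
const shouldBufferEvents = bufferEventsUntilReady =>
  bufferEventsUntilReady &&
  state.nativeDestinations.activeDestinations.value.some(isHybridModeDestination);

console.log(shouldBufferEvents(true)); // false — no hybrid destinations yet

state.nativeDestinations.activeDestinations.value = [{ mode: 'hybrid' }];
console.log(shouldBufferEvents(true)); // true — picks up the updated list
```

In the real code, reading activeDestinations.value inside the effect() body would also subscribe the effect to that signal, so the effect re-runs when the destination list changes.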
  this.startDpEventsQueue();
}
resume() calls startDpEventsQueue(), which registers a new effect() and (possibly) a new timeout on every resume. If resume() can be invoked multiple times (e.g., multiple consent updates), this can lead to duplicated effects/timeouts and repeated queue starts. Consider making startDpEventsQueue() idempotent (store/dispose the effect) or starting the queue directly in resume() when appropriate.
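A simple idempotency guard addresses the duplicate-registration risk. The sketch below models the effect/timeout setup as a counter; the real fix would store and dispose the effect() return value:

```javascript
// Sketch: guard startDpEventsQueue() so repeated resume() calls
// cannot register duplicate effects or timeouts.
class QueueStarter {
  constructor() {
    this.dpQueueStarted = false;
    this.effectRegistrations = 0;
  }

  startDpEventsQueue() {
    if (this.dpQueueStarted) {
      return; // already set up — skip duplicate effect/timeout registration
    }
    this.dpQueueStarted = true;
    this.effectRegistrations += 1; // stands in for effect(...) + setTimeout(...)
  }
}

const starter = new QueueStarter();
starter.startDpEventsQueue();
starter.startDpEventsQueue(); // no-op on the second call
console.log(starter.effectRegistrations); // 1
```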
const hasWaitedLongEnough =
  this.gtagLoadedAt !== null && Date.now() - this.gtagLoadedAt >= LOAD_DELAY_MS;
const hasValidSession = this.sessionNumber !== null;
return gtagReady && (hasValidSession || hasWaitedLongEnough);
hasValidSession uses this.sessionNumber !== null, but sessionNumber is initialized to an empty string in the constructor, so this condition is always true. This makes isLoaded() return true as soon as gtagReady is true, bypassing the intended readiness/delay behavior. Consider initializing sessionNumber to null (or checking for a non-empty value) and aligning the validity check accordingly.
Hello! This PR has been open for 20 days without any activity. Therefore, it's considered as stale and is scheduled to be closed in 10 days. If you're still working on this, please remove the 'Stale' label or add a comment to keep it open. Thanks for your contribution!




PR Description
Buffer data plane events conditionally, per the bufferDataPlaneEventsUntilReady load API option.
Linear task (optional)
https://linear.app/rudderstack/issue/SDK-2862/buffer-data-plane-events-until-ready
Cross Browser Tests
Please confirm you have tested for the following browsers:
Sanity Suite
Security
Summary by CodeRabbit
New Features
Refactor