
fix: trigger area was reporting the wrong entity #7835

Merged
alejandro-jimenez-dcl merged 3 commits into dev from fix/7662-trigger-areas-multiple-players
Apr 2, 2026

Conversation

@alejandro-jimenez-dcl
Contributor

@alejandro-jimenez-dcl alejandro-jimenez-dcl commented Mar 26, 2026

Pull Request Description

What does this PR change?

Fixes a use-after-pool-release bug in TriggerAreaHandlerSystem that caused trigger area events to report incorrect entity IDs when multiple players were inside the same trigger area. Addresses #7662.

Root cause: PropagateResultComponent pools and reuses PBTriggerAreaResult.Types.Trigger objects. The prepareMessage callback passed to AppendMessage copies a reference to the pooled Trigger object into the CRDT message. The Trigger was then immediately
released back to the pool after each event. When processing multiple players in the same frame, the next event would reuse the same Trigger object from the pool, overwriting the previous CRDT message's Trigger.Entity field (e.g., remote player ID 32 gets
overwritten with local player ID 1). By the time CRDT serialization runs, all pending messages point to the same Trigger instance holding only the last-written entity ID.
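The aliasing failure described above can be sketched in a few lines. This is a TypeScript stand-in with hypothetical `Pool`/`appendMessage` names (the real code is C# inside TriggerAreaHandlerSystem); it only illustrates why releasing a pooled object while a message still holds a reference to it clobbers earlier messages.

```typescript
// Toy reference pool: the same object is handed out again after release.
interface Trigger { entity: number }

class Pool {
  private free: Trigger[] = [];
  rent(): Trigger { return this.free.pop() ?? { entity: 0 }; }
  release(t: Trigger): void { this.free.push(t); }
}

const pool = new Pool();
const pendingMessages: Trigger[] = [];

function appendMessage(entityId: number): void {
  const trigger = pool.rent();
  trigger.entity = entityId;
  pendingMessages.push(trigger); // the message stores a *reference*
  pool.release(trigger);         // bug: released before serialization
}

appendMessage(32); // remote player
appendMessage(1);  // local player reuses the same pooled object

// By "serialization" time, both pending messages alias one Trigger
// that holds only the last-written entity ID.
const serialized = pendingMessages.map(m => m.entity);
console.log(serialized); // [1, 1]: player 32 was lost
```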

Fix: Deferred pool release — Trigger and PBTriggerAreaResult objects are now enqueued and only released after all trigger area queries have completed. This ensures each AppendMessage call references a unique Trigger object that remains intact until CRDT
serialization. The pools are now created and owned by the system itself (not injected), with proper disposal in OnDispose().
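A minimal sketch of the deferred-release idea, continuing the toy TypeScript model above (names are illustrative, not the actual API): used objects go into a queue and are only returned to the pool after the frame's messages have been serialized, so each pending message keeps a unique, stable object.

```typescript
interface Trigger { entity: number }

class Pool {
  private free: Trigger[] = [];
  rent(): Trigger { return this.free.pop() ?? { entity: 0 }; }
  release(t: Trigger): void { this.free.push(t); }
}

const pool = new Pool();
const pendingMessages: Trigger[] = [];
const releaseQueue: Trigger[] = [];

function appendMessage(entityId: number): void {
  const trigger = pool.rent();  // unique object while it is pending
  trigger.entity = entityId;
  pendingMessages.push(trigger);
  releaseQueue.push(trigger);   // deferred: not released yet
}

function endOfUpdate(): number[] {
  // "CRDT serialization" happens first, while every Trigger is intact...
  const serialized = pendingMessages.map(m => m.entity);
  pendingMessages.length = 0;
  // ...then the queue is drained and objects go back to the pool.
  while (releaseQueue.length > 0) pool.release(releaseQueue.shift()!);
  return serialized;
}

appendMessage(32);
appendMessage(1);
const result = endOfUpdate();
console.log(result); // [32, 1]: each message kept its own entity ID
```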

Test Instructions

Quality Checklist

  • Changes have been tested locally
  • Documentation has been updated (if required)
  • Performance impact has been considered
  • For SDK features: Test scene is included

Code Review Reference

Please review our Code Review Standards before submitting.

…pool local to avoid usage outside of the system
@github-actions
Contributor

github-actions bot commented Mar 26, 2026

Collaborator

@popuz popuz left a comment


elegant!


@decentraland-bot decentraland-bot left a comment


Review Summary

Verdict: Approved

This PR correctly fixes a use-after-pool-release bug where pooled Trigger objects were released immediately after AppendMessage, causing later CRDT serialization to read stale/overwritten data when multiple players shared the same trigger area in the same frame.

What changed

  1. Deferred pool release — PropagateResultComponent now enqueues used (PBTriggerAreaResult, Trigger) pairs into a Queue instead of releasing them immediately. The queue is drained at the end of Update(), after all queries have completed and CRDT messages have been serialized.

  2. System-owned pools — The triggerAreaResultPool and triggerAreaResultTriggerPool are now created internally via ComponentPool.WithDefaultCtor<T>() instead of being injected. This simplifies the dependency chain (removed from plugin constructor and StaticContainer wiring) and ensures proper disposal in OnDispose().

  3. Test cleanup — Removed the now-unnecessary pool mocks from TriggerAreaHandlerSystemShould, keeping tests aligned with the new internal pool ownership.

Analysis

  • Correctness: The fix is sound. By deferring release until after all queries complete, each AppendMessage callback retains a unique, stable reference to its Trigger object throughout the frame.
  • Memory: The queue is a single Queue allocation (not per-frame), and items are dequeued/released every frame — no accumulation risk. Good.
  • ECS patterns: No structural changes are introduced here, and the pool release happens outside query iteration, which is correct.
  • Disposal: OnDispose() properly disposes both pools. ✔️
  • Naming/style: Follows project conventions (PascalCase methods, camelCase fields).

CI Status

  • ✅ playmode tests pass
  • ❌ editmode tests timed out (80-minute CI timeout — not related to this PR)
  • ❌ enforce-approvals is expected to fail until approvals are in place

Clean fix. LGTM.

Requested by Alejandro Jimenez via Slack

@claude
Contributor

claude bot commented Mar 30, 2026

Claude finished @alejandro-jimenez-dcl's task in 3m 59s


Code Review

  • Gather context
  • Read changed files
  • Analyze code
  • Post review

No issues found. The fix is correct and clean.

Root cause confirmed: AppendMessage in OutgoingCRDTMessagesProvider rents a fresh PBTriggerAreaResult per call and invokes the lambda synchronously, then stores the pending message. The original bug occurred because a single pooled Trigger object was being referenced by multiple pending messages, then released and overwritten before serialization.

Fix assessment: The ResultData value-type struct approach is the right call over the deferred-queue approach (the first commit). Each AppendMessage call gets its own rented PBTriggerAreaResult, the lambda receives data as a struct copy, and the ref-typed Decentraland.Common.Vector3/Quaternion fields are freshly allocated per call via ToProtoVector()/ToProtoQuaternion(), so there is no aliasing at any level.
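The no-aliasing property of the value-copy approach can be sketched as follows. TypeScript has no value-type structs, so the per-call copy that C# gets for free from a struct parameter is made explicit here; all names (`ResultData`, `appendMessage`) are illustrative stand-ins, not the actual API.

```typescript
// Each call builds its message from a by-value snapshot of the result data,
// and every nested field is freshly allocated per call (analogous to the
// per-call ToProtoVector()/ToProtoQuaternion() allocations in the fix),
// so no two pending messages can share mutable state.
interface ResultData { entity: number; x: number; y: number; z: number }
interface Message { entity: number; position: { x: number; y: number; z: number } }

const pendingMessages: Message[] = [];

function appendMessage(data: ResultData): void {
  pendingMessages.push({
    entity: data.entity,
    position: { x: data.x, y: data.y, z: data.z }, // fresh object per call
  });
}

appendMessage({ entity: 32, x: 1, y: 0, z: 2 });
appendMessage({ entity: 1, x: 5, y: 0, z: 7 });

const entities = pendingMessages.map(m => m.entity);
console.log(entities); // [32, 1]
```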

Structural change ordering in HandleComponentRemoval is correct — the ref SDKEntityTriggerAreaComponent is fully consumed inside ProcessOnExitTriggerArea before World.Remove<TriggerAreaComponent> invalidates it.

Pool/container wiring cleanup is complete: StaticContainer.cs no longer passes the Trigger pool to the plugin, and SDKEntityTriggerAreaPlugin no longer injects it into the system. OnDispose() is correctly left as a no-op since there are no system-owned pools anymore.

REVIEW_RESULT: PASS ✅
COMPLEXITY: COMPLEX

@claude
Contributor

claude bot commented Mar 30, 2026

Claude finished @alejandro-jimenez-dcl's task in 1m 58s


Code Review

  • Gather context
  • Read changed files
  • Analyze code
  • Post review

@github-actions
Contributor

🔍 Claude reviewed this PR and found no blocking issues, but assessed it as complex — human DEV review is still required before merging.

1 similar comment

Member

@pravusjif pravusjif left a comment


The struct approach seems a lot more maintainable to me, nice job 👍

@AnsisMalins
Collaborator

PR 7835

PR number: 7835
Windows change build
Windows baseline build
macOS change build
macOS baseline build

Intel Core i3

Change samples: 2672
Baseline samples: 2668

Metric         | Change   | Baseline | Delta   | Improvement
CPU average    | 33.6 ms  | 33.7 ms  | -0.1 ms | 0.2%
CPU 1% worst   | 55.6 ms  | 59.3 ms  | -3.7 ms | 6.2% 🟢
CPU 0.1% worst | 120.3 ms | 129.4 ms | -9.0 ms | 7.0% 🟢
GPU average    | 16.2 ms  | 15.6 ms  | 0.6 ms  | -3.6% 🔴
GPU 1% worst   | 21.7 ms  | 21.4 ms  | 0.3 ms  | -1.2%
GPU 0.1% worst | 28.3 ms  | 25.6 ms  | 2.7 ms  | -10.7% 🔴

@balducciv

balducciv commented Apr 1, 2026

PR reviewed and approved by QA on both platforms following instructions. ✅
Smoke test for this PR was performed on Windows and Mac to verify that the normal flow is working as expected ✅

Tested in maximocossetti world

  • Connect to a scene with a trigger area that applies per-player effects and check entering and exiting the area.

Scenarios:

  • Scenario 1

Player B enters, gets the cone, and leaves the trigger area; the cone disappears.
Player A enters, gets the cone, and leaves the trigger area; the cone disappears.

  • Scenario 2

Player B enters the trigger area, gets the cone.
Player A enters with Player B. For each client, both players A and B see the cone on their end, not on other users' ends.
Player B leaves the trigger area and loses the cone. Player A, still in the trigger area, keeps the cone.

  • Scenario 3

Player A enters the trigger area, gets the cone.
Player B enters with Player A. For each client, both players A and B see the cone on their end, not on other users' ends.
Player A leaves the trigger area and loses the cone. Player B, still in the trigger area, keeps the cone.

Windows: Player A

PR-7835.player.A.windows.mp4

Mac: Player B

PR-7835.player.B.-.Mac.mp4

@alejandro-jimenez-dcl alejandro-jimenez-dcl merged commit 95f0043 into dev Apr 2, 2026
14 of 17 checks passed
@alejandro-jimenez-dcl alejandro-jimenez-dcl deleted the fix/7662-trigger-areas-multiple-players branch April 2, 2026 06:52
