🤖 Fix auto-compact-continue race by tracking message IDs (#337)
## Problem
After running `/compact -c "continue message"`, the continue message is
sometimes sent **multiple times in a row** instead of just once.
## Root Cause
The previous workspace-level guard had a race condition:
1. **The backend sends two events** for `replaceChatHistory`: a delete event, then a new-message event
2. **Each event triggers an immediate, synchronous `bump()`**, which notifies all subscribers
3. **Multiple `checkAutoCompact()` calls can be in flight simultaneously**:
   - Delete event → `bump()` → subscriber fires → `checkAutoCompact()` starts
   - New-message event → `bump()` → subscriber fires → `checkAutoCompact()` starts
   - Both calls check the workspace guard **before either sets it** → both proceed to send
The double-check guard in PR #334 helped, but it didn't fully solve the problem: the check and the set are still separate operations.
**The real issue: Workspace-level tracking is the wrong granularity.**
We need to prevent processing the same compaction MESSAGE multiple
times, not the same workspace multiple times. A new compaction creates a
new message with a new ID.
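
For illustration, the failure mode can be reproduced with a tiny standalone script in which the guard is checked before an `await` but only set afterwards. The `firedForWorkspace` name is taken from the timeline later in this description; `resolveSummary`, the exact shape of `checkAutoCompact`, and the delay are assumptions made only for this demo:

```typescript
// Minimal reconstruction of the flawed pattern: the guard is checked before
// an await but only set afterwards, so two overlapping calls can both pass.
const firedForWorkspace = new Set<string>();
let sendCount = 0;

// Stand-in for the async work between the check and the set
// (e.g. resolving the compaction summary message).
const resolveSummary = (): Promise<void> =>
  new Promise<void>((resolve) => setTimeout(resolve, 10));

async function checkAutoCompact(workspaceId: string): Promise<void> {
  if (firedForWorkspace.has(workspaceId)) return; // check the guard

  await resolveSummary(); // suspension point: the second call enters here

  firedForWorkspace.add(workspaceId); // set the guard (too late)
  sendCount += 1; // stands in for sending the continue message
}

async function demo(): Promise<void> {
  // Two bump() notifications arriving back-to-back:
  await Promise.all([checkAutoCompact("ws-1"), checkAutoCompact("ws-1")]);
  console.log(sendCount); // 2: the continue message is sent twice
}
void demo();
```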
## Solution
Track processed **message IDs** instead of workspace IDs:
```typescript
import { useRef } from "react";

// Track which specific compaction summary messages we've already processed
const processedMessageIds = useRef<Set<string>>(new Set());

// In checkAutoCompact, inside the loop over messages:
const messageId = summaryMessage.id;

// Have we already processed this specific compaction message?
if (processedMessageIds.current.has(messageId)) continue;

// Mark THIS MESSAGE as processed
processedMessageIds.current.add(messageId);
```
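
In context, the important property is that the check and the add happen back to back, synchronously, before anything is awaited. A minimal sketch of how that can sit inside a hook follows; the hook name `useAutoCompactContinue`, the `ChatMessage` shape, and the `sendMessage` callback are illustrative assumptions, not the actual cmux code:

```typescript
import { useCallback, useRef } from "react";

// Hypothetical message shape, for illustration only.
interface ChatMessage {
  id: string;
  role: "user" | "assistant" | "compaction-summary";
}

export function useAutoCompactContinue(
  sendMessage: (text: string) => Promise<void>,
  continueText: string
) {
  // One entry per compaction summary message we've already handled.
  const processedMessageIds = useRef<Set<string>>(new Set());

  return useCallback(
    async (messages: ChatMessage[]) => {
      for (const summaryMessage of messages) {
        if (summaryMessage.role !== "compaction-summary") continue;

        // Check and mark synchronously, before any await: overlapping calls
        // triggered by back-to-back bump()s see the same Set and the same
        // message ID, so only the first one reaches sendMessage.
        if (processedMessageIds.current.has(summaryMessage.id)) continue;
        processedMessageIds.current.add(summaryMessage.id);

        await sendMessage(continueText);
      }
    },
    [sendMessage, continueText]
  );
}
```

Because the add happens before the first `await`, the JavaScript event loop cannot interleave a second subscriber callback between the check and the mark.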
## Why This Is Obviously Correct
1. **Message IDs are unique and immutable** - Once a message exists, its
ID never changes
2. **We only care about processing each message once** - Not about
processing each workspace once
3. **The guard is set BEFORE the async operation** - No timing window
4. **Multiple calls can overlap** - But they all see the same message
ID, so only the first one proceeds
5. **Cleanup is natural** - The set grows by at most one ID per compaction, so it stays small and needs no explicit cleanup
The correctness is self-evident: "Have we sent a continue message for
THIS compaction result message? Yes/No."
## How It Fixes The Race
**Before (workspace-level):**
- Call #1 checks `firedForWorkspace.has(workspaceId)` → false
- Call #2 checks `firedForWorkspace.has(workspaceId)` → false (still!)
- Call #1 sets guard and sends
- Call #2 double-checks... but a timing window still existed
**After (message-level):**
- Call #1 checks `processedMessageIds.has(messageId)` → false
- Call #2 checks `processedMessageIds.has(messageId)` → false (same
message)
- Call #1 adds messageId to set
- Call #2 sees messageId already in set → skips
Both calls are checking the **same unique identifier** (the message ID),
so the guard works correctly even with concurrent execution.
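
The same two-call experiment, rerun with a message-keyed guard that checks and marks before any `await`, sends exactly once. Again a self-contained sketch rather than the real hook; `handleCompaction`, `sendContinueMessage`, and the `msg-42` ID are made up for the demo:

```typescript
// Self-contained illustration: two overlapping calls race on one message ID.
const processedMessageIds = new Set<string>();
let sendCount = 0;

const sendContinueMessage = (): Promise<void> =>
  new Promise<void>((resolve) => setTimeout(resolve, 10));

async function handleCompaction(messageId: string): Promise<void> {
  // Check and mark in one synchronous step: no await between them, so the
  // event loop cannot interleave another call here.
  if (processedMessageIds.has(messageId)) return;
  processedMessageIds.add(messageId);

  sendCount += 1;
  await sendContinueMessage();
}

async function demo(): Promise<void> {
  // The delete event and the new-message event both trigger a check for the
  // same compaction summary message.
  await Promise.all([handleCompaction("msg-42"), handleCompaction("msg-42")]);
  console.log(sendCount); // 1: the continue message is sent exactly once
}
void demo();
```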
## Testing
Manual testing is needed (a rough automated sanity check is sketched after these steps):
1. Run `/compact -c "continue message"` multiple times
2. Verify only ONE continue message is sent per compaction
3. Check the console logs for a single "Sending continue message" per compaction
4. Verify the backend receives only one user message (no duplicates)
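
An automated variant of step 2 could look roughly like this, assuming a Vitest plus React Testing Library setup and the hypothetical `useAutoCompactContinue` hook sketched in the Solution section; it is a sketch against assumed names, not a test that exists in the repository:

```typescript
import { renderHook } from "@testing-library/react";
import { describe, expect, it, vi } from "vitest";
// Hypothetical hook from the sketch in the Solution section above.
import { useAutoCompactContinue } from "./useAutoCompactContinue";

describe("auto-compact continue guard", () => {
  it("sends the continue message once per compaction summary", async () => {
    const sendMessage = vi.fn(async () => {});
    const { result } = renderHook(() =>
      useAutoCompactContinue(sendMessage, "continue message")
    );

    const messages = [{ id: "msg-42", role: "compaction-summary" as const }];

    // Two overlapping calls stand in for the back-to-back bump() notifications.
    await Promise.all([result.current(messages), result.current(messages)]);

    expect(sendMessage).toHaveBeenCalledTimes(1);
  });
});
```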
_Generated with `cmux`_