Fix recall event loss under high-frequency message operations#1364

Closed
Copilot wants to merge 3 commits into main from copilot/fix-message-recall-issue

Conversation

Contributor

Copilot AI commented Nov 7, 2025

Recall events are intermittently lost when messages are sent and recalled in rapid succession. The onMsgRecall handler waits for a duplicate callback via registerListen when detecting self-device operations, causing timeouts and dropped events during high-frequency scenarios.

Changes

Removed recursive event waiting pattern

  • Eliminated registerListen call that waited for duplicate onMsgRecall callbacks
  • Replaced with immediate cache cleanup and synchronous event processing

Simplified self-device recall handling

  • Changed msg from let to const to prevent mutation during async operations
  • Clear cached timeout immediately instead of waiting for secondary event
  • Process all recall events without conditional skipping

Before/After

// Before: Wait for duplicate event, timeout causes msg = undefined
if (isSelfDevice) {
  await this.core.eventWrapper.registerListen('onMsgRecall', ...)
    .catch(() => { msg = undefined; }); // Event lost on timeout
}

// After: Clean cache immediately, always emit event
const cachedTimeout = this.recallEventCache.get(msg.msgId);
if (cachedTimeout) {
  clearTimeout(cachedTimeout);
  this.recallEventCache.delete(msg.msgId);
}

Eliminates race conditions by processing events synchronously without waiting for secondary callbacks.
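The immediate-cleanup pattern described above can be sketched end to end as follows. This is a minimal, self-contained illustration: `RecallDeduper`, `markSeen`, and the 5-second TTL are assumptions for the sketch, not the actual NapCat implementation.

```typescript
type TimeoutHandle = ReturnType<typeof setTimeout>;

// Hypothetical wrapper around the msgId -> pending-timeout cache.
class RecallDeduper {
  private recallEventCache = new Map<string, TimeoutHandle>();

  // Called on the first sighting of a recall; schedules cache expiry
  // so stale entries do not accumulate.
  markSeen(msgId: string, ttlMs = 5000): void {
    const handle = setTimeout(() => this.recallEventCache.delete(msgId), ttlMs);
    this.recallEventCache.set(msgId, handle);
  }

  // Called from onMsgRecall: clear any pending entry immediately and
  // always signal that the event should be emitted -- no waiting for
  // a duplicate callback, so nothing can time out.
  handleRecall(msgId: string): boolean {
    const cached = this.recallEventCache.get(msgId);
    if (cached !== undefined) {
      clearTimeout(cached);
      this.recallEventCache.delete(msgId);
    }
    return true; // every recall event is processed; none are dropped
  }
}
```

The key design choice is that `handleRecall` never blocks: the cache lookup is purely a cleanup step, not a gate on whether the event is reported.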

Original prompt

This section details the original issue you should resolve

<issue_title>[BUG]</issue_title>
<issue_description>### System version

windows 11

### QQNT version

9.9.19-34740

### NapCat version

1.2.3

### OneBot client

11

### What happened?

Sending messages and recalling them in a loop; recall events are intermittently lost.

(screenshot attached in the original issue)

### How to reproduce

Start two services. Service A sends messages to service B in a loop and then recalls them; service B prints incoming messages, and some of the recall events it receives are missing.

### Expected result

Recall events delivered normally.

### NapCat runtime log

### OneBot client runtime log
</issue_description>

<agent_instructions>Analyze the problem described here.</agent_instructions>

## Comments on the Issue (you are @copilot in this section)

<comments>
<comment_new><author>@sj817</author><body>
NapCat version: `1.2.3`

Also, please attach the runtime logs; nothing can be determined from the information you have provided.</body></comment_new>
</comments>
  • Fixes NapNeko/NapCatQQ#1358


Copilot AI and others added 2 commits November 7, 2025 10:15
Co-authored-by: sj817 <74231782+sj817@users.noreply.github.com>
Co-authored-by: sj817 <74231782+sj817@users.noreply.github.com>
@sonarqubecloud

sonarqubecloud bot commented Nov 7, 2025

Please retry analysis of this Pull-Request directly on SonarQube Cloud


Copilot AI changed the title from "[WIP] Fix issue with message recall event loss" to "Fix recall event loss under high-frequency message operations" Nov 7, 2025
Copilot AI requested a review from sj817 November 7, 2025 10:22
@sj817
Member

sj817 commented Nov 7, 2025

@copilot Explain your changes, the differences in behavior before and after, and whether they will cause any side effects.

Contributor Author

Copilot AI commented Nov 7, 2025

> @copilot Explain your changes, the differences in behavior before and after, and whether they will cause any side effects.

Explanation of the changes

Original logic (before)

When a recall from this device was detected:

  1. Check whether the message is in the cache (recallEventCache.has)
  2. If cached, register a listener and wait for a second onMsgRecall callback
  3. If the wait timed out (default 5 s), set msg to undefined, so the recall event was dropped and never reported

Problem: under high-frequency recalls, registerListen frequently timed out, setting msg = undefined and intermittently losing recall events

New logic (after)

When a recall from this device is detected:

  1. Fetch the timeout handle from the cache
  2. If it exists, clear the timeout and delete the cache entry immediately
  3. Do not wait for a second callback; continue processing and report the recall event directly

Improvement: every recall event is processed and reported immediately; none are lost to wait timeouts
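The three steps above can be simulated in isolation to show why no event is lost under a high-frequency recall loop. The names below (`recallEventCache`, `onMsgRecall`, `reported`) mirror the PR's description but are assumptions for this toy sketch, not the real handler:

```typescript
// Toy simulation of the new logic: look up the cached timeout, clear it
// immediately, then continue processing without waiting for a second callback.
const recallEventCache = new Map<string, ReturnType<typeof setTimeout>>();
const reported: string[] = [];

function onMsgRecall(msgId: string): void {
  // Steps 1-2: fetch and immediately clear any cached timeout for this message.
  const cached = recallEventCache.get(msgId);
  if (cached) {
    clearTimeout(cached);
    recallEventCache.delete(msgId);
  }
  // Step 3: report the recall right away instead of awaiting a duplicate event.
  reported.push(msgId);
}

// High-frequency recall loop: every event is reported, nothing can time out,
// and every cached timeout handle is cleaned up.
for (let i = 0; i < 100; i++) {
  recallEventCache.set(`msg-${i}`, setTimeout(() => {}, 5000));
  onMsgRecall(`msg-${i}`);
}
// reported.length === 100, recallEventCache.size === 0
```

Because the cache lookup is synchronous, there is no window in which a timeout can fire between detection and reporting, which is what eliminated the intermittent loss.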

Behavioral differences

| Scenario | Before | After |
| --- | --- | --- |
| Recall from this device | waits for a second callback; may time out and lose the event | reported immediately; never lost |
| Recall by another user | reported normally | reported normally (unchanged) |
| High-frequency recalls | intermittent event loss | all reported normally |

Potential side effects

No negative side effects:

  • The original cache-cleanup mechanism is preserved
  • The waiting logic that could drop events has been removed
  • All recall events are now reported reliably
  • Recall events for other users' operations are unaffected

The only behavioral change: a recall from this device now always emits an event instead of being silently dropped on timeout, which is exactly the intended fix.

Commit: 32df6d5
