
✨ feat(botidentity): add bot identity management module #51

Merged
kevinmatthe merged 1 commit into master from feature-agentic
Mar 26, 2026

Conversation

@kevinmatthe
Member

kevinmatthe commented Mar 26, 2026

  • Implement Feishu fetching, caching, and fallback logic for the bot's identity information
  • Provide prompt identity injection so the LLM clearly understands its own role boundaries
  • Unify how the bot's own messages are marked in history, consistently displaying them as "你"
  • Fix an issue where the bot's openID was not correctly persisted when recording messages

✨ feat(history): upgrade history search capabilities

  • Add query dimensions for filtering by username and message type
  • Improve @mention replacement logic, automatically marking @mentions of the bot itself as "你"
  • Add a search_history tool so the LLM can retrieve conversation history on demand at runtime
  • Remove the default pre-recall step from the chat flow in favor of on-demand LLM calls, reducing wasted retrieval overhead
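
A rough sketch of the tool-argument side of search_history, based on the field names in the review's diagrams; `splitByComma` is a guess at the handler's keyword-splitting helper, not its verbatim implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// HistorySearchArgs mirrors the filters the PR adds to the search_history tool.
// Field names follow the review diagrams; treat the struct as illustrative.
type HistorySearchArgs struct {
	Keywords    string
	TopK        int
	UserName    string
	MessageType string
}

// splitByComma turns the LLM-provided keyword string into individual query
// terms, trimming whitespace and dropping empties.
func splitByComma(text string) []string {
	var out []string
	for _, part := range strings.Split(text, ",") {
		if p := strings.TrimSpace(part); p != "" {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	args := HistorySearchArgs{Keywords: "deploy, rollback ,", TopK: 5, UserName: "alice"}
	fmt.Println(splitByComma(args.Keywords)) // [deploy rollback]
}
```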

✨ feat(intent): extend intent recognition dimensions

  • Add recognition fields for reply mode, user willingness, and interrupt risk
  • Add pre-judgment of whether history or web retrieval is needed
  • Automatically mark @mentions, private chats, and replies to the bot as direct replies, bypassing passive rate limiting
  • Improve validation and normalization of intent results for more reliable recognition
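
The validation/normalization step can be sketched like this. The field names come from the review's class diagram; the 0–100 clamping ranges are an assumption about what Sanitize does, not confirmed by the diff.

```go
package main

import "fmt"

// IntentAnalysis holds the new signal fields the PR adds. Sanitize clamps
// model output into valid ranges (assumed here to be 0-100).
type IntentAnalysis struct {
	ReplyConfidence int
	UserWillingness int
	InterruptRisk   int
}

func clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

// Sanitize normalizes out-of-range values produced by the LLM.
func (a *IntentAnalysis) Sanitize() {
	a.ReplyConfidence = clamp(a.ReplyConfidence, 0, 100)
	a.UserWillingness = clamp(a.UserWillingness, 0, 100)
	a.InterruptRisk = clamp(a.InterruptRisk, 0, 100)
}

func main() {
	a := &IntentAnalysis{ReplyConfidence: 180, UserWillingness: -5, InterruptRisk: 40}
	a.Sanitize()
	fmt.Println(a.ReplyConfidence, a.UserWillingness, a.InterruptRisk) // 100 0 40
}
```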

✨ feat(ratelimit): improve the smart rate-limiting strategy

  • Distinguish hard traffic budgets from soft load thresholds; direct messages do not consume the passive-reply quota
  • Compute a composite reply score from reply confidence, user willingness, and interrupt risk, and adjust thresholds dynamically
  • Apply different rate-limit weights per reply mode, reducing the chance that proactive interjections become disruptive
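
The composite score might look like the sketch below: confidence and willingness raise it, interrupt risk lowers it. The weights here are purely illustrative; the PR does not state the actual coefficients.

```go
package main

import "fmt"

// calculateReplyScore combines the three intent signals into one score.
// The 0.5/0.3/0.2 weights are illustrative assumptions, not the PR's values.
func calculateReplyScore(confidence, willingness, interruptRisk int) float64 {
	return 0.5*float64(confidence) + 0.3*float64(willingness) - 0.2*float64(interruptRisk)
}

func main() {
	// An eager, low-risk message scores higher than a risky, unwanted one.
	fmt.Println(calculateReplyScore(80, 70, 10)) // 59
	fmt.Println(calculateReplyScore(40, 20, 80)) // 10
}
```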

✨ feat(prompt): unify and polish the reply style across the pipeline

  • Add filler-word restrictions, banning habitual particles such as "哟", "呀", and "啦" to reduce excessive anthropomorphism
  • Standardize @mention usage: only @ a member when their specific response is needed; no pointless mentions
  • Inject the bot identity into every conversation prompt to avoid role-identity confusion
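
Identity injection could look roughly like this. `promptIdentityLines` echoes the PromptIdentityLines helper named in the review, but the exact line format below is invented for illustration.

```go
package main

import "fmt"

// promptIdentityLines builds the identity block injected into every chat
// prompt so the model knows which open_id is "itself". The line format is
// an assumption; only the helper's purpose comes from the review.
func promptIdentityLines(botName, botOpenID string) []string {
	return []string{
		fmt.Sprintf("self_name: %s", botName),
		fmt.Sprintf("self_open_id: %s", botOpenID),
		"Messages from self_open_id in the history are your own; they are shown as \"你\".",
	}
}

func main() {
	for _, line := range promptIdentityLines("BetaGo", "ou_abc") {
		fmt.Println(line)
	}
}
```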

♻️ refactor(chatflow): restructure the standard chat flow

  • Remove the database-backed prompt template dependency; prompts are now maintained in code
  • Split prompt logic into direct and ambient reply modes to fit different scenarios
  • Remove the default pre-recall of context; the LLM now calls search tools on demand

✅ test: add unit test coverage for core features

  • Cover core logic in botidentity, history search, intent recognition, and rate limiting
  • Add test cases for prompt constraints, tool argument validation, and similar scenarios


Summary by Sourcery

Unify bot identity handling, history search, intent-driven reply and rate limiting into a consistent prompt and tooling pipeline that makes the bot’s self-identity explicit, improves targeted history retrieval, and standardizes conversational behavior and @mention usage across standard and agentic chat flows.

New Features:

  • Introduce a bot identity module to fetch, cache, and inject the current bot profile into prompts and history processing.
  • Extend history search with structured filters (chat-scoped queries by user, username, message type, and time) and expose them via the search_history tool.

Bug Fixes:

  • Persist the bot’s actual openID in message records while still normalizing its display name to “你” in history views and mentions.

Enhancements:

  • Refactor standard and agentic chat flows to use code-defined system/user prompts with explicit bot identity, JSON-output contracts, and separate direct vs ambient modes while trimming unnecessary pre-recall.
  • Tighten intent recognition to output reply mode, user willingness, interrupt risk, and retrieval hints, and feed these into rate limiting for more nuanced reply decisions.
  • Refine smart rate limiting with distinct hard vs soft load weights per trigger type so direct mentions bypass passive budgets while still contributing to load.
  • Unify reply style guidance across prompts to reduce anthropomorphic filler words and constrain when @mentions are used.
  • Normalize history mentions and search results so the bot’s own messages and @mentions are consistently represented as “你”.
  • Improve tool specifications (send_message, search_history) to document mention usage and history filters more clearly.

Tests:

  • Add extensive unit tests for standard/agentic chat prompts, bot identity resolution and caching, history search query building, mention normalization, intent parsing and sanitization, and rate limiting decisions.


@sourcery-ai

sourcery-ai bot commented Mar 26, 2026

Reviewer's Guide

Refactors standard and agentic chat flows to use code-defined prompts with explicit bot identity, enhances history search and mention normalization, extends intent analysis metadata and integrates it with a smarter rate limiter, fixes bot OpenID persistence in message history, and adds targeted tests around these behaviors and new tools.

Sequence diagram for intent-driven reply with smart rate limiting

sequenceDiagram
    actor User
    participant Lark as LarkServer
    participant ChatOp as ChatMsgOperator
    participant IntentOp as IntentRecognizeOperator
    participant Decider as RateLimitDecider
    participant Limiter as SmartRateLimiter
    participant ChatFlow as ChatFlowExecutor

    User->>Lark: Send message
    Lark->>ChatOp: Deliver P2MessageReceiveV1

    ChatOp->>IntentOp: Fetch intent analysis
    IntentOp-->>ChatOp: IntentAnalysis analysis

    ChatOp->>Decider: DecideIntentReply(ctx, chatID, openID, analysis)
    Decider->>IntentAnalysis: Sanitize()

    alt Direct or mention scenario
        Decider->>Decider: triggerTypeFromIntentAnalysis
        Decider-->>ChatOp: Decision{Allowed:true, TriggerTypeMention}
    else Passive or random scenario
        Decider->>Limiter: Allow(ctx, chatID, triggerType)
        Limiter-->>Decider: allowed, reason
        alt Allowed and score >= threshold
            Decider-->>ChatOp: Decision{Allowed:true, TriggerTypeIntent}
        else Allowed but score < threshold
            Decider->>Limiter: GetStats(ctx, chatID)
            Limiter-->>Decider: *ChatStats
            Decider->>Decider: calculateIntentReplyScore
            Decider->>Decider: adjustIntentFallbackRate
            Decider->>Decider: randomFloat64()
            alt random pass
                Decider-->>ChatOp: Decision{Allowed:true, TriggerTypeIntent}
            else random reject
                Decider-->>ChatOp: Decision{Allowed:false, TriggerTypeIntent}
            end
        end
    end

    alt Decision.Allowed == true
        ChatOp->>Decider: RecordReply(ctx, chatID, decision.TriggerType)
        Decider->>Limiter: RecordSend
        Limiter-->>Decider: ok
        ChatOp->>ChatFlow: Run standard or agentic chat
        ChatFlow-->>User: Bot reply message
    else Decision.Allowed == false
        ChatOp-->>Lark: No reply (return)
    end

Sequence diagram for the LLM using the search_history tool

sequenceDiagram
    participant LLM as LLMModel
    participant Ark as ResponsesImpl
    participant Tool as search_history_handler
    participant History as HistorySearcher
    participant OS as OpenSearch

    LLM->>Ark: Request completion with tools
    Ark->>LLM: Tool call request(search_history, args)
    LLM-->>Ark: Tool invocation arguments

    Ark->>Tool: Handle(ctx, event, metaData, HistorySearchArgs)
    Tool->>History: HybridSearch(ctx, HybridSearchRequest, EmbeddingFunc)
    History->>History: jieba tokenization, embedding calls
    History->>OS: SearchData(index, query)
    OS-->>History: Search hits
    History->>History: normalizeSearchResultActor, ReplaceMentionToName
    History-->>Tool: []*SearchResult

    Tool->>metaData: SetExtra(search_result, json)
    Tool-->>Ark: success

    Ark-->>LLM: Tool result(json)
    LLM->>Ark: Final reply
    Ark-->>LLM: Completion

Class diagram for the botidentity Profile and prompt identity injection

classDiagram
    class Identity {
        +string AppID
        +string BotOpenID
        +bool Valid()
        +string NamespaceKey(prefix string, key string)
    }

    class Profile {
        +string AppID
        +string BotOpenID
        +string BotName
    }

    class botidentity_package {
        +Identity Current()
        +Profile CurrentProfile(ctx context.Context)
        +[]string PromptIdentityLines(profile Profile)
        +Profile getProfileCache(ctx context.Context, identity Identity)
        +Profile resolveProfile(ctx context.Context, identity Identity)
        +string profileCacheKey(identity Identity)
        +Profile fallbackProfile(identity Identity)
        +Profile normalizeProfile(identity Identity, profile Profile)
        +Profile mergeProfile(base Profile, override Profile)
        +string configuredBotName()
        +Profile loadProfileFromLark(ctx context.Context, identity Identity)
        +string applicationName(app *larkapplication.Application)
        +string applicationI18nName(items []*larkapplication.AppI18nInfo, wantKey string)
        +string pointerString(value *string)
        +string firstNonEmpty(values ...string)
        <<package>>
        -func profileLoader(ctx context.Context, identity Identity) (Profile, error)
    }

    class cache_package {
        +T GetOrExecute(ctx context.Context, key string, fn func() (T, error))
        <<external>>
    }

    class infraConfig_package {
        +*Config Get()
        <<external>>
    }

    class Config {
        +*BaseInfo BaseInfo
    }

    class BaseInfo {
        +string RobotName
    }

    class lark_dal_package {
        +Client Client
        <<external>>
    }

    class Client {
        +*ApplicationService Application
    }

    class ApplicationService {
        +*ApplicationV6 V6
    }

    class ApplicationV6 {
        +*ApplicationResource Application
    }

    class ApplicationResource {
        +Get(ctx context.Context, req *larkapplication.GetApplicationReq) (*larkapplication.GetApplicationResp, error)
    }

    class ApplicationResp {
        +bool Success()
        +error Error()
        +*ApplicationData Data
    }

    class ApplicationData {
        +*larkapplication.Application App
    }

    Identity <.. botidentity_package : uses
    Profile <.. botidentity_package : constructs
    botidentity_package --> cache_package : uses cache
    botidentity_package --> infraConfig_package : reads config
    botidentity_package --> lark_dal_package : calls Client
    lark_dal_package --> Client
    Client --> ApplicationService
    ApplicationService --> ApplicationV6
    ApplicationV6 --> ApplicationResource
    ApplicationResource --> ApplicationResp
    ApplicationResp --> ApplicationData
    ApplicationData --> larkapplication.Application

    botidentity_package ..> logs : logging
    botidentity_package ..> context : ctx

    %% Prompt usage in other modules
    class StandardChatPromptBuilder {
        +string buildStandardChatUserPrompt(selfProfile botidentity.Profile, historyLines []string, contextLines []string, currentInput string)
    }

    class AgenticChatPromptBuilder {
        +string buildAgenticChatUserPrompt(ctx agenticChatPromptContext)
    }

    class ContinuationReplyPromptBuilder {
        +string buildContinuationReplyTurnUserPrompt(req ContinuationReplyTurnRequest, selfProfile botidentity.Profile)
    }

    StandardChatPromptBuilder ..> Profile : embeds identity lines
    AgenticChatPromptBuilder ..> Profile : embeds identity lines
    ContinuationReplyPromptBuilder ..> Profile : embeds identity lines
    StandardChatPromptBuilder ..> botidentity_package : PromptIdentityLines
    AgenticChatPromptBuilder ..> botidentity_package : PromptIdentityLines
    ContinuationReplyPromptBuilder ..> botidentity_package : PromptIdentityLines

Class diagram for intent analysis and smart rate limiting

classDiagram
    class ReplyMode {
        <<enum>>
        +ReplyModeDirect
        +ReplyModePassiveReply
        +ReplyModeActiveInterject
        +ReplyModeIgnore
    }

    class SuggestAction {
        <<enum>>
        +SuggestActionChat
        +SuggestActionReact
        +SuggestActionRepeat
        +SuggestActionIgnore
    }

    class InteractionMode {
        <<enum>>
        +InteractionModeStandard
        +InteractionModeAgentic
    }

    class IntentType {
        <<enum>>
        +IntentTypeQuestion
        +IntentTypeIgnore
        .. other values ..
    }

    class IntentAnalysis {
        +string IntentType
        +bool NeedReply
        +int ReplyConfidence
        +string Reason
        +SuggestAction SuggestAction
        +InteractionMode InteractionMode
        +ReplyMode ReplyMode
        +int UserWillingness
        +int InterruptRisk
        +bool NeedsHistory
        +bool NeedsWeb
        +responses.ReasoningEffort_Enum ReasoningEffort
        +UnmarshalJSON(data []byte) error
        +Sanitize()
    }

    class TriggerType {
        <<enum>>
        +TriggerTypeIntent
        +TriggerTypeRandom
        +TriggerTypeReaction
        +TriggerTypeRepeat
        +TriggerTypeMention
    }

    class Decision {
        +bool Allowed
        +string Reason
        +bool ShouldRetry
        +int CooldownSec
        +TriggerType TriggerType
    }

    class ChatStats {
        +float64 CurrentActivityScore
        +float64 CurrentBurstFactor
        +int64 TotalMessages1h
        +int64 TotalMessages24h
    }

    class SmartRateLimiter {
        -*Config config
        -*redis.Client rdb
        -*Metrics metrics
        -map[string]*ChatStats localCache
        -sync.RWMutex localCacheMu
        -map[TriggerType]float64 hardTriggerWeights
        -map[TriggerType]float64 softTriggerWeights
        +Allow(ctx context.Context, chatID string, triggerType TriggerType) (bool, string)
        +RecordSend(ctx context.Context, chatID string, triggerType TriggerType)
        +*ChatStats GetStats(ctx context.Context, chatID string)
        +float64 getSoftLoadMaxPerHour(stats *ChatStats)
        +float64 countMessagesInWindow(recentSends []SendRecord, now time.Time, window time.Duration, weights map[TriggerType]float64)
        +float64 hardWeight(triggerType TriggerType)
        +updateDerivedStats(stats *ChatStats, hourly [24]HourlyStats, recentSends []SendRecord, now time.Time)
        +float64 calculateBurstFactor(recentSends []SendRecord, now time.Time)
        +float64 getAdjustedMinInterval(stats *ChatStats, triggerType TriggerType)
        +int getAdjustedMaxPerHour(stats *ChatStats)
        +int getAdjustedMaxPerDay(stats *ChatStats)
    }

    class Decider {
        - *SmartRateLimiter limiter
        - func(ctx context.Context, chatID string, openID string) int intentReplyThreshold
        - func(ctx context.Context, chatID string, openID string) int intentFallbackRate
        - func() float64 randomFloat64
        +NewDecider(limiter *SmartRateLimiter) *Decider
        +*Decision DecideIntentReply(ctx context.Context, chatID string, openID string, analysis *intent.IntentAnalysis)
        +*Decision DecideRandomReply(ctx context.Context, chatID string, baseProbability float64, analysis *IntentAnalysis) 
        +void RecordReply(ctx context.Context, chatID string, triggerType TriggerType)
        +int getIntentReplyThreshold(ctx context.Context, chatID string, openID string)
        +int getIntentFallbackRate(ctx context.Context, chatID string, openID string)
    }

    class IntentRecognizer {
        +*IntentAnalysis Analyze(ctx context.Context, message string, modelID string)
    }

    class IntentRecognizeOperator {
        +Fetch(ctx context.Context, event *larkim.P2MessageReceiveV1, meta *xhandler.BaseMetaData) error
        -bool shouldForceDirectReplyMode(event *larkim.P2MessageReceiveV1, observation agentruntime.ShadowObservation, observed bool)
        -*intent.IntentAnalysis continuationIntentAnalysis(observation agentruntime.ShadowObservation)
    }

    class ChatMsgOperator {
        +Run(ctx context.Context, event *larkim.P2MessageReceiveV1, meta *xhandler.BaseMetaData) error
    }

    class ConfigProvider {
        +int GetIntentReplyThreshold(ctx context.Context, chatID string, openID string)
        +int GetIntentFallbackRate(ctx context.Context, chatID string, openID string)
        <<external>>
    }

    IntentAnalysis --> ReplyMode : uses
    IntentAnalysis --> SuggestAction : uses
    IntentAnalysis --> InteractionMode : uses

    Decider --> SmartRateLimiter : uses limiter
    Decider --> IntentAnalysis : reads fields
    Decider --> Decision : returns
    Decider --> ChatStats : uses for activity
    Decider ..> ConfigProvider : via intentReplyThreshold, intentFallbackRate

    SmartRateLimiter --> ChatStats : maintains
    SmartRateLimiter --> TriggerType : weighted

    IntentRecognizeOperator --> IntentAnalysis : populates
    IntentRecognizeOperator --> IntentRecognizer : calls LLM
    IntentRecognizeOperator --> Decider : indirectly via ChatMsgOperator

    ChatMsgOperator --> Decider : DecideIntentReply
    ChatMsgOperator ..> IntentAnalysis : from metadata

    class HelperFunctions {
        +TriggerType triggerTypeFromIntentAnalysis(analysis *intent.IntentAnalysis)
        +float64 calculateIntentReplyScore(analysis *intent.IntentAnalysis)
        +float64 adjustIntentFallbackRate(fallbackRate float64, stats *ChatStats, analysis *intent.IntentAnalysis, threshold int, score float64)
        +bool hasHybridSearchSelector(req HybridSearchRequest)
    }

    HelperFunctions ..> IntentAnalysis
    HelperFunctions ..> ChatStats
    HelperFunctions ..> TriggerType

    class GlobalDecider {
        +*Decider GetDecider()
    }

    GlobalDecider --> Decider

Class diagram for history search and the search_history tool

classDiagram
    class SearchResult {
        +string MessageID
        +string OpenID
        +string RawMessage
        +string UserName
        +string ChatName
        +string CreateTime
        +string Mentions
    }

    class HybridSearchRequest {
        +[]string QueryText
        +int TopK
        +string OpenID
        +string UserName
        +string ChatID
        +string MessageType
        +string StartTime
        +string EndTime
    }

    class HistorySearcher {
        +[]*SearchResult HybridSearch(ctx context.Context, req HybridSearchRequest, embeddingFunc EmbeddingFunc) ([]*SearchResult, error)
        +map[string]any buildHybridSearchQuery(req HybridSearchRequest, queryTerms []string, queryVecList []map[string]any, now time.Time) (map[string]any, error)
        +[]map[string]any buildHybridSearchFilters(req HybridSearchRequest, now time.Time) ([]map[string]any, error)
        +bool hasHybridSearchSelector(req HybridSearchRequest)
        +void normalizeSearchResultActor(result *SearchResult)
        +gresult.R[time.Time] parseTimeFormat(s string, fmt string)
    }

    class EmbeddingFunc {
        <<interface>>
        +vector []float32
        +tokenUsage model.Usage
        +error err
    }

    class HistorySearchArgs {
        +string Keywords
        +int TopK
        +string StartTime
        +string EndTime
        +string OpenID
        +string UserName
        +string MessageType
    }

    class historySearchHandler {
        +ParseTool(raw string) (HistorySearchArgs, error)
        +ToolSpec() xcommand.ToolSpec
        +Handle(ctx context.Context, data *larkim.P2MessageReceiveV1, metaData *xhandler.BaseMetaData, arg HistorySearchArgs) error
        <<tool handler>>
        -func splitByComma(text string) []string
    }

    class ToolSpec {
        +string Name
        +string Desc
        +*arktools.Param Params
        -ToolResultEncoder Result
    }

    class arktools_Param {
        +NewParams(kind string) *arktools.Param
        +AddProp(name string, prop *arktools.Prop) *arktools.Param
        +AddRequired(name string) *arktools.Param
    }

    class xhandler_BaseMetaData {
        +string ChatID
        +string GetExtra(key string) (string, bool)
        +void SetExtra(key string, value string)
    }

    class opensearchClient {
        +SearchData(ctx context.Context, index string, query map[string]any) (*opensearchapi.SearchResp, error)
    }

    class botidentity_Current {
        +Identity Current()
        <<external>>
    }

    HistorySearcher --> SearchResult : returns
    HistorySearcher --> HybridSearchRequest : input
    HistorySearcher ..> EmbeddingFunc : dependency
    HistorySearcher ..> opensearchClient : queries index
    HistorySearcher ..> botidentity_Current : for OpenID normalization

    historySearchHandler --> HistorySearchArgs
    historySearchHandler --> ToolSpec
    historySearchHandler ..> arktools_Param : build schema
    historySearchHandler ..> HistorySearcher : calls HybridSearch via historyHybridSearchFn
    historySearchHandler ..> xhandler_BaseMetaData : reads ChatID, writes search_result

    ToolSpec --> arktools_Param

    class historyHybridSearchFnVar {
        -func(ctx context.Context, req HybridSearchRequest, embeddingFunc EmbeddingFunc) ([]*SearchResult, error) historyHybridSearchFn
    }

    historyHybridSearchFnVar ..> HistorySearcher

Class diagram for the standard and agentic chat prompt builders

classDiagram
    class standardPromptMode {
        <<enum>>
        +standardPromptModeDirect
        +standardPromptModeAmbient
    }

    class StandardPlanBuilder {
        +BuildInitialChatExecutionPlan(ctx context.Context, req InitialChatGenerationPlanRequest) (InitialChatExecutionPlan, error)
        -standardPromptMode resolveStandardPromptMode(event *larkim.P2MessageReceiveV1, replyScoped bool) 
        -int standardPromptHistoryLimit(mode standardPromptMode, requested int)
        -string buildStandardChatSystemPrompt(mode standardPromptMode)
        -string buildStandardChatUserPrompt(selfProfile botidentity.Profile, historyLines []string, contextLines []string, currentInput string)
        -string standardChatLinesBlock(lines []string)
        -string pointerString(value *string)
        -string defaultInitialChatUserNameLoader(ctx context.Context, chatID string, openID string)
    }

    class InitialChatExecutionPlan {
        +*larkim.P2MessageReceiveV1 Event
        +string ChatID
        +string OpenID
        +string Prompt
        +string UserInput
        +[]string Files
        +*arktools.Impl Tools
        +history.MessageList MessageList
    }

    class ChatHandler {
        +runStandardChat(ctx context.Context, event *larkim.P2MessageReceiveV1, chatType string, size *int, args ...string) error
        -standardPromptMode resolveStandardPromptMode(event *larkim.P2MessageReceiveV1)
        -int standardPromptHistoryLimit(mode standardPromptMode, requested int)
        -string buildStandardChatSystemPrompt(mode standardPromptMode)
        -string buildStandardChatUserPrompt(selfProfile botidentity.Profile, historyLines []string, contextLines []string, currentInput string)
        -string standardChatLinesBlock(lines []string)
        -string pointerString(value *string)
    }

    class agenticChatPromptContext {
        +string UserRequest
        +botidentity.Profile SelfProfile
        +[]string HistoryLines
        +[]string ContextLines
        +[]string Topics
        +bool ReplyScoped
    }

    class AgenticPlanBuilder {
        +buildAgenticChatExecutionPlan(ctx context.Context, req AgenticChatGenerationPlanRequest) (AgenticChatExecutionPlan, error)
        -buildAgenticChatPromptContext(ctx context.Context, ...) (history.MessageList, agenticChatPromptContext, error)
        -string agenticChatSystemPrompt()
        -string buildAgenticChatUserPrompt(ctx agenticChatPromptContext)
        -[]string agenticChatStyleHints(ctx agenticChatPromptContext)
        -string agenticChatLinesBlock(lines []string)
    }

    class ContinuationReplyTurnExecutor {
        +ExecuteContinuationReplyTurn(ctx context.Context, req ContinuationReplyTurnRequest) error
        -string continuationReplyTurnSystemPrompt()
        -string buildContinuationReplyTurnUserPrompt(req ContinuationReplyTurnRequest, selfProfile botidentity.Profile)
        -string replyTurnLinesBlock(lines []string)
    }

    StandardPlanBuilder --> InitialChatExecutionPlan
    StandardPlanBuilder ..> botidentity.Profile : uses
    ChatHandler ..> botidentity.Profile : standardChatBotProfileLoader

    AgenticPlanBuilder --> agenticChatPromptContext
    AgenticPlanBuilder ..> botidentity.Profile : SelfProfile

    ContinuationReplyTurnExecutor ..> botidentity.Profile : continuationReplyBotProfileLoader

    class ark_dal_ResponsesImpl {
        +Do(ctx context.Context, sysPrompt string, userPrompt string, files ...string) (iter.Seq[string], error)
    }

    ark_dal_ResponsesImpl ..> InitialChatExecutionPlan : consumes Prompt, UserInput
    ark_dal_ResponsesImpl ..> AgenticChatExecutionPlan : consumes
    ark_dal_ResponsesImpl ..> ContinuationReplyTurnExecutor : continuation prompts

File-Level Changes
Refactor standard chat flow prompt construction to be code-driven and mode-aware (direct vs ambient), injecting the bot identity and a constrained JSON reply contract.
  • Remove the database-backed prompt templates and the inline prompt construction in the standard chat flow and chat handler.
  • Introduce standardPromptMode (direct/ambient) with history-length limits and mode-specific system prompt guidance.
  • Build the user prompt from the bot identity, trimmed recent history, scoped context, and the current input, moving runtime context from the system prompt into the user input.
  • Ensure prompts emphasize JSON-only output with only the specified fields, and tone down anthropomorphic phrasing and excessive @mentions.
internal/application/lark/agentruntime/chatflow/standard_plan.go
internal/application/lark/handlers/chat_handler.go
internal/application/lark/agentruntime/chatflow/standard_plan_test.go
internal/application/lark/handlers/chat_handler_test.go
Introduce a bot identity module and propagate the bot's own profile into prompts and history normalization.
  • Add botidentity.Profile with caching and a Lark application lookup, plus a PromptIdentityLines helper.
  • Inject the bot profile into standard, agentic, and continuation-reply prompts to make self_open_id and naming explicit.
  • Normalize history search and message filtering so bot messages are stored with the real open_id but rendered with the username "你".
  • Add tests covering profile caching, fallback behavior, and the presence of identity lines in the various prompts.
internal/application/lark/botidentity/profile.go
internal/application/lark/botidentity/profile_test.go
internal/application/lark/agentruntime/chatflow/agentic_plan.go
internal/application/lark/agentruntime/chatflow/agentic_plan_test.go
internal/application/lark/agentruntime/reply_turn.go
internal/application/lark/agentruntime/default_continuation_reply_turn_executor_test.go
internal/application/lark/history/msg.go
internal/application/lark/history/msg_test.go
internal/application/lark/history/utils.go
internal/application/lark/history/search.go
internal/application/lark/history/search_test.go
Enhance history search and expose it via a richer search_history tool.
  • Extend HybridSearchRequest with user_name and message_type, a stricter chat_id requirement, and selector validation.
  • Split query and filter construction into helpers that support bot self-aliases (user_id == bot_open_id or "你").
  • Normalize search results so bot records keep a consistent OpenID while displaying the username "你".
  • Update the search_history tool spec and handler to accept the new filters, enforce chat scoping, and support testing via an injected HybridSearch function.
internal/application/lark/history/search.go
internal/application/lark/handlers/history_search_handler.go
internal/application/lark/handlers/history_search_handler_test.go
internal/application/lark/handlers/tools_test.go
Extend the intent recognition schema with reply mode, user willingness, interrupt risk, and retrieval hints, wiring them into rate limiting and logging.
  • Add ReplyMode, willingness, interrupt_risk, needs_history, and needs_web fields to IntentAnalysis, with JSON parsing and Sanitize logic.
  • Extend the intent system prompt so the model produces the new fields, and enable minimal reasoning on intent calls.
  • Record the extra intent attributes in tracing/logging, and add tests verifying parsing and sanitization of the new fields.
  • In the intent recognition operator, force direct reply mode for clearly direct interactions (P2P chats, @mentions, replies to the bot, follow-ups), and carry direct-reply metadata into continuation analysis.
internal/application/lark/intentmeta/types.go
internal/application/lark/intent/recognizer.go
internal/application/lark/intent/recognizer_test.go
internal/application/lark/messages/ops/intent_recognize_op.go
internal/application/lark/messages/ops/intent_recognize_op_test.go
Refine smart rate limiting to distinguish trigger types, introduce intent-level reply scoring, and track the trigger type in decisions.
  • Extend the Decider with injectable config lookups and a random-number generator, add TriggerType to Decision, and score replies based on reply confidence, user willingness, and interrupt risk.
  • Derive the trigger type from intent analysis (direct/mention vs passive vs active interjection), and bypass passive rate limiting for direct replies.
  • Split SmartRateLimiter weights into hard and soft weight maps, introduce a soft per-hour load threshold, and ensure mention traffic does not consume the hard budget but still counts toward soft load.
  • Adjust random reply decisions to use the injectable RNG and updated activity thresholds, and add comprehensive tests covering weights, mention behavior, Decider scoring, and config-driven thresholds.
internal/application/lark/ratelimit/integration.go
internal/application/lark/ratelimit/rate_limiter.go
internal/application/lark/ratelimit/rate_limiter_test.go
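
The hard/soft split can be illustrated with two weight maps, in the spirit of countMessagesInWindow from the class diagram. The specific weight values below are assumptions chosen only to show mentions being free on the hard budget yet counted toward soft load.

```go
package main

import "fmt"

type TriggerType string

const (
	TriggerTypeMention TriggerType = "mention"
	TriggerTypeIntent  TriggerType = "intent"
	TriggerTypeRandom  TriggerType = "random"
)

// Illustrative weight maps: mentions cost nothing against the hard passive
// budget but still add soft load. The PR's actual values are not shown.
var hardWeights = map[TriggerType]float64{TriggerTypeMention: 0, TriggerTypeIntent: 1, TriggerTypeRandom: 1}
var softWeights = map[TriggerType]float64{TriggerTypeMention: 0.5, TriggerTypeIntent: 1, TriggerTypeRandom: 1}

// countWeighted sums recent sends under a weight map, mirroring how a
// windowed counter would weigh each trigger type.
func countWeighted(sends []TriggerType, weights map[TriggerType]float64) float64 {
	var total float64
	for _, t := range sends {
		total += weights[t]
	}
	return total
}

func main() {
	sends := []TriggerType{TriggerTypeMention, TriggerTypeMention, TriggerTypeIntent}
	fmt.Println(countWeighted(sends, hardWeights)) // 1: mentions free on the hard budget
	fmt.Println(countWeighted(sends, softWeights)) // 2: mentions still add soft load
}
```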
Tighten reply and agentic prompts to reduce anthropomorphism and standardize @mention usage across the runtime.
  • Update the system prompts for continuation replies, agentic chat, and the reply planner to discourage filler particles ("哟", "呀", "啦") and an overly human tone.
  • Add explicit guidance on when and how to @mention members, favoring sparse, purposeful mentions and using threads to continue conversations.
  • Adjust the send_message tool description to recommend few, purposeful mentions and clarify the mention format.
  • Add tests verifying that the new constraints and guidance text appear in the relevant prompts and tool descriptions.
internal/application/lark/agentruntime/default_continuation_reply_turn_executor_test.go
internal/application/lark/agentruntime/capability/default_reply_planner.go
internal/application/lark/agentruntime/capability/default_reply_planner_test.go
internal/application/lark/agentruntime/chatflow/agentic_plan.go
internal/application/lark/agentruntime/chatflow/agentic_plan_test.go
internal/application/lark/handlers/send_message_handler.go
internal/application/lark/handlers/send_message_handler_test.go
Fix bot message recording to persist the real bot OpenID in history while still presenting the speaker as "你".
  • Add resolveRecordedBotIdentity to derive the OpenID and userName for bot messages, falling back to the configured bot OpenID when the sender ID is empty.
  • Use the resolved identity when writing reply and send-message logs to OpenSearch, instead of hardcoding the OpenID to "你".
  • Add tests verifying identity resolution with an explicit sender ID and with the config-based fallback.
  • Align FilterMessage and history normalization with the new identity handling.
internal/infrastructure/lark_dal/larkmsg/record.go
internal/infrastructure/lark_dal/larkmsg/record_test.go
internal/application/lark/history/msg.go
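
A condensed sketch of the fix above: store a real OpenID (with a config fallback when the sender ID is empty) while keeping "你" as the display name. The function name matches the review; its signature and body here are illustrative.

```go
package main

import "fmt"

// resolveRecordedBotIdentity derives the identity to persist for a bot
// message: the event's sender OpenID when present, otherwise the configured
// bot OpenID. The display name is always "你", matching how history renders
// the bot's own messages.
func resolveRecordedBotIdentity(senderOpenID, configuredBotOpenID string) (openID, userName string) {
	openID = senderOpenID
	if openID == "" {
		openID = configuredBotOpenID // before the fix, "你" could end up stored as the OpenID
	}
	return openID, "你"
}

func main() {
	id, name := resolveRecordedBotIdentity("", "ou_bot_cfg")
	fmt.Println(id, name)
	id, name = resolveRecordedBotIdentity("ou_evt", "ou_bot_cfg")
	fmt.Println(id, name)
}
```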
Adjust several infrastructure defaults and tests to match the new behavior.
  • Trim the ark_dal response options, removing the unused Text format configuration and keeping only the reasoning configuration.
  • Rename the agentic streaming card title from "BetaGo" to "Agentic Chat".
  • Mark ToolSpec.Result as json:"-" so the result encoder does not appear in the serialized tool schema.
  • Pass the bot profile to the user prompt builder in the continuation-reply executor, and fix minor formatting in defaultChatGenerationPlanExecutor.
internal/infrastructure/ark_dal/responses.go
internal/infrastructure/lark_dal/larkmsg/streaming_agentic.go
pkg/xcommand/typed.go
internal/application/lark/agentruntime/reply_turn.go

Tips and commands

Interacting with Sourcery

  • 触发新一轮 Review: 在 Pull Request 中评论 @sourcery-ai review

  • 继续讨论: 直接回复 Sourcery 的 Review 评论即可继续对话。

  • 从 Review 评论生成 GitHub Issue: 在某条 Review 评论下回复,请求 Sourcery 从该评论创建 Issue。你也可以直接回复 @sourcery-ai issue 来生成 Issue。

  • 生成 Pull Request 标题: 在 Pull Request 标题中任意位置写上 @sourcery-ai,即可随时生成标题。也可以在 PR 中评论 @sourcery-ai title 来(重新)生成标题。

  • 生成 Pull Request 摘要: 在 Pull Request 描述正文任意位置写上 @sourcery-ai summary,即可在该位置生成 PR 摘要。也可以评论 @sourcery-ai summary 随时(重新)生成摘要。

  • 生成 Reviewer's Guide: 在 Pull Request 中评论 @sourcery-ai guide,即可随时(重新)生成 Reviewer’s Guide。

  • 一次性解决所有 Sourcery 评论: 在 Pull Request 中评论 @sourcery-ai resolve,将所有 Sourcery 评论标记为已解决。适用于你已经处理完所有评论且不希望再看到它们时。

  • 清除所有 Sourcery Review: 评论 @sourcery-ai dismiss,可以将所有现有的 Sourcery Review 标记为已忽略。若想重新开始一轮全新的 Review,别忘了再评论一次 @sourcery-ai review

Customizing Your Experience

访问你的 dashboard 以:

  • 启用或禁用诸如 Sourcery 自动生成 PR 摘要、Reviewer’s Guide 等 Review 功能。
  • 修改 Review 所使用的语言。
  • 添加、移除或编辑自定义 Review 说明。
  • 调整其他 Review 相关设置。

Getting Help

Original review guide in English

Reviewer's Guide

Refactors standard and agentic chat flows to use code-defined prompts with explicit bot identity, enhances history search and mention normalization, extends intent analysis metadata and integrates it with a smarter rate limiter, fixes bot OpenID persistence in message history, and adds targeted tests around these behaviors and new tools.

Sequence diagram for intent-driven reply with smart rate limiting

sequenceDiagram
    actor User
    participant Lark as LarkServer
    participant ChatOp as ChatMsgOperator
    participant IntentOp as IntentRecognizeOperator
    participant Decider as RateLimitDecider
    participant Limiter as SmartRateLimiter
    participant ChatFlow as ChatFlowExecutor

    User->>Lark: Send message
    Lark->>ChatOp: Deliver P2MessageReceiveV1

    ChatOp->>IntentOp: Fetch intent analysis
    IntentOp-->>ChatOp: IntentAnalysis analysis

    ChatOp->>Decider: DecideIntentReply(ctx, chatID, openID, analysis)
    Decider->>IntentAnalysis: Sanitize()

    alt Direct or mention scenario
        Decider->>Decider: triggerTypeFromIntentAnalysis
        Decider-->>ChatOp: Decision{Allowed:true, TriggerTypeMention}
    else Passive or random scenario
        Decider->>Limiter: Allow(ctx, chatID, triggerType)
        Limiter-->>Decider: allowed, reason
        alt Allowed and score >= threshold
            Decider-->>ChatOp: Decision{Allowed:true, TriggerTypeIntent}
        else Allowed but score < threshold
            Decider->>Limiter: GetStats(ctx, chatID)
            Limiter-->>Decider: *ChatStats
            Decider->>Decider: calculateIntentReplyScore
            Decider->>Decider: adjustIntentFallbackRate
            Decider->>Decider: randomFloat64()
            alt random pass
                Decider-->>ChatOp: Decision{Allowed:true, TriggerTypeIntent}
            else random reject
                Decider-->>ChatOp: Decision{Allowed:false, TriggerTypeIntent}
            end
        end
    end

    alt Decision.Allowed == true
        ChatOp->>Decider: RecordReply(ctx, chatID, decision.TriggerType)
        Decider->>Limiter: RecordSend
        Limiter-->>Decider: ok
        ChatOp->>ChatFlow: Run standard or agentic chat
        ChatFlow-->>User: Bot reply message
    else Decision.Allowed == false
        ChatOp-->>Lark: No reply (return)
    end
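The fallback branch in the sequence above can be sketched as follows. This is a minimal illustration: the weights, the `calculateIntentReplyScore` formula, and the `decide` helper are assumptions for exposition, not the project's actual implementation.

```go
package main

import "fmt"

// IntentAnalysis mirrors the fields the decider reads (assumed shape).
type IntentAnalysis struct {
	ReplyConfidence int // 0-100
	UserWillingness int // 0-100
	InterruptRisk   int // 0-100, higher = more disruptive
}

// calculateIntentReplyScore combines confidence, willingness, and interrupt
// risk into one score; the weights here are illustrative only.
func calculateIntentReplyScore(a IntentAnalysis) float64 {
	return 0.5*float64(a.ReplyConfidence) +
		0.3*float64(a.UserWillingness) +
		0.2*float64(100-a.InterruptRisk)
}

// decide allows the reply when the score clears the threshold, otherwise
// falls back to a random pass with probability fallbackRate. The random
// draw is passed in to keep the sketch deterministic.
func decide(a IntentAnalysis, threshold int, fallbackRate, random float64) bool {
	if calculateIntentReplyScore(a) >= float64(threshold) {
		return true
	}
	return random < fallbackRate
}

func main() {
	a := IntentAnalysis{ReplyConfidence: 80, UserWillingness: 70, InterruptRisk: 20}
	fmt.Println(calculateIntentReplyScore(a)) // 0.5*80 + 0.3*70 + 0.2*80 = 77
	fmt.Println(decide(a, 70, 0.1, 0.5))      // score 77 >= threshold 70 → true
}
```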

Sequence diagram for LLM using search_history tool

sequenceDiagram
    participant LLM as LLMModel
    participant Ark as ResponsesImpl
    participant Tool as search_history_handler
    participant History as HistorySearcher
    participant OS as OpenSearch

    LLM->>Ark: Request completion with tools
    Ark->>LLM: Tool call request(search_history, args)
    LLM-->>Ark: Tool invocation arguments

    Ark->>Tool: Handle(ctx, event, metaData, HistorySearchArgs)
    Tool->>History: HybridSearch(ctx, HybridSearchRequest, EmbeddingFunc)
    History->>History: jieba tokenization, embedding calls
    History->>OS: SearchData(index, query)
    OS-->>History: Search hits
    History->>History: normalizeSearchResultActor, ReplaceMentionToName
    History-->>Tool: []*SearchResult

    Tool->>metaData: SetExtra(search_result, json)
    Tool-->>Ark: success

    Ark-->>LLM: Tool result(json)
    LLM->>Ark: Final reply
    Ark-->>LLM: Completion

Class diagram for botidentity profile and prompt identity injection

classDiagram
    class Identity {
        +string AppID
        +string BotOpenID
        +bool Valid()
        +string NamespaceKey(prefix string, key string)
    }

    class Profile {
        +string AppID
        +string BotOpenID
        +string BotName
    }

    class botidentity_package {
        +Identity Current()
        +Profile CurrentProfile(ctx context.Context)
        +[]string PromptIdentityLines(profile Profile)
        +Profile getProfileCache(ctx context.Context, identity Identity)
        +Profile resolveProfile(ctx context.Context, identity Identity)
        +string profileCacheKey(identity Identity)
        +Profile fallbackProfile(identity Identity)
        +Profile normalizeProfile(identity Identity, profile Profile)
        +Profile mergeProfile(base Profile, override Profile)
        +string configuredBotName()
        +Profile loadProfileFromLark(ctx context.Context, identity Identity)
        +string applicationName(app *larkapplication.Application)
        +string applicationI18nName(items []*larkapplication.AppI18nInfo, wantKey string)
        +string pointerString(value *string)
        +string firstNonEmpty(values ...string)
        <<package>>
        -func profileLoader(ctx context.Context, identity Identity) (Profile, error)
    }

    class cache_package {
        +T GetOrExecute(ctx context.Context, key string, fn func() (T, error))
        <<external>>
    }

    class infraConfig_package {
        +*Config Get()
        <<external>>
    }

    class Config {
        +*BaseInfo BaseInfo
    }

    class BaseInfo {
        +string RobotName
    }

    class lark_dal_package {
        +Client Client
        <<external>>
    }

    class Client {
        +*ApplicationService Application
    }

    class ApplicationService {
        +*ApplicationV6 V6
    }

    class ApplicationV6 {
        +*ApplicationResource Application
    }

    class ApplicationResource {
        +Get(ctx context.Context, req *larkapplication.GetApplicationReq) (*larkapplication.GetApplicationResp, error)
    }

    class ApplicationResp {
        +bool Success()
        +error Error()
        +*ApplicationData Data
    }

    class ApplicationData {
        +*larkapplication.Application App
    }

    Identity <.. botidentity_package : uses
    Profile <.. botidentity_package : constructs
    botidentity_package --> cache_package : uses cache
    botidentity_package --> infraConfig_package : reads config
    botidentity_package --> lark_dal_package : calls Client
    lark_dal_package --> Client
    Client --> ApplicationService
    ApplicationService --> ApplicationV6
    ApplicationV6 --> ApplicationResource
    ApplicationResource --> ApplicationResp
    ApplicationResp --> ApplicationData
    ApplicationData --> larkapplication.Application

    botidentity_package ..> logs : logging
    botidentity_package ..> context : ctx

    %% Prompt usage in other modules
    class StandardChatPromptBuilder {
        +string buildStandardChatUserPrompt(selfProfile botidentity.Profile, historyLines []string, contextLines []string, currentInput string)
    }

    class AgenticChatPromptBuilder {
        +string buildAgenticChatUserPrompt(ctx agenticChatPromptContext)
    }

    class ContinuationReplyPromptBuilder {
        +string buildContinuationReplyTurnUserPrompt(req ContinuationReplyTurnRequest, selfProfile botidentity.Profile)
    }

    StandardChatPromptBuilder ..> Profile : embeds identity lines
    AgenticChatPromptBuilder ..> Profile : embeds identity lines
    ContinuationReplyPromptBuilder ..> Profile : embeds identity lines
    StandardChatPromptBuilder ..> botidentity_package : PromptIdentityLines
    AgenticChatPromptBuilder ..> botidentity_package : PromptIdentityLines
    ContinuationReplyPromptBuilder ..> botidentity_package : PromptIdentityLines

Class diagram for intent analysis and smart rate limiting

classDiagram
    class ReplyMode {
        <<enum>>
        +ReplyModeDirect
        +ReplyModePassiveReply
        +ReplyModeActiveInterject
        +ReplyModeIgnore
    }

    class SuggestAction {
        <<enum>>
        +SuggestActionChat
        +SuggestActionReact
        +SuggestActionRepeat
        +SuggestActionIgnore
    }

    class InteractionMode {
        <<enum>>
        +InteractionModeStandard
        +InteractionModeAgentic
    }

    class IntentType {
        <<enum>>
        +IntentTypeQuestion
        +IntentTypeIgnore
        .. other values ..
    }

    class IntentAnalysis {
        +string IntentType
        +bool NeedReply
        +int ReplyConfidence
        +string Reason
        +SuggestAction SuggestAction
        +InteractionMode InteractionMode
        +ReplyMode ReplyMode
        +int UserWillingness
        +int InterruptRisk
        +bool NeedsHistory
        +bool NeedsWeb
        +responses.ReasoningEffort_Enum ReasoningEffort
        +UnmarshalJSON(data []byte) error
        +Sanitize()
    }

    class TriggerType {
        <<enum>>
        +TriggerTypeIntent
        +TriggerTypeRandom
        +TriggerTypeReaction
        +TriggerTypeRepeat
        +TriggerTypeMention
    }

    class Decision {
        +bool Allowed
        +string Reason
        +bool ShouldRetry
        +int CooldownSec
        +TriggerType TriggerType
    }

    class ChatStats {
        +float64 CurrentActivityScore
        +float64 CurrentBurstFactor
        +int64 TotalMessages1h
        +int64 TotalMessages24h
    }

    class SmartRateLimiter {
        -*Config config
        -*redis.Client rdb
        -*Metrics metrics
        -map[string]*ChatStats localCache
        -sync.RWMutex localCacheMu
        -map[TriggerType]float64 hardTriggerWeights
        -map[TriggerType]float64 softTriggerWeights
        +Allow(ctx context.Context, chatID string, triggerType TriggerType) (bool, string)
        +RecordSend(ctx context.Context, chatID string, triggerType TriggerType)
        +*ChatStats GetStats(ctx context.Context, chatID string)
        +float64 getSoftLoadMaxPerHour(stats *ChatStats)
        +float64 countMessagesInWindow(recentSends []SendRecord, now time.Time, window time.Duration, weights map[TriggerType]float64)
        +float64 hardWeight(triggerType TriggerType)
        +updateDerivedStats(stats *ChatStats, hourly [24]HourlyStats, recentSends []SendRecord, now time.Time)
        +float64 calculateBurstFactor(recentSends []SendRecord, now time.Time)
        +float64 getAdjustedMinInterval(stats *ChatStats, triggerType TriggerType)
        +int getAdjustedMaxPerHour(stats *ChatStats)
        +int getAdjustedMaxPerDay(stats *ChatStats)
    }

    class Decider {
        - *SmartRateLimiter limiter
        - func(ctx context.Context, chatID string, openID string) int intentReplyThreshold
        - func(ctx context.Context, chatID string, openID string) int intentFallbackRate
        - func() float64 randomFloat64
        +NewDecider(limiter *SmartRateLimiter) *Decider
        +*Decision DecideIntentReply(ctx context.Context, chatID string, openID string, analysis *intent.IntentAnalysis)
        +*Decision DecideRandomReply(ctx context.Context, chatID string, baseProbability float64, analysis *IntentAnalysis) 
        +void RecordReply(ctx context.Context, chatID string, triggerType TriggerType)
        +int getIntentReplyThreshold(ctx context.Context, chatID string, openID string)
        +int getIntentFallbackRate(ctx context.Context, chatID string, openID string)
    }

    class IntentRecognizer {
        +*IntentAnalysis Analyze(ctx context.Context, message string, modelID string)
    }

    class IntentRecognizeOperator {
        +Fetch(ctx context.Context, event *larkim.P2MessageReceiveV1, meta *xhandler.BaseMetaData) error
        -bool shouldForceDirectReplyMode(event *larkim.P2MessageReceiveV1, observation agentruntime.ShadowObservation, observed bool)
        -*intent.IntentAnalysis continuationIntentAnalysis(observation agentruntime.ShadowObservation)
    }

    class ChatMsgOperator {
        +Run(ctx context.Context, event *larkim.P2MessageReceiveV1, meta *xhandler.BaseMetaData) error
    }

    class ConfigProvider {
        +int GetIntentReplyThreshold(ctx context.Context, chatID string, openID string)
        +int GetIntentFallbackRate(ctx context.Context, chatID string, openID string)
        <<external>>
    }

    IntentAnalysis --> ReplyMode : uses
    IntentAnalysis --> SuggestAction : uses
    IntentAnalysis --> InteractionMode : uses

    Decider --> SmartRateLimiter : uses limiter
    Decider --> IntentAnalysis : reads fields
    Decider --> Decision : returns
    Decider --> ChatStats : uses for activity
    Decider ..> ConfigProvider : via intentReplyThreshold, intentFallbackRate

    SmartRateLimiter --> ChatStats : maintains
    SmartRateLimiter --> TriggerType : weighted

    IntentRecognizeOperator --> IntentAnalysis : populates
    IntentRecognizeOperator --> IntentRecognizer : calls LLM
    IntentRecognizeOperator --> Decider : indirectly via ChatMsgOperator

    ChatMsgOperator --> Decider : DecideIntentReply
    ChatMsgOperator ..> IntentAnalysis : from metadata

    class HelperFunctions {
        +TriggerType triggerTypeFromIntentAnalysis(analysis *intent.IntentAnalysis)
        +float64 calculateIntentReplyScore(analysis *intent.IntentAnalysis)
        +float64 adjustIntentFallbackRate(fallbackRate float64, stats *ChatStats, analysis *intent.IntentAnalysis, threshold int, score float64)
        +bool hasHybridSearchSelector(req HybridSearchRequest)
    }

    HelperFunctions ..> IntentAnalysis
    HelperFunctions ..> ChatStats
    HelperFunctions ..> TriggerType

    class GlobalDecider {
        +*Decider GetDecider()
    }

    GlobalDecider --> Decider
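A minimal sketch of what `IntentAnalysis.Sanitize()` plausibly does: clamp numeric scores into a sane range and normalize unknown enum values so downstream scoring never sees junk. The exact ranges and the default mode are assumptions, not the project's code.

```go
package main

import "fmt"

type ReplyMode string

const (
	ReplyModeDirect          ReplyMode = "direct"
	ReplyModePassiveReply    ReplyMode = "passive_reply"
	ReplyModeActiveInterject ReplyMode = "active_interject"
	ReplyModeIgnore          ReplyMode = "ignore"
)

type IntentAnalysis struct {
	ReplyConfidence int
	UserWillingness int
	InterruptRisk   int
	ReplyMode       ReplyMode
}

func clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

// Sanitize clamps scores into [0,100] and maps unrecognized reply modes to
// a safe default (illustrative choice: ignore).
func (a *IntentAnalysis) Sanitize() {
	a.ReplyConfidence = clamp(a.ReplyConfidence, 0, 100)
	a.UserWillingness = clamp(a.UserWillingness, 0, 100)
	a.InterruptRisk = clamp(a.InterruptRisk, 0, 100)
	switch a.ReplyMode {
	case ReplyModeDirect, ReplyModePassiveReply, ReplyModeActiveInterject, ReplyModeIgnore:
	default:
		a.ReplyMode = ReplyModeIgnore
	}
}

func main() {
	a := &IntentAnalysis{ReplyConfidence: 150, InterruptRisk: -5, ReplyMode: "banana"}
	a.Sanitize()
	fmt.Println(a.ReplyConfidence, a.InterruptRisk, a.ReplyMode) // 100 0 ignore
}
```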

Class diagram for history search and search_history tool

classDiagram
    class SearchResult {
        +string MessageID
        +string OpenID
        +string RawMessage
        +string UserName
        +string ChatName
        +string CreateTime
        +string Mentions
    }

    class HybridSearchRequest {
        +[]string QueryText
        +int TopK
        +string OpenID
        +string UserName
        +string ChatID
        +string MessageType
        +string StartTime
        +string EndTime
    }

    class HistorySearcher {
        +[]*SearchResult HybridSearch(ctx context.Context, req HybridSearchRequest, embeddingFunc EmbeddingFunc) ([]*SearchResult, error)
        +map[string]any buildHybridSearchQuery(req HybridSearchRequest, queryTerms []string, queryVecList []map[string]any, now time.Time) (map[string]any, error)
        +[]map[string]any buildHybridSearchFilters(req HybridSearchRequest, now time.Time) ([]map[string]any, error)
        +bool hasHybridSearchSelector(req HybridSearchRequest)
        +void normalizeSearchResultActor(result *SearchResult)
        +gresult.R[time.Time] parseTimeFormat(s string, fmt string)
    }

    class EmbeddingFunc {
        <<interface>>
        +vector []float32
        +tokenUsage model.Usage
        +error err
    }

    class HistorySearchArgs {
        +string Keywords
        +int TopK
        +string StartTime
        +string EndTime
        +string OpenID
        +string UserName
        +string MessageType
    }

    class historySearchHandler {
        +ParseTool(raw string) (HistorySearchArgs, error)
        +ToolSpec() xcommand.ToolSpec
        +Handle(ctx context.Context, data *larkim.P2MessageReceiveV1, metaData *xhandler.BaseMetaData, arg HistorySearchArgs) error
        <<tool handler>>
        -func splitByComma(text string) []string
    }

    class ToolSpec {
        +string Name
        +string Desc
        +*arktools.Param Params
        -ToolResultEncoder Result
    }

    class arktools_Param {
        +NewParams(kind string) *arktools.Param
        +AddProp(name string, prop *arktools.Prop) *arktools.Param
        +AddRequired(name string) *arktools.Param
    }

    class xhandler_BaseMetaData {
        +string ChatID
        +string GetExtra(key string) (string, bool)
        +void SetExtra(key string, value string)
    }

    class opensearchClient {
        +SearchData(ctx context.Context, index string, query map[string]any) (*opensearchapi.SearchResp, error)
    }

    class botidentity_Current {
        +Identity Current()
        <<external>>
    }

    HistorySearcher --> SearchResult : returns
    HistorySearcher --> HybridSearchRequest : input
    HistorySearcher ..> EmbeddingFunc : dependency
    HistorySearcher ..> opensearchClient : queries index
    HistorySearcher ..> botidentity_Current : for OpenID normalization

    historySearchHandler --> HistorySearchArgs
    historySearchHandler --> ToolSpec
    historySearchHandler ..> arktools_Param : build schema
    historySearchHandler ..> HistorySearcher : calls HybridSearch via historyHybridSearchFn
    historySearchHandler ..> xhandler_BaseMetaData : reads ChatID, writes search_result

    ToolSpec --> arktools_Param

    class historyHybridSearchFnVar {
        -func(ctx context.Context, req HybridSearchRequest, embeddingFunc EmbeddingFunc) ([]*SearchResult, error) historyHybridSearchFn
    }

    historyHybridSearchFnVar ..> HistorySearcher
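The `hasHybridSearchSelector` guard can be illustrated as below: chat scope alone does not count as a selector, so at least one narrowing filter must be present to avoid unbounded scans of a chat's history. Field coverage is inferred from the request shape above, not copied from the project.

```go
package main

import "fmt"

// HybridSearchRequest mirrors the filterable fields (assumed shape).
type HybridSearchRequest struct {
	QueryText   []string
	OpenID      string
	UserName    string
	ChatID      string
	MessageType string
	StartTime   string
	EndTime     string
}

// hasHybridSearchSelector reports whether at least one narrowing filter is
// present. ChatID is deliberately excluded: it scopes the search but does
// not narrow it.
func hasHybridSearchSelector(req HybridSearchRequest) bool {
	if len(req.QueryText) > 0 {
		return true
	}
	for _, s := range []string{req.OpenID, req.UserName, req.MessageType, req.StartTime, req.EndTime} {
		if s != "" {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasHybridSearchSelector(HybridSearchRequest{ChatID: "oc_x"}))                    // false
	fmt.Println(hasHybridSearchSelector(HybridSearchRequest{ChatID: "oc_x", UserName: "alice"})) // true
}
```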

Class diagram for standard and agentic chat prompt builders

classDiagram
    class standardPromptMode {
        <<enum>>
        +standardPromptModeDirect
        +standardPromptModeAmbient
    }

    class StandardPlanBuilder {
        +BuildInitialChatExecutionPlan(ctx context.Context, req InitialChatGenerationPlanRequest) (InitialChatExecutionPlan, error)
        -standardPromptMode resolveStandardPromptMode(event *larkim.P2MessageReceiveV1, replyScoped bool) 
        -int standardPromptHistoryLimit(mode standardPromptMode, requested int)
        -string buildStandardChatSystemPrompt(mode standardPromptMode)
        -string buildStandardChatUserPrompt(selfProfile botidentity.Profile, historyLines []string, contextLines []string, currentInput string)
        -string standardChatLinesBlock(lines []string)
        -string pointerString(value *string)
        -string defaultInitialChatUserNameLoader(ctx context.Context, chatID string, openID string)
    }

    class InitialChatExecutionPlan {
        +*larkim.P2MessageReceiveV1 Event
        +string ChatID
        +string OpenID
        +string Prompt
        +string UserInput
        +[]string Files
        +*arktools.Impl Tools
        +history.MessageList MessageList
    }

    class ChatHandler {
        +runStandardChat(ctx context.Context, event *larkim.P2MessageReceiveV1, chatType string, size *int, args ...string) error
        -standardPromptMode resolveStandardPromptMode(event *larkim.P2MessageReceiveV1)
        -int standardPromptHistoryLimit(mode standardPromptMode, requested int)
        -string buildStandardChatSystemPrompt(mode standardPromptMode)
        -string buildStandardChatUserPrompt(selfProfile botidentity.Profile, historyLines []string, contextLines []string, currentInput string)
        -string standardChatLinesBlock(lines []string)
        -string pointerString(value *string)
    }

    class agenticChatPromptContext {
        +string UserRequest
        +botidentity.Profile SelfProfile
        +[]string HistoryLines
        +[]string ContextLines
        +[]string Topics
        +bool ReplyScoped
    }

    class AgenticPlanBuilder {
        +buildAgenticChatExecutionPlan(ctx context.Context, req AgenticChatGenerationPlanRequest) (AgenticChatExecutionPlan, error)
        -buildAgenticChatPromptContext(ctx context.Context, ...) (history.MessageList, agenticChatPromptContext, error)
        -string agenticChatSystemPrompt()
        -string buildAgenticChatUserPrompt(ctx agenticChatPromptContext)
        -[]string agenticChatStyleHints(ctx agenticChatPromptContext)
        -string agenticChatLinesBlock(lines []string)
    }

    class ContinuationReplyTurnExecutor {
        +ExecuteContinuationReplyTurn(ctx context.Context, req ContinuationReplyTurnRequest) error
        -string continuationReplyTurnSystemPrompt()
        -string buildContinuationReplyTurnUserPrompt(req ContinuationReplyTurnRequest, selfProfile botidentity.Profile)
        -string replyTurnLinesBlock(lines []string)
    }

    StandardPlanBuilder --> InitialChatExecutionPlan
    StandardPlanBuilder ..> botidentity.Profile : uses
    ChatHandler ..> botidentity.Profile : standardChatBotProfileLoader

    AgenticPlanBuilder --> agenticChatPromptContext
    AgenticPlanBuilder ..> botidentity.Profile : SelfProfile

    ContinuationReplyTurnExecutor ..> botidentity.Profile : continuationReplyBotProfileLoader

    class ark_dal_ResponsesImpl {
        +Do(ctx context.Context, sysPrompt string, userPrompt string, files ...string) (iter.Seq[string], error)
    }

    ark_dal_ResponsesImpl ..> InitialChatExecutionPlan : consumes Prompt, UserInput
    ark_dal_ResponsesImpl ..> AgenticChatExecutionPlan : consumes
    ark_dal_ResponsesImpl ..> ContinuationReplyTurnExecutor : continuation prompts

File-Level Changes

Change Details Files
Refactor standard chat flow prompt building to be code-driven, mode-aware (direct vs ambient), and inject bot identity plus constrained JSON reply contract.
  • Remove DB-driven prompt templates and inline prompt construction in standard chat flow and chat handler.
  • Introduce standardPromptMode (direct/ambient) with history length limits and mode-specific system prompt guidance.
  • Build user prompts that include bot identity, trimmed recent history, scoped context, and current input, moving runtime context from system prompt to user input.
  • Ensure prompts emphasize JSON-only output with specific fields and de-emphasize anthropomorphic tone and overuse of mentions.
internal/application/lark/agentruntime/chatflow/standard_plan.go
internal/application/lark/handlers/chat_handler.go
internal/application/lark/agentruntime/chatflow/standard_plan_test.go
internal/application/lark/handlers/chat_handler_test.go
Introduce a bot identity module and propagate bot self-profile into prompts and history normalization.
  • Add botidentity.Profile with caching and Lark application lookup plus PromptIdentityLines helper.
  • Inject bot profile into standard, agentic, and continuation reply prompts to clarify self_open_id and naming.
  • Normalize history search and message filtering so bot messages are stored with real open_id but rendered as user name '你'.
  • Add tests covering profile caching, fallback behavior, and identity-line presence in various prompts.
internal/application/lark/botidentity/profile.go
internal/application/lark/botidentity/profile_test.go
internal/application/lark/agentruntime/chatflow/agentic_plan.go
internal/application/lark/agentruntime/chatflow/agentic_plan_test.go
internal/application/lark/agentruntime/reply_turn.go
internal/application/lark/agentruntime/default_continuation_reply_turn_executor_test.go
internal/application/lark/history/msg.go
internal/application/lark/history/msg_test.go
internal/application/lark/history/utils.go
internal/application/lark/history/search.go
internal/application/lark/history/search_test.go
Enhance history search capabilities and expose them via a richer search_history tool.
  • Extend HybridSearchRequest with user_name, message_type, and stricter chat_id requirement plus selector validation.
  • Factor query and filter building into helpers that support bot self aliasing (user_id == bot_open_id or '你').
  • Normalize search results so bot entries use OpenID consistently and display user_name '你'.
  • Update search_history tool spec and handler to accept new filters, enforce chat scope, and be testable via injected HybridSearch function.
internal/application/lark/history/search.go
internal/application/lark/handlers/history_search_handler.go
internal/application/lark/handlers/history_search_handler_test.go
internal/application/lark/handlers/tools_test.go
Extend intent recognition schema to include reply mode, user willingness, interrupt risk, and retrieval hints, and wire these into rate limiting and logging.
  • Add ReplyMode, willingness, interrupt_risk, needs_history, and needs_web fields to IntentAnalysis with JSON parsing and Sanitize logic.
  • Expand intent system prompt to instruct the model to produce the new fields and enable minimal reasoning for the intent call.
  • Record additional intent attributes in tracing/logs and add tests verifying parsing and sanitization of the new fields.
  • In intent recognition operator, force direct reply mode for obvious direct interactions (p2p, mentions, reply-to-bot, follow-up) and seed continuation analyses with direct reply metadata.
internal/application/lark/intentmeta/types.go
internal/application/lark/intent/recognizer.go
internal/application/lark/intent/recognizer_test.go
internal/application/lark/messages/ops/intent_recognize_op.go
internal/application/lark/messages/ops/intent_recognize_op_test.go
Refine smart rate limiting to distinguish trigger types, incorporate intent-level reply scoring, and track trigger type in decisions.
  • Augment Decider with injectable config lookups and RNG plus TriggerType on Decision; add scoring based on reply confidence, willingness, and interrupt risk.
  • Differentiate trigger types from intent analysis (direct/mention vs passive vs active interject) and bypass passive limits for direct replies.
  • Split SmartRateLimiter weights into hard and soft maps, introduce soft-load hourly thresholds, and ensure mention traffic does not consume hard budget but contributes to soft load.
  • Adjust random reply decisions to use injectable RNG and updated activity thresholds, and add comprehensive tests for weights, mention behavior, decider scoring, and config-driven thresholds.
internal/application/lark/ratelimit/integration.go
internal/application/lark/ratelimit/rate_limiter.go
internal/application/lark/ratelimit/rate_limiter_test.go
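The hard/soft weight split described in this entry can be sketched like so: mentions cost nothing against the hard budget (direct replies bypass the passive quota) but still contribute to soft load, so overall chat activity is tracked. The concrete weight values below are illustrative, not taken from the code.

```go
package main

import "fmt"

type TriggerType string

const (
	TriggerTypeIntent  TriggerType = "intent"
	TriggerTypeRandom  TriggerType = "random"
	TriggerTypeMention TriggerType = "mention"
)

// Illustrative weight maps: hard weights gate the reply budget, soft
// weights measure load for threshold adjustment.
var (
	hardTriggerWeights = map[TriggerType]float64{TriggerTypeIntent: 1, TriggerTypeRandom: 1, TriggerTypeMention: 0}
	softTriggerWeights = map[TriggerType]float64{TriggerTypeIntent: 1, TriggerTypeRandom: 1, TriggerTypeMention: 0.5}
)

// countWeighted sums the weights of recent sends under a given weight map.
func countWeighted(sends []TriggerType, weights map[TriggerType]float64) float64 {
	var total float64
	for _, t := range sends {
		total += weights[t]
	}
	return total
}

func main() {
	sends := []TriggerType{TriggerTypeMention, TriggerTypeMention, TriggerTypeIntent}
	fmt.Println(countWeighted(sends, hardTriggerWeights)) // 1: only the intent reply consumes hard budget
	fmt.Println(countWeighted(sends, softTriggerWeights)) // 2: mentions still add to soft load
}
```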
Tighten reply and agentic prompts to reduce anthropomorphism and guide mention usage across the runtime.
  • Update continuation reply, agentic chat, and reply planner system prompts to discourage filler particles ("哟","呀","啦") and over-human tone.
  • Add explicit guidance for when and how to @ members, preferring minimal, purposeful mentions and thread continuation.
  • Adjust send_message tool description to recommend sparing, purposeful mentions and clarify mention formatting.
  • Add tests validating presence of the new constraints and guidance strings in the various prompts and tool descriptions.
internal/application/lark/agentruntime/default_continuation_reply_turn_executor_test.go
internal/application/lark/agentruntime/capability/default_reply_planner.go
internal/application/lark/agentruntime/capability/default_reply_planner_test.go
internal/application/lark/agentruntime/chatflow/agentic_plan.go
internal/application/lark/agentruntime/chatflow/agentic_plan_test.go
internal/application/lark/handlers/send_message_handler.go
internal/application/lark/handlers/send_message_handler_test.go
Fix bot message recording to persist the real bot OpenID while still presenting the speaker as '你' in history.
  • Introduce resolveRecordedBotIdentity to derive OpenID and userName for bot messages, falling back to configured bot OpenID when sender ID is empty.
  • Use the resolved identity when inserting reply and send message logs into OpenSearch instead of hard-coding OpenID to '你'.
  • Add tests for identity resolution behavior with explicit sender ID and config-based fallback.
  • Align FilterMessage and history normalization logic with the new identity handling.
internal/infrastructure/lark_dal/larkmsg/record.go
internal/infrastructure/lark_dal/larkmsg/record_test.go
internal/application/lark/history/msg.go
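A sketch of the `resolveRecordedBotIdentity` behavior described in this entry, under the assumption that it takes the event's sender ID plus a configured bot OpenID; the signature and parameter names are guesses for illustration.

```go
package main

import "fmt"

// resolveRecordedBotIdentity picks the open_id persisted with a bot message
// and the display name shown in history: the real ID is stored, while the
// speaker is rendered as "你".
func resolveRecordedBotIdentity(senderOpenID, configuredBotOpenID string) (openID, userName string) {
	openID = senderOpenID
	if openID == "" {
		openID = configuredBotOpenID // fallback when the event lacks a sender ID
	}
	return openID, "你"
}

func main() {
	id, name := resolveRecordedBotIdentity("", "ou_bot_cfg")
	fmt.Println(id, name) // ou_bot_cfg 你
}
```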
Adjust miscellaneous infrastructure defaults and tests to align with the new behavior.
  • Simplify ark_dal response options by dropping unused Text format configuration in favor of reasoning config only.
  • Rename agentic streaming card header from 'BetaGo' to 'Agentic Chat'.
  • Mark ToolSpec.Result as json:"-" to keep result encoder out of serialized tool schemas.
  • Wire continuation reply executor to pass bot profile into user prompt builder and fix minor formatting in defaultChatGenerationPlanExecutor.
internal/infrastructure/ark_dal/responses.go
internal/infrastructure/lark_dal/larkmsg/streaming_agentic.go
pkg/xcommand/typed.go
internal/application/lark/agentruntime/reply_turn.go

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help


@sourcery-ai sourcery-ai bot left a comment


Hey - I found 4 issues and left some overall feedback:

  • The standard prompt helpers (standardPromptMode, standardPromptHistoryLimit, buildStandardChatSystemPrompt, buildStandardChatUserPrompt, standardChatLinesBlock) are now implemented in both chatflow/standard_plan.go and handlers/chat_handler.go; consider extracting them into a shared package/function set so the two code paths don't diverge over time.
  • Hybrid history search now rejects requests without any selector (keywords/user_id/user_name/message_type/time range), but the search_history tool definition explicitly allows empty keywords; you may want to either relax the hasHybridSearchSelector constraint or add explicit parameter validation/notes in the tool's handler/description, so the LLM doesn't hit unexpected errors when it omits all filters.
Prompt for AI agents
Please address the comments from this code review:

## Overall comments
- The standard prompt helpers (standardPromptMode, standardPromptHistoryLimit, buildStandardChatSystemPrompt, buildStandardChatUserPrompt, standardChatLinesBlock) are now implemented in both chatflow/standard_plan.go and handlers/chat_handler.go; consider extracting them into a shared package/function set so the two code paths don't diverge over time.
- Hybrid history search now rejects requests without any selector (keywords/user_id/user_name/message_type/time range), but the search_history tool definition explicitly allows empty keywords; you may want to either relax the hasHybridSearchSelector constraint or add explicit parameter validation/notes in the tool's handler/description, so the LLM doesn't hit unexpected errors when it omits all filters.

## Individual comments

### Comment 1
<location path="internal/application/lark/ratelimit/integration.go" line_range="68-69" />
<code_context>
 	}

-	allowed, reason := d.limiter.Allow(ctx, chatID, TriggerTypeIntent)
+	triggerType := triggerTypeFromIntentAnalysis(analysis)
+	if triggerType == TriggerTypeMention {
+		return &Decision{
+			Allowed:     true,
</code_context>
<issue_to_address>
**issue (bug_risk):** Direct/mention-style replies bypass the SmartRateLimiter entirely, which could lead to unbounded reply volume in busy group chats.

In `DecideIntentReply`, when `triggerTypeFromIntentAnalysis` returns `TriggerTypeMention`, the function returns `Allowed: true` immediately without calling `s.limiter.Allow` or performing any load/cooldown check ("direct messages bypass passive rate limiting"). Since `triggerTypeFromIntentAnalysis` maps `ReplyModeDirect` to `TriggerTypeMention`, and `shouldForceDirectReplyMode` promotes many scenarios (p2p chat / reply-to-bot / follow-up) to `ReplyModeDirect`, a large share of traffic can bypass the hard/soft quotas and burst protection. Replies are recorded via `RecordReply`, but mentions themselves never go through an allow check, so a single chat can drive very high throughput by repeatedly @-ing the bot. Consider at least running `TriggerTypeMention` through the soft-load path (or adding a dedicated cap), even if some intent-based limits are relaxed for it.
</issue_to_address>

### Comment 2
<location path="internal/application/lark/history/search_test.go" line_range="11-20" />
<code_context>
+	"github.com/BetaGoRobot/BetaGo-Redefine/internal/application/lark/botidentity"
+)
+
+func TestBuildHybridSearchQueryRequiresChatID(t *testing.T) {
+	_, err := buildHybridSearchQuery(
+		HybridSearchRequest{QueryText: []string{"机器人"}, TopK: 3},
+		[]string{"机器人"},
+		nil,
+		time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
+	)
+	if err == nil {
+		t.Fatal("expected chat_id requirement error")
+	}
+}
+
+func TestBuildHybridSearchQueryIncludesMetadataFilters(t *testing.T) {
</code_context>
<issue_to_address>
**suggestion (testing):** Add a test covering the `hasHybridSearchSelector` constraint when chat_id is present but no selector is provided.

The new logic returns an error when `chat_id` is set but no selector is provided (no non-empty query, user_id, user_name, message_type, start_time, or end_time). Please add a test that uses a `HybridSearchRequest` with a non-empty `ChatID` and all other fields empty, and asserts that `buildHybridSearchQuery`/`buildHybridSearchFilters` return the expected "at least one history selector is required" error, so this guard against unbounded history scans is covered by tests.

建议实现:

```golang
import (
	"path/filepath"
	"strings"
	"testing"
	"time"

	"github.com/BetaGoRobot/BetaGo-Redefine/internal/application/lark/botidentity"
)

```

```golang
func TestBuildHybridSearchQueryRequiresChatID(t *testing.T) {
	_, err := buildHybridSearchQuery(
		HybridSearchRequest{QueryText: []string{"机器人"}, TopK: 3},
		[]string{"机器人"},
		nil,
		time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
	)
	if err == nil {
		t.Fatal("expected chat_id requirement error")
	}
}

func TestBuildHybridSearchQueryRequiresHistorySelectorWhenChatIDPresent(t *testing.T) {
	_, err := buildHybridSearchQuery(
		HybridSearchRequest{
			ChatID: "oc_test_chat",
		},
		nil,
		nil,
		time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
	)
	if err == nil {
		t.Fatal("expected history selector requirement error when chat_id is set")
	}
	if !strings.Contains(err.Error(), "at least one history selector is required") {
		t.Fatalf("unexpected error: %v", err)
	}
}

```
</issue_to_address>

### Comment 3
<location path="internal/application/lark/ratelimit/rate_limiter_test.go" line_range="301-264" />
<code_context>
+func TestDeciderUsesConfigDrivenThresholdAndReplyMode(t *testing.T) {
</code_context>
<issue_to_address>
**suggestion (testing):**`DecideIntentReply``analysis` 为 nil 和 `NeedReply` 为 false 的情况补充测试覆盖。

新的 `Decider.DecideIntentReply` 有两条未测试的早返回路径,对安全性很重要:

- `analysis == nil` 时应返回 `Allowed=false`、reason 为 `"意图分析缺失"`,以及期望的 `TriggerType` 零值。
- `analysis.NeedReply == false` 时应在任何 limiter 调用前短路返回,并在 `Decision` 中给出清楚表明“不需要回复”的 reason。

请增加一个小型表驱动测试(或两个聚焦测试),覆盖这些场景,并断言返回结果,同时在 `NeedReply == false` 的路径上断言 limiter 不会被调用,以防在这些退化场景中出现回归。

建议实现:

```golang
func TestDeciderUsesConfigDrivenThresholdAndReplyMode(t *testing.T) {
	s, rdb := setupTestRedis(t)
	defer s.Close()

	ctx := context.Background()
	limiter := NewSmartRateLimiter(&Config{
		MaxMessagesPerHour:  100,
		MaxMessagesPerDay:   100,
		MinIntervalSeconds:  0.0,
		CooldownBaseSeconds: 1.0,
		MaxCooldownSeconds:  10.0,

```

I can only see the beginning of `TestDeciderUsesConfigDrivenThresholdAndReplyMode`, so I'll describe the additional tests that should be appended near the bottom of `rate_limiter_test.go` (after the existing decider tests, reusing the same helpers and types you already have in the file/package):

1. **Add a fake limiter that lets us assert it is not called:**

```go
type noopLimiter struct {
	called bool
}

func (n *noopLimiter) Allow(ctx context.Context, chatID string, trigger TriggerType) (bool, string) {
	n.called = true
	return true, ""
}
```

Adjust the type/method signature to match your existing `Limiter` interface if it differs.

2. **Test that `analysis == nil` returns the safety guard decision:**

```go
func TestDeciderIntentReply_NilAnalysis(t *testing.T) {
	s, _ := setupTestRedis(t)
	defer s.Close()

	ctx := context.Background()
	limiter := NewSmartRateLimiter(&Config{
		MaxMessagesPerHour:  100,
		MaxMessagesPerDay:   100,
		MinIntervalSeconds:  0,
		CooldownBaseSeconds: 1,
		MaxCooldownSeconds:  10,
	})

	decider := NewDecider(limiter /* plus any other dependencies you already pass here */)

	decision := decider.DecideIntentReply(ctx, "chat-1", nil)

	if decision.Allowed {
		t.Fatalf("expected Allowed=false when analysis is nil, got true")
	}
	if decision.Reason != "意图分析缺失" {
		t.Fatalf("unexpected reason: %q, want %q", decision.Reason, "意图分析缺失")
	}
	if decision.Trigger != TriggerType(0) { // or the exact zero-value constant if you have one
		t.Fatalf("expected zero-value TriggerType, got %v", decision.Trigger)
	}
}
```

Adjust the `NewDecider` arguments and the decision struct field names to match your actual types (e.g. `Decision.TriggerType` vs `Decision.Trigger`).

3. **Test that `analysis.NeedReply == false` short-circuits and does not consult the limiter:**

```go
func TestDeciderIntentReply_NoNeedReplyDoesNotHitLimiter(t *testing.T) {
	s, _ := setupTestRedis(t)
	defer s.Close()

	ctx := context.Background()
	nl := &noopLimiter{}

	decider := NewDecider(nl /* plus any other dependencies you already pass here */)

	analysis := &IntentAnalysis{
		NeedReply: false,
		// fill any other required fields to produce a valid struct
	}

	decision := decider.DecideIntentReply(ctx, "chat-2", analysis)

	if nl.called {
		t.Fatalf("expected limiter not to be called when NeedReply=false")
	}
	if decision.Allowed {
		t.Fatalf("expected Allowed=false when NeedReply=false, got true")
	}
	if !strings.Contains(decision.Reason, "无需回复") && !strings.Contains(decision.Reason, "不需要回复") {
		t.Fatalf("unexpected reason: %q (should clearly indicate no reply is needed)", decision.Reason)
	}
}
```

4. **Imports / helpers:**

- Ensure `strings` is imported at the top of the test file:

```go
import (
    "context"
    "strings"
    "testing"
    // ...existing imports
)
```

- If your `Decision` or `IntentAnalysis` types live in another package (e.g. `ratelimit` or `lark`), qualify them appropriately (e.g. `ratelimit.IntentAnalysis`).

These additions will fully cover the two early-return safety paths of `Decider.DecideIntentReply` and assert both the outputs and that the limiter is not consulted when `NeedReply == false`.
</issue_to_address>

### Comment 4
<location path="internal/application/lark/botidentity/profile_test.go" line_range="22-20" />
<code_context>
+func TestGetProfileCacheCachesByIdentity(t *testing.T) {
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding a focused test for `PromptIdentityLines` to lock in the identity prompt contract.

Since many of the new prompt tests (standard, agentic, continuation) only exercise `PromptIdentityLines` indirectly through higher-level builders, a direct unit test would help pin down its contract. For example, add a test that:

- Calls `PromptIdentityLines(Profile{AppID: "cli_x", BotOpenID: "ou_bot", BotName: "BetaGo"})`.
- Asserts the slice includes the expected `self_open_id`, `self_app_id`, `self_name` entries and the explanatory lines about sender `user_id`/`open_id` and mention target `open_id`.

That way, if the identity prompt format changes, there's a single, explicit test defining the intended behavior.

Suggested implementation:

```golang
func useWorkspaceConfigPath(t *testing.T) {
	t.Helper()
	configPath, err := filepath.Abs("../../../../.dev/config.toml")
	if err != nil {
		t.Fatalf("resolve config path: %v", err)
	}
	t.Setenv("BETAGO_CONFIG_PATH", configPath)
}

func TestPromptIdentityLines_BasicContract(t *testing.T) {
	profile := Profile{
		AppID:     "cli_x",
		BotOpenID: "ou_bot",
		BotName:   "BetaGo",
	}

	lines := PromptIdentityLines(profile)

	assertLineContainsAll(t, lines, []string{"self_open_id", "ou_bot"})
	assertLineContainsAll(t, lines, []string{"self_app_id", "cli_x"})
	assertLineContainsAll(t, lines, []string{"self_name", "BetaGo"})

	// Explanatory line(s) about sender identifiers (user_id/open_id).
	assertLineContainsAll(t, lines, []string{"sender", "user_id", "open_id"})
	// Explanatory line(s) about mention targets by open_id.
	assertLineContainsAll(t, lines, []string{"mention", "open_id"})
}

func assertLineContainsAll(t *testing.T, lines []string, substrs []string) {
	t.Helper()

	for _, line := range lines {
		matches := true
		for _, sub := range substrs {
			if !strings.Contains(line, sub) {
				matches = false
				break
			}
		}
		if matches {
			return
		}
	}

	t.Fatalf("no line in PromptIdentityLines output contained all substrings %v; lines:\n%s", substrs, strings.Join(lines, "\n"))
}

func TestGetProfileCacheCachesByIdentity(t *testing.T) {

```

1. Ensure `strings` is imported at the top of `internal/application/lark/botidentity/profile_test.go`, e.g. add `"strings"` to the existing import block.
2. If the test file already defines a generic assertion helper similar to `assertLineContainsAll`, you may want to reuse that instead of introducing a new helper, to keep the style consistent.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
Original comment in English

Hey - I've found 4 issues, and left some high level feedback:

  • The standard prompt helpers (standardPromptMode, standardPromptHistoryLimit, buildStandardChatSystemPrompt, buildStandardChatUserPrompt, standardChatLinesBlock) are now implemented in both chatflow/standard_plan.go and handlers/chat_handler.go; consider extracting them into a shared package/function set to avoid divergence between the two code paths over time.
  • Hybrid history search now rejects requests without any selector (keywords/user_id/user_name/message_type/time range), but the search_history tool spec explicitly allows empty keywords; you may want to either relax hasHybridSearchSelector or add explicit argument validation/clarification in the tool handler/description so LLM calls that omit all filters don’t result in unexpected errors.

## Individual Comments

### Comment 1
<location path="internal/application/lark/ratelimit/integration.go" line_range="68-69" />
<code_context>
 	}

-	allowed, reason := d.limiter.Allow(ctx, chatID, TriggerTypeIntent)
+	triggerType := triggerTypeFromIntentAnalysis(analysis)
+	if triggerType == TriggerTypeMention {
+		return &Decision{
+			Allowed:     true,
</code_context>
<issue_to_address>
**issue (bug_risk):** Direct/mention-style replies bypass the SmartRateLimiter entirely, which may cause unbounded reply volume in busy chats.

In `DecideIntentReply`, when `triggerTypeFromIntentAnalysis` returns `TriggerTypeMention`, the function returns `Allowed: true` without invoking `s.limiter.Allow` or any load/cooldown checks (`直达消息绕过被动频控`). Because `triggerTypeFromIntentAnalysis` maps `ReplyModeDirect` to `TriggerTypeMention` and `shouldForceDirectReplyMode` upgrades many cases (P2P / reply-to-bot / follow-up) into `ReplyModeDirect`, a large share of traffic can skip hard/soft quotas and burst protection. Replies are logged via `RecordReply`, but mentions themselves are never subject to allow checks, so a single chat can drive very high throughput by continuously pinging the bot. Please consider at least routing `TriggerTypeMention` through the soft-load path (or adding a dedicated upper bound) even if some intent-based limits are relaxed.
</issue_to_address>

### Comment 2
<location path="internal/application/lark/history/search_test.go" line_range="11-20" />
<code_context>
+	"github.com/BetaGoRobot/BetaGo-Redefine/internal/application/lark/botidentity"
+)
+
+func TestBuildHybridSearchQueryRequiresChatID(t *testing.T) {
+	_, err := buildHybridSearchQuery(
+		HybridSearchRequest{QueryText: []string{"机器人"}, TopK: 3},
+		[]string{"机器人"},
+		nil,
+		time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
+	)
+	if err == nil {
+		t.Fatal("expected chat_id requirement error")
+	}
+}
+
+func TestBuildHybridSearchQueryIncludesMetadataFilters(t *testing.T) {
</code_context>
<issue_to_address>
**suggestion (testing):** Add a test to cover the `hasHybridSearchSelector` constraint when chat_id is present but no selectors are provided.

The new logic returns an error when `chat_id` is set but no selectors are provided (no non-blank query, user_id, user_name, message_type, start_time, end_time). Please add a test that uses a `HybridSearchRequest` with a non-empty `ChatID` and all other fields empty, and asserts that `buildHybridSearchQuery`/`buildHybridSearchFilters` returns the expected "at least one history selector is required" error, so this guard against unbounded history scans is covered by tests.

Suggested implementation:

```golang
import (
	"path/filepath"
	"strings"
	"testing"
	"time"

	"github.com/BetaGoRobot/BetaGo-Redefine/internal/application/lark/botidentity"
)

```

```golang
func TestBuildHybridSearchQueryRequiresChatID(t *testing.T) {
	_, err := buildHybridSearchQuery(
		HybridSearchRequest{QueryText: []string{"机器人"}, TopK: 3},
		[]string{"机器人"},
		nil,
		time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
	)
	if err == nil {
		t.Fatal("expected chat_id requirement error")
	}
}

func TestBuildHybridSearchQueryRequiresHistorySelectorWhenChatIDPresent(t *testing.T) {
	_, err := buildHybridSearchQuery(
		HybridSearchRequest{
			ChatID: "oc_test_chat",
		},
		nil,
		nil,
		time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
	)
	if err == nil {
		t.Fatal("expected history selector requirement error when chat_id is set")
	}
	if !strings.Contains(err.Error(), "at least one history selector is required") {
		t.Fatalf("unexpected error: %v", err)
	}
}

```
</issue_to_address>

### Comment 3
<location path="internal/application/lark/ratelimit/rate_limiter_test.go" line_range="301-264" />
<code_context>
+func TestDeciderUsesConfigDrivenThresholdAndReplyMode(t *testing.T) {
</code_context>
<issue_to_address>
**suggestion (testing):** Add coverage for `DecideIntentReply` when `analysis` is nil and when `NeedReply` is false.

The new `Decider.DecideIntentReply` has two untested early-return paths that are important for safety:

- `analysis == nil` should return `Allowed=false`, reason `"意图分析缺失"`, and the intended zero-value `TriggerType`.
- `analysis.NeedReply == false` should short-circuit before any limiter calls and return a `Decision` whose reason clearly reflects that no reply is needed.

Please add a small table-driven test (or two focused tests) that covers these cases and asserts both the outputs and that the limiter is not consulted in the `NeedReply == false` path, to prevent regressions in these degenerate scenarios.

Suggested implementation:

```golang
func TestDeciderUsesConfigDrivenThresholdAndReplyMode(t *testing.T) {
	s, rdb := setupTestRedis(t)
	defer s.Close()

	ctx := context.Background()
	limiter := NewSmartRateLimiter(&Config{
		MaxMessagesPerHour:  100,
		MaxMessagesPerDay:   100,
		MinIntervalSeconds:  0.0,
		CooldownBaseSeconds: 1.0,
		MaxCooldownSeconds:  10.0,

```

I can only see the beginning of `TestDeciderUsesConfigDrivenThresholdAndReplyMode`, so I’ll describe the additional tests that should be appended near the bottom of `rate_limiter_test.go` (after the existing decider tests, reusing the same helpers and types you already have in the file/package):

1. **Add a fake limiter that lets us assert it is not called:**

```go
type noopLimiter struct {
	called bool
}

func (n *noopLimiter) Allow(ctx context.Context, chatID string, trigger TriggerType) (bool, string) {
	n.called = true
	return true, ""
}
```

Adjust the type/method signature to match your existing `Limiter` interface if it differs.

2. **Test when `analysis == nil` returns the safety guard decision:**

```go
func TestDeciderIntentReply_NilAnalysis(t *testing.T) {
	s, _ := setupTestRedis(t)
	defer s.Close()

	ctx := context.Background()
	limiter := NewSmartRateLimiter(&Config{
		MaxMessagesPerHour:  100,
		MaxMessagesPerDay:   100,
		MinIntervalSeconds:  0,
		CooldownBaseSeconds: 1,
		MaxCooldownSeconds:  10,
	})

	decider := NewDecider(limiter /* plus any other dependencies you already pass here */)

	decision := decider.DecideIntentReply(ctx, "chat-1", nil)

	if decision.Allowed {
		t.Fatalf("expected Allowed=false when analysis is nil, got true")
	}
	if decision.Reason != "意图分析缺失" {
		t.Fatalf("unexpected reason: %q, want %q", decision.Reason, "意图分析缺失")
	}
	if decision.Trigger != TriggerType(0) { // or the exact zero-value constant if you have one
		t.Fatalf("expected zero-value TriggerType, got %v", decision.Trigger)
	}
}
```

Adjust `NewDecider` arguments and the decision struct field names to match your actual types (e.g. `Decision.TriggerType` vs `Decision.Trigger`).

3. **Test when `analysis.NeedReply == false` short-circuits and does not consult the limiter:**

```go
func TestDeciderIntentReply_NoNeedReplyDoesNotHitLimiter(t *testing.T) {
	s, _ := setupTestRedis(t)
	defer s.Close()

	ctx := context.Background()
	nl := &noopLimiter{}

	decider := NewDecider(nl /* plus any other dependencies you already pass here */)

	analysis := &IntentAnalysis{
		NeedReply: false,
		// fill any other required fields to produce a valid struct
	}

	decision := decider.DecideIntentReply(ctx, "chat-2", analysis)

	if nl.called {
		t.Fatalf("expected limiter not to be called when NeedReply=false")
	}
	if decision.Allowed {
		t.Fatalf("expected Allowed=false when NeedReply=false, got true")
	}
	if !strings.Contains(decision.Reason, "无需回复") && !strings.Contains(decision.Reason, "不需要回复") {
		t.Fatalf("unexpected reason: %q (should clearly indicate no reply is needed)", decision.Reason)
	}
}
```

4. **Imports / helpers:**

- Ensure `strings` is imported at the top of the test file:

```go
import (
    "context"
    "strings"
    "testing"
    // ...existing imports
)
```

- If your `Decision` or `IntentAnalysis` types live in another package (e.g. `ratelimit` or `lark`), qualify them appropriately (e.g. `ratelimit.IntentAnalysis`).

These additions will fully cover the two early-return safety paths for `Decider.DecideIntentReply` and assert both the outputs and that the limiter is not consulted when `NeedReply == false`.
</issue_to_address>

### Comment 4
<location path="internal/application/lark/botidentity/profile_test.go" line_range="22-20" />
<code_context>
+func TestGetProfileCacheCachesByIdentity(t *testing.T) {
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding a focused test for `PromptIdentityLines` to lock in the identity prompt contract.

Since many of the new prompt tests (standard, agentic, continuation) only exercise `PromptIdentityLines` indirectly through higher-level builders, a direct unit test would help pin down its contract. For example, add a test that:

- Calls `PromptIdentityLines(Profile{AppID: "cli_x", BotOpenID: "ou_bot", BotName: "BetaGo"})`.
- Asserts the slice includes the expected `self_open_id`, `self_app_id`, `self_name` entries and the explanatory lines about sender `user_id`/`open_id` and mention target `open_id`.

That way, if the identity prompt format changes, there’s a single, explicit test defining the intended behavior.

Suggested implementation:

```golang
func useWorkspaceConfigPath(t *testing.T) {
	t.Helper()
	configPath, err := filepath.Abs("../../../../.dev/config.toml")
	if err != nil {
		t.Fatalf("resolve config path: %v", err)
	}
	t.Setenv("BETAGO_CONFIG_PATH", configPath)
}

func TestPromptIdentityLines_BasicContract(t *testing.T) {
	profile := Profile{
		AppID:     "cli_x",
		BotOpenID: "ou_bot",
		BotName:   "BetaGo",
	}

	lines := PromptIdentityLines(profile)

	assertLineContainsAll(t, lines, []string{"self_open_id", "ou_bot"})
	assertLineContainsAll(t, lines, []string{"self_app_id", "cli_x"})
	assertLineContainsAll(t, lines, []string{"self_name", "BetaGo"})

	// Explanatory line(s) about sender identifiers (user_id/open_id).
	assertLineContainsAll(t, lines, []string{"sender", "user_id", "open_id"})
	// Explanatory line(s) about mention targets by open_id.
	assertLineContainsAll(t, lines, []string{"mention", "open_id"})
}

func assertLineContainsAll(t *testing.T, lines []string, substrs []string) {
	t.Helper()

	for _, line := range lines {
		matches := true
		for _, sub := range substrs {
			if !strings.Contains(line, sub) {
				matches = false
				break
			}
		}
		if matches {
			return
		}
	}

	t.Fatalf("no line in PromptIdentityLines output contained all substrings %v; lines:\n%s", substrs, strings.Join(lines, "\n"))
}

func TestGetProfileCacheCachesByIdentity(t *testing.T) {

```

1. Ensure `strings` is imported at the top of `internal/application/lark/botidentity/profile_test.go`, e.g. add `"strings"` to the existing import block.
2. If the test file already defines a generic assertion helper similar to `assertLineContainsAll`, you may want to reuse that instead of introducing a new helper to keep style consistent.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment on lines +68 to +69
triggerType := triggerTypeFromIntentAnalysis(analysis)
if triggerType == TriggerTypeMention {
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

issue (bug_risk): 直达/mention 形式的回复会完全绕过 SmartRateLimiter,这可能在繁忙的群聊中导致无限制的回复量。

DecideIntentReply 中,当 triggerTypeFromIntentAnalysis 返回 TriggerTypeMention 时,函数会直接返回 Allowed: true,不会调用 s.limiter.Allow,也不会做任何负载/冷却检查(直达消息绕过被动频控)。由于 triggerTypeFromIntentAnalysis 会把 ReplyModeDirect 映射为 TriggerTypeMention,而 shouldForceDirectReplyMode 又会将很多场景(单聊 / 回复机器人 / follow-up)提升为 ReplyModeDirect,因此很大一部分流量可以绕过硬/软配额和突发保护。虽然回复会通过 RecordReply 记录,但 mention 本身从不会经过 allow 检查,所以单个群聊可以通过不断 @ 机器人来驱动非常高的吞吐量。建议至少将 TriggerTypeMention 走一遍 soft-load 路径(或者增加一个专门的上限),即便对部分基于意图的限制有所放宽。

Original comment in English

issue (bug_risk): Direct/mention-style replies bypass the SmartRateLimiter entirely, which may cause unbounded reply volume in busy chats.

In DecideIntentReply, when triggerTypeFromIntentAnalysis returns TriggerTypeMention, the function returns Allowed: true without invoking s.limiter.Allow or any load/cooldown checks (直达消息绕过被动频控). Because triggerTypeFromIntentAnalysis maps ReplyModeDirect to TriggerTypeMention and shouldForceDirectReplyMode upgrades many cases (P2P / reply-to-bot / follow-up) into ReplyModeDirect, a large share of traffic can skip hard/soft quotas and burst protection. Replies are logged via RecordReply, but mentions themselves are never subject to allow checks, so a single chat can drive very high throughput by continuously pinging the bot. Please consider at least routing TriggerTypeMention through the soft-load path (or adding a dedicated upper bound) even if some intent-based limits are relaxed.

Comment on lines +11 to +20
func TestBuildHybridSearchQueryRequiresChatID(t *testing.T) {
_, err := buildHybridSearchQuery(
HybridSearchRequest{QueryText: []string{"机器人"}, TopK: 3},
[]string{"机器人"},
nil,
time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
)
if err == nil {
t.Fatal("expected chat_id requirement error")
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

suggestion (testing): 增加一个测试,用于覆盖当 chat_id 存在但未提供任何 selector 时 hasHybridSearchSelector 的约束。

新逻辑会在 chat_id 已设置但没有提供任何 selector(没有非空的 query、user_id、user_name、message_type、start_time、end_time)时返回错误。请增加一个测试,使用一个 ChatID 非空、其他字段为空的 HybridSearchRequest,并断言 buildHybridSearchQuery/buildHybridSearchFilters 会返回预期的 "at least one history selector is required" 错误,从而让这个用于防止无界历史扫描的保护逻辑被测试覆盖到。

建议实现:

import (
	"path/filepath"
	"strings"
	"testing"
	"time"

	"github.com/BetaGoRobot/BetaGo-Redefine/internal/application/lark/botidentity"
)
func TestBuildHybridSearchQueryRequiresChatID(t *testing.T) {
	_, err := buildHybridSearchQuery(
		HybridSearchRequest{QueryText: []string{"机器人"}, TopK: 3},
		[]string{"机器人"},
		nil,
		time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
	)
	if err == nil {
		t.Fatal("expected chat_id requirement error")
	}
}

func TestBuildHybridSearchQueryRequiresHistorySelectorWhenChatIDPresent(t *testing.T) {
	_, err := buildHybridSearchQuery(
		HybridSearchRequest{
			ChatID: "oc_test_chat",
		},
		nil,
		nil,
		time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
	)
	if err == nil {
		t.Fatal("expected history selector requirement error when chat_id is set")
	}
	if !strings.Contains(err.Error(), "at least one history selector is required") {
		t.Fatalf("unexpected error: %v", err)
	}
}
Original comment in English

suggestion (testing): Add a test to cover the hasHybridSearchSelector constraint when chat_id is present but no selectors are provided.

The new logic returns an error when chat_id is set but no selectors are provided (no non-blank query, user_id, user_name, message_type, start_time, end_time). Please add a test that uses a HybridSearchRequest with a non-empty ChatID and all other fields empty, and asserts that buildHybridSearchQuery/buildHybridSearchFilters returns the expected "at least one history selector is required" error, so this guard against unbounded history scans is covered by tests.

Suggested implementation:

import (
	"path/filepath"
	"strings"
	"testing"
	"time"

	"github.com/BetaGoRobot/BetaGo-Redefine/internal/application/lark/botidentity"
)
func TestBuildHybridSearchQueryRequiresChatID(t *testing.T) {
	_, err := buildHybridSearchQuery(
		HybridSearchRequest{QueryText: []string{"机器人"}, TopK: 3},
		[]string{"机器人"},
		nil,
		time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
	)
	if err == nil {
		t.Fatal("expected chat_id requirement error")
	}
}

func TestBuildHybridSearchQueryRequiresHistorySelectorWhenChatIDPresent(t *testing.T) {
	_, err := buildHybridSearchQuery(
		HybridSearchRequest{
			ChatID: "oc_test_chat",
		},
		nil,
		nil,
		time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
	)
	if err == nil {
		t.Fatal("expected history selector requirement error when chat_id is set")
	}
	if !strings.Contains(err.Error(), "at least one history selector is required") {
		t.Fatalf("unexpected error: %v", err)
	}
}

if !allowed {
t.Fatalf("active interject should be allowed initially: %s", reason)
}
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

suggestion (testing):DecideIntentReplyanalysis 为 nil 和 NeedReply 为 false 的情况补充测试覆盖。

新的 Decider.DecideIntentReply 有两条未测试的早返回路径,对安全性很重要:

  • analysis == nil 时应返回 Allowed=false、reason 为 "意图分析缺失",以及期望的 TriggerType 零值。
  • analysis.NeedReply == false 时应在任何 limiter 调用前短路返回,并在 Decision 中给出清楚表明“不需要回复”的 reason。

请增加一个小型表驱动测试(或两个聚焦测试),覆盖这些场景,并断言返回结果,同时在 NeedReply == false 的路径上断言 limiter 不会被调用,以防在这些退化场景中出现回归。

建议实现:

func TestDeciderUsesConfigDrivenThresholdAndReplyMode(t *testing.T) {
	s, rdb := setupTestRedis(t)
	defer s.Close()

	ctx := context.Background()
	limiter := NewSmartRateLimiter(&Config{
		MaxMessagesPerHour:  100,
		MaxMessagesPerDay:   100,
		MinIntervalSeconds:  0.0,
		CooldownBaseSeconds: 1.0,
		MaxCooldownSeconds:  10.0,

我目前只能看到 TestDeciderUsesConfigDrivenThresholdAndReplyMode 的开头,所以会描述应当在 rate_limiter_test.go 底部(现有 decider 测试之后,复用文件/包中已有的 helper 和类型)追加的额外测试:

  1. 添加一个用于断言 limiter 未被调用的 fake limiter:
type noopLimiter struct {
	called bool
}

func (n *noopLimiter) Allow(ctx context.Context, chatID string, trigger TriggerType) (bool, string) {
	n.called = true
	return true, ""
}

如果你现有的 Limiter 接口方法签名不同,请做相应调整。

  1. 测试 analysis == nil 时返回安全兜底的决策:
func TestDeciderIntentReply_NilAnalysis(t *testing.T) {
	s, _ := setupTestRedis(t)
	defer s.Close()

	ctx := context.Background()
	limiter := NewSmartRateLimiter(&Config{
		MaxMessagesPerHour:  100,
		MaxMessagesPerDay:   100,
		MinIntervalSeconds:  0,
		CooldownBaseSeconds: 1,
		MaxCooldownSeconds:  10,
	})

	decider := NewDecider(limiter /* plus any other dependencies you already pass here */)

	decision := decider.DecideIntentReply(ctx, "chat-1", nil)

	if decision.Allowed {
		t.Fatalf("expected Allowed=false when analysis is nil, got true")
	}
	if decision.Reason != "意图分析缺失" {
		t.Fatalf("unexpected reason: %q, want %q", decision.Reason, "意图分析缺失")
	}
	if decision.Trigger != TriggerType(0) { // or the exact zero-value constant if you have one
		t.Fatalf("expected zero-value TriggerType, got %v", decision.Trigger)
	}
}

根据你的实际类型调整 NewDecider 的入参以及决策结构体的字段名(例如 Decision.TriggerType vs Decision.Trigger)。

  1. 测试 analysis.NeedReply == false 时会短路而且不会调用 limiter:
func TestDeciderIntentReply_NoNeedReplyDoesNotHitLimiter(t *testing.T) {
	s, _ := setupTestRedis(t)
	defer s.Close()

	ctx := context.Background()
	nl := &noopLimiter{}

	decider := NewDecider(nl /* plus any other dependencies you already pass here */)

	analysis := &IntentAnalysis{
		NeedReply: false,
		// fill any other required fields to produce a valid struct
	}

	decision := decider.DecideIntentReply(ctx, "chat-2", analysis)

	if nl.called {
		t.Fatalf("expected limiter not to be called when NeedReply=false")
	}
	if decision.Allowed {
		t.Fatalf("expected Allowed=false when NeedReply=false, got true")
	}
	if !strings.Contains(decision.Reason, "无需回复") && !strings.Contains(decision.Reason, "不需要回复") {
		t.Fatalf("unexpected reason: %q (should clearly indicate no reply is needed)", decision.Reason)
	}
}
  1. imports / helper:
  • 确保在测试文件顶部引入 strings
import (
    "context"
    "strings"
    "testing"
    // ...existing imports
)
  • 如果你的 DecisionIntentAnalysis 类型在其他包中(如 ratelimitlark),请相应加上包名前缀(例如 ratelimit.IntentAnalysis)。

这些补充测试可以完整覆盖 Decider.DecideIntentReply 的两条早返回安全路径,并同时断言返回结果以及在 NeedReply == false 时 limiter 不会被调用。

Original comment in English

suggestion (testing): Add coverage for DecideIntentReply when analysis is nil and when NeedReply is false.

The new Decider.DecideIntentReply has two untested early-return paths that are important for safety:

  • analysis == nil should return Allowed=false, reason "意图分析缺失", and the intended zero-value TriggerType.
  • analysis.NeedReply == false should short-circuit before any limiter calls and return a Decision whose reason clearly reflects that no reply is needed.

Please add a small table-driven test (or two focused tests) that covers these cases and asserts both the outputs and that the limiter is not consulted in the NeedReply == false path, to prevent regressions in these degenerate scenarios.

Suggested implementation:

func TestDeciderUsesConfigDrivenThresholdAndReplyMode(t *testing.T) {
	s, rdb := setupTestRedis(t)
	defer s.Close()

	ctx := context.Background()
	limiter := NewSmartRateLimiter(&Config{
		MaxMessagesPerHour:  100,
		MaxMessagesPerDay:   100,
		MinIntervalSeconds:  0.0,
		CooldownBaseSeconds: 1.0,
		MaxCooldownSeconds:  10.0,

I can only see the beginning of TestDeciderUsesConfigDrivenThresholdAndReplyMode, so I’ll describe the additional tests that should be appended near the bottom of rate_limiter_test.go (after the existing decider tests, reusing the same helpers and types you already have in the file/package):

  1. Add a fake limiter that lets us assert it is not called:
type noopLimiter struct {
	called bool
}

func (n *noopLimiter) Allow(ctx context.Context, chatID string, trigger TriggerType) (bool, string) {
	n.called = true
	return true, ""
}

Adjust the type/method signature to match your existing Limiter interface if it differs.

  1. Test when analysis == nil returns the safety guard decision:
func TestDeciderIntentReply_NilAnalysis(t *testing.T) {
	s, _ := setupTestRedis(t)
	defer s.Close()

	ctx := context.Background()
	limiter := NewSmartRateLimiter(&Config{
		MaxMessagesPerHour:  100,
		MaxMessagesPerDay:   100,
		MinIntervalSeconds:  0,
		CooldownBaseSeconds: 1,
		MaxCooldownSeconds:  10,
	})

	decider := NewDecider(limiter /* plus any other dependencies you already pass here */)

	decision := decider.DecideIntentReply(ctx, "chat-1", nil)

	if decision.Allowed {
		t.Fatalf("expected Allowed=false when analysis is nil, got true")
	}
	if decision.Reason != "意图分析缺失" {
		t.Fatalf("unexpected reason: %q, want %q", decision.Reason, "意图分析缺失")
	}
	if decision.Trigger != TriggerType(0) { // or the exact zero-value constant if you have one
		t.Fatalf("expected zero-value TriggerType, got %v", decision.Trigger)
	}
}

Adjust NewDecider arguments and the decision struct field names to match your actual types (e.g. Decision.TriggerType vs Decision.Trigger).

  1. Test when analysis.NeedReply == false short-circuits and does not consult the limiter:
func TestDeciderIntentReply_NoNeedReplyDoesNotHitLimiter(t *testing.T) {
	s, _ := setupTestRedis(t)
	defer s.Close()

	ctx := context.Background()
	nl := &noopLimiter{}

	decider := NewDecider(nl /* plus any other dependencies you already pass here */)

	analysis := &IntentAnalysis{
		NeedReply: false,
		// fill any other required fields to produce a valid struct
	}

	decision := decider.DecideIntentReply(ctx, "chat-2", analysis)

	if nl.called {
		t.Fatalf("expected limiter not to be called when NeedReply=false")
	}
	if decision.Allowed {
		t.Fatalf("expected Allowed=false when NeedReply=false, got true")
	}
	if !strings.Contains(decision.Reason, "无需回复") && !strings.Contains(decision.Reason, "不需要回复") {
		t.Fatalf("unexpected reason: %q (should clearly indicate no reply is needed)", decision.Reason)
	}
}
  1. Imports / helpers:
  • Ensure strings is imported at the top of the test file:
import (
    "context"
    "strings"
    "testing"
    // ...existing imports
)
  • If your Decision or IntentAnalysis types live in another package (e.g. ratelimit or lark), qualify them appropriately (e.g. ratelimit.IntentAnalysis).

These additions will fully cover the two early-return safety paths for Decider.DecideIntentReply and assert both the outputs and that the limiter is not consulted when NeedReply == false.


suggestion (testing): Consider adding a focused test for PromptIdentityLines to lock in the identity prompt contract.

Since many of the new prompt tests (standard, agentic, continuation) only exercise PromptIdentityLines indirectly through higher-level builders, a direct unit test would help pin down its contract. For example, add a test that:

  • Calls PromptIdentityLines(Profile{AppID: "cli_x", BotOpenID: "ou_bot", BotName: "BetaGo"}).
  • Asserts the returned slice includes the expected self_open_id, self_app_id, and self_name entries, along with the explanatory lines about sender user_id/open_id and mention target open_id.

That way, if the identity prompt format changes, there is a single, explicit test defining the intended behavior.

Suggested implementation:

func useWorkspaceConfigPath(t *testing.T) {
	t.Helper()
	configPath, err := filepath.Abs("../../../../.dev/config.toml")
	if err != nil {
		t.Fatalf("resolve config path: %v", err)
	}
	t.Setenv("BETAGO_CONFIG_PATH", configPath)
}

func TestPromptIdentityLines_BasicContract(t *testing.T) {
	profile := Profile{
		AppID:     "cli_x",
		BotOpenID: "ou_bot",
		BotName:   "BetaGo",
	}

	lines := PromptIdentityLines(profile)

	assertLineContainsAll(t, lines, []string{"self_open_id", "ou_bot"})
	assertLineContainsAll(t, lines, []string{"self_app_id", "cli_x"})
	assertLineContainsAll(t, lines, []string{"self_name", "BetaGo"})

	// Explanatory line(s) about sender identifiers (user_id/open_id).
	assertLineContainsAll(t, lines, []string{"sender", "user_id", "open_id"})
	// Explanatory line(s) about mention targets by open_id.
	assertLineContainsAll(t, lines, []string{"mention", "open_id"})
}

func assertLineContainsAll(t *testing.T, lines []string, substrs []string) {
	t.Helper()

	for _, line := range lines {
		matches := true
		for _, sub := range substrs {
			if !strings.Contains(line, sub) {
				matches = false
				break
			}
		}
		if matches {
			return
		}
	}

	t.Fatalf("no line in PromptIdentityLines output contained all substrings %v; lines:\n%s", substrs, strings.Join(lines, "\n"))
}

func TestGetProfileCacheCachesByIdentity(t *testing.T) {
  1. Ensure "strings" is imported at the top of internal/application/lark/botidentity/profile_test.go, e.g. add it to the existing import block.
  2. If the test file already defines a generic assertion helper similar to assertLineContainsAll, reuse it instead of introducing a new one, to keep the style consistent.

@kevinmatthe kevinmatthe merged commit b6cff67 into master Mar 26, 2026
10 checks passed