Conversation
- Implement Feishu fetching, caching, and fallback logic for bot identity information
- Provide prompt identity injection so the LLM understands its own role boundaries
- Unify the labeling of the bot's own messages in history, displayed uniformly as 「你」
- Fix bot openID not being persisted correctly when recording messages

✨ feat(history): upgrade history search capabilities
- Add query dimensions for filtering by user name and message type
- Improve @-mention replacement, automatically labeling the bot's own @ as 「你」
- Add a search_history tool so the LLM can retrieve conversation history on demand at runtime
- Remove the default up-front recall from the chat flow in favor of on-demand LLM calls, cutting wasted retrieval

✨ feat(intent): extend intent recognition dimensions
- Add recognition fields for reply mode, user willingness, and interrupt risk
- Add pre-checks for whether history or web retrieval is needed
- Automatically mark @-mentions, private chats, and replies to the bot as direct replies, bypassing passive rate limiting
- Improve validation and normalization of intent results for more reliable recognition

✨ feat(ratelimit): refine the smart rate-limiting strategy
- Distinguish hard traffic budgets from soft load thresholds; direct messages no longer consume the passive-reply quota
- Compute a composite reply score from reply confidence, user willingness, and interrupt risk, adjusting thresholds dynamically
- Apply different rate-limit weights per reply mode to reduce the intrusiveness of proactive interjections

✨ feat(prompt): unify and polish reply style across the pipeline
- Add filler-word restrictions banning 「哟」「呀」「啦」 and similar verbal tics, reducing excessive anthropomorphism
- Standardize @ usage: mention members only when a specific response is needed; no gratuitous @s
- Inject bot identity into every conversation prompt to avoid role-identity confusion

♻️ refactor(chatflow): restructure the standard chat flow
- Drop the database prompt-template dependency in favor of prompts maintained in code
- Split prompt logic for the direct and ambient reply modes to fit each scenario
- Remove the default up-front context recall; the LLM now calls the search tool on demand

✅ test: add unit-test coverage for core functionality
- Cover core logic in the botidentity, history search, intent recognition, and rate-limiting modules
- Add test cases for prompt constraints and tool-argument validation
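The composite reply score computed from reply confidence, user willingness, and interrupt risk could be sketched roughly like this; the field names and weights here are illustrative assumptions, not the project's actual implementation:

```go
package main

import "fmt"

// intentSignals mirrors the recognition fields described above
// (each on an assumed 0-100 scale). Names and weights are guesses.
type intentSignals struct {
	ReplyConfidence int // how sure the model is that a reply is wanted
	UserWillingness int // how much the user seems to want a reply
	InterruptRisk   int // how disruptive an unprompted reply would be
}

// replyScore combines the three signals into one score: high confidence
// and willingness raise it, interrupt risk lowers it.
func replyScore(s intentSignals) float64 {
	return 0.5*float64(s.ReplyConfidence) +
		0.3*float64(s.UserWillingness) -
		0.2*float64(s.InterruptRisk)
}

func main() {
	score := replyScore(intentSignals{ReplyConfidence: 80, UserWillingness: 70, InterruptRisk: 30})
	fmt.Println(score)
}
```

A score like this can then be compared against a per-chat threshold to decide whether a passive reply is worth sending.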
Reviewer's Guide
Refactors the standard and agentic chat flows to use code-defined prompts and declare the bot identity explicitly; enhances history search and mention normalization; extends intent analysis metadata and integrates it with a smarter rate limiter; fixes bot OpenID persistence in message history; and adds targeted tests around these behaviors and the new tools.
Sequence diagram for intent-driven reply with smart rate limiting
sequenceDiagram
actor User
participant Lark as LarkServer
participant ChatOp as ChatMsgOperator
participant IntentOp as IntentRecognizeOperator
participant Decider as RateLimitDecider
participant Limiter as SmartRateLimiter
participant ChatFlow as ChatFlowExecutor
User->>Lark: Send message
Lark->>ChatOp: Deliver P2MessageReceiveV1
ChatOp->>IntentOp: Fetch intent analysis
IntentOp-->>ChatOp: IntentAnalysis analysis
ChatOp->>Decider: DecideIntentReply(ctx, chatID, openID, analysis)
Decider->>IntentAnalysis: Sanitize()
alt Direct or mention scenario
Decider->>Decider: triggerTypeFromIntentAnalysis
Decider-->>ChatOp: Decision{Allowed:true, TriggerTypeMention}
else Passive or random scenario
Decider->>Limiter: Allow(ctx, chatID, triggerType)
Limiter-->>Decider: allowed, reason
alt Allowed and score >= threshold
Decider-->>ChatOp: Decision{Allowed:true, TriggerTypeIntent}
else Allowed but score < threshold
Decider->>Limiter: GetStats(ctx, chatID)
Limiter-->>Decider: *ChatStats
Decider->>Decider: calculateIntentReplyScore
Decider->>Decider: adjustIntentFallbackRate
Decider->>Decider: randomFloat64()
alt random pass
Decider-->>ChatOp: Decision{Allowed:true, TriggerTypeIntent}
else random reject
Decider-->>ChatOp: Decision{Allowed:false, TriggerTypeIntent}
end
end
end
alt Decision.Allowed == true
ChatOp->>Decider: RecordReply(ctx, chatID, decision.TriggerType)
Decider->>Limiter: RecordSend
Limiter-->>Decider: ok
ChatOp->>ChatFlow: Run standard or agentic chat
ChatFlow-->>User: Bot reply message
else Decision.Allowed == false
ChatOp-->>Lark: No reply (return)
end
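The branching in the diagram above can be condensed into a small standalone Go sketch; all type and field names, and the fallback-probability logic, are assumptions based on the diagram rather than the actual code:

```go
package main

import (
	"fmt"
	"math/rand"
)

// decision mirrors the Decision shape in the diagram; names are assumed.
type decision struct {
	Allowed bool
	Trigger string
}

type decider struct {
	allow       func(chatID string) bool // stands in for SmartRateLimiter.Allow
	threshold   float64                  // minimum intent reply score
	fallback    float64                  // probability of replying anyway below threshold
	randFloat64 func() float64
}

func (d *decider) decide(chatID string, direct bool, score float64) decision {
	if direct { // mention / DM / reply-to-bot: reply without consuming the passive quota
		return decision{Allowed: true, Trigger: "mention"}
	}
	if !d.allow(chatID) { // limiter rejected outright
		return decision{Allowed: false, Trigger: "intent"}
	}
	if score >= d.threshold { // confident enough to reply
		return decision{Allowed: true, Trigger: "intent"}
	}
	// Below threshold: reply only with a small fallback probability.
	return decision{Allowed: d.randFloat64() < d.fallback, Trigger: "intent"}
}

func main() {
	d := &decider{
		allow:       func(string) bool { return true },
		threshold:   60,
		fallback:    0.1,
		randFloat64: rand.Float64,
	}
	fmt.Println(d.decide("oc_x", true, 0).Allowed)   // direct path: always allowed
	fmt.Println(d.decide("oc_x", false, 80).Allowed) // score above threshold
}
```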
File-Level Changes
| Change | Details | Files |
|---|---|---|
| Refactor standard chat flow prompt building to be code-driven and mode-aware (direct vs ambient), injecting bot identity and a constrained JSON reply contract. | | internal/application/lark/agentruntime/chatflow/standard_plan.go, internal/application/lark/handlers/chat_handler.go, internal/application/lark/agentruntime/chatflow/standard_plan_test.go, internal/application/lark/handlers/chat_handler_test.go |
| Introduce a bot identity module and propagate the bot self-profile into prompts and history normalization. | | internal/application/lark/botidentity/profile.go, internal/application/lark/botidentity/profile_test.go, internal/application/lark/agentruntime/chatflow/agentic_plan.go, internal/application/lark/agentruntime/chatflow/agentic_plan_test.go, internal/application/lark/agentruntime/reply_turn.go, internal/application/lark/agentruntime/default_continuation_reply_turn_executor_test.go, internal/application/lark/history/msg.go, internal/application/lark/history/msg_test.go, internal/application/lark/history/utils.go, internal/application/lark/history/search.go, internal/application/lark/history/search_test.go |
| Enhance history search capabilities and expose them via a richer search_history tool. | | internal/application/lark/history/search.go, internal/application/lark/handlers/history_search_handler.go, internal/application/lark/handlers/history_search_handler_test.go, internal/application/lark/handlers/tools_test.go |
| Extend the intent recognition schema with reply mode, user willingness, interrupt risk, and retrieval hints, and wire these into rate limiting and logging. | | internal/application/lark/intentmeta/types.go, internal/application/lark/intent/recognizer.go, internal/application/lark/intent/recognizer_test.go, internal/application/lark/messages/ops/intent_recognize_op.go, internal/application/lark/messages/ops/intent_recognize_op_test.go |
| Refine smart rate limiting to distinguish trigger types, incorporate intent-level reply scoring, and track the trigger type in decisions. | | internal/application/lark/ratelimit/integration.go, internal/application/lark/ratelimit/rate_limiter.go, internal/application/lark/ratelimit/rate_limiter_test.go |
| Tighten reply and agentic prompts to reduce anthropomorphism and standardize @-mention usage across the runtime. | | internal/application/lark/agentruntime/default_continuation_reply_turn_executor_test.go, internal/application/lark/agentruntime/capability/default_reply_planner.go, internal/application/lark/agentruntime/capability/default_reply_planner_test.go, internal/application/lark/agentruntime/chatflow/agentic_plan.go, internal/application/lark/agentruntime/chatflow/agentic_plan_test.go, internal/application/lark/handlers/send_message_handler.go, internal/application/lark/handlers/send_message_handler_test.go |
| Fix bot message recording to persist the real bot OpenID while still presenting the speaker as 「你」 in history. | | internal/infrastructure/lark_dal/larkmsg/record.go, internal/infrastructure/lark_dal/larkmsg/record_test.go, internal/application/lark/history/msg.go |
| Adjust miscellaneous infrastructure defaults and tests to align with the new behavior. | | internal/infrastructure/ark_dal/responses.go, internal/infrastructure/lark_dal/larkmsg/streaming_agentic.go, pkg/xcommand/typed.go, internal/application/lark/agentruntime/reply_turn.go |
Tips and commands
Interacting with Sourcery
- Trigger a new review: Comment `@sourcery-ai review` on the pull request.
- Continue discussions: Reply directly to Sourcery's review comments.
- Generate a GitHub issue from a review comment: Reply under a review comment and ask Sourcery to create an issue from it. You can also reply with `@sourcery-ai issue` to create an issue.
- Generate a pull request title: Write `@sourcery-ai` anywhere in the pull request title to generate a title at any time. You can also comment `@sourcery-ai title` on the PR to (re-)generate the title.
- Generate a pull request summary: Write `@sourcery-ai summary` anywhere in the pull request body to generate a PR summary at that position. You can also comment `@sourcery-ai summary` to (re-)generate the summary at any time.
- Generate a Reviewer's Guide: Comment `@sourcery-ai guide` on the pull request to (re-)generate the Reviewer's Guide at any time.
- Resolve all Sourcery comments at once: Comment `@sourcery-ai resolve` on the pull request to mark all Sourcery comments as resolved. Useful when you have addressed them all and no longer want to see them.
- Dismiss all Sourcery reviews: Comment `@sourcery-ai dismiss` to dismiss all existing Sourcery reviews. To start a fresh round of review, remember to comment `@sourcery-ai review` again.
Customizing Your Experience
Visit your dashboard to:
- Enable or disable review features such as Sourcery-generated PR summaries and the Reviewer's Guide.
- Change the review language.
- Add, remove, or edit custom review instructions.
- Adjust other review settings.
Getting Help
- Contact our support team for questions or feedback.
- Visit our documentation for detailed guides and information.
- Keep in touch with the Sourcery team by following us on X/Twitter, LinkedIn or GitHub.
Original review guide in English
Reviewer's Guide
Refactors standard and agentic chat flows to use code-defined prompts with explicit bot identity, enhances history search and mention normalization, extends intent analysis metadata and integrates it with a smarter rate limiter, fixes bot OpenID persistence in message history, and adds targeted tests around these behaviors and new tools.
Sequence diagram for intent-driven reply with smart rate limiting
sequenceDiagram
actor User
participant Lark as LarkServer
participant ChatOp as ChatMsgOperator
participant IntentOp as IntentRecognizeOperator
participant Decider as RateLimitDecider
participant Limiter as SmartRateLimiter
participant ChatFlow as ChatFlowExecutor
User->>Lark: Send message
Lark->>ChatOp: Deliver P2MessageReceiveV1
ChatOp->>IntentOp: Fetch intent analysis
IntentOp-->>ChatOp: IntentAnalysis analysis
ChatOp->>Decider: DecideIntentReply(ctx, chatID, openID, analysis)
Decider->>IntentAnalysis: Sanitize()
alt Direct or mention scenario
Decider->>Decider: triggerTypeFromIntentAnalysis
Decider-->>ChatOp: Decision{Allowed:true, TriggerTypeMention}
else Passive or random scenario
Decider->>Limiter: Allow(ctx, chatID, triggerType)
Limiter-->>Decider: allowed, reason
alt Allowed and score >= threshold
Decider-->>ChatOp: Decision{Allowed:true, TriggerTypeIntent}
else Allowed but score < threshold
Decider->>Limiter: GetStats(ctx, chatID)
Limiter-->>Decider: *ChatStats
Decider->>Decider: calculateIntentReplyScore
Decider->>Decider: adjustIntentFallbackRate
Decider->>Decider: randomFloat64()
alt random pass
Decider-->>ChatOp: Decision{Allowed:true, TriggerTypeIntent}
else random reject
Decider-->>ChatOp: Decision{Allowed:false, TriggerTypeIntent}
end
end
end
alt Decision.Allowed == true
ChatOp->>Decider: RecordReply(ctx, chatID, decision.TriggerType)
Decider->>Limiter: RecordSend
Limiter-->>Decider: ok
ChatOp->>ChatFlow: Run standard or agentic chat
ChatFlow-->>User: Bot reply message
else Decision.Allowed == false
ChatOp-->>Lark: No reply (return)
end
Sequence diagram for LLM using search_history tool
sequenceDiagram
participant LLM as LLMModel
participant Ark as ResponsesImpl
participant Tool as search_history_handler
participant History as HistorySearcher
participant OS as OpenSearch
LLM->>Ark: Request completion with tools
Ark->>LLM: Tool call request(search_history, args)
LLM-->>Ark: Tool invocation arguments
Ark->>Tool: Handle(ctx, event, metaData, HistorySearchArgs)
Tool->>History: HybridSearch(ctx, HybridSearchRequest, EmbeddingFunc)
History->>History: jieba tokenization, embedding calls
History->>OS: SearchData(index, query)
OS-->>History: Search hits
History->>History: normalizeSearchResultActor, ReplaceMentionToName
History-->>Tool: []*SearchResult
Tool->>metaData: SetExtra(search_result, json)
Tool-->>Ark: success
Ark-->>LLM: Tool result(json)
LLM->>Ark: Final reply
Ark-->>LLM: Completion
Class diagram for botidentity profile and prompt identity injection
classDiagram
class Identity {
+string AppID
+string BotOpenID
+bool Valid()
+string NamespaceKey(prefix string, key string)
}
class Profile {
+string AppID
+string BotOpenID
+string BotName
}
class botidentity_package {
+Identity Current()
+Profile CurrentProfile(ctx context.Context)
+[]string PromptIdentityLines(profile Profile)
+Profile getProfileCache(ctx context.Context, identity Identity)
+Profile resolveProfile(ctx context.Context, identity Identity)
+string profileCacheKey(identity Identity)
+Profile fallbackProfile(identity Identity)
+Profile normalizeProfile(identity Identity, profile Profile)
+Profile mergeProfile(base Profile, override Profile)
+string configuredBotName()
+Profile loadProfileFromLark(ctx context.Context, identity Identity)
+string applicationName(app *larkapplication.Application)
+string applicationI18nName(items []*larkapplication.AppI18nInfo, wantKey string)
+string pointerString(value *string)
+string firstNonEmpty(values ...string)
<<package>>
-func profileLoader(ctx context.Context, identity Identity) (Profile, error)
}
class cache_package {
+T GetOrExecute(ctx context.Context, key string, fn func() (T, error))
<<external>>
}
class infraConfig_package {
+*Config Get()
<<external>>
}
class Config {
+*BaseInfo BaseInfo
}
class BaseInfo {
+string RobotName
}
class lark_dal_package {
+Client Client
<<external>>
}
class Client {
+*ApplicationService Application
}
class ApplicationService {
+*ApplicationV6 V6
}
class ApplicationV6 {
+*ApplicationResource Application
}
class ApplicationResource {
+Get(ctx context.Context, req *larkapplication.GetApplicationReq) (*larkapplication.GetApplicationResp, error)
}
class ApplicationResp {
+bool Success()
+error Error()
+*ApplicationData Data
}
class ApplicationData {
+*larkapplication.Application App
}
Identity <.. botidentity_package : uses
Profile <.. botidentity_package : constructs
botidentity_package --> cache_package : uses cache
botidentity_package --> infraConfig_package : reads config
botidentity_package --> lark_dal_package : calls Client
lark_dal_package --> Client
Client --> ApplicationService
ApplicationService --> ApplicationV6
ApplicationV6 --> ApplicationResource
ApplicationResource --> ApplicationResp
ApplicationResp --> ApplicationData
ApplicationData --> larkapplication.Application
botidentity_package ..> logs : logging
botidentity_package ..> context : ctx
%% Prompt usage in other modules
class StandardChatPromptBuilder {
+string buildStandardChatUserPrompt(selfProfile botidentity.Profile, historyLines []string, contextLines []string, currentInput string)
}
class AgenticChatPromptBuilder {
+string buildAgenticChatUserPrompt(ctx agenticChatPromptContext)
}
class ContinuationReplyPromptBuilder {
+string buildContinuationReplyTurnUserPrompt(req ContinuationReplyTurnRequest, selfProfile botidentity.Profile)
}
StandardChatPromptBuilder ..> Profile : embeds identity lines
AgenticChatPromptBuilder ..> Profile : embeds identity lines
ContinuationReplyPromptBuilder ..> Profile : embeds identity lines
StandardChatPromptBuilder ..> botidentity_package : PromptIdentityLines
AgenticChatPromptBuilder ..> botidentity_package : PromptIdentityLines
ContinuationReplyPromptBuilder ..> botidentity_package : PromptIdentityLines
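A minimal sketch of how the profile fallback and prompt identity lines from this diagram might fit together; the exact wording of the injected lines is an assumption:

```go
package main

import "fmt"

// Profile follows the class diagram above.
type Profile struct {
	AppID     string
	BotOpenID string
	BotName   string
}

// fallbackProfile keeps the prompt usable when the Lark lookup fails
// (a generic name stands in for the real one).
func fallbackProfile(appID, openID string) Profile {
	return Profile{AppID: appID, BotOpenID: openID, BotName: "机器人"}
}

// promptIdentityLines renders the identity block injected into every
// conversation prompt so the LLM knows which participant it is.
func promptIdentityLines(p Profile) []string {
	lines := []string{fmt.Sprintf("你的名字是「%s」。", p.BotName)}
	if p.BotOpenID != "" {
		lines = append(lines, fmt.Sprintf("历史消息中 open_id 为 %s 的发言都是你自己说的。", p.BotOpenID))
	}
	return lines
}

func main() {
	p := fallbackProfile("cli_xxx", "ou_bot")
	for _, line := range promptIdentityLines(p) {
		fmt.Println(line)
	}
}
```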
Class diagram for intent analysis and smart rate limiting
classDiagram
class ReplyMode {
<<enum>>
+ReplyModeDirect
+ReplyModePassiveReply
+ReplyModeActiveInterject
+ReplyModeIgnore
}
class SuggestAction {
<<enum>>
+SuggestActionChat
+SuggestActionReact
+SuggestActionRepeat
+SuggestActionIgnore
}
class InteractionMode {
<<enum>>
+InteractionModeStandard
+InteractionModeAgentic
}
class IntentType {
<<enum>>
+IntentTypeQuestion
+IntentTypeIgnore
.. other values ..
}
class IntentAnalysis {
+string IntentType
+bool NeedReply
+int ReplyConfidence
+string Reason
+SuggestAction SuggestAction
+InteractionMode InteractionMode
+ReplyMode ReplyMode
+int UserWillingness
+int InterruptRisk
+bool NeedsHistory
+bool NeedsWeb
+responses.ReasoningEffort_Enum ReasoningEffort
+UnmarshalJSON(data []byte) error
+Sanitize()
}
class TriggerType {
<<enum>>
+TriggerTypeIntent
+TriggerTypeRandom
+TriggerTypeReaction
+TriggerTypeRepeat
+TriggerTypeMention
}
class Decision {
+bool Allowed
+string Reason
+bool ShouldRetry
+int CooldownSec
+TriggerType TriggerType
}
class ChatStats {
+float64 CurrentActivityScore
+float64 CurrentBurstFactor
+int64 TotalMessages1h
+int64 TotalMessages24h
}
class SmartRateLimiter {
-*Config config
-*redis.Client rdb
-*Metrics metrics
-map[string]*ChatStats localCache
-sync.RWMutex localCacheMu
-map[TriggerType]float64 hardTriggerWeights
-map[TriggerType]float64 softTriggerWeights
+Allow(ctx context.Context, chatID string, triggerType TriggerType) (bool, string)
+RecordSend(ctx context.Context, chatID string, triggerType TriggerType)
+*ChatStats GetStats(ctx context.Context, chatID string)
+float64 getSoftLoadMaxPerHour(stats *ChatStats)
+float64 countMessagesInWindow(recentSends []SendRecord, now time.Time, window time.Duration, weights map[TriggerType]float64)
+float64 hardWeight(triggerType TriggerType)
+updateDerivedStats(stats *ChatStats, hourly [24]HourlyStats, recentSends []SendRecord, now time.Time)
+float64 calculateBurstFactor(recentSends []SendRecord, now time.Time)
+float64 getAdjustedMinInterval(stats *ChatStats, triggerType TriggerType)
+int getAdjustedMaxPerHour(stats *ChatStats)
+int getAdjustedMaxPerDay(stats *ChatStats)
}
class Decider {
- *SmartRateLimiter limiter
- func(ctx context.Context, chatID string, openID string) int intentReplyThreshold
- func(ctx context.Context, chatID string, openID string) int intentFallbackRate
- func() float64 randomFloat64
+NewDecider(limiter *SmartRateLimiter) *Decider
+*Decision DecideIntentReply(ctx context.Context, chatID string, openID string, analysis *intent.IntentAnalysis)
+*Decision DecideRandomReply(ctx context.Context, chatID string, baseProbability float64, analysis *IntentAnalysis)
+void RecordReply(ctx context.Context, chatID string, triggerType TriggerType)
+int getIntentReplyThreshold(ctx context.Context, chatID string, openID string)
+int getIntentFallbackRate(ctx context.Context, chatID string, openID string)
}
class IntentRecognizer {
+*IntentAnalysis Analyze(ctx context.Context, message string, modelID string)
}
class IntentRecognizeOperator {
+Fetch(ctx context.Context, event *larkim.P2MessageReceiveV1, meta *xhandler.BaseMetaData) error
-bool shouldForceDirectReplyMode(event *larkim.P2MessageReceiveV1, observation agentruntime.ShadowObservation, observed bool)
-*intent.IntentAnalysis continuationIntentAnalysis(observation agentruntime.ShadowObservation)
}
class ChatMsgOperator {
+Run(ctx context.Context, event *larkim.P2MessageReceiveV1, meta *xhandler.BaseMetaData) error
}
class ConfigProvider {
+int GetIntentReplyThreshold(ctx context.Context, chatID string, openID string)
+int GetIntentFallbackRate(ctx context.Context, chatID string, openID string)
<<external>>
}
IntentAnalysis --> ReplyMode : uses
IntentAnalysis --> SuggestAction : uses
IntentAnalysis --> InteractionMode : uses
Decider --> SmartRateLimiter : uses limiter
Decider --> IntentAnalysis : reads fields
Decider --> Decision : returns
Decider --> ChatStats : uses for activity
Decider ..> ConfigProvider : via intentReplyThreshold, intentFallbackRate
SmartRateLimiter --> ChatStats : maintains
SmartRateLimiter --> TriggerType : weighted
IntentRecognizeOperator --> IntentAnalysis : populates
IntentRecognizeOperator --> IntentRecognizer : calls LLM
IntentRecognizeOperator --> Decider : indirectly via ChatMsgOperator
ChatMsgOperator --> Decider : DecideIntentReply
ChatMsgOperator ..> IntentAnalysis : from metadata
class HelperFunctions {
+TriggerType triggerTypeFromIntentAnalysis(analysis *intent.IntentAnalysis)
+float64 calculateIntentReplyScore(analysis *intent.IntentAnalysis)
+float64 adjustIntentFallbackRate(fallbackRate float64, stats *ChatStats, analysis *intent.IntentAnalysis, threshold int, score float64)
+bool hasHybridSearchSelector(req HybridSearchRequest)
}
HelperFunctions ..> IntentAnalysis
HelperFunctions ..> ChatStats
HelperFunctions ..> TriggerType
class GlobalDecider {
+*Decider GetDecider()
}
GlobalDecider --> Decider
Class diagram for history search and search_history tool
classDiagram
class SearchResult {
+string MessageID
+string OpenID
+string RawMessage
+string UserName
+string ChatName
+string CreateTime
+string Mentions
}
class HybridSearchRequest {
+[]string QueryText
+int TopK
+string OpenID
+string UserName
+string ChatID
+string MessageType
+string StartTime
+string EndTime
}
class HistorySearcher {
+[]*SearchResult HybridSearch(ctx context.Context, req HybridSearchRequest, embeddingFunc EmbeddingFunc) ([]*SearchResult, error)
+map[string]any buildHybridSearchQuery(req HybridSearchRequest, queryTerms []string, queryVecList []map[string]any, now time.Time) (map[string]any, error)
+[]map[string]any buildHybridSearchFilters(req HybridSearchRequest, now time.Time) ([]map[string]any, error)
+bool hasHybridSearchSelector(req HybridSearchRequest)
+void normalizeSearchResultActor(result *SearchResult)
+gresult.R[time.Time] parseTimeFormat(s string, fmt string)
}
class EmbeddingFunc {
<<interface>>
+vector []float32
+tokenUsage model.Usage
+error err
}
class HistorySearchArgs {
+string Keywords
+int TopK
+string StartTime
+string EndTime
+string OpenID
+string UserName
+string MessageType
}
class historySearchHandler {
+ParseTool(raw string) (HistorySearchArgs, error)
+ToolSpec() xcommand.ToolSpec
+Handle(ctx context.Context, data *larkim.P2MessageReceiveV1, metaData *xhandler.BaseMetaData, arg HistorySearchArgs) error
<<tool handler>>
-func splitByComma(text string) []string
}
class ToolSpec {
+string Name
+string Desc
+*arktools.Param Params
-ToolResultEncoder Result
}
class arktools_Param {
+NewParams(kind string) *arktools.Param
+AddProp(name string, prop *arktools.Prop) *arktools.Param
+AddRequired(name string) *arktools.Param
}
class xhandler_BaseMetaData {
+string ChatID
+string GetExtra(key string) (string, bool)
+void SetExtra(key string, value string)
}
class opensearchClient {
+SearchData(ctx context.Context, index string, query map[string]any) (*opensearchapi.SearchResp, error)
}
class botidentity_Current {
+Identity Current()
<<external>>
}
HistorySearcher --> SearchResult : returns
HistorySearcher --> HybridSearchRequest : input
HistorySearcher ..> EmbeddingFunc : dependency
HistorySearcher ..> opensearchClient : queries index
HistorySearcher ..> botidentity_Current : for OpenID normalization
historySearchHandler --> HistorySearchArgs
historySearchHandler --> ToolSpec
historySearchHandler ..> arktools_Param : build schema
historySearchHandler ..> HistorySearcher : calls HybridSearch via historyHybridSearchFn
historySearchHandler ..> xhandler_BaseMetaData : reads ChatID, writes search_result
ToolSpec --> arktools_Param
class historyHybridSearchFnVar {
-func(ctx context.Context, req HybridSearchRequest, embeddingFunc EmbeddingFunc) ([]*SearchResult, error) historyHybridSearchFn
}
historyHybridSearchFnVar ..> HistorySearcher
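The selector guard implied by hasHybridSearchSelector can be sketched like this; the field names follow the class diagram, while the guard logic itself is an assumption about how unbounded history scans are prevented:

```go
package main

import "fmt"

// hybridSearchRequest mirrors the request fields in the diagram.
type hybridSearchRequest struct {
	QueryText   []string
	ChatID      string
	OpenID      string
	UserName    string
	MessageType string
	StartTime   string
	EndTime     string
}

// hasSelector reports whether the request carries at least one filter
// besides chat_id, so a search can never scan the whole history.
func hasSelector(req hybridSearchRequest) bool {
	if len(req.QueryText) > 0 {
		return true
	}
	for _, s := range []string{req.OpenID, req.UserName, req.MessageType, req.StartTime, req.EndTime} {
		if s != "" {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasSelector(hybridSearchRequest{ChatID: "oc_x"}))                    // chat_id alone is not enough
	fmt.Println(hasSelector(hybridSearchRequest{ChatID: "oc_x", UserName: "Alice"})) // any extra filter qualifies
}
```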
Class diagram for standard and agentic chat prompt builders
classDiagram
class standardPromptMode {
<<enum>>
+standardPromptModeDirect
+standardPromptModeAmbient
}
class StandardPlanBuilder {
+BuildInitialChatExecutionPlan(ctx context.Context, req InitialChatGenerationPlanRequest) (InitialChatExecutionPlan, error)
-standardPromptMode resolveStandardPromptMode(event *larkim.P2MessageReceiveV1, replyScoped bool)
-int standardPromptHistoryLimit(mode standardPromptMode, requested int)
-string buildStandardChatSystemPrompt(mode standardPromptMode)
-string buildStandardChatUserPrompt(selfProfile botidentity.Profile, historyLines []string, contextLines []string, currentInput string)
-string standardChatLinesBlock(lines []string)
-string pointerString(value *string)
-string defaultInitialChatUserNameLoader(ctx context.Context, chatID string, openID string)
}
class InitialChatExecutionPlan {
+*larkim.P2MessageReceiveV1 Event
+string ChatID
+string OpenID
+string Prompt
+string UserInput
+[]string Files
+*arktools.Impl Tools
+history.MessageList MessageList
}
class ChatHandler {
+runStandardChat(ctx context.Context, event *larkim.P2MessageReceiveV1, chatType string, size *int, args ...string) error
-standardPromptMode resolveStandardPromptMode(event *larkim.P2MessageReceiveV1)
-int standardPromptHistoryLimit(mode standardPromptMode, requested int)
-string buildStandardChatSystemPrompt(mode standardPromptMode)
-string buildStandardChatUserPrompt(selfProfile botidentity.Profile, historyLines []string, contextLines []string, currentInput string)
-string standardChatLinesBlock(lines []string)
-string pointerString(value *string)
}
class agenticChatPromptContext {
+string UserRequest
+botidentity.Profile SelfProfile
+[]string HistoryLines
+[]string ContextLines
+[]string Topics
+bool ReplyScoped
}
class AgenticPlanBuilder {
+buildAgenticChatExecutionPlan(ctx context.Context, req AgenticChatGenerationPlanRequest) (AgenticChatExecutionPlan, error)
-buildAgenticChatPromptContext(ctx context.Context, ...) (history.MessageList, agenticChatPromptContext, error)
-string agenticChatSystemPrompt()
-string buildAgenticChatUserPrompt(ctx agenticChatPromptContext)
-[]string agenticChatStyleHints(ctx agenticChatPromptContext)
-string agenticChatLinesBlock(lines []string)
}
class ContinuationReplyTurnExecutor {
+ExecuteContinuationReplyTurn(ctx context.Context, req ContinuationReplyTurnRequest) error
-string continuationReplyTurnSystemPrompt()
-string buildContinuationReplyTurnUserPrompt(req ContinuationReplyTurnRequest, selfProfile botidentity.Profile)
-string replyTurnLinesBlock(lines []string)
}
StandardPlanBuilder --> InitialChatExecutionPlan
StandardPlanBuilder ..> botidentity.Profile : uses
ChatHandler ..> botidentity.Profile : standardChatBotProfileLoader
AgenticPlanBuilder --> agenticChatPromptContext
AgenticPlanBuilder ..> botidentity.Profile : SelfProfile
ContinuationReplyTurnExecutor ..> botidentity.Profile : continuationReplyBotProfileLoader
class ark_dal_ResponsesImpl {
+Do(ctx context.Context, sysPrompt string, userPrompt string, files ...string) (iter.Seq[string], error)
}
ark_dal_ResponsesImpl ..> InitialChatExecutionPlan : consumes Prompt, UserInput
ark_dal_ResponsesImpl ..> AgenticChatExecutionPlan : consumes
ark_dal_ResponsesImpl ..> ContinuationReplyTurnExecutor : continuation prompts
File-Level Changes
| Change | Details | Files |
|---|---|---|
| Refactor standard chat flow prompt building to be code-driven, mode-aware (direct vs ambient), and inject bot identity plus constrained JSON reply contract. | | internal/application/lark/agentruntime/chatflow/standard_plan.go, internal/application/lark/handlers/chat_handler.go, internal/application/lark/agentruntime/chatflow/standard_plan_test.go, internal/application/lark/handlers/chat_handler_test.go |
| Introduce a bot identity module and propagate bot self-profile into prompts and history normalization. | | internal/application/lark/botidentity/profile.go, internal/application/lark/botidentity/profile_test.go, internal/application/lark/agentruntime/chatflow/agentic_plan.go, internal/application/lark/agentruntime/chatflow/agentic_plan_test.go, internal/application/lark/agentruntime/reply_turn.go, internal/application/lark/agentruntime/default_continuation_reply_turn_executor_test.go, internal/application/lark/history/msg.go, internal/application/lark/history/msg_test.go, internal/application/lark/history/utils.go, internal/application/lark/history/search.go, internal/application/lark/history/search_test.go |
| Enhance history search capabilities and expose them via a richer search_history tool. | | internal/application/lark/history/search.go, internal/application/lark/handlers/history_search_handler.go, internal/application/lark/handlers/history_search_handler_test.go, internal/application/lark/handlers/tools_test.go |
| Extend intent recognition schema to include reply mode, user willingness, interrupt risk, and retrieval hints, and wire these into rate limiting and logging. | | internal/application/lark/intentmeta/types.go, internal/application/lark/intent/recognizer.go, internal/application/lark/intent/recognizer_test.go, internal/application/lark/messages/ops/intent_recognize_op.go, internal/application/lark/messages/ops/intent_recognize_op_test.go |
| Refine smart rate limiting to distinguish trigger types, incorporate intent-level reply scoring, and track trigger type in decisions. | | internal/application/lark/ratelimit/integration.go, internal/application/lark/ratelimit/rate_limiter.go, internal/application/lark/ratelimit/rate_limiter_test.go |
| Tighten reply and agentic prompts to reduce anthropomorphism and guide mention usage across the runtime. | | internal/application/lark/agentruntime/default_continuation_reply_turn_executor_test.go, internal/application/lark/agentruntime/capability/default_reply_planner.go, internal/application/lark/agentruntime/capability/default_reply_planner_test.go, internal/application/lark/agentruntime/chatflow/agentic_plan.go, internal/application/lark/agentruntime/chatflow/agentic_plan_test.go, internal/application/lark/handlers/send_message_handler.go, internal/application/lark/handlers/send_message_handler_test.go |
| Fix bot message recording to persist the real bot OpenID while still presenting the speaker as '你' in history. | | internal/infrastructure/lark_dal/larkmsg/record.go, internal/infrastructure/lark_dal/larkmsg/record_test.go, internal/application/lark/history/msg.go |
| Adjust miscellaneous infrastructure defaults and tests to align with the new behavior. | | internal/infrastructure/ark_dal/responses.go, internal/infrastructure/lark_dal/larkmsg/streaming_agentic.go, pkg/xcommand/typed.go, internal/application/lark/agentruntime/reply_turn.go |
Tips and commands
Interacting with Sourcery
- Trigger a new review: Comment `@sourcery-ai review` on the pull request.
- Continue discussions: Reply directly to Sourcery's review comments.
- Generate a GitHub issue from a review comment: Ask Sourcery to create an issue from a review comment by replying to it. You can also reply to a review comment with `@sourcery-ai issue` to create an issue from it.
- Generate a pull request title: Write `@sourcery-ai` anywhere in the pull request title to generate a title at any time. You can also comment `@sourcery-ai title` on the pull request to (re-)generate the title at any time.
- Generate a pull request summary: Write `@sourcery-ai summary` anywhere in the pull request body to generate a PR summary at any time exactly where you want it. You can also comment `@sourcery-ai summary` on the pull request to (re-)generate the summary at any time.
- Generate reviewer's guide: Comment `@sourcery-ai guide` on the pull request to (re-)generate the reviewer's guide at any time.
- Resolve all Sourcery comments: Comment `@sourcery-ai resolve` on the pull request to resolve all Sourcery comments. Useful if you've already addressed all the comments and don't want to see them anymore.
- Dismiss all Sourcery reviews: Comment `@sourcery-ai dismiss` on the pull request to dismiss all existing Sourcery reviews. Especially useful if you want to start fresh with a new review - don't forget to comment `@sourcery-ai review` to trigger a new review!
Customizing Your Experience
Access your dashboard to:
- Enable or disable review features such as the Sourcery-generated pull request summary, the reviewer's guide, and others.
- Change the review language.
- Add, remove or edit custom review instructions.
- Adjust other review settings.
Getting Help
- Contact our support team for questions or feedback.
- Visit our documentation for detailed guides and information.
- Keep in touch with the Sourcery team by following us on X/Twitter, LinkedIn or GitHub.
Hey - I found 4 issues and have some overall feedback:
- The standard prompt helper methods (standardPromptMode, standardPromptHistoryLimit, buildStandardChatSystemPrompt, buildStandardChatUserPrompt, standardChatLinesBlock) are now implemented in both chatflow/standard_plan.go and handlers/chat_handler.go; consider extracting them into a shared package/function set so the two code paths do not drift apart over time.
- Hybrid history search now rejects requests without any selector (keywords/user_id/user_name/message_type/time range), but the search_history tool definition explicitly allows empty keywords; you may want to either relax the hasHybridSearchSelector constraint or add explicit parameter validation/guidance in the tool's handler/description, so the LLM does not hit unexpected errors when it omits all filters.
Prompt for AI agents
Please address the comments from this code review:
## Overall Comments
- The standard prompt helper methods (standardPromptMode, standardPromptHistoryLimit, buildStandardChatSystemPrompt, buildStandardChatUserPrompt, standardChatLinesBlock) are now implemented in both chatflow/standard_plan.go and handlers/chat_handler.go; consider extracting them into a shared package/function set so the two code paths do not drift apart over time.
- Hybrid history search now rejects requests without any selector (keywords/user_id/user_name/message_type/time range), but the search_history tool definition explicitly allows empty keywords; you may want to either relax the hasHybridSearchSelector constraint or add explicit parameter validation/guidance in the tool's handler/description, so the LLM does not hit unexpected errors when it omits all filters.
## Individual Comments
### Comment 1
<location path="internal/application/lark/ratelimit/integration.go" line_range="68-69" />
<code_context>
}
- allowed, reason := d.limiter.Allow(ctx, chatID, TriggerTypeIntent)
+ triggerType := triggerTypeFromIntentAnalysis(analysis)
+ if triggerType == TriggerTypeMention {
+ return &Decision{
+ Allowed: true,
</code_context>
<issue_to_address>
**issue (bug_risk):** Direct/mention-style replies bypass the SmartRateLimiter entirely, which can lead to unbounded reply volume in busy group chats.
In `DecideIntentReply`, when `triggerTypeFromIntentAnalysis` returns `TriggerTypeMention`, the function immediately returns `Allowed: true` without calling `s.limiter.Allow` and without any load/cooldown check ("direct messages bypass passive rate limiting"). Since `triggerTypeFromIntentAnalysis` maps `ReplyModeDirect` to `TriggerTypeMention`, and `shouldForceDirectReplyMode` promotes many scenarios (DMs / replies to the bot / follow-ups) to `ReplyModeDirect`, a large share of traffic can bypass the hard/soft quotas and burst protection. Replies are recorded via `RecordReply`, but mentions themselves never pass through an allow check, so a single chat can drive very high throughput by repeatedly @-mentioning the bot. Consider at least routing `TriggerTypeMention` through the soft-load path (or adding a dedicated cap), even if some of the intent-based limits are relaxed for it.
</issue_to_address>
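One possible shape for the mitigation suggested in this comment, as a standalone sketch: mentions skip the hard quota but still pass a relaxed soft-load check instead of bypassing the limiter entirely. The method name, fields, and the cap multiplier are all assumptions:

```go
package main

import "fmt"

// limiter is a stand-in for the per-chat soft-load state.
type limiter struct {
	sendsLastHour int
	softCap       int
}

// softLoadOK applies a looser, but still finite, cap for mentions
// instead of letting them bypass the limiter entirely.
func (l *limiter) softLoadOK(mention bool) bool {
	limit := l.softCap
	if mention {
		limit *= 3 // mentions get a larger, but bounded, budget
	}
	return l.sendsLastHour < limit
}

func main() {
	l := &limiter{sendsLastHour: 25, softCap: 10}
	fmt.Println(l.softLoadOK(false)) // over the normal soft cap
	fmt.Println(l.softLoadOK(true))  // within the relaxed mention cap
}
```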
### Comment 2
<location path="internal/application/lark/history/search_test.go" line_range="11-20" />
<code_context>
+ "github.com/BetaGoRobot/BetaGo-Redefine/internal/application/lark/botidentity"
+)
+
+func TestBuildHybridSearchQueryRequiresChatID(t *testing.T) {
+ _, err := buildHybridSearchQuery(
+ HybridSearchRequest{QueryText: []string{"机器人"}, TopK: 3},
+ []string{"机器人"},
+ nil,
+ time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
+ )
+ if err == nil {
+ t.Fatal("expected chat_id requirement error")
+ }
+}
+
+func TestBuildHybridSearchQueryIncludesMetadataFilters(t *testing.T) {
</code_context>
<issue_to_address>
**suggestion (testing):** Add a test covering the `hasHybridSearchSelector` constraint when chat_id is present but no selector is provided.
The new logic returns an error when `chat_id` is set but no selector is provided (no non-empty query, user_id, user_name, message_type, start_time, end_time). Add a test that uses a `HybridSearchRequest` with a non-empty `ChatID` and all other fields empty, and assert that `buildHybridSearchQuery`/`buildHybridSearchFilters` return the expected "at least one history selector is required" error, so this guard against unbounded history scans is covered by tests.
Suggested implementation:
```golang
import (
"path/filepath"
"strings"
"testing"
"time"
"github.com/BetaGoRobot/BetaGo-Redefine/internal/application/lark/botidentity"
)
```
```golang
func TestBuildHybridSearchQueryRequiresChatID(t *testing.T) {
_, err := buildHybridSearchQuery(
HybridSearchRequest{QueryText: []string{"机器人"}, TopK: 3},
[]string{"机器人"},
nil,
time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
)
if err == nil {
t.Fatal("expected chat_id requirement error")
}
}
func TestBuildHybridSearchQueryRequiresHistorySelectorWhenChatIDPresent(t *testing.T) {
_, err := buildHybridSearchQuery(
HybridSearchRequest{
ChatID: "oc_test_chat",
},
nil,
nil,
time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
)
if err == nil {
t.Fatal("expected history selector requirement error when chat_id is set")
}
if !strings.Contains(err.Error(), "at least one history selector is required") {
t.Fatalf("unexpected error: %v", err)
}
}
```
</issue_to_address>
### Comment 3
<location path="internal/application/lark/ratelimit/rate_limiter_test.go" line_range="301-264" />
<code_context>
+func TestDeciderUsesConfigDrivenThresholdAndReplyMode(t *testing.T) {
</code_context>
<issue_to_address>
**suggestion (testing):** Add coverage for `DecideIntentReply` when `analysis` is nil and when `NeedReply` is false.
The new `Decider.DecideIntentReply` has two untested early-return paths that are important for safety:
- `analysis == nil` should return `Allowed=false`, reason `"意图分析缺失"`, and the intended zero-value `TriggerType`.
- `analysis.NeedReply == false` should short-circuit before any limiter calls and return a `Decision` whose reason clearly reflects that no reply is needed.
Please add a small table-driven test (or two focused tests) that covers these cases and asserts both the outputs and that the limiter is not consulted in the `NeedReply == false` path, to prevent regressions in these degenerate scenarios.
Suggested implementation:
```golang
func TestDeciderUsesConfigDrivenThresholdAndReplyMode(t *testing.T) {
s, rdb := setupTestRedis(t)
defer s.Close()
ctx := context.Background()
limiter := NewSmartRateLimiter(&Config{
MaxMessagesPerHour: 100,
MaxMessagesPerDay: 100,
MinIntervalSeconds: 0.0,
CooldownBaseSeconds: 1.0,
MaxCooldownSeconds: 10.0,
```
I can only see the beginning of `TestDeciderUsesConfigDrivenThresholdAndReplyMode`, so the tests below should be appended near the bottom of `rate_limiter_test.go` (after the existing decider tests, reusing the helpers and types already in the file/package):
1. **Add a fake limiter that lets us assert it is not called:**
```go
type noopLimiter struct {
called bool
}
func (n *noopLimiter) Allow(ctx context.Context, chatID string, trigger TriggerType) (bool, string) {
n.called = true
return true, ""
}
```
Adjust the type/method signature to match your existing `Limiter` interface if it differs.
2. **Test that `analysis == nil` returns the safety-guard decision:**
```go
func TestDeciderIntentReply_NilAnalysis(t *testing.T) {
s, _ := setupTestRedis(t)
defer s.Close()
ctx := context.Background()
limiter := NewSmartRateLimiter(&Config{
MaxMessagesPerHour: 100,
MaxMessagesPerDay: 100,
MinIntervalSeconds: 0,
CooldownBaseSeconds: 1,
MaxCooldownSeconds: 10,
})
decider := NewDecider(limiter /* plus any other dependencies you already pass here */)
decision := decider.DecideIntentReply(ctx, "chat-1", nil)
if decision.Allowed {
t.Fatalf("expected Allowed=false when analysis is nil, got true")
}
if decision.Reason != "意图分析缺失" {
t.Fatalf("unexpected reason: %q, want %q", decision.Reason, "意图分析缺失")
}
if decision.Trigger != TriggerType(0) { // or the exact zero-value constant if you have one
t.Fatalf("expected zero-value TriggerType, got %v", decision.Trigger)
}
}
```
Adjust the `NewDecider` arguments and the decision struct field names to match your actual types (e.g. `Decision.TriggerType` vs `Decision.Trigger`).
3. **Test that `analysis.NeedReply == false` short-circuits and does not consult the limiter:**
```go
func TestDeciderIntentReply_NoNeedReplyDoesNotHitLimiter(t *testing.T) {
s, _ := setupTestRedis(t)
defer s.Close()
ctx := context.Background()
nl := &noopLimiter{}
decider := NewDecider(nl /* plus any other dependencies you already pass here */)
analysis := &IntentAnalysis{
NeedReply: false,
// fill any other required fields to produce a valid struct
}
decision := decider.DecideIntentReply(ctx, "chat-2", analysis)
if nl.called {
t.Fatalf("expected limiter not to be called when NeedReply=false")
}
if decision.Allowed {
t.Fatalf("expected Allowed=false when NeedReply=false, got true")
}
if !strings.Contains(decision.Reason, "无需回复") && !strings.Contains(decision.Reason, "不需要回复") {
t.Fatalf("unexpected reason: %q (should clearly indicate no reply is needed)", decision.Reason)
}
}
```
4. **Imports / helpers:**
- Ensure `strings` is imported at the top of the test file:
```go
import (
"context"
"strings"
"testing"
// ...existing imports
)
```
- If your `Decision` or `IntentAnalysis` types live in another package (e.g. `ratelimit` or `lark`), qualify them appropriately (e.g. `ratelimit.IntentAnalysis`).
These additions fully cover the two early-return safety paths of `Decider.DecideIntentReply`, asserting both the outputs and that the limiter is not consulted when `NeedReply == false`.
</issue_to_address>
### Comment 4
<location path="internal/application/lark/botidentity/profile_test.go" line_range="22-20" />
<code_context>
+func TestGetProfileCacheCachesByIdentity(t *testing.T) {
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding a focused test for `PromptIdentityLines` to lock in the identity prompt contract.
Since many of the new prompt tests (standard, agentic, continuation) only exercise `PromptIdentityLines` indirectly through higher-level builders, a direct unit test would help pin down its contract. For example, add a test that:
- Calls `PromptIdentityLines(Profile{AppID: "cli_x", BotOpenID: "ou_bot", BotName: "BetaGo"})`.
- Asserts the slice includes the expected `self_open_id`, `self_app_id`, `self_name` entries and the explanatory lines about sender `user_id`/`open_id` and mention target `open_id`.
That way, if the identity prompt format changes, there's a single, explicit test defining the intended behavior.
Suggested implementation:
```golang
func useWorkspaceConfigPath(t *testing.T) {
t.Helper()
configPath, err := filepath.Abs("../../../../.dev/config.toml")
if err != nil {
t.Fatalf("resolve config path: %v", err)
}
t.Setenv("BETAGO_CONFIG_PATH", configPath)
}
func TestPromptIdentityLines_BasicContract(t *testing.T) {
profile := Profile{
AppID: "cli_x",
BotOpenID: "ou_bot",
BotName: "BetaGo",
}
lines := PromptIdentityLines(profile)
assertLineContainsAll(t, lines, []string{"self_open_id", "ou_bot"})
assertLineContainsAll(t, lines, []string{"self_app_id", "cli_x"})
assertLineContainsAll(t, lines, []string{"self_name", "BetaGo"})
// Explanatory line(s) about sender identifiers (user_id/open_id).
assertLineContainsAll(t, lines, []string{"sender", "user_id", "open_id"})
// Explanatory line(s) about mention targets by open_id.
assertLineContainsAll(t, lines, []string{"mention", "open_id"})
}
func assertLineContainsAll(t *testing.T, lines []string, substrs []string) {
t.Helper()
for _, line := range lines {
matches := true
for _, sub := range substrs {
if !strings.Contains(line, sub) {
matches = false
break
}
}
if matches {
return
}
}
t.Fatalf("no line in PromptIdentityLines output contained all substrings %v; lines:\n%s", substrs, strings.Join(lines, "\n"))
}
func TestGetProfileCacheCachesByIdentity(t *testing.T) {
```
1. Ensure `strings` is imported at the top of `internal/application/lark/botidentity/profile_test.go`, e.g. add `"strings"` to the existing import block.
2. If the test file already defines a generic assertion helper similar to `assertLineContainsAll`, reuse it instead of introducing a new one, to keep the style consistent.
</issue_to_address>
| triggerType := triggerTypeFromIntentAnalysis(analysis) | ||
| if triggerType == TriggerTypeMention { |
There was a problem hiding this comment.
issue (bug_risk): 直达/mention 形式的回复会完全绕过 SmartRateLimiter,这可能在繁忙的群聊中导致无限制的回复量。
在 DecideIntentReply 中,当 triggerTypeFromIntentAnalysis 返回 TriggerTypeMention 时,函数会直接返回 Allowed: true,不会调用 s.limiter.Allow,也不会做任何负载/冷却检查(直达消息绕过被动频控)。由于 triggerTypeFromIntentAnalysis 会把 ReplyModeDirect 映射为 TriggerTypeMention,而 shouldForceDirectReplyMode 又会将很多场景(单聊 / 回复机器人 / follow-up)提升为 ReplyModeDirect,因此很大一部分流量可以绕过硬/软配额和突发保护。虽然回复会通过 RecordReply 记录,但 mention 本身从不会经过 allow 检查,所以单个群聊可以通过不断 @ 机器人来驱动非常高的吞吐量。建议至少将 TriggerTypeMention 走一遍 soft-load 路径(或者增加一个专门的上限),即便对部分基于意图的限制有所放宽。
Original comment in English
issue (bug_risk): Direct/mention-style replies bypass the SmartRateLimiter entirely, which may cause unbounded reply volume in busy chats.
In DecideIntentReply, when triggerTypeFromIntentAnalysis returns TriggerTypeMention, the function returns Allowed: true without invoking s.limiter.Allow or any load/cooldown checks (直达消息绕过被动频控). Because triggerTypeFromIntentAnalysis maps ReplyModeDirect to TriggerTypeMention and shouldForceDirectReplyMode upgrades many cases (P2P / reply-to-bot / follow-up) into ReplyModeDirect, a large share of traffic can skip hard/soft quotas and burst protection. Replies are logged via RecordReply, but mentions themselves are never subject to allow checks, so a single chat can drive very high throughput by continuously pinging the bot. Please consider at least routing TriggerTypeMention through the soft-load path (or adding a dedicated upper bound) even if some intent-based limits are relaxed.
| func TestBuildHybridSearchQueryRequiresChatID(t *testing.T) { | ||
| _, err := buildHybridSearchQuery( | ||
| HybridSearchRequest{QueryText: []string{"机器人"}, TopK: 3}, | ||
| []string{"机器人"}, | ||
| nil, | ||
| time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC), | ||
| ) | ||
| if err == nil { | ||
| t.Fatal("expected chat_id requirement error") | ||
| } |
There was a problem hiding this comment.
suggestion (testing): 增加一个测试,用于覆盖当 chat_id 存在但未提供任何 selector 时 hasHybridSearchSelector 的约束。
新逻辑会在 chat_id 已设置但没有提供任何 selector(没有非空的 query、user_id、user_name、message_type、start_time、end_time)时返回错误。请增加一个测试,使用一个 ChatID 非空、其他字段为空的 HybridSearchRequest,并断言 buildHybridSearchQuery/buildHybridSearchFilters 会返回预期的 "at least one history selector is required" 错误,从而让这个用于防止无界历史扫描的保护逻辑被测试覆盖到。
建议实现:
import (
"path/filepath"
"strings"
"testing"
"time"
"github.com/BetaGoRobot/BetaGo-Redefine/internal/application/lark/botidentity"
)func TestBuildHybridSearchQueryRequiresChatID(t *testing.T) {
_, err := buildHybridSearchQuery(
HybridSearchRequest{QueryText: []string{"机器人"}, TopK: 3},
[]string{"机器人"},
nil,
time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
)
if err == nil {
t.Fatal("expected chat_id requirement error")
}
}
func TestBuildHybridSearchQueryRequiresHistorySelectorWhenChatIDPresent(t *testing.T) {
_, err := buildHybridSearchQuery(
HybridSearchRequest{
ChatID: "oc_test_chat",
},
nil,
nil,
time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
)
if err == nil {
t.Fatal("expected history selector requirement error when chat_id is set")
}
if !strings.Contains(err.Error(), "at least one history selector is required") {
t.Fatalf("unexpected error: %v", err)
}
}Original comment in English
suggestion (testing): Add a test to cover the hasHybridSearchSelector constraint when chat_id is present but no selectors are provided.
The new logic returns an error when chat_id is set but no selectors are provided (no non-blank query, user_id, user_name, message_type, start_time, end_time). Please add a test that uses a HybridSearchRequest with a non-empty ChatID and all other fields empty, and asserts that buildHybridSearchQuery/buildHybridSearchFilters returns the expected "at least one history selector is required" error, so this guard against unbounded history scans is covered by tests.
Suggested implementation:
import (
"path/filepath"
"strings"
"testing"
"time"
"github.com/BetaGoRobot/BetaGo-Redefine/internal/application/lark/botidentity"
)func TestBuildHybridSearchQueryRequiresChatID(t *testing.T) {
_, err := buildHybridSearchQuery(
HybridSearchRequest{QueryText: []string{"机器人"}, TopK: 3},
[]string{"机器人"},
nil,
time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
)
if err == nil {
t.Fatal("expected chat_id requirement error")
}
}
func TestBuildHybridSearchQueryRequiresHistorySelectorWhenChatIDPresent(t *testing.T) {
_, err := buildHybridSearchQuery(
HybridSearchRequest{
ChatID: "oc_test_chat",
},
nil,
nil,
time.Date(2026, 3, 26, 12, 0, 0, 0, time.UTC),
)
if err == nil {
t.Fatal("expected history selector requirement error when chat_id is set")
}
if !strings.Contains(err.Error(), "at least one history selector is required") {
t.Fatalf("unexpected error: %v", err)
}
}| if !allowed { | ||
| t.Fatalf("active interject should be allowed initially: %s", reason) | ||
| } | ||
| } |
There was a problem hiding this comment.
suggestion (testing): 为 DecideIntentReply 在 analysis 为 nil 和 NeedReply 为 false 的情况补充测试覆盖。
新的 Decider.DecideIntentReply 有两条未测试的早返回路径,对安全性很重要:
analysis == nil时应返回Allowed=false、reason 为"意图分析缺失",以及期望的TriggerType零值。analysis.NeedReply == false时应在任何 limiter 调用前短路返回,并在Decision中给出清楚表明“不需要回复”的 reason。
请增加一个小型表驱动测试(或两个聚焦测试),覆盖这些场景,并断言返回结果,同时在 NeedReply == false 的路径上断言 limiter 不会被调用,以防在这些退化场景中出现回归。
建议实现:
func TestDeciderUsesConfigDrivenThresholdAndReplyMode(t *testing.T) {
s, rdb := setupTestRedis(t)
defer s.Close()
ctx := context.Background()
limiter := NewSmartRateLimiter(&Config{
MaxMessagesPerHour: 100,
MaxMessagesPerDay: 100,
MinIntervalSeconds: 0.0,
CooldownBaseSeconds: 1.0,
MaxCooldownSeconds: 10.0,我目前只能看到 TestDeciderUsesConfigDrivenThresholdAndReplyMode 的开头,所以会描述应当在 rate_limiter_test.go 底部(现有 decider 测试之后,复用文件/包中已有的 helper 和类型)追加的额外测试:
- 添加一个用于断言 limiter 未被调用的 fake limiter:
type noopLimiter struct {
called bool
}
func (n *noopLimiter) Allow(ctx context.Context, chatID string, trigger TriggerType) (bool, string) {
n.called = true
return true, ""
}如果你现有的 Limiter 接口方法签名不同,请做相应调整。
- 测试
analysis == nil时返回安全兜底的决策:
func TestDeciderIntentReply_NilAnalysis(t *testing.T) {
s, _ := setupTestRedis(t)
defer s.Close()
ctx := context.Background()
limiter := NewSmartRateLimiter(&Config{
MaxMessagesPerHour: 100,
MaxMessagesPerDay: 100,
MinIntervalSeconds: 0,
CooldownBaseSeconds: 1,
MaxCooldownSeconds: 10,
})
decider := NewDecider(limiter /* plus any other dependencies you already pass here */)
decision := decider.DecideIntentReply(ctx, "chat-1", nil)
if decision.Allowed {
t.Fatalf("expected Allowed=false when analysis is nil, got true")
}
if decision.Reason != "意图分析缺失" {
t.Fatalf("unexpected reason: %q, want %q", decision.Reason, "意图分析缺失")
}
if decision.Trigger != TriggerType(0) { // or the exact zero-value constant if you have one
t.Fatalf("expected zero-value TriggerType, got %v", decision.Trigger)
}
}根据你的实际类型调整 NewDecider 的入参以及决策结构体的字段名(例如 Decision.TriggerType vs Decision.Trigger)。
- 测试
analysis.NeedReply == false时会短路而且不会调用 limiter:
func TestDeciderIntentReply_NoNeedReplyDoesNotHitLimiter(t *testing.T) {
s, _ := setupTestRedis(t)
defer s.Close()
ctx := context.Background()
nl := &noopLimiter{}
decider := NewDecider(nl /* plus any other dependencies you already pass here */)
analysis := &IntentAnalysis{
NeedReply: false,
// fill any other required fields to produce a valid struct
}
decision := decider.DecideIntentReply(ctx, "chat-2", analysis)
if nl.called {
t.Fatalf("expected limiter not to be called when NeedReply=false")
}
if decision.Allowed {
t.Fatalf("expected Allowed=false when NeedReply=false, got true")
}
if !strings.Contains(decision.Reason, "无需回复") && !strings.Contains(decision.Reason, "不需要回复") {
t.Fatalf("unexpected reason: %q (should clearly indicate no reply is needed)", decision.Reason)
}
}- imports / helper:
- 确保在测试文件顶部引入
strings:
import (
"context"
"strings"
"testing"
// ...existing imports
)- 如果你的
Decision或IntentAnalysis类型在其他包中(如ratelimit或lark),请相应加上包名前缀(例如ratelimit.IntentAnalysis)。
这些补充测试可以完整覆盖 Decider.DecideIntentReply 的两条早返回安全路径,并同时断言返回结果以及在 NeedReply == false 时 limiter 不会被调用。
Original comment in English
suggestion (testing): Add coverage for DecideIntentReply when analysis is nil and when NeedReply is false.
The new Decider.DecideIntentReply has two untested early-return paths that are important for safety:
analysis == nilshould returnAllowed=false, reason"意图分析缺失", and the intended zero-valueTriggerType.analysis.NeedReply == falseshould short-circuit before any limiter calls and return aDecisionwhose reason clearly reflects that no reply is needed.
Please add a small table-driven test (or two focused tests) that covers these cases and asserts both the outputs and that the limiter is not consulted in the NeedReply == false path, to prevent regressions in these degenerate scenarios.
Suggested implementation:
func TestDeciderUsesConfigDrivenThresholdAndReplyMode(t *testing.T) {
s, rdb := setupTestRedis(t)
defer s.Close()
ctx := context.Background()
limiter := NewSmartRateLimiter(&Config{
MaxMessagesPerHour: 100,
MaxMessagesPerDay: 100,
MinIntervalSeconds: 0.0,
CooldownBaseSeconds: 1.0,
MaxCooldownSeconds: 10.0,I can only see the beginning of TestDeciderUsesConfigDrivenThresholdAndReplyMode, so I’ll describe the additional tests that should be appended near the bottom of rate_limiter_test.go (after the existing decider tests, reusing the same helpers and types you already have in the file/package):
- Add a fake limiter that lets us assert it is not called:
type noopLimiter struct {
called bool
}
func (n *noopLimiter) Allow(ctx context.Context, chatID string, trigger TriggerType) (bool, string) {
n.called = true
return true, ""
}Adjust the type/method signature to match your existing Limiter interface if it differs.
- Test when
analysis == nilreturns the safety guard decision:
func TestDeciderIntentReply_NilAnalysis(t *testing.T) {
s, _ := setupTestRedis(t)
defer s.Close()
ctx := context.Background()
limiter := NewSmartRateLimiter(&Config{
MaxMessagesPerHour: 100,
MaxMessagesPerDay: 100,
MinIntervalSeconds: 0,
CooldownBaseSeconds: 1,
MaxCooldownSeconds: 10,
})
decider := NewDecider(limiter /* plus any other dependencies you already pass here */)
decision := decider.DecideIntentReply(ctx, "chat-1", nil)
if decision.Allowed {
t.Fatalf("expected Allowed=false when analysis is nil, got true")
}
if decision.Reason != "意图分析缺失" {
t.Fatalf("unexpected reason: %q, want %q", decision.Reason, "意图分析缺失")
}
if decision.Trigger != TriggerType(0) { // or the exact zero-value constant if you have one
t.Fatalf("expected zero-value TriggerType, got %v", decision.Trigger)
}
}Adjust NewDecider arguments and the decision struct field names to match your actual types (e.g. Decision.TriggerType vs Decision.Trigger).
- Test when
analysis.NeedReply == falseshort-circuits and does not consult the limiter:
func TestDeciderIntentReply_NoNeedReplyDoesNotHitLimiter(t *testing.T) {
s, _ := setupTestRedis(t)
defer s.Close()
ctx := context.Background()
nl := &noopLimiter{}
decider := NewDecider(nl /* plus any other dependencies you already pass here */)
analysis := &IntentAnalysis{
NeedReply: false,
// fill any other required fields to produce a valid struct
}
decision := decider.DecideIntentReply(ctx, "chat-2", analysis)
if nl.called {
t.Fatalf("expected limiter not to be called when NeedReply=false")
}
if decision.Allowed {
t.Fatalf("expected Allowed=false when NeedReply=false, got true")
}
if !strings.Contains(decision.Reason, "无需回复") && !strings.Contains(decision.Reason, "不需要回复") {
t.Fatalf("unexpected reason: %q (should clearly indicate no reply is needed)", decision.Reason)
}
}- Imports / helpers:
- Ensure
stringsis imported at the top of the test file:
import (
"context"
"strings"
"testing"
// ...existing imports
)- If your
DecisionorIntentAnalysistypes live in another package (e.g.ratelimitorlark), qualify them appropriately (e.g.ratelimit.IntentAnalysis).
These additions will fully cover the two early-return safety paths for Decider.DecideIntentReply and assert both the outputs and that the limiter is not consulted when NeedReply == false.
| t.Fatalf("resolve config path: %v", err) | ||
| } | ||
| t.Setenv("BETAGO_CONFIG_PATH", configPath) | ||
| } |
There was a problem hiding this comment.
suggestion (testing): 考虑为 PromptIdentityLines 增加一个聚焦测试,用来锁定 identity prompt 的约定。
由于很多新的 prompt 测试(standard、agentic、continuation)都是通过更高层的构造函数间接地测试 PromptIdentityLines,增加一个直接的单元测试会有助于明确它的契约。例如,添加一个测试:
- 调用
PromptIdentityLines(Profile{AppID: "cli_x", BotOpenID: "ou_bot", BotName: "BetaGo"})。 - 断言返回的切片中包含预期的
self_open_id、self_app_id、self_name条目,以及关于 sender 的user_id/open_id和 mention 目标open_id的说明性文本行。
这样一来,如果 identity prompt 的格式发生改变,就会有一条单独、明确的测试定义预期行为。
建议实现:
func useWorkspaceConfigPath(t *testing.T) {
t.Helper()
configPath, err := filepath.Abs("../../../../.dev/config.toml")
if err != nil {
t.Fatalf("resolve config path: %v", err)
}
t.Setenv("BETAGO_CONFIG_PATH", configPath)
}
func TestPromptIdentityLines_BasicContract(t *testing.T) {
profile := Profile{
AppID: "cli_x",
BotOpenID: "ou_bot",
BotName: "BetaGo",
}
lines := PromptIdentityLines(profile)
assertLineContainsAll(t, lines, []string{"self_open_id", "ou_bot"})
assertLineContainsAll(t, lines, []string{"self_app_id", "cli_x"})
assertLineContainsAll(t, lines, []string{"self_name", "BetaGo"})
// Explanatory line(s) about sender identifiers (user_id/open_id).
assertLineContainsAll(t, lines, []string{"sender", "user_id", "open_id"})
// Explanatory line(s) about mention targets by open_id.
assertLineContainsAll(t, lines, []string{"mention", "open_id"})
}
func assertLineContainsAll(t *testing.T, lines []string, substrs []string) {
t.Helper()
for _, line := range lines {
matches := true
for _, sub := range substrs {
if !strings.Contains(line, sub) {
matches = false
break
}
}
if matches {
return
}
}
t.Fatalf("no line in PromptIdentityLines output contained all substrings %v; lines:\n%s", substrs, strings.Join(lines, "\n"))
}
func TestGetProfileCacheCachesByIdentity(t *testing.T) {

- Ensure `"strings"` is added to the existing import block at the top of `internal/application/lark/botidentity/profile_test.go`.
- If the test file already defines a generic assertion helper similar to `assertLineContainsAll`, reuse it instead of introducing a new one, to keep the style consistent.
✨ feat(history): Upgrade history search capabilities
✨ feat(intent): Expand intent-recognition dimensions
✨ feat(ratelimit): Optimize the smart rate-limiting strategy
✨ feat(prompt): Unify and polish reply style across the whole pipeline
♻️ refactor(chatflow): Refactor the standard chat flow
✅ test: Add unit-test coverage for core features
Summary by Sourcery

Unify bot identity handling, history search, intent-driven reply, and rate limiting into a consistent prompt and tooling pipeline that makes the bot's self-identity explicit, improves targeted history retrieval, and standardizes conversational behavior and @mention usage across standard and agentic chat flows.

New Features:

- Expose the `search_history` tool.

Bug Fixes:

Enhancements:

- (`send_message`, `search_history`), with clearer documentation of mention usage and history-filter notes.

Tests: