21 changes: 20 additions & 1 deletion packages/astrbot/process_llm_request.py
@@ -115,6 +115,18 @@ async def _request_img_caption(
f"Cannot get image caption because provider `{provider_id}` is not exist.",
)

def _select_provider(self, event: AstrMessageEvent):
    """Select the LLM provider to use."""
    sel_provider = event.get_extra("selected_provider")
    _ctx = self.ctx
    if sel_provider and isinstance(sel_provider, str):
        provider = _ctx.get_provider_by_id(sel_provider)
        if not provider:
            logger.error(f"Selected provider not found: {sel_provider}.")
        return provider
Comment on lines +118 to +126
Contributor
issue (bug_risk): When a selected provider ID is invalid, _select_provider returns None instead of falling back to the default provider, which can cause downstream errors.

In the sel_provider branch, if get_provider_by_id returns None you log an error but still return None, which will cause an exception in process_llm_request when .provider_config is accessed. Instead, either fall back to _ctx.get_using_provider(umo=event.unified_msg_origin) when the selected provider is invalid, or raise an explicit error here and handle it at the call site. This avoids the current silent failure mode and unexpected None downstream.


Member Author

Thinking about it. This was copied from astrbot\core\pipeline\process_stage\method\agent_sub_stages\internal.py, so if it's going to be changed, I'd suggest changing both copies together.


    return _ctx.get_using_provider(umo=event.unified_msg_origin)

async def process_llm_request(self, event: AstrMessageEvent, req: ProviderRequest):
Contributor
issue (code-quality): We've found these issues:

  • Use named expression to simplify assignment and conditional [×2] (use-named-expression)
  • Use the built-in function next instead of a for-loop (use-next)
  • Low code quality found in ProcessLLMRequest.process_llm_request - 12% (low-code-quality)


Explanation

The quality score for this function is below the quality threshold of 25%.
This score is a combination of the method length, cognitive complexity and working memory.

How can you solve this?

It might be worth refactoring this function to make it shorter and more readable.

  • Reduce the function length by extracting pieces of functionality out into
    their own functions. This is the most important thing you can do - ideally a
    function should be less than 10 lines.
  • Reduce nesting, perhaps by introducing guard clauses to return early.
  • Ensure that variables are tightly scoped, so that code using related concepts
    sits together within the function rather than being scattered.
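The first two suggestions are generic Python refactors; a small standalone sketch (unrelated to the actual function body, with made-up data) shows both patterns:

```python
# Named expression (walrus): fold the assignment into the condition it guards.
config = {"persona_id": "default"}
if (persona_id := config.get("persona_id")) is not None:
    label = f"persona={persona_id}"
else:
    label = "persona=<none>"

# next() with a generator expression replaces a break-on-first-match loop;
# the second argument is the default returned when nothing matches.
providers = ["gemini", "openai", "anthropic"]
match = next((p for p in providers if p.startswith("open")), None)
```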

"""Inject persona info, Identifier, current time, quoted reply content, and similar material into the system prompt before the LLM request."""
cfg: dict = self.ctx.get_config(umo=event.unified_msg_origin)[
@@ -165,7 +177,14 @@ async def process_llm_request(self, event: AstrMessageEvent, req: ProviderReques
await self._ensure_persona(req, cfg, event.unified_msg_origin)

# image caption
if img_cap_prov_id and req.image_urls:
if (
img_cap_prov_id
and req.image_urls
and "image"
not in self._select_provider(event).provider_config.get(
"modalities", ["image"]
)
):
await self._ensure_img_caption(req, cfg, img_cap_prov_id)

# quote message processing