Conversation


@Soulter Soulter commented Nov 27, 2025

  • Introduced a reload lock to prevent concurrent reloads of providers.
  • Added logging to indicate when a provider is disabled and when providers are being synchronized with the configuration.
  • Refactored the reload method to improve clarity and maintainability.
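The reload-lock idea in the first bullet can be sketched minimally as follows. This is a hedged illustration only: the class name, attributes, and `reload` body are illustrative stand-ins, not the actual AstrBot `ProviderManager` API.

```python
import asyncio


class ProviderManagerSketch:
    """Minimal sketch of serializing reloads with an asyncio.Lock."""

    def __init__(self):
        self.reload_lock = asyncio.Lock()
        self.reload_count = 0

    async def reload(self):
        # A second caller waits here instead of racing the first one
        # on shared state (e.g. the provider instance map).
        async with self.reload_lock:
            await asyncio.sleep(0)  # stand-in for tearing down / re-creating providers
            self.reload_count += 1


async def main():
    mgr = ProviderManagerSketch()
    # Concurrent callers are serialized by the lock.
    await asyncio.gather(mgr.reload(), mgr.reload(), mgr.reload())
    return mgr.reload_count
```

Because the lock is per-instance, this only serializes reloads on one manager; the review below raises the separate question of cross-instance races.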

relates: #2817

Modifications

  • This is NOT a breaking change.

Screenshots or Test Results


Checklist

  • 😊 If there are new features added in this PR, they have been discussed with the authors through issues, email, etc.
  • 👀 My changes have been well tested, and "Verification Steps" and "Screenshots" have been provided above.
  • 🤓 I have ensured that no new dependencies are introduced, OR, if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.
  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Synchronize provider instances with the latest configuration while preventing concurrent reloads and improving observability.

Enhancements:

  • Guard provider reloads with an asynchronous lock to avoid concurrent reload conflicts.
  • Keep the in-memory provider configuration and instances in sync with the current astrbot configuration during reloads.
  • Improve logging around disabled providers, configuration-sourced provider IDs, and automatic provider selection for different provider types.


@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • In reload, self.providers_config is now reassigned from astrbot_config["provider"], but astrbot_config is not defined in that scope and may diverge from the acm.confs-backed configuration used in __init__; consider sourcing the config consistently (e.g., via self.acm or the existing self.providers_config) to avoid runtime errors and config drift.
  • The new reload_lock prevents concurrent reload executions per manager instance, but if multiple ProviderManager instances can exist concurrently, you may want to confirm whether a per-process/shared lock is required to prevent cross-instance races on shared resources such as inst_map or the global config.
  • The new logger.info message for disabled providers in load_provider may be quite noisy in setups with many disabled providers; consider using logger.debug, or adding enough context (e.g., provider type or reason), if this is expected to be a frequent event.
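If a process-wide lock does turn out to be needed (the second point above), one common pattern is a lazily created class-level asyncio.Lock shared by every instance. The sketch below is a hedged illustration under that assumption; the class and method names are hypothetical, not AstrBot code.

```python
import asyncio


class SharedLockManager:
    """All instances in this process share one reload lock."""

    _reload_lock: "asyncio.Lock | None" = None  # class-level, shared

    @classmethod
    def _get_lock(cls) -> asyncio.Lock:
        # Created lazily so the Lock is bound to the running event loop.
        if cls._reload_lock is None:
            cls._reload_lock = asyncio.Lock()
        return cls._reload_lock

    async def reload(self, log: list):
        async with self._get_lock():
            log.append("start")
            await asyncio.sleep(0)  # yield, simulating real reload work
            log.append("end")


async def demo():
    log: list = []
    a, b = SharedLockManager(), SharedLockManager()
    # Two *different* instances still serialize against each other.
    await asyncio.gather(a.reload(log), b.reload(log))
    return log
```

Each reload's "start"/"end" pair stays contiguous in the log, showing the two instances never interleave on the shared resource.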
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `reload`, `self.providers_config` is now reassigned from `astrbot_config["provider"]`, but `astrbot_config` is not defined in this scope and may diverge from the `acm.confs`-backed configuration used in `__init__`; consider sourcing the config consistently (e.g., via `self.acm` or existing `self.providers_config`) to avoid runtime errors and config drift.
- The new `reload_lock` prevents concurrent `reload` executions per manager instance, but if multiple `ProviderManager` instances can exist concurrently, you may want to confirm whether a per-process/shared lock is required to prevent cross-instance races on shared resources like `inst_map` or global config.
- The new `logger.info` message for disabled providers in `load_provider` may be quite noisy in setups with many disabled providers; consider using `logger.debug` or adding enough context (e.g., provider type or reason) if this is expected to be a frequent event.

## Individual Comments

### Comment 1
<location> `astrbot/core/provider/manager.py:228` </location>
<code_context>
    async def load_provider(self, provider_config: dict):
        if not provider_config["enable"]:
            logger.info(f"Provider {provider_config['id']} is disabled, skipping")
            return
        if provider_config.get("provider_type", "") == "agent_runner":
            return

        logger.info(
            f"载入 {provider_config['type']}({provider_config['id']}) 服务提供商 ...",
        )

        # 动态导入
        try:
            match provider_config["type"]:
                case "openai_chat_completion":
                    from .sources.openai_source import (
                        ProviderOpenAIOfficial as ProviderOpenAIOfficial,
                    )
                case "zhipu_chat_completion":
                    from .sources.zhipu_source import ProviderZhipu as ProviderZhipu
                case "groq_chat_completion":
                    from .sources.groq_source import ProviderGroq as ProviderGroq
                case "anthropic_chat_completion":
                    from .sources.anthropic_source import (
                        ProviderAnthropic as ProviderAnthropic,
                    )
                case "googlegenai_chat_completion":
                    from .sources.gemini_source import (
                        ProviderGoogleGenAI as ProviderGoogleGenAI,
                    )
                case "sensevoice_stt_selfhost":
                    from .sources.sensevoice_selfhosted_source import (
                        ProviderSenseVoiceSTTSelfHost as ProviderSenseVoiceSTTSelfHost,
                    )
                case "openai_whisper_api":
                    from .sources.whisper_api_source import (
                        ProviderOpenAIWhisperAPI as ProviderOpenAIWhisperAPI,
                    )
                case "openai_whisper_selfhost":
                    from .sources.whisper_selfhosted_source import (
                        ProviderOpenAIWhisperSelfHost as ProviderOpenAIWhisperSelfHost,
                    )
                case "xinference_stt":
                    from .sources.xinference_stt_provider import (
                        ProviderXinferenceSTT as ProviderXinferenceSTT,
                    )
                case "openai_tts_api":
                    from .sources.openai_tts_api_source import (
                        ProviderOpenAITTSAPI as ProviderOpenAITTSAPI,
                    )
                case "edge_tts":
                    from .sources.edge_tts_source import (
                        ProviderEdgeTTS as ProviderEdgeTTS,
                    )
                case "gsv_tts_selfhost":
                    from .sources.gsv_selfhosted_source import (
                        ProviderGSVTTS as ProviderGSVTTS,
                    )
                case "gsvi_tts_api":
                    from .sources.gsvi_tts_source import (
                        ProviderGSVITTS as ProviderGSVITTS,
                    )
                case "fishaudio_tts_api":
                    from .sources.fishaudio_tts_api_source import (
                        ProviderFishAudioTTSAPI as ProviderFishAudioTTSAPI,
                    )
                case "dashscope_tts":
                    from .sources.dashscope_tts import (
                        ProviderDashscopeTTSAPI as ProviderDashscopeTTSAPI,
                    )
                case "azure_tts":
                    from .sources.azure_tts_source import (
                        AzureTTSProvider as AzureTTSProvider,
                    )
                case "minimax_tts_api":
                    from .sources.minimax_tts_api_source import (
                        ProviderMiniMaxTTSAPI as ProviderMiniMaxTTSAPI,
                    )
                case "volcengine_tts":
                    from .sources.volcengine_tts import (
                        ProviderVolcengineTTS as ProviderVolcengineTTS,
                    )
                case "gemini_tts":
                    from .sources.gemini_tts_source import (
                        ProviderGeminiTTSAPI as ProviderGeminiTTSAPI,
                    )
                case "openai_embedding":
                    from .sources.openai_embedding_source import (
                        OpenAIEmbeddingProvider as OpenAIEmbeddingProvider,
                    )
                case "gemini_embedding":
                    from .sources.gemini_embedding_source import (
                        GeminiEmbeddingProvider as GeminiEmbeddingProvider,
                    )
                case "vllm_rerank":
                    from .sources.vllm_rerank_source import (
                        VLLMRerankProvider as VLLMRerankProvider,
                    )
                case "xinference_rerank":
                    from .sources.xinference_rerank_source import (
                        XinferenceRerankProvider as XinferenceRerankProvider,
                    )
                case "bailian_rerank":
                    from .sources.bailian_rerank_source import (
                        BailianRerankProvider as BailianRerankProvider,
                    )
        except (ImportError, ModuleNotFoundError) as e:
            logger.critical(
                f"加载 {provider_config['type']}({provider_config['id']}) 提供商适配器失败:{e}。可能是因为有未安装的依赖。",
            )
            return
        except Exception as e:
            logger.critical(
                f"加载 {provider_config['type']}({provider_config['id']}) 提供商适配器失败:{e}。未知原因",
            )
            return

        if provider_config["type"] not in provider_cls_map:
            logger.error(
                f"未找到适用于 {provider_config['type']}({provider_config['id']}) 的提供商适配器,请检查是否已经安装或者名称填写错误。已跳过。",
            )
            return

        provider_metadata = provider_cls_map[provider_config["type"]]
        try:
            # 按任务实例化提供商
            cls_type = provider_metadata.cls_type
            if not cls_type:
                logger.error(f"无法找到 {provider_metadata.type} 的类")
                return

            provider_metadata.id = provider_config["id"]

            if provider_metadata.provider_type == ProviderType.SPEECH_TO_TEXT:
                # STT 任务
                inst = cls_type(provider_config, self.provider_settings)

                if getattr(inst, "initialize", None):
                    await inst.initialize()

                self.stt_provider_insts.append(inst)
                if (
                    self.provider_stt_settings.get("provider_id")
                    == provider_config["id"]
                ):
                    self.curr_stt_provider_inst = inst
                    logger.info(
                        f"已选择 {provider_config['type']}({provider_config['id']}) 作为当前语音转文本提供商适配器。",
                    )
                if not self.curr_stt_provider_inst:
                    self.curr_stt_provider_inst = inst

            elif provider_metadata.provider_type == ProviderType.TEXT_TO_SPEECH:
                # TTS 任务
                inst = cls_type(provider_config, self.provider_settings)

                if getattr(inst, "initialize", None):
                    await inst.initialize()

                self.tts_provider_insts.append(inst)
                if self.provider_settings.get("provider_id") == provider_config["id"]:
                    self.curr_tts_provider_inst = inst
                    logger.info(
                        f"已选择 {provider_config['type']}({provider_config['id']}) 作为当前文本转语音提供商适配器。",
                    )
                if not self.curr_tts_provider_inst:
                    self.curr_tts_provider_inst = inst

            elif provider_metadata.provider_type == ProviderType.CHAT_COMPLETION:
                # 文本生成任务
                inst = cls_type(
                    provider_config,
                    self.provider_settings,
                )

                if getattr(inst, "initialize", None):
                    await inst.initialize()

                self.provider_insts.append(inst)
                if (
                    self.provider_settings.get("default_provider_id")
                    == provider_config["id"]
                ):
                    self.curr_provider_inst = inst
                    logger.info(
                        f"已选择 {provider_config['type']}({provider_config['id']}) 作为当前提供商适配器。",
                    )
                if not self.curr_provider_inst:
                    self.curr_provider_inst = inst

            elif provider_metadata.provider_type == ProviderType.EMBEDDING:
                inst = cls_type(provider_config, self.provider_settings)
                if getattr(inst, "initialize", None):
                    await inst.initialize()
                self.embedding_provider_insts.append(inst)
            elif provider_metadata.provider_type == ProviderType.RERANK:
                inst = cls_type(provider_config, self.provider_settings)
                if getattr(inst, "initialize", None):
                    await inst.initialize()
                self.rerank_provider_insts.append(inst)

            self.inst_map[provider_config["id"]] = inst
        except Exception as e:
            logger.error(
                f"实例化 {provider_config['type']}({provider_config['id']}) 提供商适配器失败:{e}",
            )
            raise Exception(
                f"实例化 {provider_config['type']}({provider_config['id']}) 提供商适配器失败:{e}",
            )

</code_context>

<issue_to_address>
**issue (code-quality):** We've found these issues:

- Remove redundant exceptions from an except clause ([`remove-redundant-exception`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/remove-redundant-exception/))
- Replace length-one exception tuple with exception ([`simplify-single-exception-tuple`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/simplify-single-exception-tuple/))
- Explicitly raise from a previous error ([`raise-from-previous-error`](https://docs.sourcery.ai/Reference/Default-Rules/suggestions/raise-from-previous-error/))
- Low code quality found in ProviderManager.load\_provider - 13% ([`low-code-quality`](https://docs.sourcery.ai/Reference/Default-Rules/comments/low-code-quality/))

<br/><details><summary>Explanation</summary>


The quality score for this function is below the quality threshold of 25%.
This score is a combination of the method length, cognitive complexity and working memory.

How can you solve this?

It might be worth refactoring this function to make it shorter and more readable.

- Reduce the function length by extracting pieces of functionality out into
  their own functions. This is the most important thing you can do - ideally a
  function should be less than 10 lines.
- Reduce nesting, perhaps by introducing guard clauses to return early.
- Ensure that variables are tightly scoped, so that code using related concepts
  sits together within the function rather than being scattered.</details>
</issue_to_address>
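The first three Sourcery suggestions can be shown together in one small sketch. This is illustrative only (the helper name and error message are not the project's actual code): since ModuleNotFoundError subclasses ImportError, catching ImportError alone covers both, and the re-raise should chain its cause with `from e`.

```python
import logging

logger = logging.getLogger(__name__)


def import_provider_class(provider_type: str):
    """Hypothetical helper illustrating the suggested except-clause cleanup."""
    try:
        # ModuleNotFoundError is a subclass of ImportError, so the
        # (ImportError, ModuleNotFoundError) tuple is redundant.
        module = __import__(provider_type)
    except ImportError as e:
        logger.critical("Failed to load provider adapter %s: %s", provider_type, e)
        # Chain the original cause explicitly instead of a bare raise of
        # a new exception (raise-from-previous-error).
        raise RuntimeError(f"Failed to load provider adapter {provider_type}") from e
    return module


try:
    import_provider_class("definitely_not_a_real_module_xyz")
except RuntimeError as e:
    print(type(e.__cause__).__name__)
```

Chaining with `from e` preserves the original traceback, so logs still show whether the failure was a missing dependency or something else.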

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

# Initialize MCP Client connections
asyncio.create_task(self.llm_tools.init_mcp_clients(), name="init_mcp_clients")

async def load_provider(self, provider_config: dict):

issue (code-quality): We've found these issues:

  • Remove redundant exceptions from an except clause (remove-redundant-exception)
  • Replace length-one exception tuple with exception (simplify-single-exception-tuple)
  • Explicitly raise from a previous error (raise-from-previous-error)
  • Low code quality found in ProviderManager.load_provider - 13% (low-code-quality)

Explanation

The quality score for this function is below the quality threshold of 25%.
This score is a combination of the method length, cognitive complexity and working memory.

How can you solve this?

It might be worth refactoring this function to make it shorter and more readable.

  • Reduce the function length by extracting pieces of functionality out into
    their own functions. This is the most important thing you can do - ideally a
    function should be less than 10 lines.
  • Reduce nesting, perhaps by introducing guard clauses to return early.
  • Ensure that variables are tightly scoped, so that code using related concepts
    sits together within the function rather than being scattered.

@Soulter Soulter merged commit ba39c39 into master Nov 27, 2025
5 checks passed
@Soulter Soulter deleted the refactor-2817 branch December 5, 2025 07:43