This repository was archived by the owner on Jul 22, 2025. It is now read-only.

Commit b480f13

FIX: Prevent LLM enumerator from erroring when spam enabled (#1045)
This PR fixes an issue where the LLM enumerator would error out when `SiteSetting.ai_spam_detection = true` but no `AiModerationSetting.spam` record was present. Typically we add an `LlmDependencyValidator` for the setting itself; however, since spam is unique in that its model is set in `AiModerationSetting` instead of a `SiteSetting`, we add a simple presence check here to prevent erroring out.
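The guard can be sketched outside the Discourse codebase as a plain-Ruby nil check before dereferencing the settings record. The `Setting` struct and `spam_usage` method below are hypothetical stand-ins for `AiModerationSetting.spam` and the enumerator's spam branch, not the actual plugin API:

```ruby
# Hypothetical stand-in for the AiModerationSetting.spam record.
Setting = Struct.new(:llm_model_id)

# Sketch of the fixed branch: guard on both the feature flag AND the
# record's presence, so a missing record no longer raises an error.
def spam_usage(spam_detection_enabled, spam_setting)
  usage = Hash.new { |h, k| h[k] = [] }
  if spam_detection_enabled && !spam_setting.nil?
    usage[spam_setting.llm_model_id] << { type: :ai_spam }
  end
  usage
end
```

With the guard, `spam_usage(true, nil)` returns an empty hash instead of raising, which mirrors the behavior the new spec asserts.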
1 parent 47ecf86 commit b480f13

File tree

2 files changed: +22 additions, -1 deletion

lib/configuration/llm_enumerator.rb

Lines changed: 1 addition & 1 deletion

@@ -38,7 +38,7 @@ def self.global_usage
       rval[model_id] << { type: :ai_embeddings_semantic_search }
     end

-    if SiteSetting.ai_spam_detection_enabled
+    if SiteSetting.ai_spam_detection_enabled && AiModerationSetting.spam.present?
       model_id = AiModerationSetting.spam[:llm_model_id]
       rval[model_id] << { type: :ai_spam }
     end
Lines changed: 21 additions & 0 deletions

@@ -0,0 +1,21 @@
+# frozen_string_literal: true
+
+RSpec.describe DiscourseAi::Configuration::LlmEnumerator do
+  fab!(:fake_model)
+
+  describe "#global_usage" do
+    before do
+      SiteSetting.ai_helper_model = "custom:#{fake_model.id}"
+      SiteSetting.ai_helper_enabled = true
+    end
+
+    it "returns a hash of Llm models in use globally" do
+      expect(described_class.global_usage).to eq(fake_model.id => [{ type: :ai_helper }])
+    end
+
+    it "doesn't error on spam when spam detection is enabled but moderation setting is missing" do
+      SiteSetting.ai_spam_detection_enabled = true
+      expect { described_class.global_usage }.not_to raise_error
+    end
+  end
+end
