Fix: Correctly use context model for moderation #26
Conversation
Pull Request Overview
This PR refactors the OpenAI moderation client handling logic to support third-party OpenAI-compatible providers. The key change is replacing the complex client reuse logic with a simpler approach that attempts to use the context client first and falls back to the default OpenAI client if the moderation endpoint returns a 404 error.
- Simplified client selection logic by removing URL-based filtering
- Added fallback mechanism for third-party providers that don't support moderation endpoints
- Extracted API call logic into a dedicated helper function
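To make the described flow concrete, here is a minimal sketch of the context-first, 404-fallback approach. Only the names `callModerationAPI` and `isNotFoundError` come from the diff; their signatures and bodies below are assumptions for illustration, not the PR's exact implementation.

```ts
import OpenAI from 'openai';

// Sketch of the context-first flow: try the client carried on the context
// (guardrailLlm); if the provider has no /moderations endpoint (404),
// retry once with a default OpenAI client.
async function runModerationCheck(ctx: unknown, input: string) {
  const contextObj = ctx as Record<string, unknown> | null;
  const candidate = contextObj?.guardrailLlm;
  const contextClient = candidate instanceof OpenAI ? candidate : null;

  try {
    return await callModerationAPI(contextClient ?? new OpenAI(), input);
  } catch (error) {
    if (contextClient && isNotFoundError(error)) {
      // Third-party OpenAI-compatible providers often return 404 here.
      return await callModerationAPI(new OpenAI(), input);
    }
    throw error;
  }
}

// Assumed helper: wraps the SDK's moderation call.
async function callModerationAPI(client: OpenAI, input: string) {
  return client.moderations.create({ input });
}

// Assumed helper: detects an HTTP 404 from the moderation endpoint.
function isNotFoundError(error: unknown): boolean {
  return typeof error === 'object' && error !== null &&
    'status' in error && (error as { status?: number }).status === 404;
}
```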
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| src/checks/moderation.ts | Refactored client handling to use context client with 404-based fallback, added callModerationAPI helper function |
| src/tests/unit/checks/moderation-secret-keys.test.ts | Added tests for context client usage and third-party provider fallback behavior |
    const contextCreateMock = vi.fn().mockImplementation(async () => {
      contextClientUsed = true;
Copilot AI · Oct 31, 2025
[nitpick] The contextClientUsed flag is unnecessary. You can check if the mock was called using expect(contextCreateMock).toHaveBeenCalled() instead of tracking this manually.
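For example, an illustrative, self-contained version of the suggested assertion style (not the actual test in this PR):

```ts
import { expect, it, vi } from 'vitest';

// The mock itself records whether it was called, so no manual flag is needed.
it('uses the context client (illustrative)', async () => {
  const contextCreateMock = vi.fn().mockResolvedValue({ results: [] });

  // Exercise the code under test with a context client whose
  // moderations.create is contextCreateMock; shown here as a direct call.
  await contextCreateMock({ input: 'hello' });

  expect(contextCreateMock).toHaveBeenCalled();
});
```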
    createMock.mockImplementation(async () => {
      fallbackUsed = true;
Copilot AI · Oct 31, 2025
[nitpick] The fallbackUsed flag is unnecessary. You can verify fallback behavior by checking that createMock was called, which already indicates the fallback client was used.
    // Get client from context if available
    let client: OpenAI | null = null;
    if (ctx) {
      const contextObj = ctx as Record<string, unknown>;
      const candidate = contextObj.guardrailLlm;
      if (candidate && candidate instanceof OpenAI) {
        client = candidate;
      }
    }

    const client = reuseClientIfOpenAI(ctx) ?? new OpenAI();
Copilot AI · Oct 31, 2025
[nitpick] The removal of base URL validation means the code will now attempt to use any OpenAI client from context, regardless of whether it points to the official API. While the 404 fallback handles providers without moderation support, this could lead to unnecessary API calls and latency when using third-party providers. Consider documenting this behavioral change or adding a configuration option to skip the initial attempt for known incompatible providers.
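One possible shape for such an option, purely as a sketch; the flag name, the config interface, and the helper below are invented for illustration and are not part of this PR or the repo:

```ts
import OpenAI from 'openai';

// Hypothetical option: let callers opt out of the initial attempt when
// their provider is known not to support /moderations.
interface ModerationClientOptions {
  skipContextClientForModeration?: boolean;
}

function selectModerationClient(
  contextClient: OpenAI | null,
  options: ModerationClientOptions = {},
): OpenAI {
  if (!contextClient || options.skipContextClientForModeration) {
    return new OpenAI(); // avoids a guaranteed 404 round trip
  }
  return contextClient;
}
```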
    if (isNotFoundError(error)) {
      try {
        resp = await callModerationAPI(new OpenAI(), data);
Copilot AI · Oct 31, 2025
A new OpenAI client is instantiated on every fallback. If multiple moderation checks occur with third-party providers, this creates unnecessary client instances. Consider caching the fallback client or instantiating it once at the module level.
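A minimal sketch of the caching idea (the helper and variable names are invented for illustration):

```ts
import OpenAI from 'openai';

// Create the fallback client lazily, once per module, and reuse it
// across subsequent fallbacks.
let fallbackClient: OpenAI | null = null;

function getFallbackClient(): OpenAI {
  if (fallbackClient === null) {
    fallbackClient = new OpenAI(); // reads OPENAI_API_KEY from the environment
  }
  return fallbackClient;
}
```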
Only one moderation check will be called
    const errorMessage = fallbackError instanceof Error
      ? fallbackError.message
      : String(fallbackError);
Copilot AI · Oct 31, 2025
The error message extraction logic is duplicated here and in the outer catch block (line 222). Consider extracting this into a helper function to reduce code duplication.
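For instance, both catch blocks could share a small helper along these lines (the name is invented for this sketch):

```ts
// Normalize an unknown thrown value into a message string.
function getErrorMessage(error: unknown): string {
  return error instanceof Error ? error.message : String(error);
}
```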
[nit]
    // Check if it's an API key error
    if (errorMessage.includes('api_key') || errorMessage.includes('OPENAI_API_KEY')) {
Copilot AI · Oct 31, 2025
String-based error detection using includes is fragile and may produce false positives or miss variations in error messages. Consider checking for specific error types or status codes if the OpenAI SDK provides them.
Suggested change — replace:

    // Check if it's an API key error
    if (errorMessage.includes('api_key') || errorMessage.includes('OPENAI_API_KEY')) {

with:

    // Check if it's an API key error (HTTP 401 Unauthorized)
    if (typeof fallbackError === 'object' && fallbackError !== null && 'status' in fallbackError && fallbackError.status === 401) {
I'm okay with this. It is still throwing an appropriate error regardless, this check just adds more useful information for the user.
gabor-openai left a comment:
TY!
Fixes this reported BUG:

- `apiKey` passed in via client initialization is not properly passed to the `moderation` check

The fix:

- `guardrail_llm` from the context is used first; if the moderation endpoint does not exist with that client, fall back to creating a compatible OpenAI client via env variable
- An `OpenAI API key` is required for the Moderation check

Future PR to allow configuring an `api_key` specific to the moderation check via the config file (will only be needed for users who are using non-openai models as their base client).