fix: prevent dummy_key from overriding provider-specific API keys #2293
Conversation
When OPENAI_API_KEY is not configured, litellm.api_key is set to 'dummy_key' as a fallback. However, this dummy key was being unconditionally passed to all providers via kwargs['api_key'], breaking Anthropic and other non-OpenAI models. Only pass api_key if it's a real key, not the dummy fallback. This allows LiteLLM to use the correct provider-specific key (litellm.anthropic_key, etc.).
Review Summary by Qodo

Prevent dummy_key from overriding provider-specific API keys
Walkthrough

Description
- Prevent dummy_key from overriding provider-specific API keys
- Only pass api_key to kwargs if it's a real key, not the fallback
- Fixes authentication failures for Anthropic and other non-OpenAI providers
- Allows LiteLLM to use correct provider-specific keys when the generic key is unavailable

Diagram

```mermaid
flowchart LR
    A["litellm.api_key = dummy_key<br/>fallback set"] --> B{"Is api_key real<br/>and not dummy?"}
    B -->|Yes| C["Pass api_key<br/>to kwargs"]
    B -->|No| D["Skip api_key<br/>in kwargs"]
    C --> E["LiteLLM uses<br/>explicit api_key"]
    D --> F["LiteLLM uses<br/>provider-specific key"]
    F --> G["✓ Anthropic/Groq<br/>authentication works"]
```
File Changes

1. pr_agent/algo/ai_handlers/litellm_ai_handler.py
Code Review by Qodo
1. Broken LiteLLM response mock

Reviewing - 1. Cross-provider key override
Move hardcoded 'dummy_key' to module-level DUMMY_LITELLM_API_KEY constant to prevent maintenance errors from updating only one occurrence. Avoids silent breakage if sentinel value needs to be changed in future refactors.
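A sketch of that suggestion (module path taken from the File Changes list above; exact placement is an assumption):

```python
# pr_agent/algo/ai_handlers/litellm_ai_handler.py (sketch)
# Placeholder to satisfy litellm when no real OpenAI key is configured.
# Defined once at module level so the fallback assignment and the guard
# comparison can never drift apart.
DUMMY_LITELLM_API_KEY = "dummy_key"
```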
Persistent review updated to latest commit 2452b6d
Reviewed both with the help of Claude. Review 2 should be complete. Regarding Review 1: Investigated thoroughly and intentionally deferred. Here's why: The
That feels like an architectural refactor beyond the scope of a bug fix PR. The current fix (skip dummy_key) solves the most common real-world scenario: single-provider setups where the dummy key was breaking everything. The multi-provider key leak is a pre-existing issue, not one introduced by this change.
JiwaniZakir
left a comment
The condition in chat_completion — if litellm.api_key and litellm.api_key != DUMMY_LITELLM_API_KEY — implicitly changes behavior beyond the stated fix: previously, kwargs["api_key"] = litellm.api_key was always executed, meaning api_key=None was passed when, for example, OpenAI credentials are configured via openai.key (which sets litellm.openai_key but leaves litellm.api_key at its default). The new guard silently omits api_key in that case too. That's likely correct, but it's a subtle behavior change that deserves an explicit comment and a test covering the OpenAI-key-set path.
The DUMMY_LITELLM_API_KEY constant is a good improvement over the magic string, but its purpose as a sentinel/placeholder value isn't obvious from the name alone — a brief inline comment like # placeholder to satisfy litellm when no real key is needed would clarify intent for future readers who encounter the != DUMMY_LITELLM_API_KEY comparison and wonder why equality against a specific string is meaningful here.
There's also no test covering the scenario this PR fixes: a provider-specific key (e.g., AWS or Azure) being silently overridden by the dummy value. Adding a test that verifies kwargs does not contain api_key when litellm.api_key equals DUMMY_LITELLM_API_KEY would lock in the correct behavior and prevent regression.
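Such a test could be sketched with a stand-in for acompletion that records its kwargs (all names here are hypothetical, not the PR's actual test code):

```python
import asyncio

DUMMY_LITELLM_API_KEY = "dummy_key"  # sentinel from the PR

captured = {}  # kwargs that reach the (fake) litellm call

async def fake_acompletion(**kwargs):
    # Record what would have been sent to litellm.acompletion().
    captured.update(kwargs)
    return {"choices": [{"message": {"content": "ok"}, "finish_reason": "stop"}]}

async def chat_completion(model, api_key):
    kwargs = {"model": model}
    if api_key and api_key != DUMMY_LITELLM_API_KEY:  # the guard under test
        kwargs["api_key"] = api_key
    return await fake_acompletion(**kwargs)

# The sentinel must never reach kwargs, so litellm.anthropic_key can win.
asyncio.run(chat_completion("claude-3", DUMMY_LITELLM_API_KEY))
```

After the run, `captured` holds only the model, proving the dummy key was not forwarded.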
…-ai#2042) Add guard to prevent dummy OpenAI key from overriding provider-specific keys.

Changes:
1. Extract DUMMY_LITELLM_API_KEY constant with clarifying comment
2. Add guard in chat_completion() to skip dummy/None keys
3. Add test_litellm_api_key_guard.py with 4 comprehensive tests:
   - test_dummy_key_not_forwarded: verify placeholder blocked
   - test_none_api_key_not_forwarded: verify None blocked (OpenAI path)
   - test_real_key_forwarded: verify real provider keys injected
   - test_anthropic_key_not_shadowed_by_dummy_key: replicate original bug

Impact: Anthropic-only config now works. Real provider keys still forwarded. All tests passing (4 new + 287 existing).
Persistent review updated to latest commit b5db7ad
Hey @JiwaniZakir, first of all, thank you for your time and comments! I have made the changes as recommended:
Created 4 tests covering all branches:
The guard respects LiteLLM's design pattern:
Summary:
…iers Replace sk-ant-test-real-key and sk-test-real-provider-key with generic identifiers (test-anthropic-key-12345, test-provider-key-67890) to avoid triggering secret scanning tools and policy violations. Tests still validate the guard correctly since they only require distinct, non-empty identifiers.
Persistent review updated to latest commit a454e4f
Followed recommendations as well: The new unit test hardcodes API-key-looking strings (e.g., sk-ant-..., sk-...), which can trigger secret scanners and violate the no-committed-secrets requirement. Even if intended as fake
Delete OPENAI_API_KEY env var and reset litellm.api_key before test to ensure the __init__ condition at line 42 is always met. Without this, tests fail when OPENAI_API_KEY is set in the developer's environment. Also adds clarifying comments about why these preconditions matter.
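The precondition reset described above can be factored into a helper along these lines (a stdlib-only sketch with a stand-in for the litellm module; the real test would patch litellm itself, e.g. via pytest's monkeypatch):

```python
import types

# Stand-in for the litellm module's global key attribute.
fake_litellm = types.SimpleNamespace(api_key="dummy_key")

def reset_key_preconditions(environ, litellm_module):
    """Drop OPENAI_API_KEY and reset the global key so __init__ always
    takes the fallback branch, even on a developer machine with a key set."""
    environ.pop("OPENAI_API_KEY", None)
    litellm_module.api_key = None

env = {"OPENAI_API_KEY": "set-in-developer-env"}
reset_key_preconditions(env, fake_litellm)
```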
Persistent review updated to latest commit 1dc4223
The fix looks correct in principle — conditionally passing
Followed recommendations: test_anthropic_key_not_shadowed_by_dummy_key asserts LiteLLMAIHandler.__init__ sets litellm.api_key to the dummy placeholder, but __init__ only does that when OPENAI_API_KEY is absent. If OPENAI_API_KEY is set in CI or a developer environment, this assertion fails and the test becomes flaky.
@JiwaniZakir, not sure what you mean about his report. AFAIK, this fix is not merged yet, so the issue should be present in the latest released version.
The report from @PeterDaveHello that v0.33 still exhibits the issue suggests the fix may not be handling all code paths — worth checking whether there are other places
Hey @JiwaniZakir, apologies, I'm still confused. This fix is not in v0.33 or in the main branch yet, so I'm confused how you can clearly say that it's not fixing the issue that was raised in #2042 (or the bug I ran into). Regarding the report, could you please be kind enough to reference it here so that I can read it? Is it an issue, pr comment? Regarding the test tokens, it should already be done:
The secret scanner concern is valid — test strings matching
@JiwaniZakir, again
The condition checking for
Persistent review updated to latest commit 6a1c384
The guard condition approach makes sense here — using
- Add test_groq_key_forwarded_for_non_ollama_model: verifies the Groq key is forwarded for non-Ollama models
- Add test_xai_key_forwarded_for_non_ollama_model: verifies the xAI key is forwarded for non-Ollama models
- Add test_ollama_and_groq_coexist: verifies multiple provider keys coexist correctly

These tests ensure the sentinel-value guard (DUMMY_LITELLM_API_KEY) properly handles all providers without breaking existing functionality. All 7 tests pass: 4 core guard tests + 3 regression tests.
Persistent review updated to latest commit 295f5c4
Hey @naorpeled, I saw that you approved #2288. Both PRs (this and that) target the same bug, but with different scopes. #2288's approach solves the immediate Ollama Cloud + Gemini conflict, thank you! But it has narrower coverage, and I'm concerned the fix might prevent keys like Groq, xAI, Azure AD, etc. from working, since they set the key during

I ran a test on the latest

The fix in this PR should work for all providers using

I've updated my test cases as well. I would appreciate it if I could steal a bit of your time to go through this PR, thanks!
Hey @yanukadeneth99,
The guard condition approach is solid — using a truthiness check that rejects
```python
def _mock_response():
    """Minimal acompletion response."""
    mock = MagicMock()
    mock.__getitem__ = lambda self, key: {
        "choices": [{"message": {"content": "ok"}, "finish_reason": "stop"}]
    }[key]
    mock.dict.return_value = {"choices": [{"message": {"content": "ok"}, "finish_reason": "stop"}]}
    return mock
```
1. Broken litellm response mock 🐞 Bug ☼ Reliability
The new unit tests’ _mock_response() assigns a lambda to mock.__getitem__, but _get_completion() requires response["choices"] to return a real list and checks len(response["choices"]); this mock setup is likely not honored by response[...] and can cause openai.APIError during the tests. As written, the tests are unstable and may not actually validate the intended behavior.
Agent Prompt
### Issue description
The tests’ `_mock_response()` does not reliably satisfy the production access pattern `response["choices"]` + `len(response["choices"])` in `_get_completion()`. This can cause the tests to fail (or validate the wrong behavior) depending on mock internals.
### Issue Context
`LiteLLMAIHandler._get_completion()` expects an object that:
- supports `response["choices"]` returning a list
- supports `response.dict()` returning a dict-like structure for logging
### Fix Focus Areas
- tests/unittest/test_litellm_api_key_guard.py[37-44]
- pr_agent/algo/ai_handlers/litellm_ai_handler.py[440-459]
### Implementation direction
Update `_mock_response()` to use a backing dict and configure magic methods in a standard way, e.g.:
- `data = {"choices": [{"message": {"content": "ok"}, "finish_reason": "stop"}]}`
- `mock = MagicMock()`
- `mock.__getitem__.side_effect = data.__getitem__`
- `mock.dict.return_value = data`
This ensures `response["choices"]` returns a real list and the handler’s logging path still works.
The hardcoded key strings in the tests are intentional test fixtures, not real credentials — they follow the
Summary

Issue: When using Anthropic without OpenAI configured, the placeholder litellm.api_key (DUMMY_LITELLM_API_KEY) was being forwarded to litellm calls, preventing litellm from using the provider-specific litellm.anthropic_key internally. This caused AuthenticationError: invalid x-api-key.

Fix: Added a guard condition in chat_completion() to only forward litellm.api_key if it's a real, valid key, rejecting both None and the placeholder sentinel value. Recommending this specific check because truthiness also catches False, empty strings, and 0.

Tests: Wrote four comprehensive test cases to verify the guard blocks dummy/None keys, allows real keys, and specifically replicates the Anthropic-without-OpenAI scenario (litellm.AuthenticationError: invalid x-api-key when using Anthropic #2292).
Issue
Fixes a bug where the LiteLLM fallback dummy key was being unconditionally passed to all AI providers, breaking authentication for non-OpenAI models (Anthropic, Groq, etc.).
References: #2042
The Problem
When OPENAI_API_KEY is not configured:
- litellm.api_key = "dummy_key" is set as a fallback
- that dummy key is then passed to every provider via kwargs["api_key"]

Example Scenario
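The scenario can be simulated with a small stand-in for LiteLLM's precedence rule (not the real library): an explicit api_key always beats the provider-specific attribute.

```python
def resolve_key(explicit_api_key, provider_key):
    """Stand-in for LiteLLM's precedence: an explicit api_key kwarg
    wins over a provider-specific key such as litellm.anthropic_key."""
    return explicit_api_key if explicit_api_key is not None else provider_key

# Before the fix: the dummy fallback shadows the real Anthropic key.
broken = resolve_key("dummy_key", "test-anthropic-key-12345")
# After the fix: api_key is omitted, so the provider key is used.
fixed = resolve_key(None, "test-anthropic-key-12345")
```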
The Solution
Only pass api_key in kwargs if it's a real key, not the dummy fallback:
When api_key is not in kwargs, LiteLLM falls back to provider-specific keys (litellm.anthropic_key, litellm.groq_key, etc.), which is the correct behavior.
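A sketch of that guard (helper name hypothetical; in the PR the check sits inline in chat_completion()):

```python
DUMMY_LITELLM_API_KEY = "dummy_key"  # fallback set in __init__ when OPENAI_API_KEY is absent

def apply_api_key(kwargs, api_key):
    # Forward only real keys; skipping None and the sentinel leaves api_key
    # out of kwargs, so LiteLLM falls back to litellm.anthropic_key, etc.
    if api_key and api_key != DUMMY_LITELLM_API_KEY:
        kwargs["api_key"] = api_key
    return kwargs
```

The truthiness check also rejects empty strings, which is why it was preferred over a bare `is not None` comparison.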
Impact Analysis
✅ Anthropic-only config - Now works (was broken)
✅ OpenAI-only config - Still works (unchanged)
✅ Azure OpenAI - Still works (uses azure_key)
✅ Groq/X-AI/Ollama - Still works (real keys passed)
✅ Azure AD tokens - Still works (real tokens passed)
✅ Bedrock - Unaffected (uses AWS env vars)
Testing
Comprehensive test suite executed to verify no regressions:
Test Environment: Python 3.13.11
Test Duration: 40.30 seconds
Result: 287 passed, 0 failed, 0 skipped
Root Cause
The bug exists in the initialization sequence:
litellm.api_key = "dummy_key"(fallback)litellm.anthropic_key(provider-specific)api_key="dummy_key"tokwargslitellm.anthropic_keyis set correctly, the explicitkwargs["api_key"]takes precedence in LiteLLM's parameter handlingThe fix respects LiteLLM's design: provider-specific attributes take precedence when the generic api_key is just a fallback placeholder.