feat: add custom headers support and fix response/error handling #23

Nayjest merged 27 commits into Nayjest:main from
Conversation
- Add extra_headers config option for connection definitions
- Add header injection wrapper for LLM function calls
- Add security filtering for sensitive headers (Authorization, Host, etc.)
- Add comprehensive tests for header utilities and wrapper
Example usage:

```toml
[connections.openai]
api_type = "open_ai"
api_key = "env:OPENAI_API_KEY"
extra_headers = { X-Title = "MyApp", X-Custom = "value" }
```
The non-streaming response was incorrectly reconstructed as a string, causing 500 errors when the provider returns a proper JSON response. Now checks for dict/model and forwards it as-is, with a fallback to the reconstructed format for backwards compatibility.
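The dict/model check described above could look roughly like this. A hedged sketch only: the function name and the exact fallback shape are assumptions, not the PR's code.

```python
def response_payload(out):
    """Forward the upstream response as-is when possible.

    Sketch of the fix described above: prefer the provider's own
    structure over reconstructing a completion from `str(out)`.
    """
    if hasattr(out, "model_dump"):   # Pydantic v2 model (OpenAI SDK)
        return out.model_dump()
    if hasattr(out, "dict"):         # Pydantic v1 / microcore-style response
        return out.dict()
    if isinstance(out, dict):        # already a plain JSON-compatible dict
        return out
    # Fallback: reconstruct a minimal completion for backwards compatibility
    return {"choices": [{"message": {"role": "assistant", "content": str(out)}}]}
```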
- Add error handling in the wrapper to catch microcore parsing failures
- Return 502 Bad Gateway when upstream returns non-JSON (e.g., HTML)
- Provide a clear error message instead of crashing with 500
Hey @Nayjest, this PR addresses both #21 and #22.

Testing:
Automated:
Manual:

Changes are thoroughly tested. Please review when you have time. Thanks!
@manascb1344 Thanks for the great contribution! I'm planning to review everything and include the necessary changes and tests in the next major release (v3), expected in February.

Questions to investigate:

-=[ 1 ]=-

Theoretically, custom headers should already work using the following configuration:

```toml
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1"
api_key = "env:OPENAI_API_KEY"

[connections.openai.default_args.extra_headers]
X-Title = "MyApp"
X-Custom-Header = "value"
```

See https://github.com/Nayjest/ai-microcore/blob/main/microcore/configuration.py#L213

That said, the tests, security checks, and logging improvements look very useful.

-=[ 2 ]=-

I will likely move some of these changes to the LLM API adapter level (microcore) as a more appropriate place for such generic functionality.
-=[ 3 ]=-

Moving
Why not handle all exceptions the same way as was introduced for AttributeError and ValueError?

```python
try:
    out = await async_llm_func(request.messages, **llm_params)
    log_entry.response = out
    logging.info("LLM response: %s", out)
except (AttributeError, ValueError) as e:
    # Handle case where upstream returns non-JSON response (e.g., HTML error page)
    # microcore fails to parse and returns a string, causing AttributeError
    # We catch this and return a proper error response
    error_msg = str(e)
    logging.error("Upstream provider error: %s", error_msg)
    log_entry.error = e
    await log_non_blocking(log_entry)
    return JSONResponse(
        {
            "error": {
                "message": f"Upstream provider returned invalid response: {error_msg}",
                "type": "upstream_error",
            }
        },
        status_code=502,
    )
except Exception as e:
    log_entry.error = e
    await log_non_blocking(log_entry)
    raise
await log_non_blocking(log_entry)
```
We can do that!
…d error details in debug mode - fix n>1 support for OpenAI API - removed outdated tests

PR Description
Summary
This PR adds custom headers support for proxy connections and fixes multiple bugs in response and error handling that caused 500 Internal Server Errors.
Issues Addressed
Changes Made
Feature: Custom Headers Support
Added the ability to send custom HTTP headers to upstream LLM providers.
1. Header Utilities (`lm_proxy/utils.py`)

Added utility functions for header management:
- `SENSITIVE_HEADERS` - set of headers that should be blocked for security
- `filter_sensitive_headers()` - removes sensitive headers from being overridden
- `merge_headers()` - merges base headers with override headers (override takes precedence)

2. Header Injection (`lm_proxy/bootstrap.py`)

- `create_llm_wrapper()` function that wraps LLM calls to inject custom headers
- `extra_headers` parameter passed to the OpenAI SDK

3. Usage
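The utility names `SENSITIVE_HEADERS`, `filter_sensitive_headers()`, and `merge_headers()` come from the PR; the bodies below are an illustrative reconstruction, not the actual contents of `lm_proxy/utils.py`.

```python
# Hypothetical reconstruction of the header utilities described above.
SENSITIVE_HEADERS = {
    "authorization", "www-authenticate", "proxy-authorization",
    "host", "connection", "content-length", "content-type",
    "transfer-encoding", "cache-control", "strict-transport-security",
}

def filter_sensitive_headers(headers: dict) -> dict:
    """Drop headers that must not be overridden (case-insensitive match)."""
    return {
        name: value for name, value in headers.items()
        if name.lower() not in SENSITIVE_HEADERS
        and not name.startswith(":")  # HTTP/2 pseudo-headers
    }

def merge_headers(base: dict, overrides: dict) -> dict:
    """Merge two header dicts; filtered overrides take precedence."""
    return {**base, **filter_sensitive_headers(overrides)}
```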
Bug Fixes: Response and Error Handling
Fixed multiple bugs in how responses and errors are handled.
Bug #1: Non-Streaming Response Reconstruction Failure

Problem: Non-streaming responses were incorrectly reconstructed using `str(out)`, causing `AttributeError: 'str' object has no attribute 'choices'`.

Fix: Forward responses as-is by checking for `model_dump()`, `dict()`, or a raw `dict`.

Bug #2: Invalid Upstream Responses Cause 500 Instead of 502
Problem: When upstream providers returned non-JSON responses (HTML, 503 errors), the proxy crashed with 500 Internal Server Error.
Fix: Catch upstream errors and return 502 Bad Gateway with clear error messages:
Bug #3: Poor Error Logging
Problem: Error logs didn't provide enough information for debugging.
Fix: Enhanced logging with descriptive messages:
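For illustration, the kind of descriptive logging meant here might look like the sketch below; the function name and message format are assumptions, not the PR's exact code.

```python
import logging

def log_upstream_error(exc: Exception, connection: str, model: str) -> None:
    # Include enough context (connection, model, error type) to debug
    # failures without enabling full debug tracing.
    logging.error(
        "Upstream provider error for connection=%s model=%s: %s: %s",
        connection, model, type(exc).__name__, exc,
    )
```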
Bug #4: Microcore dict() Serialization Edge Case

Problem: Microcore's `LLMResponse.dict()` sometimes returns malformed data.

Fix: Added try/catch with fallback:
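A hedged sketch of such a fallback follows; the real code lives in `lm_proxy/core.py`, and the function name and fallback shape here are assumptions.

```python
def safe_serialize(out) -> dict:
    """Serialize an LLM response defensively.

    Sketch of the Bug #4 fix: if `.dict()` is missing or returns
    malformed data, fall back to a minimal dict instead of crashing.
    """
    try:
        data = out.dict()
        if not isinstance(data, dict):
            raise ValueError("dict() did not return a dict")
        return data
    except (AttributeError, ValueError, TypeError):
        return {"content": str(out)}
```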
Files Changed

- `lm_proxy/utils.py`
- `lm_proxy/bootstrap.py`
- `lm_proxy/core.py`
- `tests/test_extra_headers.py`

Testing
Breaking Changes
None. All changes are backward-compatible.
Security
Sensitive headers are automatically blocked:
- `Authorization`, `WWW-Authenticate`
- `Content-Length`, `Content-Type`, `Transfer-Encoding`
- `Host`, `Connection`, `Cache-Control`
- `Proxy-Authorization`, `Strict-Transport-Security`
- HTTP/2 pseudo-headers (`:method`, `:path`, `:scheme`, `:authority`)