[#3659] fix(sdk): correct Together AI env var name lookup in vault middleware #3745

WassimAkkacha wants to merge 1 commit into Agenta-AI:main
Conversation
@WassimAkkacha is attempting to deploy a commit to the agenta projects Team on Vercel. A member of the Team first needs to authorize it.
mmabrouk left a comment:
Thanks @WassimAkkacha for the PR!
Please see the comments
Please also don't forget to sign the CLA to be able to merge your PR
Thanks again for your contribution!
```python
for provider_kind in _PROVIDER_KINDS:
    provider = provider_kind
    key_name = f"{provider.upper()}_API_KEY"
    key_name = _PROVIDER_ENV_VAR_OVERRIDES.get(
```
I would not do it like this; the readability is quite poor.
Instead, I'd keep `key_name` as is and then override it afterwards (`TOGETHER_AI_API_KEY` -> `TOGETHERAI_API_KEY`).
But in any case, looking at this solution, I think it would be much better to do the fix from the frontend side and rename the provider so that it fits LiteLLM, instead of adding complexity here.
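For context, the "override afterwards" variant suggested here could look roughly like the sketch below. `_PROVIDER_ENV_VAR_OVERRIDES` appears in the PR diff; the `env_key_name` wrapper is a hypothetical helper used only to illustrate the lookup order, not code from the PR:

```python
# Sketch of the "keep the default, override afterwards" idea (not merged code).
_PROVIDER_ENV_VAR_OVERRIDES = {"TOGETHER_AI_API_KEY": "TOGETHERAI_API_KEY"}

def env_key_name(provider: str) -> str:
    key_name = f"{provider.upper()}_API_KEY"  # default pattern, unchanged
    # Override only the single special case, after the default is built.
    return _PROVIDER_ENV_VAR_OVERRIDES.get(key_name, key_name)
```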
WassimAkkacha replied:
Thanks for the feedback on readability, happy to refactor that.
On the frontend rename, I just want to make sure I understand correctly: are you suggesting renaming the provider kind from `together_ai` to `togetherai` across the stack (`StandardProviderKind`, frontend maps, API enums) so the default `upper()` pattern works? Or something else? I want to make sure we're aligned before I rework the approach.
mmabrouk replied:
Yes, exactly. I was suggesting that the right solution is not to do this, but instead to align the provider name with LiteLLM's across the stack, so that we don't have special-case logic for it anywhere.
WassimAkkacha replied:
Got it, I'll rename "together_ai" to "togetherai" across the stack.
Force-pushed from a3a1c81 to 9e7e6ca
Force-pushed from 9e7e6ca to 23314e5
```python
# Fix inconsistent API naming - normalize 'together_ai' to 'togetherai'
if data.get("kind", "") == "together_ai":
    data["kind"] = "togetherai"

if not isinstance(data, dict):
    raise ValueError(
        "The provided request secret dto is not a valid type for StandardProviderDTO"
    )
```
🟡 Normalization calls `data.get()` before verifying `data` is a dict in the `PROVIDER_KEY` branch
The new normalization code at line 76 calls `data.get("kind", "")`, which assumes `data` is a dict, but the `isinstance(data, dict)` guard doesn't happen until line 79. If a caller passes a non-dict, non-BaseModel value for `data` (e.g. a string or a list), the `.get()` call will raise an `AttributeError` instead of the intended clean `ValueError` from the isinstance check.
Root Cause and Impact
In `mode="before"` Pydantic validators, values haven't been type-checked yet, so `data` could be any raw input. On line 69, `data = values.get("data", {})` defaults to `{}`, and `BaseModel` instances are converted on lines 70-72, so `data` is usually a dict. However, if someone passes e.g. `{"kind": "provider_key", "data": "not_a_dict"}`, the code reaches:
```python
if data.get("kind", "") == "together_ai":  # AttributeError: 'str' object has no attribute 'get'
    data["kind"] = "togetherai"
if not isinstance(data, dict):  # never reached
    raise ValueError("...")
```

The fix should move the normalization block after the `isinstance(data, dict)` check (or swap their order), so the validation error is raised cleanly. Note: the `CUSTOM_PROVIDER` branch at lines 93-97 has the same pre-existing issue, but for `PROVIDER_KEY` this ordering problem is newly introduced by this PR.
Impact: Malformed API requests get an unhandled AttributeError instead of a descriptive ValueError, resulting in a 500 Internal Server Error rather than a 422 Validation Error.
(Refers to lines 75-82)
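The suggested reordering can be sketched as a standalone function. `normalize_provider_data` is a hypothetical helper name for illustration; in the PR the logic lives inside a Pydantic `mode="before"` validator:

```python
def normalize_provider_data(data):
    # Guard first: malformed input now fails with the intended ValueError
    # rather than an AttributeError from calling .get() on a non-dict.
    if not isinstance(data, dict):
        raise ValueError(
            "The provided request secret dto is not a valid type for StandardProviderDTO"
        )
    # Backward-compat normalization runs only after the shape is verified.
    if data.get("kind", "") == "together_ai":
        data["kind"] = "togetherai"
    return data
```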
Summary
Renamed the provider kind from `together_ai` to `togetherai` across the full stack (API, SDK, frontend) to align with LiteLLM's naming convention.
The vault middleware uses `f"{provider.upper()}_API_KEY"` to build env var names; with `togetherai` this now correctly produces `TOGETHERAI_API_KEY`.
Added a backward-compat normalizer in both the `PROVIDER_KEY` and `CUSTOM_PROVIDER` branches of the `SecretDTO` validator to handle any existing DB records with the old `together_ai` value.
LiteLLM model strings (`together_ai/...`) are unchanged; only the provider kind identifier was renamed. No override dict or special-case logic needed.
Closes #3659 (Together AI model run fails with `InvalidSecretsV0Error`)
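The env var derivation described above can be checked directly. The f-string pattern is quoted from the PR; the `env_var_for` helper name is illustrative:

```python
def env_var_for(provider_kind: str) -> str:
    # Pattern the vault middleware uses to build the env var name.
    return f"{provider_kind.upper()}_API_KEY"

# Old kind produced a name LiteLLM does not recognize:
print(env_var_for("together_ai"))  # TOGETHER_AI_API_KEY
# Renamed kind yields the LiteLLM-compatible name:
print(env_var_for("togetherai"))   # TOGETHERAI_API_KEY
```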
Note for maintainers
`sdk/agenta/client/backend/types/standard_provider_kind.py` and `custom_provider_kind.py` are auto-generated by Fern; the Fern API definition should be updated to use `togetherai` before the next `fern generate` run.
Test plan
- Existing records with `together_ai` are normalized on read via the `SecretDTO` validator (both `PROVIDER_KEY` and `CUSTOM_PROVIDER` paths).