
[#3659] fix(sdk): correct Together AI env var name lookup in vault middleware#3745

Open
WassimAkkacha wants to merge 1 commit into Agenta-AI:main from WassimAkkacha:fix/together-ai-env-var-lookup

Conversation


@WassimAkkacha WassimAkkacha commented Feb 12, 2026

Summary

  • Renamed provider kind from together_ai to togetherai across the full stack (API, SDK, frontend) to align with
    LiteLLM's naming convention.

  • The vault middleware uses f"{provider.upper()}_API_KEY" to build env var names — with togetherai this now correctly produces TOGETHERAI_API_KEY.

  • Added backward-compat normalizer in both PROVIDER_KEY and CUSTOM_PROVIDER branches of SecretDTO validator to handle any existing DB records with the old together_ai value.

  • LiteLLM model strings (together_ai/...) are unchanged — only the provider kind identifier was renamed.

  • No override dict or special-case logic needed.

    Closes #3659: Together AI model run fails with InvalidSecretsV0Error
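
The env var derivation described above can be sketched as follows (a minimal illustration; the real middleware builds the name inside a loop over provider kinds, and the surrounding context is simplified here):

```python
def api_key_env_var(provider_kind: str) -> str:
    """Build the env var name the vault middleware looks up for a provider kind."""
    return f"{provider_kind.upper()}_API_KEY"

# Old kind "together_ai" produced TOGETHER_AI_API_KEY; the renamed kind
# "togetherai" yields TOGETHERAI_API_KEY, the name expected per the summary above.
print(api_key_env_var("together_ai"))  # TOGETHER_AI_API_KEY (old)
print(api_key_env_var("togetherai"))   # TOGETHERAI_API_KEY (new)
```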

Note for maintainers

  • sdk/agenta/client/backend/types/standard_provider_kind.py and custom_provider_kind.py are auto-generated by Fern; the Fern API definition should be updated to use togetherai before the next fern generate run.

Test plan

  • Verified locally: Together AI model now receives the API key.
  • Backward compat: old DB records with together_ai are normalized on read via the SecretDTO validator (both PROVIDER_KEY and CUSTOM_PROVIDER paths).
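
The backward-compat normalization tested above can be sketched with a hypothetical stand-in model (this is not the real SecretDTO, which dispatches on PROVIDER_KEY vs. CUSTOM_PROVIDER; only the normalize-on-read idea is shown):

```python
from pydantic import BaseModel, model_validator


class ProviderSecretSketch(BaseModel):
    """Hypothetical stand-in for the secret payload, showing only the normalizer."""

    kind: str
    key: str

    @model_validator(mode="before")
    @classmethod
    def _normalize_kind(cls, data):
        # Old DB records may still carry "together_ai"; rewrite it on read
        # so the rest of the stack only ever sees "togetherai".
        if isinstance(data, dict) and data.get("kind", "") == "together_ai":
            data = {**data, "kind": "togetherai"}
        return data


# Usage: an old record is normalized transparently.
old = ProviderSecretSketch.model_validate({"kind": "together_ai", "key": "sk-..."})
print(old.kind)  # togetherai
```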


@dosubot dosubot bot added the size:XS This PR changes 0-9 lines, ignoring generated files. label Feb 12, 2026

vercel bot commented Feb 12, 2026

@WassimAkkacha is attempting to deploy a commit to the agenta projects Team on Vercel.

A member of the Team first needs to authorize it.


CLAassistant commented Feb 12, 2026

CLA assistant check
All committers have signed the CLA.

@dosubot dosubot bot added bug Something isn't working SDK labels Feb 12, 2026

@devin-ai-integration devin-ai-integration bot left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.

View in Devin Review to see 3 additional findings.



@mmabrouk mmabrouk left a comment


Thanks @WassimAkkacha for the PR!
Please see the comments

Please also don't forget to sign the CLA to be able to merge your PR

Thanks again for your contribution!

for provider_kind in _PROVIDER_KINDS:
    provider = provider_kind
    key_name = f"{provider.upper()}_API_KEY"
    key_name = _PROVIDER_ENV_VAR_OVERRIDES.get(

I would not do it like this; the readability suffers.

Instead I'd keep key_name as is and then override it afterwards (TOGETHER_AI_API_KEY -> TOGETHERAI_API_KEY).

But in any case, looking at this solution, I think it would be much better to do the fix from the frontend side instead: rename the provider so that it fits LiteLLM, rather than adding complexity here.
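
The "override afterwards" variant the reviewer describes could look roughly like this (a hypothetical sketch, with an assumed remap table name; not code from the repo):

```python
# Hypothetical sketch of deriving the default env var name first,
# then remapping the known exception afterwards.
_ENV_VAR_REMAP = {"TOGETHER_AI_API_KEY": "TOGETHERAI_API_KEY"}  # assumed name


def resolve_key_name(provider: str) -> str:
    key_name = f"{provider.upper()}_API_KEY"
    return _ENV_VAR_REMAP.get(key_name, key_name)
```

Either way, the special case lives somewhere in the middleware, which is why the rename across the stack was preferred in the end.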


Thanks for the feedback on readability, happy to refactor that.
On the frontend rename, I just want to make sure I understand correctly. Are you suggesting renaming the provider kind from together_ai to togetherai across the stack (StandardProviderKind, frontend maps, API enums) so the default upper() pattern works, or something else? I want to make sure we're aligned before I rework the approach.


Yes, exactly. I was suggesting that the right solution is not to do this, but instead to align the provider name with LiteLLM's across the stack, so that we don't have special-case logic for it anywhere.


Got it, I'll rename "together_ai" to "togetherai" across the stack.

WassimAkkacha force-pushed the fix/together-ai-env-var-lookup branch from a3a1c81 to 9e7e6ca on February 18, 2026 at 08:44
@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. and removed size:XS This PR changes 0-9 lines, ignoring generated files. labels Feb 18, 2026
WassimAkkacha force-pushed the fix/together-ai-env-var-lookup branch from 9e7e6ca to 23314e5 on February 18, 2026 at 10:02

@devin-ai-integration devin-ai-integration bot left a comment


Devin Review found 1 potential issue.

View 4 additional findings in Devin Review.


Comment on lines +75 to 81
# Fix inconsistent API naming - normalize 'together_ai' to 'togetherai'
if data.get("kind", "") == "together_ai":
    data["kind"] = "togetherai"

if not isinstance(data, dict):
    raise ValueError(
        "The provided request secret dto is not a valid type for StandardProviderDTO"

🟡 Normalization calls data.get() before verifying data is a dict in the PROVIDER_KEY branch

The new normalization code at line 76 calls data.get("kind", "") which assumes data is a dict, but the isinstance(data, dict) guard doesn't happen until line 79. If a caller passes a non-dict, non-BaseModel value for data (e.g. a string or list), the .get() call will raise an AttributeError instead of the intended clean ValueError from the isinstance check.

Root Cause and Impact

In mode="before" Pydantic validators, values haven't been type-checked yet so data could be any raw input. On line 69, data = values.get("data", {}) defaults to {}, and BaseModel instances are converted on lines 70-72, so data is usually a dict. However, if someone passes e.g. {"kind": "provider_key", "data": "not_a_dict"}, the code reaches:

if data.get("kind", "") == "together_ai":  # AttributeError: 'str' has no attribute 'get'
    data["kind"] = "togetherai"

if not isinstance(data, dict):  # never reached
    raise ValueError("...")

The fix should move the normalization block after the isinstance(data, dict) check (or swap their order), so the validation error is raised cleanly. Note: the CUSTOM_PROVIDER branch at lines 93-97 has this same pre-existing issue, but for PROVIDER_KEY this ordering problem is newly introduced by this PR.

Impact: Malformed API requests get an unhandled AttributeError instead of a descriptive ValueError, resulting in a 500 Internal Server Error rather than a 422 Validation Error.

(Refers to lines 75-82)


Labels

bug Something isn't working SDK size:M This PR changes 30-99 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Together AI model run fails with InvalidSecretsV0Error

3 participants