This repository was archived by the owner on Sep 30, 2024. It is now read-only.
Return model IDs from GraphQL, not model Names (#64307)
::sigh:: the problem here is super in the weeds, but ultimately this
fixes a problem that arises when using AWS Bedrock with Sourcegraph
instances that use the older-style "completions" config.
## The problem
AWS Bedrock has some LLM model names that contain a colon, e.g.
`anthropic.claude-3-opus-20240229-v1:0`. Cody clients connecting to
Sourcegraph instances using the older style "completions" config will
obtain the available LLM models via GraphQL.
So the Cody client would see that the chat model is
`anthropic.claude-3-opus-20240229-v1:0`.
However, under the hood, the Sourcegraph instance converts the site
config into the newer `modelconfig` format. And during that conversion,
we use a _different value_ for the **model ID** than what is in the site
config. (The **model name** is what is sent to the LLM API, and is
passed through unmodified. The **model ID** is a stable, unique
identifier, sanitized so that it adheres to naming rules.)
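The name-vs-ID distinction can be pictured with a small sketch. This is a hypothetical `sanitizeModelID` function, not the actual Sourcegraph `modelconfig` code; the real sanitization rules may differ, but the idea is the same: non-URL-safe characters such as `:` are replaced in the ID while the name stays verbatim.

```go
package main

import (
	"fmt"
	"regexp"
)

// unsafeChars matches characters that are assumed (hypothetically) to be
// disallowed in a model ID, such as the ':' in Bedrock model names.
var unsafeChars = regexp.MustCompile(`[^a-zA-Z0-9._-]`)

// sanitizeModelID derives a stable, URL-safe model ID from a model name.
// The model name itself is never modified; it is what gets sent to the
// LLM provider's API.
func sanitizeModelID(modelName string) string {
	return unsafeChars.ReplaceAllString(modelName, "_")
}

func main() {
	name := "anthropic.claude-3-opus-20240229-v1:0"
	fmt.Println(sanitizeModelID(name)) // anthropic.claude-3-opus-20240229-v1_0
}
```

For most model names (no colons, no exotic characters) the sanitized ID is byte-for-byte identical to the name, which is why the mismatch only surfaces with Bedrock-style names.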
Because of this, we have a problem.
When the Cody client makes a request to the HTTP completions API with
the model name `anthropic.claude-3-opus-20240229-v1:0` or
`anthropic/anthropic.claude-3-opus-20240229-v1:0`, it fails, because
there is no model with ID `...v1:0`. (We only have the sanitized
version, `...v1_0`.)
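The failure mode boils down to a lookup miss. This toy example assumes (hypothetically) that the server indexes its models by sanitized ID while the client sends the unsanitized name it got from GraphQL:

```go
package main

import "fmt"

func main() {
	// Hypothetical server-side index, keyed by sanitized model IDs.
	modelsByID := map[string]bool{
		"anthropic.claude-3-opus-20240229-v1_0": true, // colon replaced with '_'
	}

	// The client echoes back the unsanitized model name from GraphQL.
	requested := "anthropic.claude-3-opus-20240229-v1:0"
	if !modelsByID[requested] {
		fmt.Println("model not found:", requested) // this is the bug
	}
}
```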
## The fix
There were a few ways we could fix this, but this change goes with just
having the GraphQL component return the model ID instead of the model
name, so that when the Cody client passes that model ID to the
completions API, everything works as it should.
And, practically speaking, in 99.9% of cases the model name and model
ID will be identical. We only strip out colons and other non-URL-safe
characters, which are rarely used in model names.
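The shape of the fix can be sketched as follows. This is a minimal stand-in for the real GraphQL resolver (the actual Sourcegraph code is more involved); it only shows the one-line decision the PR describes: return the ID, not the name.

```go
package main

import "fmt"

// Model mirrors the distinction the PR draws: Name is sent verbatim to
// the LLM provider; ID is the sanitized, stable identifier.
type Model struct {
	ID   string // e.g. "anthropic.claude-3-opus-20240229-v1_0"
	Name string // e.g. "anthropic.claude-3-opus-20240229-v1:0"
}

// chatModelField is a hypothetical stand-in for the GraphQL field that
// advertises the chat model to Cody clients.
func chatModelField(m Model) string {
	// Before the fix this returned m.Name, which broke whenever
	// Name != ID. Returning the ID lets the value round-trip through
	// the completions API.
	return m.ID
}

func main() {
	m := Model{
		ID:   "anthropic.claude-3-opus-20240229-v1_0",
		Name: "anthropic.claude-3-opus-20240229-v1:0",
	}
	fmt.Println(chatModelField(m))
}
```

Since name and ID coincide for standard model names, this change is invisible to almost all existing clients.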
## Potential bugs
With this fix, however, there is a specific combination of { client,
server, model name } where things could in theory break.
Specifically:
Client | Server | Model name | Works |
--- | --- | --- | --- |
unaware-of-modelconfig | not-using-modelconfig | standard | 🟢 [1] |
aware-of-modelconfig | not-using-modelconfig | standard | 🟢 [1] |
unaware-of-modelconfig | using-modelconfig | standard | 🟢 [1] |
aware-of-modelconfig | using-modelconfig | standard | 🟢 [3] |
unaware-of-modelconfig | not-using-modelconfig | non-standard | 🔴 [2] |
aware-of-modelconfig | not-using-modelconfig | non-standard | 🔴 [2] |
unaware-of-modelconfig | using-modelconfig | non-standard | 🔴 [2] |
aware-of-modelconfig | using-modelconfig | non-standard | 🟢 [3] |
1. If the model name is something that doesn't require sanitization,
there is no problem. The model ID will be the same as the model name,
and things will work like they do today.
2. If the model name gets sanitized, then if the Cody client were to
make a decision based on that exact model name, it wouldn't work,
because it would receive the sanitized name rather than the real one. As
long as the Cody client only passes that model name on to the
Sourcegraph backend, which recognizes the sanitized model name / ID,
all is well.
3. If the client and server are new and use modelconfig, this
shouldn't be a problem, because the client fetches the Sourcegraph
instance's supported models through a different API and, within the
client, natively refers to the model ID instead of the model name.
Fixes
[PRIME-464](https://linear.app/sourcegraph/issue/PRIME-464/aws-bedrock-x-completions-config-does-not-work-if-model-name-has-a).
## Test plan
Added some unit tests.
## Changelog
NA