Commit b26747c (parent 00a75c2)

Support multiple model configs with different payloads using same model name via modelName config.

File tree: 9 files changed (+93 −83 lines)


CHANGELOG.md (2 additions & 0 deletions)

@@ -2,6 +2,8 @@
 
 ## Unreleased
 
+- Support multiple model configs with different payloads using same model name via `modelName` config. (Ex: gpt-5 and gpt-5-high but both use gpt-5)
+
 ## 0.68.1
 
 - Add `anthropic/haiku-4.5` model by default.

docs/configuration.md (1 addition & 0 deletions)

@@ -382,6 +382,7 @@ To configure, add your OTLP collector config via `:otlp` map following [otlp aut
     keyRc?: string; // credential file lookup in format [login@]machine[:port]
     completionUrlRelativePath?: string;
     models: {[key: string]: {
+      modelName?: string;
       extraPayload?: {[key: string]: any}
     }};
   }};

docs/models.md (25 additions & 28 deletions)

@@ -62,17 +62,18 @@ You just need to add your provider to `providers` and make sure add the required
 
 Schema:
 
-| Option                        | Type   | Description                                                                                               | Required |
-|-------------------------------|--------|-----------------------------------------------------------------------------------------------------------|----------|
-| `api`                         | string | The API schema to use (`"openai-responses"`, `"openai-chat"`, or `"anthropic"`)                           | Yes      |
-| `urlEnv`                      | string | Environment variable name containing the API URL                                                          | No*      |
-| `url`                         | string | Direct API URL (use instead of `urlEnv`)                                                                  | No*      |
-| `keyEnv`                      | string | Environment variable name containing the API key                                                          | No*      |
-| `keyRc`                       | string | Lookup specification to read the API key from Unix RC [credential files](#credential-file-authentication) | No*      |
-| `key`                         | string | Direct API key (use instead of `keyEnv`)                                                                  | No*      |
-| `completionUrlRelativePath`   | string | Optional override for the completion endpoint path (see defaults below and examples like Azure)           | No       |
-| `models`                      | map    | Key: model name, value: its config                                                                        | Yes      |
-| `models <model> extraPayload` | map    | Extra payload sent in body to LLM                                                                         | No       |
+| Option                        | Type   | Description                                                                                                  | Required |
+|-------------------------------|--------|--------------------------------------------------------------------------------------------------------------|----------|
+| `api`                         | string | The API schema to use (`"openai-responses"`, `"openai-chat"`, or `"anthropic"`)                              | Yes      |
+| `urlEnv`                      | string | Environment variable name containing the API URL                                                             | No*      |
+| `url`                         | string | Direct API URL (use instead of `urlEnv`)                                                                     | No*      |
+| `keyEnv`                      | string | Environment variable name containing the API key                                                             | No*      |
+| `keyRc`                       | string | Lookup specification to read the API key from Unix RC [credential files](#credential-file-authentication)    | No*      |
+| `key`                         | string | Direct API key (use instead of `keyEnv`)                                                                     | No*      |
+| `completionUrlRelativePath`   | string | Optional override for the completion endpoint path (see defaults below and examples like Azure)              | No       |
+| `models`                      | map    | Key: model name, value: its config                                                                           | Yes      |
+| `models <model> extraPayload` | map    | Extra payload sent in body to LLM                                                                            | No       |
+| `models <model> modelName`    | string | Override model name, useful to have multiple models with different configs and names that use same LLM model | No       |
 
 _* url and key will be search as env `<provider>_API_URL` / `<provider>_API_KEY`, but require config or to be found to work._
 

@@ -305,25 +306,21 @@ Notes:
 
 === "Same model with different settings"
 
-    For now, you can create different providers with same model names to achieve that:
+    Using `modelName`, you can configure multiple model names using same model with different settings:
 
     ```javascript
     {
-      "providers": {
-        "openai": {
-          "api": "openai-responses",
-          "models": { "gpt-5": {} }
-        },
-        "openai-high": {
-          "api": "openai-responses",
-          "url": "https://api.openai.com",
-          "keyEnv": "OPENAI_API_KEY",
-          "models": {
-            "gpt-5": {
-              "extraPayload": { "reasoning": { "effort": "high" } }
-            }
-          }
-        }
-      }
+      "providers": {
+        "openai": {
+          "api": "openai-responses",
+          "models": {
+            "gpt-5": {},
+            "gpt-5-high": {
+              "modelName": "gpt-5",
+              "extraPayload": { "reasoning": {"effort": "high"}}
+            }
+          }
+        }
+      }
     }
     ```
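The example above boils down to a simple lookup: the map key (`gpt-5-high`) is only the name the user selects, while `modelName` (when present) is what goes to the provider. A simplified Python model of that resolution, not ECA's actual code (names mirror the JSON config keys):

```python
# Provider config as in the docs example above.
providers = {
    "openai": {
        "api": "openai-responses",
        "models": {
            "gpt-5": {},
            "gpt-5-high": {
                "modelName": "gpt-5",
                "extraPayload": {"reasoning": {"effort": "high"}},
            },
        },
    }
}

def resolve(provider, model_key):
    """Return the wire model name and extra payload for a configured model key."""
    cfg = providers[provider]["models"][model_key]
    return cfg.get("modelName", model_key), cfg.get("extraPayload", {})

# Both config keys hit the same underlying LLM model, with different payloads.
assert resolve("openai", "gpt-5") == ("gpt-5", {})
assert resolve("openai", "gpt-5-high") == ("gpt-5", {"reasoning": {"effort": "high"}})
```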

src/eca/llm_api.clj (11 additions & 7 deletions)

@@ -66,6 +66,9 @@
   (assoc (select-keys tool [:name :description :parameters])
          :type "function"))
 
+(defn ^:private real-model-name [model model-capabilities]
+  (or (:model-name model-capabilities) model))
+
 (defn complete!
   [{:keys [provider model model-capabilities instructions user-messages config on-first-response-received
            on-message-received on-error on-prepare-tool-call on-tools-called on-reason on-usage-updated

@@ -95,6 +98,7 @@
           (when-not (:silent? (ex-data exception))
             (logger/error args)
             (on-error args)))
+        real-model (real-model-name model model-capabilities)
        tools (when (:tools model-capabilities)
                (mapv tool->llm-tool tools))
        reason? (:reason? model-capabilities)

@@ -118,7 +122,7 @@
    (cond
      (= "openai" provider)
      (llm-providers.openai/completion!
-       {:model model
+       {:model real-model
        :instructions instructions
        :user-messages user-messages
        :max-output-tokens max-output-tokens

@@ -135,7 +139,7 @@
 
      (= "anthropic" provider)
      (llm-providers.anthropic/completion!
-       {:model model
+       {:model real-model
        :instructions instructions
        :user-messages user-messages
        :max-output-tokens max-output-tokens

@@ -152,7 +156,7 @@
 
      (= "github-copilot" provider)
      (llm-providers.openai-chat/completion!
-       {:model model
+       {:model real-model
        :instructions instructions
        :user-messages user-messages
        :max-output-tokens max-output-tokens

@@ -173,7 +177,7 @@
 
      (= "google" provider)
      (llm-providers.openai-chat/completion!
-       {:model model
+       {:model real-model
        :instructions instructions
        :user-messages user-messages
        :max-output-tokens max-output-tokens

@@ -196,7 +200,7 @@
        {:api-url api-url
         :reason? (:reason? model-capabilities)
         :supports-image? supports-image?
-        :model model
+        :model real-model
         :instructions instructions
         :user-messages user-messages
         :past-messages past-messages

@@ -213,7 +217,7 @@
                  (on-error-wrapper {:message (format "Unknown model %s for provider %s" (:api provider-config) provider)}))
            url-relative-path (:completionUrlRelativePath provider-config)]
        (provider-fn
-        {:model model
+        {:model real-model
         :instructions instructions
         :user-messages user-messages
         :max-output-tokens max-output-tokens

@@ -228,7 +232,7 @@
         callbacks))
 
      :else
-      (on-error-wrapper {:message (format "ECA Unsupported model %s for provider %s" model provider)}))
+      (on-error-wrapper {:message (format "ECA Unsupported model %s for provider %s" real-model provider)}))
    (catch Exception e
      (on-error-wrapper {:exception e})))))
 
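The new `real-model-name` helper is a plain fallback: use the `:model-name` capability when set, otherwise the configured key itself. The same logic as a tiny Python sketch (function and key names are illustrative, not part of ECA's API):

```python
def real_model_name(model, model_capabilities):
    """Prefer the configured override; fall back to the config key itself."""
    return model_capabilities.get("model-name") or model

# "gpt-5-high" is only the config key; the model sent on the wire stays "gpt-5".
assert real_model_name("gpt-5-high", {"model-name": "gpt-5"}) == "gpt-5"
# Without an override, the key doubles as the wire model name.
assert real_model_name("gpt-5", {}) == "gpt-5"
```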

src/eca/llm_providers/anthropic.clj (13 additions & 12 deletions)

@@ -7,7 +7,7 @@
    [eca.features.login :as f.login]
    [eca.llm-util :as llm-util]
    [eca.logger :as logger]
-   [eca.shared :as shared :refer [assoc-some multi-str]]
+   [eca.shared :as shared :refer [assoc-some deep-merge multi-str]]
    [hato.client :as http]
    [ring.util.codec :as ring.util]))
 

@@ -155,17 +155,18 @@
    {:keys [on-message-received on-error on-reason on-prepare-tool-call on-tools-called on-usage-updated]}]
   (let [messages (concat (normalize-messages past-messages supports-image?)
                          (normalize-messages (fix-non-thinking-assistant-messages user-messages) supports-image?))
-        body (merge (assoc-some
-                     {:model model
-                      :messages (add-cache-to-last-message messages)
-                      :max_tokens (or max-output-tokens 32000)
-                      :stream true
-                      :tools (->tools tools web-search)
-                      :system [{:type "text" :text "You are Claude Code, Anthropic's official CLI for Claude."}
-                               {:type "text" :text instructions :cache_control {:type "ephemeral"}}]}
-                     :thinking (when reason?
-                                 {:type "enabled" :budget_tokens 2048}))
-                    extra-payload)
+        body (deep-merge
+              (assoc-some
+               {:model model
+                :messages (add-cache-to-last-message messages)
+                :max_tokens (or max-output-tokens 32000)
+                :stream true
+                :tools (->tools tools web-search)
+                :system [{:type "text" :text "You are Claude Code, Anthropic's official CLI for Claude."}
+                         {:type "text" :text instructions :cache_control {:type "ephemeral"}}]}
+               :thinking (when reason?
+                           {:type "enabled" :budget_tokens 2048}))
+              extra-payload)
 
        on-response-fn
        (fn handle-response [event data content-block* reason-id]
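The switch from `merge` to `deep-merge` matters for nested payloads: a shallow merge would replace the whole `:thinking` map when `extraPayload` supplies only part of it. A rough Python equivalent of a recursive deep merge (ECA's actual `eca.shared/deep-merge` may handle edge cases differently; the model name below is only an example):

```python
def deep_merge(base, override):
    """Recursively merge dicts; the override wins on non-dict leaves."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

body = {"model": "some-model", "thinking": {"type": "enabled", "budget_tokens": 2048}}
extra = {"thinking": {"budget_tokens": 8192}}
# Deep merge keeps the sibling "type" key that a shallow dict update would drop.
assert deep_merge(body, extra)["thinking"] == {"type": "enabled", "budget_tokens": 8192}
```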

src/eca/llm_providers/ollama.clj (8 additions & 6 deletions)

@@ -4,6 +4,7 @@
    [clojure.java.io :as io]
    [eca.llm-util :as llm-util]
    [eca.logger :as logger]
+   [eca.shared :refer [deep-merge]]
    [hato.client :as http]))
 
 (set! *warn-on-reflection* true)

@@ -103,12 +104,13 @@
   (let [messages (concat
                   (normalize-messages (concat [{:role "system" :content instructions}] past-messages))
                   (normalize-messages user-messages))
-        body (merge {:model model
-                     :messages messages
-                     :think reason?
-                     :tools (->tools tools)
-                     :stream true}
-                    extra-payload)
+        body (deep-merge
+              {:model model
+               :messages messages
+               :think reason?
+               :tools (->tools tools)
+               :stream true}
+              extra-payload)
        url (format chat-url api-url)
        tool-calls* (atom {})
        on-response-fn (fn handle-response [rid _event data reasoning?* reason-id]

src/eca/llm_providers/openai.clj (19 additions & 18 deletions)

@@ -8,7 +8,7 @@
    [eca.llm-util :as llm-util]
    [eca.logger :as logger]
    [eca.oauth :as oauth]
-   [eca.shared :refer [assoc-some multi-str]]
+   [eca.shared :refer [assoc-some deep-merge multi-str]]
    [hato.client :as http]
    [ring.util.codec :as ring.util]))
 

@@ -106,24 +106,25 @@
                        (normalize-messages user-messages supports-image?))
        tools (cond-> tools
                web-search (conj {:type "web_search_preview"}))
-        body (merge {:model model
-                     :input input
-                     :prompt_cache_key (str (System/getProperty "user.name") "@ECA")
-                     :parallel_tool_calls true
-                     :instructions (if (= :auth/oauth auth-type)
-                                     (str "You are Codex." instructions)
-                                     instructions)
-                     :tools tools
-                     :include (when reason?
-                                ["reasoning.encrypted_content"])
-                     :store false
-                     :reasoning (when reason?
-                                  {:effort "medium"
-                                   :summary "detailed"})
-                     :stream true
+        body (deep-merge
+              {:model model
+               :input input
+               :prompt_cache_key (str (System/getProperty "user.name") "@ECA")
+               :parallel_tool_calls true
+               :instructions (if (= :auth/oauth auth-type)
+                               (str "You are Codex." instructions)
+                               instructions)
+               :tools tools
+               :include (when reason?
+                          ["reasoning.encrypted_content"])
+               :store false
+               :reasoning (when reason?
+                            {:effort "medium"
+                             :summary "detailed"})
+               :stream true
               ;; :verbosity "medium"
-                     :max_output_tokens max-output-tokens}
-                    extra-payload)
+               :max_output_tokens max-output-tokens}
+              extra-payload)
        tool-call-by-item-id* (atom {})
        on-response-fn
        (fn handle-response [event data]
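Concretely, with a deep merge the `gpt-5-high` example's `extraPayload` of `{"reasoning": {"effort": "high"}}` overrides only the default effort while keeping the default summary; a shallow merge would have discarded it. A hedged sketch, assuming a recursive dict merge like the one in `eca.shared`:

```python
def deep_merge(base, override):
    """Recursively merge dicts; the override wins on non-dict leaves."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Mirrors the defaults built above for the OpenAI responses body.
defaults = {"reasoning": {"effort": "medium", "summary": "detailed"}, "stream": True}
extra_payload = {"reasoning": {"effort": "high"}}

merged = deep_merge(defaults, extra_payload)
assert merged["reasoning"] == {"effort": "high", "summary": "detailed"}  # summary preserved
assert merged["stream"] is True
```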

src/eca/llm_providers/openai_chat.clj (11 additions & 10 deletions)

@@ -5,7 +5,7 @@
    [clojure.string :as string]
    [eca.llm-util :as llm-util]
    [eca.logger :as logger]
-   [eca.shared :refer [assoc-some]]
+   [eca.shared :refer [assoc-some deep-merge]]
    [hato.client :as http]))
 
 (set! *warn-on-reflection* true)

@@ -280,15 +280,16 @@
                    (normalize-messages past-messages supports-image? thinking-start-tag thinking-end-tag)
                    (normalize-messages user-messages supports-image? thinking-start-tag thinking-end-tag)))
 
-        body (merge (assoc-some
-                     {:model model
-                      :messages messages
-                      :temperature temperature
-                      :stream true
-                      :parallel_tool_calls parallel-tool-calls?
-                      :max_completion_tokens 32000}
-                     :tools (when (seq tools) (->tools tools)))
-                    extra-payload)
+        body (deep-merge
+              (assoc-some
+               {:model model
+                :messages messages
+                :temperature temperature
+                :stream true
+                :parallel_tool_calls parallel-tool-calls?
+                :max_completion_tokens 32000}
+               :tools (when (seq tools) (->tools tools)))
+              extra-payload)
 
        ;; Atom to accumulate tool call data from streaming chunks.
        ;; OpenAI streams tool call arguments across multiple chunks, so we need to

src/eca/models.clj (3 additions & 2 deletions)

@@ -76,7 +76,7 @@
    (fn [p [provider provider-config]]
      (merge p
             (reduce
-             (fn [m [model _model-config]]
+             (fn [m [model model-config]]
               (let [full-model (str provider "/" model)
                     model-capabilities (merge
                                         (or (get all-models full-model)

@@ -86,7 +86,8 @@
                                                  (shared/normalize-model-name (second (string/split % #"/" 2))))
                                                (keys all-models)))]
                                          (get all-models found-full-model))
-                                        {:tools true
+                                        {:model-name (or (:modelName model-config) model)
+                                         :tools true
                                          :reason? true
                                          :web-search true}))]
                 (assoc m full-model model-capabilities)))
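This is where the `modelName` config enters the capability map: the previously ignored `_model-config` is now read, and `:model-name` defaults to the config key when no override is set. A simplified Python sketch of that defaulting (the surrounding capability keys mirror the fallback map above; names are illustrative):

```python
def model_capabilities(model_key, model_config):
    """Build the fallback capability map, mirroring models.clj's defaults."""
    return {
        # The config key is the display name; modelName (if set) goes on the wire.
        "model-name": model_config.get("modelName", model_key),
        "tools": True,
        "reason?": True,
        "web-search": True,
    }

caps = model_capabilities("gpt-5-high", {"modelName": "gpt-5"})
assert caps["model-name"] == "gpt-5"
assert model_capabilities("gpt-5", {})["model-name"] == "gpt-5"
```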
