Commit 422b8cb

Unify providers config
1 parent 5266ce0 commit 422b8cb

File tree: 15 files changed (+299, −287 lines)

15 files changed

+299
-287
lines changed

CHANGELOG.md

Lines changed: 1 addition & 0 deletions

@@ -4,6 +4,7 @@
 
 - Refactor config for better UX and understanding:
   - Move `models` to inside `providers`.
+  - Make `customProviders` compatible with `providers`: models must now be a map, not a list.
 
 ## 0.31.0
 

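The changelog's `customProviders` → `providers` unification can be illustrated with a small, hypothetical migration helper (not part of ECA; field names follow the docs in this commit) that folds an old list-style `customProviders` entry into the unified map-style `providers`:

```python
def unify_providers(config: dict) -> dict:
    """Fold old-style customProviders entries into the unified providers map."""
    providers = dict(config.get("providers", {}))
    for name, provider in config.get("customProviders", {}).items():
        provider = dict(provider)
        models = provider.get("models")
        if isinstance(models, list):
            # models must be a map now, not a list
            provider["models"] = {model: {} for model in models}
        providers[name] = provider
    unified = {k: v for k, v in config.items() if k != "customProviders"}
    unified["providers"] = providers
    return unified


old = {"customProviders": {"my-company": {"api": "openai-chat", "models": ["gpt-5"]}}}
print(unify_providers(old))
# {'providers': {'my-company': {'api': 'openai-chat', 'models': {'gpt-5': {}}}}}
```

The empty `{}` per model leaves room for per-model options such as `extraPayload`, matching the new schema.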
docs/configuration.md

Lines changed: 6 additions & 12 deletions

@@ -48,9 +48,9 @@ There are multiples ways to configure ECA:
 ECA_CONFIG='{"myConfig": "my_value"}' eca server
 ```
 
-## Models
+## Providers / Models
 
-For models configuration check the [dedicated models section](./models.md#adding-and-configuring-models).
+For providers and models configuration, check the [dedicated models section](./models.md#adding-and-configuring-models).
 
 ## MCP
 
@@ -154,8 +154,12 @@ There are 3 possible ways to configure rules following this order of priority:
 ```typescript
 interface Config {
   providers: {[key: string]: {
+    api?: 'openai-responses' | 'openai-chat' | 'anthropic';
     url?: string;
+    urlEnv?: string;
     key?: string; // when provider supports api key.
+    keyEnv?: string;
+    completionUrlRelativePath?: string;
     models: {[key: string]: {
       extraPayload?: {[key: string]: any}
     }};
@@ -180,15 +184,6 @@ There are 3 possible ways to configure rules following this order of priority:
     args?: string[];
     disabled?: boolean;
   }};
-  customProviders: {[key: string]: {
-    api: 'openai-responses' | 'openai-chat' | 'anthropic';
-    models: string[];
-    url?: string;
-    urlEnv?: string;
-    completionUrlRelativePath?: string;
-    key?: string;
-    keyEnv?: string;
-  }};
  chat?: {
    welcomeMessage: string;
  };
@@ -230,7 +225,6 @@ There are 3 possible ways to configure rules following this order of priority:
  },
  "mcpTimeoutSeconds" : 60,
  "mcpServers" : {},
- "customProviders": {},
  "chat" : {
    "welcomeMessage" : "Welcome to ECA!\n\nType '/' for commands\n\n"
  },
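As a sketch of what the unified schema implies for a custom (non built-in) provider entry, here is a hypothetical checker (not part of ECA) for the required fields:

```python
def check_custom_provider(cfg: dict) -> list:
    """Return a list of problems with a custom provider entry (empty if OK)."""
    problems = []
    if cfg.get("api") not in ("openai-responses", "openai-chat", "anthropic"):
        problems.append("api must be openai-responses, openai-chat, or anthropic")
    if "url" not in cfg and "urlEnv" not in cfg:
        problems.append("either url or urlEnv is required")
    if "key" not in cfg and "keyEnv" not in cfg:
        problems.append("either key or keyEnv is required")
    if not isinstance(cfg.get("models"), dict):
        problems.append("models must be a map of model name to model config")
    return problems


ok = {"api": "openai-chat", "url": "https://example.com", "key": "k", "models": {"m": {}}}
print(check_custom_provider(ok))  # []
```

Built-in providers ship with these fields pre-filled, which is why they are all optional in the TypeScript schema above.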

docs/models.md

Lines changed: 79 additions & 100 deletions

@@ -1,6 +1,8 @@
 # Models
 
-The models capabilities and configurations are retrieved from [models.dev](https://models.dev) API.
+All providers and models are configured under the `providers` config.
+
+Model capabilities and configurations are retrieved from the [models.dev](https://models.dev) API.
 
 ## Built-in providers and capabilities
 
@@ -11,78 +13,81 @@
 | Github Copilot |||| X |
 | Ollama local models ||| X | X |
 
-### Adding and Configuring Models
-
-#### Setting up your first model
 
-To start using ECA, you need to configure at least one model with your API key. Here's how to set up a model:
+### Built-in providers config
 
-1. **Choose your model**: Pick from [OpenAI](#openai), [Anthropic](#anthropic), or [Ollama](#ollama) models
-2. **Set your API key**: Create a configuration file with your credentials
-3. **Start using ECA**: The model will be available in your editor
+Built-in providers already ship with base `providers` configs, so you only need to change them to add models or set a key/url.
 
-#### Setting up API keys
+For more details, check the [config schema](./configuration.md#schema).
 
-Create a configuration file at `.eca/config.json` in your project root or at `~/.config/eca/config.json` globally:
+Example:
 
+`~/.config/eca/config.json`
 ```javascript
 {
   "providers": {
-    "openai": {"key": "your-openai-api-key-here"},
-    "anthropic": {"key": "your-anthropic-api-key-here"}
+    "openai": {
+      "key": "your-openai-key-here", // configuring a key
+      "models": {
+        "o1": {}, // adding models to a built-in provider
+        "o3": {
+          "extraPayload": { // adding to the payload sent to the LLM
+            "temperature": 0.5
+          }
+        }
+      }
+    }
   }
 }
 ```
 
-**Environment Variables**: You can also set API keys using environment variables:
+**Environment Variables**: You can also set API keys using environment variables following `"<PROVIDER>_API_KEY"`, for example:
+
 - `OPENAI_API_KEY` for OpenAI
 - `ANTHROPIC_API_KEY` for Anthropic
 
-#### Adding new models
+## Custom providers
 
-You can add new models or merge with existing ones in your configuration:
+ECA allows you to configure custom LLM providers that follow API schemas similar to OpenAI or Anthropic. This is useful when you want to use:
 
-```javascript
-{
-  "providers": {
-    "openai": {"key": "your-openai-api-key-here",
-               "models": {"o1": {}}}
-  }
-}
-```
+- Self-hosted LLM servers (like LiteLLM)
+- Custom company LLM endpoints
+- Additional cloud providers not natively supported
+
+You just need to add your provider to `providers` and make sure to set the required fields.
 
-#### Customizing model behavior
+Schema:
 
-You can customize model parameters like temperature, reasoning effort, etc.:
+| Option                        | Type   | Description                                                                      | Required |
+|-------------------------------|--------|----------------------------------------------------------------------------------|----------|
+| `api`                         | string | The API schema to use (`"openai-responses"`, `"openai-chat"`, or `"anthropic"`)  | Yes      |
+| `urlEnv`                      | string | Environment variable name containing the API URL                                | Yes*     |
+| `url`                         | string | Direct API URL (use instead of `urlEnv`)                                         | Yes*     |
+| `keyEnv`                      | string | Environment variable name containing the API key                                | Yes*     |
+| `key`                         | string | Direct API key (use instead of `keyEnv`)                                         | Yes*     |
+| `models`                      | map    | Key: model name, value: its config                                               | Yes      |
+| `models <model> extraPayload` | map    | Extra payload sent in the body to the LLM                                        | No       |
 
+Example:
+
+`~/.config/eca/config.json`
 ```javascript
 {
   "providers": {
-    "openai": {
-      "key": "your-openai-api-key-here",
-      "gpt-5": {
-        "extraPayload": {
-          "temperature": 0.7,
-          "reasoning_effort": "high",
-          "max_tokens": 4000
-        }
-      }
-    }
+    "my-company": {
+      "api": "openai-chat",
+      "urlEnv": "MY_COMPANY_API_URL", // or "url"
+      "keyEnv": "MY_COMPANY_API_KEY", // or "key"
+      "models": {
+        "gpt-5": {},
+        "deepseek-r1": {}
+      }
+    }
   }
 }
 ```
 
-This config will be merged with current default used by ECA.
-
-## Custom model providers
-
-ECA allows you to configure custom LLM providers that follow API schemas similar to OpenAI or Anthropic. This is useful when you want to use:
-
-- Self-hosted LLM servers (like LiteLLM)
-- Custom company LLM endpoints
-- Additional cloud providers not natively supported
-
-### API Types for Custom Providers
+### API Types
 
 When configuring custom providers, choose the appropriate API type:
 
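The `key`/`keyEnv` options and the `<PROVIDER>_API_KEY` convention above suggest a lookup roughly like the following sketch (the exact precedence order in ECA is an assumption here, not confirmed by the docs):

```python
import os


def resolve_api_key(provider_name: str, provider_cfg: dict):
    """Resolve an API key: explicit key, then keyEnv, then <PROVIDER>_API_KEY."""
    if provider_cfg.get("key"):
        return provider_cfg["key"]
    if provider_cfg.get("keyEnv"):
        return os.environ.get(provider_cfg["keyEnv"])
    return os.environ.get(provider_name.upper() + "_API_KEY")


os.environ["OPENAI_API_KEY"] = "sk-example"
print(resolve_api_key("openai", {}))                  # sk-example
print(resolve_api_key("openai", {"key": "explicit"}))  # explicit
```

`urlEnv` vs `url` would resolve the same way, just against the API URL instead of the key.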
@@ -98,44 +103,7 @@ When configuring custom providers, choose the appropriate API type:
 
 Most third-party providers use the `openai-chat` API for compatibility with existing tools and libraries.
 
-### Setting up a custom provider
-
-It's possible to configure ECA to be aware of custom LLM providers if they follow a API schema similar to currently supported ones (openai-responses, openai-chat or anthropic), example for a custom hosted litellm server:
-
-Example:
-
-`~/.config/eca/config.json`
-```javascript
-{
-  "customProviders": {
-    "my-company": {
-      "api": "openai-chat",
-      "urlEnv": "MY_COMPANY_API_URL", // or "url"
-      "keyEnv": "MY_COMPANY_API_KEY", // or "key"
-      "models": ["gpt-5", "deepseek-r1"]
-    }
-  }
-}
-```
-
-### Custom provider configuration options
-
-| Option | Type | Description | Required |
-|--------|------|-------------|----------|
-| `api` | string | The API schema to use (`"openai-responses"`, `"openai-chat"`, or `"anthropic"`) | Yes |
-| `urlEnv` | string | Environment variable name containing the API URL | Yes* |
-| `url` | string | Direct API URL (use instead of `urlEnv`) | Yes* |
-| `keyEnv` | string | Environment variable name containing the API key | Yes* |
-| `key` | string | Direct API key (use instead of `keyEnv`) | Yes* |
-| `models` | array | List of available model names | Yes |
-| `completionUrlRelativePath` | string | Custom endpoint path for completions | No |
-
-_* Either the `url` or `urlEnv` option is required, and either the `key` or `keyEnv` option is required._
-
-
-After configuring custom providers, the models will be available as `provider/model` (e.g., `openrouter/anthropic/claude-3.5-sonnet`, `deepseek/deepseek-chat`).
-
-### Providers setup
+## Providers
 
 === "Github Copilot"
 
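The `completionUrlRelativePath` option retained in the unified schema pairs with `url`; a sketch of how a completion endpoint could be derived from a provider entry (the default path shown is an assumption based on OpenAI-style APIs, not confirmed by ECA's docs):

```python
def completion_url(provider_cfg: dict) -> str:
    """Join the provider url with its completions path."""
    base = provider_cfg["url"].rstrip("/")
    # assumed default; ECA may use a different path per `api` type
    path = provider_cfg.get("completionUrlRelativePath", "/chat/completions")
    return base + path


cfg = {"api": "openai-chat", "url": "https://litellm.my-company.com/"}
print(completion_url(cfg))  # https://litellm.my-company.com/chat/completions
```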
@@ -149,12 +117,15 @@
 
 ```javascript
 {
-  "customProviders": {
+  "providers": {
     "litellm": {
-      "api": "openai-responses",
-      "url": "https://litellm.my-company.com",
-      "key": "your-api-key",
-      "models": ["gpt-5", "claude-3-sonnet-20240229", "llama-3-70b"]
+      "api": "openai-responses",
+      "url": "https://litellm.my-company.com", // or "urlEnv"
+      "key": "your-api-key", // or "keyEnv"
+      "models": {
+        "gpt-5": {},
+        "deepseek-r1": {}
+      }
     }
   }
 }
@@ -163,15 +134,19 @@
 === "OpenRouter"
 
 [OpenRouter](https://openrouter.ai) provides access to many models through a unified API:
-
+
 ```javascript
 {
-  "customProviders": {
+  "providers": {
     "openrouter": {
       "api": "openai-chat",
-      "url": "https://openrouter.ai/api/v1",
-      "keyEnv": "OPENROUTER_API_KEY",
-      "models": ["anthropic/claude-3.5-sonnet", "openai/gpt-4-turbo", "meta-llama/llama-3.1-405b"]
+      "url": "https://openrouter.ai/api/v1", // or "urlEnv"
+      "key": "your-api-key", // or "keyEnv"
+      "models": {
+        "anthropic/claude-3.5-sonnet": {},
+        "openai/gpt-4-turbo": {},
+        "meta-llama/llama-3.1-405b": {}
+      }
     }
   }
 }
@@ -180,15 +155,19 @@
 === "DeepSeek"
 
 [DeepSeek](https://deepseek.com) offers powerful reasoning and coding models:
-
+
 ```javascript
 {
-  "customProviders": {
-    "deepseek": {
+  "providers": {
+    "deepseek": {
       "api": "openai-chat",
-      "url": "https://api.deepseek.com",
-      "keyEnv": "DEEPSEEK_API_KEY",
-      "models": ["deepseek-chat", "deepseek-coder", "deepseek-reasoner"]
+      "url": "https://api.deepseek.com", // or "urlEnv"
+      "key": "your-api-key", // or "keyEnv"
+      "models": {
+        "deepseek-chat": {},
+        "deepseek-coder": {},
+        "deepseek-reasoner": {}
+      }
     }
   }
 }

0 commit comments