
Commit 436238d (1 parent: d9f52e5)

docs: update LLM configuration documentation to emphasize environment-driven setup and clarify provider priority management

File tree

3 files changed: +45 -31 lines


docs/api-reference/graph/index.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ SpoonOS's graph system enables:
 ## Core Components
 
 ### [StateGraph](state-graph.md)
-The main graph execution engine providing LangGraph-style workflow orchestration.
+The main graph execution engine providing workflow orchestration.
 
 **Key Features:**
 - Node and edge management

docs/api-reference/llm/config-manager.md

Lines changed: 17 additions & 18 deletions
@@ -2,6 +2,8 @@
 
 The `ConfigurationManager` handles loading, validation, and management of LLM provider configurations from various sources including environment variables, configuration files, and runtime settings.
 
+> **Note (Nov 2025):** The core SDK now defaults to environment-driven configuration. Use the `spoon-cli` configuration manager (or set environment variables manually) to sync `config.json` values before instantiating `ConfigurationManager()`.
+
 ## Class Definition
 
 ```python
@@ -57,8 +59,13 @@ Load configuration from a JSON or TOML file.
 
 **Example:**
 ```python
+import os
+from spoon_ai.llm import ConfigurationManager
+
+# Populate required environment variables before instantiating the manager
+os.environ["OPENAI_API_KEY"] = "sk-..."
+
 config_manager = ConfigurationManager()
-config = config_manager.load_from_file("config.json")
 ```
 
 ### `load_from_env() -> Dict[str, Any]`
@@ -78,6 +85,13 @@ Load configuration from environment variables.
 config = config_manager.load_from_env()
 ```
 
+Environment overrides also control provider priority:
+
+- `DEFAULT_LLM_PROVIDER` selects the preferred provider (e.g. `anthropic`).
+- `LLM_FALLBACK_CHAIN` lists comma-separated providers for cascading retries (e.g. `anthropic,openai,gemini`).
+
+When using `spoon-cli`, these variables are exported automatically after `config.json` loads. If you instantiate the SDK directly, set them yourself before calling `ConfigurationManager()`.
+
 ### `merge_configs(base_config: Dict, override_config: Dict) -> Dict[str, Any]`
 
 Merge two configurations with override priority.
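The override-priority merge described in this hunk can be sketched with plain dictionaries. This is a minimal stdlib illustration of the documented semantics, not the SDK's implementation; `deep_merge` is a hypothetical stand-in for `merge_configs`.

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge two config dicts; values in `override` win.

    Nested dicts are merged key by key instead of being replaced wholesale,
    so an override can change one provider setting without dropping the rest.
    """
    merged = dict(base)
    for key, value in override.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {"openai": {"model": "gpt-4", "temperature": 0.3}, "timeout": 30}
override = {"openai": {"temperature": 0.7}}
print(deep_merge(base, override))
# {'openai': {'model': 'gpt-4', 'temperature': 0.7}, 'timeout': 30}
```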
@@ -394,7 +408,7 @@ decrypted = config_manager.decrypt_config(encrypted_config)
 import os
 from spoon_ai.llm import ConfigurationManager
 
-config_manager = ConfigurationManager()
+config_manager = ConfigurationManager()  # environment-first configuration
 
 # Secure: Load from environment
 config_manager.set_provider_config("openai", {
@@ -432,27 +446,12 @@ llm_manager = LLMManager(config_manager=config_manager)
 
 ## Integration Examples
 
-### With LLMManager
-
-```python
-from spoon_ai.llm import ConfigurationManager, LLMManager
-
-# Initialize configuration
-config_manager = ConfigurationManager("config.json")
-
-# Create LLM manager with configuration
-llm_manager = LLMManager(config_manager=config_manager)
-
-# Configuration changes are automatically picked up
-response = await llm_manager.chat(messages)
-```
-
 ### Programmatic Configuration
 
 ```python
 from spoon_ai.llm import ConfigurationManager
 
-config_manager = ConfigurationManager()
+config_manager = ConfigurationManager()  # defaults to environment variables
 
 # Configure providers programmatically
 providers = {
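The environment-driven priority the commit documents (default provider first, then the fallback chain) can be sketched in a few lines of stdlib Python. `resolve_provider_chain` is a hypothetical helper illustrating one plausible resolution order, not an actual SDK function; the environment variable names come from the diff above.

```python
def resolve_provider_chain(env: dict) -> list:
    """Build the provider try-order from environment-style settings:
    DEFAULT_LLM_PROVIDER first, then LLM_FALLBACK_CHAIN entries, deduplicated.
    Pass os.environ in real code; a plain dict keeps this sketch testable."""
    default = env.get("DEFAULT_LLM_PROVIDER", "openai")
    chain = [p.strip() for p in env.get("LLM_FALLBACK_CHAIN", "").split(",") if p.strip()]
    # The default leads; fallback entries follow in declared order, skipping duplicates.
    return [default] + [p for p in chain if p != default]

env = {"DEFAULT_LLM_PROVIDER": "anthropic",
       "LLM_FALLBACK_CHAIN": "anthropic,openai,gemini"}
print(resolve_provider_chain(env))  # ['anthropic', 'openai', 'gemini']
```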

docs/api-reference/llm/index.md

Lines changed: 27 additions & 12 deletions
@@ -6,6 +6,8 @@ The LLM (Large Language Model) system in SpoonOS provides a unified, provider-ag
 
 SpoonOS's LLM system offers:
 
+> **Note (Nov 2025):** The core Python SDK reads provider settings from environment variables. The `spoon-cli` toolchain loads `config.json` and exports those values into the environment automatically. When using the SDK directly, set the relevant `*_API_KEY`, `*_BASE_URL`, and related environment variables before creating `ConfigurationManager()`.
+
 - **Provider Agnosticism**: Unified API across all providers
 - **Automatic Fallback**: Intelligent provider switching on failures
 - **Load Balancing**: Distribute requests across multiple providers
@@ -59,10 +61,14 @@ Handles configuration loading, validation, and management from multiple sources.
 - Configuration templates and merging
 
 ```python
+import os
 from spoon_ai.llm import ConfigurationManager
 
-config_manager = ConfigurationManager("config.json")
-config_manager.set_provider_config("openai", {...})
+# Export provider settings into environment variables
+os.environ["OPENAI_API_KEY"] = "sk-..."
+os.environ["DEFAULT_LLM_PROVIDER"] = "openai"
+
+config_manager = ConfigurationManager()
 ```
 
 ## Quick Start
@@ -82,19 +88,28 @@ response = await llm_manager.chat(messages)
 print(response.content)
 ```
 
-### With Configuration
 
-```python
-from spoon_ai.llm import ConfigurationManager, LLMManager
+### Controlling Provider Priority
+
+You can steer which provider is used first, and how the system falls back, purely via environment variables:
+
+```bash
+# Prefer Anthropic by default
+export DEFAULT_LLM_PROVIDER=anthropic
+
+# Allow fallback to OpenAI, then Gemini
+export LLM_FALLBACK_CHAIN="anthropic,openai,gemini"
+```
 
-# Load configuration
-config_manager = ConfigurationManager("config.json")
-llm_manager = LLMManager(config_manager=config_manager)
+On Windows PowerShell:
 
-# Chat with specific provider
-response = await llm_manager.chat(messages, provider="openai")
+```powershell
+$env:DEFAULT_LLM_PROVIDER = "anthropic"
+$env:LLM_FALLBACK_CHAIN = "anthropic,openai,gemini"
 ```
 
+After setting the variables, simply instantiate `ConfigurationManager()` as usual; no code changes are needed. The `spoon-cli` configuration workflow writes these variables for you whenever it loads `config.json`.
+
 ### Streaming Responses
 
 ```python
@@ -240,7 +255,7 @@ LLM_RETRY_ATTEMPTS=3
 ```python
 from spoon_ai.llm import ConfigurationManager
 
-config_manager = ConfigurationManager()
+config_manager = ConfigurationManager()  # uses environment variables by default
 
 # Configure providers
 config_manager.set_provider_config("openai", {
@@ -515,7 +530,7 @@ llm_manager.set_primary_provider("gemini")  # Generally faster
 ```python
 from spoon_ai.llm import ConfigurationManager
 
-config_manager = ConfigurationManager()
+config_manager = ConfigurationManager()  # refreshes from environment variables
 errors = config_manager.validate_config(your_config)
 for error in errors:
     print(f"Config error: {error}")
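The `validate_config` pattern in this last hunk (collect error strings, then report them) can be illustrated with a stdlib-only sketch. `validate_provider_config` is a hypothetical helper mirroring that shape for a single provider entry, not the SDK's own validator.

```python
def validate_provider_config(name: str, config: dict) -> list:
    """Return human-readable errors for one provider entry; empty list means valid."""
    errors = []
    if not config.get("api_key"):
        # Environment-first setup: the key would normally come from e.g. OPENAI_API_KEY
        errors.append(f"{name}: missing api_key (set {name.upper()}_API_KEY)")
    if not config.get("model"):
        errors.append(f"{name}: missing model name")
    return errors

# Same report loop as the documented validate_config example
for error in validate_provider_config("openai", {"model": "gpt-4"}):
    print(f"Config error: {error}")
# Config error: openai: missing api_key (set OPENAI_API_KEY)
```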
