Commit 473daa4

Merge branch 'XSpoonAi:main' into main
2 parents e834e29 + 436238d commit 473daa4

File tree

7 files changed: +144 -284 lines changed

docs/api-reference/graph/index.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -15,7 +15,7 @@ SpoonOS's graph system enables:
 ## Core Components
 
 ### [StateGraph](state-graph.md)
-The main graph execution engine providing LangGraph-style workflow orchestration.
+The main graph execution engine providing workflow orchestration.
 
 **Key Features:**
 - Node and edge management
````
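The StateGraph description above centers on node and edge management. The pattern can be illustrated without the SDK; `TinyGraph` below is a hypothetical stand-in for the idea, not the actual `StateGraph` API:

```python
# Minimal stand-in showing node/edge management in a state graph.
# TinyGraph is hypothetical; the real StateGraph API may differ.

class TinyGraph:
    def __init__(self):
        self.nodes = {}   # name -> callable(state) -> state updates
        self.edges = {}   # name -> next node name

    def add_node(self, name, fn):
        self.nodes[name] = fn
        return self

    def add_edge(self, src, dst):
        self.edges[src] = dst
        return self

    def run(self, start, state):
        node = start
        while node is not None:
            state.update(self.nodes[node](state))  # apply node's updates
            node = self.edges.get(node)            # follow the outgoing edge
        return state

graph = TinyGraph()
graph.add_node("fetch", lambda s: {"data": [1, 2, 3]})
graph.add_node("summarize", lambda s: {"total": sum(s["data"])})
graph.add_edge("fetch", "summarize")

result = graph.run("fetch", {})  # {"data": [1, 2, 3], "total": 6}
```

Real engines add conditional edges, reducers, and checkpointing on top of this skeleton.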

docs/api-reference/llm/config-manager.md

Lines changed: 17 additions & 18 deletions
````diff
@@ -2,6 +2,8 @@
 
 The `ConfigurationManager` handles loading, validation, and management of LLM provider configurations from various sources including environment variables, configuration files, and runtime settings.
 
+> **Note (Nov 2025):** The core SDK now defaults to environment-driven configuration. Use the `spoon-cli` configuration manager (or set environment variables manually) to sync `config.json` values before instantiating `ConfigurationManager()`.
+
 ## Class Definition
 
 ```python
@@ -57,8 +59,13 @@ Load configuration from a JSON or TOML file.
 
 **Example:**
 ```python
+import os
+from spoon_ai.llm import ConfigurationManager
+
+# Populate required environment variables before instantiating the manager
+os.environ["OPENAI_API_KEY"] = "sk-..."
+
 config_manager = ConfigurationManager()
-config = config_manager.load_from_file("config.json")
 ```
 
 ### `load_from_env() -> Dict[str, Any]`
@@ -78,6 +85,13 @@ Load configuration from environment variables.
 config = config_manager.load_from_env()
 ```
 
+Environment overrides also control provider priority:
+
+- `DEFAULT_LLM_PROVIDER` selects the preferred provider (e.g. `anthropic`).
+- `LLM_FALLBACK_CHAIN` lists comma-separated providers for cascading retries (e.g. `anthropic,openai,gemini`).
+
+When using `spoon-cli`, these variables are exported automatically after `config.json` loads. If you instantiate the SDK directly, set them yourself before calling `ConfigurationManager()`.
+
 ### `merge_configs(base_config: Dict, override_config: Dict) -> Dict[str, Any]`
 
 Merge two configurations with override priority.
@@ -394,7 +408,7 @@ decrypted = config_manager.decrypt_config(encrypted_config)
 import os
 from spoon_ai.llm import ConfigurationManager
 
-config_manager = ConfigurationManager()
+config_manager = ConfigurationManager()  # environment-first configuration
 
 # Secure: Load from environment
 config_manager.set_provider_config("openai", {
@@ -432,27 +446,12 @@ llm_manager = LLMManager(config_manager=config_manager)
 
 ## Integration Examples
 
-### With LLMManager
-
-```python
-from spoon_ai.llm import ConfigurationManager, LLMManager
-
-# Initialize configuration
-config_manager = ConfigurationManager("config.json")
-
-# Create LLM manager with configuration
-llm_manager = LLMManager(config_manager=config_manager)
-
-# Configuration changes are automatically picked up
-response = await llm_manager.chat(messages)
-```
-
 ### Programmatic Configuration
 
 ```python
 from spoon_ai.llm import ConfigurationManager
 
-config_manager = ConfigurationManager()
+config_manager = ConfigurationManager()  # defaults to environment variables
 
 # Configure providers programmatically
 providers = {
````
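The `merge_configs` method documented in this file merges two configurations with override priority. A minimal sketch of that behavior, assuming a recursive dictionary merge; `deep_merge` is an illustrative helper, not the SDK function itself:

```python
# Illustrative deep merge with override priority (a sketch of what
# ConfigurationManager.merge_configs describes; exact semantics may differ).

def deep_merge(base: dict, override: dict) -> dict:
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into nested sections
        else:
            merged[key] = value  # override wins for scalars and new keys
    return merged

base = {"openai": {"model": "gpt-4o", "temperature": 0.2}, "retries": 3}
override = {"openai": {"temperature": 0.7}, "timeout": 30}
merged = deep_merge(base, override)
# merged keeps base's "model" and "retries" while taking the overridden
# "temperature" and the new "timeout"
```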

docs/api-reference/llm/index.md

Lines changed: 27 additions & 12 deletions
````diff
@@ -6,6 +6,8 @@ The LLM (Large Language Model) system in SpoonOS provides a unified, provider-ag
 
 SpoonOS's LLM system offers:
 
+> **Note (Nov 2025):** The core Python SDK reads provider settings from environment variables. The `spoon-cli` toolchain loads `config.json` and exports those values into the environment automatically. When using the SDK directly, set the relevant `*_API_KEY`, `*_BASE_URL`, and related environment variables before creating `ConfigurationManager()`.
+
 - **Provider Agnosticism**: Unified API across all providers
 - **Automatic Fallback**: Intelligent provider switching on failures
 - **Load Balancing**: Distribute requests across multiple providers
@@ -59,10 +61,14 @@ Handles configuration loading, validation, and management from multiple sources.
 - Configuration templates and merging
 
 ```python
+import os
 from spoon_ai.llm import ConfigurationManager
 
-config_manager = ConfigurationManager("config.json")
-config_manager.set_provider_config("openai", {...})
+# Export provider settings into environment variables
+os.environ["OPENAI_API_KEY"] = "sk-..."
+os.environ["DEFAULT_LLM_PROVIDER"] = "openai"
+
+config_manager = ConfigurationManager()
 ```
 
 ## Quick Start
@@ -82,19 +88,28 @@ response = await llm_manager.chat(messages)
 print(response.content)
 ```
 
-### With Configuration
 
-```python
-from spoon_ai.llm import ConfigurationManager, LLMManager
+### Controlling Provider Priority
+
+You can steer which provider is used first—and how the system falls back—purely via environment variables:
+
+```bash
+# Prefer Anthropic by default
+export DEFAULT_LLM_PROVIDER=anthropic
+
+# Allow fallback to OpenAI, then Gemini
+export LLM_FALLBACK_CHAIN="anthropic,openai,gemini"
+```
 
-# Load configuration
-config_manager = ConfigurationManager("config.json")
-llm_manager = LLMManager(config_manager=config_manager)
+On Windows PowerShell:
 
-# Chat with specific provider
-response = await llm_manager.chat(messages, provider="openai")
+```powershell
+$env:DEFAULT_LLM_PROVIDER = "anthropic"
+$env:LLM_FALLBACK_CHAIN = "anthropic,openai,gemini"
 ```
 
+After setting the variables, simply instantiate `ConfigurationManager()` as usual; no code changes are needed. The `spoon-cli` configuration workflow writes these variables for you whenever it loads `config.json`.
+
 ### Streaming Responses
 
 ```python
@@ -240,7 +255,7 @@ LLM_RETRY_ATTEMPTS=3
 ```python
 from spoon_ai.llm import ConfigurationManager
 
-config_manager = ConfigurationManager()
+config_manager = ConfigurationManager()  # uses environment variables by default
 
 # Configure providers
 config_manager.set_provider_config("openai", {
@@ -515,7 +530,7 @@ llm_manager.set_primary_provider("gemini")  # Generally faster
 ```python
 from spoon_ai.llm import ConfigurationManager
 
-config_manager = ConfigurationManager()
+config_manager = ConfigurationManager()  # refreshes from environment variables
 errors = config_manager.validate_config(your_config)
 for error in errors:
     print(f"Config error: {error}")
````
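The priority rules this diff introduces (`DEFAULT_LLM_PROVIDER` tried first, then the comma-separated `LLM_FALLBACK_CHAIN` for cascading retries) can be sketched as a small resolver. `provider_order` is a hypothetical helper showing one plausible interpretation of those variables, not the SDK's actual selection logic:

```python
# Hypothetical resolver for the documented environment variables.
# The real SDK's selection logic may differ.

def provider_order(env: dict) -> list:
    """Return the try-order: the default provider first, then the
    fallback chain with the default deduplicated."""
    default = env.get("DEFAULT_LLM_PROVIDER", "openai")
    chain = [p.strip() for p in env.get("LLM_FALLBACK_CHAIN", "").split(",") if p.strip()]
    return [default] + [p for p in chain if p != default]

env = {
    "DEFAULT_LLM_PROVIDER": "anthropic",
    "LLM_FALLBACK_CHAIN": "anthropic,openai,gemini",
}
order = provider_order(env)  # ["anthropic", "openai", "gemini"]
```

In practice you would read these values from `os.environ`; the dict here just makes the sketch self-contained.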

docs/core-concepts/agents-detailed.md

Lines changed: 8 additions & 8 deletions
````diff
@@ -154,7 +154,7 @@ class MyAgent(SpoonReactAI):
         self.max_steps = 10
 
         # Set up tools (if any)
-        self.avaliable_tools = ToolManager([])
+        self.available_tools = ToolManager([])
 ```
 
 ### 2. SpoonReactMCP
@@ -216,7 +216,7 @@ class WeatherAgent(SpoonReactAI):
 
         # Set up tools
         weather_tool = WeatherTool()
-        self.avaliable_tools = ToolManager([weather_tool])
+        self.available_tools = ToolManager([weather_tool])
 ```
 
 ### Advanced Agent with Multiple Tools
@@ -252,7 +252,7 @@ class ResearchAgent(SpoonReactAI):
         )
         tools.append(search_tool)
 
-        self.avaliable_tools = ToolManager(tools)
+        self.available_tools = ToolManager(tools)
         self.system_prompt = """
         You are a research assistant with access to web search tools.
 
@@ -367,8 +367,8 @@ agent.max_steps = 20
 agent.system_prompt = "You are an expert assistant."
 
 # Check available tools
-if hasattr(agent, 'avaliable_tools'):
-    tools = agent.avaliable_tools.list_tools()
+if hasattr(agent, 'available_tools'):
+    tools = agent.available_tools.list_tools()
     print(f"Available tools: {tools}")
 ```
 
@@ -407,7 +407,7 @@ class DataAnalysisAgent(ToolCallAgent):
             DatabaseTool(),  # Data access
         ]
 
-        self.avaliable_tools = ToolManager(tools)
+        self.available_tools = ToolManager(tools)
 ```
 
 ### 3. Error Handling
@@ -550,7 +550,7 @@ class MCPEnabledAgent(SpoonReactMCP):
         )
 
         # Create tool manager
-        self.avaliable_tools = ToolManager([search_tool, context7_tool])
+        self.available_tools = ToolManager([search_tool, context7_tool])
 
         self.system_prompt = """
         You are a research assistant with access to multiple MCP tools:
@@ -669,7 +669,7 @@ class ComprehensiveMCPAgent(SpoonReactMCP):
         ))
 
         # Create tool manager
-        self.avaliable_tools = ToolManager(mcp_tools)
+        self.available_tools = ToolManager(mcp_tools)
 
         self.system_prompt = """
         You are a comprehensive AI assistant with the following MCP tools:
````
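This file's changes are a pure spelling fix (`avaliable_tools` → `available_tools`). The fix matters because `hasattr` guards like the one in these docs silently return `False` for a misspelled attribute, hiding the agent's tools. A self-contained sketch of that failure mode, using a stand-in `ToolManager` rather than the SpoonOS class:

```python
# Stand-in ToolManager to show why the attribute spelling matters;
# the real spoon_ai ToolManager has a richer API.

class ToolManager:
    def __init__(self, tools):
        self._tools = list(tools)

    def list_tools(self):
        return list(self._tools)

class MyAgent:
    def __init__(self):
        # Correct spelling: downstream hasattr() checks depend on it
        self.available_tools = ToolManager(["search", "weather"])

agent = MyAgent()

# With the old typo, this guard would have returned False and the
# agent would appear to have no tools at all.
tools = agent.available_tools.list_tools() if hasattr(agent, "available_tools") else []
```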

docs/core-concepts/graph-system.md

Lines changed: 86 additions & 1 deletion
````diff
@@ -406,6 +406,90 @@ async def run_crypto_analysis(query: str) -> Dict[str, Any]:
 
 ---
 
+## Memory System Integration
+
+The graph runtime builds on the SpoonOS Memory System to persist context, metadata, and execution state across runs. Every compiled graph can attach a `Memory` store so routers, reducers, and agents reason over accumulated history without bespoke plumbing.
+
+### Overview
+
+- Persistent JSON-backed storage keyed by `session_id`
+- Chronological message history with metadata enrichment
+- Query helpers for search and time-based filtering
+- Automatic wiring inside `GraphAgent` and high-level APIs
+
+### Core Components
+
+```python
+from spoon_ai.graph.agent import Memory
+
+# Use default storage path (~/.spoon_ai/memory)
+default_memory = Memory()
+
+# Customize location and session isolation
+scoped_memory = Memory(storage_path="./custom_memory", session_id="my_session")
+```
+
+- **Persistent storage** keeps transcripts and state checkpoints on disk
+- **Session management** separates contexts per agent or user
+- **Metadata fields** let reducers store structured state
+- **Search helpers** (`search_messages`, `get_recent_messages`) surface relevant history
+
+### Basic Usage Patterns
+
+```python
+message = {"role": "user", "content": "Hello, how can I help?"}
+scoped_memory.add_message(message)
+
+all_messages = scoped_memory.get_messages()
+recent = scoped_memory.get_recent_messages(hours=24)
+metadata = scoped_memory.get_metadata("last_topic")
+```
+
+Use metadata to thread routing hints and conversation topics, and prune history with retention policies or manual cleanup (`memory.clear()`).
+
+### Graph Workflow Integration
+
+`GraphAgent` wires memory automatically and exposes statistics for monitoring:
+
+```python
+from spoon_ai.graph import GraphAgent, StateGraph
+
+agent = GraphAgent(
+    name="crypto_analyzer",
+    graph=my_graph,
+    memory_path="./agent_memory",
+    session_id="crypto_session"
+)
+
+result = await agent.run("Analyze BTC trends")
+stats = agent.get_memory_statistics()
+print(stats["total_messages"])
+```
+
+Switch between sessions to isolate experiments (`agent.load_session("research_session")`) or inject custom `Memory` subclasses for domain-specific validation.
+
+### Advanced Patterns
+
+- Call `memory.get_statistics()` to monitor file size, last update time, and record counts
+- Implement custom subclasses to enforce schemas or add enrichment hooks
+- Use time-window retrieval for reducers that need the most recent facts only
+- Build automated cleanup jobs for oversized stores (>10MB) to keep execution tight
+
+### Troubleshooting
+
+```python
+import json
+try:
+    with open(scoped_memory.session_file, "r") as fh:
+        json.load(fh)
+except json.JSONDecodeError:
+    scoped_memory.clear()  # Reset corrupted memory files
+```
+
+Conflicts typically trace back to duplicated session IDs—compose unique identifiers with timestamps or agent names to avoid contention.
+
+---
+
 ## Best Practices
 
 - **Use declarative templates**: `GraphTemplate` + `NodeSpec` for maintainable workflows
@@ -460,7 +544,8 @@ async def run_crypto_analysis(query: str) -> Dict[str, Any]:
 - **[MCP Protocol](../core-concepts/mcp-protocol.md)** - Explore dynamic tool discovery and execution
 
 ### 📖 **Additional Resources**
-
+- **[State Management](../api-reference/graph/state-graph.md)** - Reducer configuration guide
+- **[Agents Detailed](./agents-detailed.md)** - Long-lived agent design patterns
 - **[Graph Builder API](../api-reference/graph/)** - Complete declarative API documentation
 - **[Performance Optimization](../troubleshooting/performance.md)** - Graph performance tuning guides
 - **[Troubleshooting](../troubleshooting/common-issues.md)** - Common issues and solutions
````
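The Memory section this commit adds describes a JSON-backed store keyed by `session_id`, persisted across instances. The core of that pattern fits in a few lines; `TinyMemory` is an illustrative stand-in under those assumptions, not the spoon_ai `Memory` class:

```python
import json
import tempfile
import time
from pathlib import Path

class TinyMemory:
    """Illustrative JSON-backed store keyed by session_id
    (a sketch of the pattern, not the spoon_ai Memory API)."""

    def __init__(self, storage_path: str, session_id: str = "default"):
        self.dir = Path(storage_path)
        self.dir.mkdir(parents=True, exist_ok=True)
        self.session_file = self.dir / f"{session_id}.json"  # one file per session
        self._data = {"messages": [], "metadata": {}}
        if self.session_file.exists():
            self._data = json.loads(self.session_file.read_text())

    def add_message(self, message: dict):
        self._data["messages"].append({**message, "ts": time.time()})  # chronological
        self._flush()

    def get_messages(self):
        return list(self._data["messages"])

    def set_metadata(self, key, value):
        self._data["metadata"][key] = value
        self._flush()

    def get_metadata(self, key):
        return self._data["metadata"].get(key)

    def clear(self):
        self._data = {"messages": [], "metadata": {}}
        self._flush()

    def _flush(self):
        self.session_file.write_text(json.dumps(self._data))

with tempfile.TemporaryDirectory() as d:
    mem = TinyMemory(d, session_id="demo")
    mem.add_message({"role": "user", "content": "Analyze BTC trends"})
    mem.set_metadata("last_topic", "BTC")

    # A fresh instance with the same session_id sees the persisted history
    reopened = TinyMemory(d, session_id="demo")
    history = reopened.get_messages()
    topic = reopened.get_metadata("last_topic")
```

Distinct `session_id` values map to distinct files, which is also why the troubleshooting advice above warns against duplicated session IDs.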
