Commit f72b809

Merge pull request #188 from veithly/fix/react-tool

fix react tool

2 parents 803863c + 37242a1

7 files changed: +90 −33 lines

.env.example

Lines changed: 1 addition & 1 deletion

```diff
@@ -6,7 +6,7 @@ OPENAI_API_KEY=sk-your-openai-api-key-here
 ANTHROPIC_API_KEY=sk-ant-your-anthropic-api-key-here
 DEEPSEEK_API_KEY=your-deepseek-api-key-here
 GEMINI_API_KEY=your-gemini-api-key-here
-BASE_URL=your_base_url_here
+# BASE_URL= # Optional: override provider endpoint when using custom gateways
 
 # ======= Blockchain Configuration (only for crypto operations) =======
 # Wallet private key (keep this secure!)
```
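The change above makes `BASE_URL` opt-in instead of a required placeholder. As a minimal sketch of how an optional endpoint override like this is typically consumed (the default URL and the `resolve_base_url` helper are hypothetical illustrations, not SDK code):

```python
import os

# Hypothetical default endpoint, for illustration only
DEFAULT_ENDPOINT = "https://api.openai.com/v1"

def resolve_base_url() -> str:
    """Use BASE_URL only when it is explicitly set and non-empty;
    otherwise fall back to the provider default."""
    override = os.environ.get("BASE_URL", "").strip()
    return override or DEFAULT_ENDPOINT

os.environ.pop("BASE_URL", None)
print(resolve_base_url())  # falls back to the default when unset

os.environ["BASE_URL"] = "https://gateway.example.com/v1"
print(resolve_base_url())  # uses the custom gateway when set
```

Commenting the variable out entirely (rather than shipping a dummy value) avoids accidentally routing requests to a non-existent endpoint.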

README.md

Lines changed: 9 additions & 7 deletions

````diff
@@ -56,7 +56,7 @@ SpoonOS is a living, evolving agentic operating system. Its SCDF is purpose-buil
 
 ### Prerequisites
 
-- Python 3.11+
+- Python 3.12+
 - pip package manager (or uv as a faster alternative)
 
 ```bash
@@ -76,6 +76,8 @@ Prefer faster install? See docs/installation.md for uv-based setup.
 
 ## 🔐 Configuration Setup
 
+> **Note (Nov 2025):** When you import `spoon_ai` directly in Python, configuration is read from environment variables (including `.env`). The interactive CLI / `spoon-cli` tooling is what reads `config.json` and exports those values into the environment for you.
+
 SpoonOS uses a unified configuration system that supports multiple setup methods. Choose the one that works best for your workflow:
 
 ### Method 1: Environment Variables (.env file) - Recommended
@@ -185,9 +187,9 @@ python main.py
 > config
 ```
 
-### Method 3: Direct config.json
+### Method 3: CLI `config.json` (optional)
 
-Create or edit `config.json` directly for advanced configurations:
+For CLI workflows (including `python main.py` and `spoon-cli`), you can create or edit a `config.json` file that the CLI layer reads and then exports into environment variables. Core Python code still uses environment variables only.
 
 ```json
 {
@@ -231,10 +233,10 @@ Create or edit `config.json` directly for advanced configurations:
 
 ### Configuration Priority
 
-SpoonOS uses a hybrid configuration system:
+SpoonOS uses a split configuration model:
 
-1. **`config.json`** (Highest Priority) - Runtime configuration, can be modified via CLI
-2. **`.env` file** (Fallback) - Initial setup, used to generate `config.json` if it doesn't exist
+- **Core SDK (Python imports of `spoon_ai`)**: reads only environment variables (including `.env`).
+- **CLI layer (main.py / spoon-cli)**: reads `config.json`, then materializes values into environment variables before invoking the SDK.
 
 ### Tool Configuration
 
@@ -353,7 +355,7 @@ See `examples/turnkey/` for complete usage examples.
 
 ### Provider Configuration
 
-Configure providers in your `config.json`:
+In CLI workflows you can configure providers in the CLI `config.json` (the CLI will export these values into environment variables before invoking the SDK). For pure SDK usage, set the corresponding environment variables instead of relying on `config.json`:
 
 ```json
 {
````
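The split model the README now describes (SDK reads only environment variables; the CLI reads `config.json` and exports it) can be sketched roughly as follows. The file layout, key names, and the `export_cli_config` helper are assumptions for illustration, not the actual CLI implementation:

```python
import json
import os
import tempfile
from pathlib import Path

def export_cli_config(path: Path) -> None:
    """Hypothetical sketch of the CLI layer: load config.json and materialize
    top-level string values into environment variables for the SDK to read."""
    if not path.exists():
        return  # pure-SDK usage: rely on the existing environment
    cfg = json.loads(path.read_text())
    for key, value in cfg.items():
        if isinstance(value, str):
            # setdefault: variables already present in the environment win
            os.environ.setdefault(key.upper(), value)

# Simulate a CLI-managed config.json in a temp directory
cfg_file = Path(tempfile.mkdtemp()) / "config.json"
cfg_file.write_text(json.dumps({"demo_provider_key": "sk-demo"}))
export_cli_config(cfg_file)
print(os.environ["DEMO_PROVIDER_KEY"])
```

The key point of the design is one-directional flow: `config.json` never bypasses the environment, so the core SDK keeps a single source of truth.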

pyproject.toml

Lines changed: 1 addition & 1 deletion

```diff
@@ -11,7 +11,7 @@ authors = [
 description = "SDK for SpoonAI tools and agents" # A brief description
 readme = "README.md" # If you have a README file
 # packages = ["spoon_ai"] # REMOVED: Invalid field here
-requires-python = ">=3.11" # Specify supported Python version
+requires-python = ">=3.12" # Specify supported Python version
 classifiers = [
     "Programming Language :: Python :: 3",
     "License :: OSI Approved :: MIT License", # Choose an appropriate license
```

spoon_ai/agents/spoon_react.py

Lines changed: 59 additions & 6 deletions

```diff
@@ -4,11 +4,11 @@
     SSETransport, WSTransport, NpxStdioTransport,
     FastMCPStdioTransport, UvxStdioTransport, StdioTransport)
 from fastmcp.client import Client as MCPClient
-from pydantic import Field
+from pydantic import Field, AliasChoices, model_validator
 import logging
 
 from spoon_ai.chat import ChatBot
-from spoon_ai.prompts.spoon_react import NEXT_STEP_PROMPT, SYSTEM_PROMPT
+from spoon_ai.prompts.spoon_react import NEXT_STEP_PROMPT_TEMPLATE, SYSTEM_PROMPT
 from spoon_ai.tools import ToolManager
 
 
@@ -39,13 +39,16 @@ class SpoonReactAI(ToolCallAgent):
     name: str = "spoon_react"
     description: str = "A smart ai agent in neo blockchain"
 
-    system_prompt: str = SYSTEM_PROMPT
-    next_step_prompt: str = NEXT_STEP_PROMPT
+    system_prompt: Optional[str] = None
+    next_step_prompt: Optional[str] = None
 
     max_steps: int = 10
-    tool_choice: str = "auto"
+    tool_choice: str = "required"
 
-    available_tools: ToolManager = Field(default_factory=lambda: ToolManager([]))
+    available_tools: ToolManager = Field(
+        default_factory=lambda: ToolManager([]),
+        validation_alias=AliasChoices("available_tools", "avaliable_tools", "tools"),
+    )
     llm: ChatBot = Field(default_factory=create_configured_chatbot)
 
     mcp_transport: Union[str, WSTransport, SSETransport, PythonStdioTransport, NpxStdioTransport, FastMCPTransport, FastMCPStdioTransport, UvxStdioTransport, StdioTransport] = Field(default="mcp_server")
@@ -56,7 +59,52 @@ def __init__(self, **kwargs):
         """Initialize SpoonReactAI with both ToolCallAgent and MCPClientMixin initialization"""
         # Call parent class initializers
         ToolCallAgent.__init__(self, **kwargs)
+        # Normalize available_tools input (list -> ToolManager)
+        if isinstance(getattr(self, "available_tools", None), list):
+            self.available_tools = ToolManager(self.available_tools)
+        if self.available_tools is None:
+            self.available_tools = ToolManager([])
         self._x402_tools_initialized = False
+        self._refresh_prompts()
+
+    @model_validator(mode="before")
+    @classmethod
+    def _coerce_tools(cls, values: Dict[str, Any]) -> Dict[str, Any]:
+        """Allow passing `tools` or `available_tools` as a list; wrap into ToolManager."""
+        tools_input = values.get("tools", None)
+        avail_input = values.get("available_tools", None) or values.get("avaliable_tools", None)
+
+        def wrap(val):
+            if isinstance(val, ToolManager):
+                return val
+            if isinstance(val, list):
+                return ToolManager(val)
+            return val
+
+        if tools_input is not None:
+            values["available_tools"] = wrap(tools_input)
+        elif avail_input is not None:
+            values["available_tools"] = wrap(avail_input)
+
+        return values
+
+    def _build_tool_list(self) -> str:
+        """Return a bullet list of available tool names and descriptions."""
+        if not getattr(self, "available_tools", None) or not getattr(self.available_tools, "tool_map", None):
+            return "- (no tools loaded)"
+        lines = []
+        for tool in self.available_tools.tool_map.values():
+            desc = getattr(tool, "description", "") or ""
+            lines.append(f"- {getattr(tool, 'name', 'unknown')}: {desc}")
+        return "\n".join(lines)
+
+    def _refresh_prompts(self) -> None:
+        """Refresh system and next-step prompts dynamically from current tools."""
+        tool_list = self._build_tool_list()
+        self.system_prompt = f"{SYSTEM_PROMPT}\n\nAvailable tools:\n{tool_list}"
+        self.next_step_prompt = NEXT_STEP_PROMPT_TEMPLATE.format(
+            tool_list=tool_list,
+        )
 
     async def initialize(self, __context: Any = None):
         """Initialize async components and subscribe to topics"""
@@ -109,3 +157,8 @@ async def _ensure_x402_tools(self) -> None:
             self.avaliable_tools.add_tool(X402PaywalledRequestTool(service=service))
 
         self._x402_tools_initialized = True
+
+    async def run(self, request: Optional[str] = None) -> str:
+        """Ensure prompts reflect current tools before running."""
+        self._refresh_prompts()
+        return await super().run(request)
```
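The new `_build_tool_list` / `_refresh_prompts` logic can be exercised without the SDK. Below is a simplified stand-in where `Tool` and `ToolManager` are stripped-down substitutes for the real classes (their actual interfaces may differ):

```python
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    description: str = ""

class ToolManager:
    """Stripped-down stand-in: keeps tools in a name-keyed map, like the SDK's."""
    def __init__(self, tools):
        self.tool_map = {t.name: t for t in tools}

def build_tool_list(manager) -> str:
    """Mirror of _build_tool_list: one bullet per tool, with a fallback line."""
    if manager is None or not manager.tool_map:
        return "- (no tools loaded)"
    return "\n".join(
        f"- {t.name}: {t.description}" for t in manager.tool_map.values()
    )

NEXT_STEP_PROMPT_TEMPLATE = "Use the available tools below:\n{tool_list}\n"

mgr = ToolManager([Tool("PredictPrice", "Predict token price trends")])
prompt = NEXT_STEP_PROMPT_TEMPLATE.format(tool_list=build_tool_list(mgr))
print(prompt)
```

Refreshing the prompt from the live tool map (rather than hard-coding tool names, as the old prompt did) is what keeps the agent's instructions in sync when tools are added at runtime, e.g. by `_ensure_x402_tools`.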

spoon_ai/chat.py

Lines changed: 1 addition & 0 deletions

```diff
@@ -539,6 +539,7 @@ async def ask_tool(self, messages: List[Union[dict, Message]], system_msg: Optio
             messages=processed_messages,
             tools=tools or [],
             provider=self.llm_provider,
+            tool_choice=tool_choice,
             **kwargs
         )
```

spoon_ai/llm/providers/openai_compatible_provider.py

Lines changed: 13 additions & 3 deletions

```diff
@@ -258,13 +258,18 @@ async def chat(self, messages: List[Message], **kwargs) -> LLMResponse:
         max_tokens = kwargs.get('max_tokens', self.max_tokens)
         temperature = kwargs.get('temperature', self.temperature)
 
+        tools = kwargs.get('tools')
+        tool_choice = kwargs.get('tool_choice', 'auto')
+
         response = await self.client.chat.completions.create(
             model=model,
             messages=openai_messages,
             max_tokens=max_tokens,
             temperature=temperature,
+            tools=tools,
+            tool_choice=tool_choice,
             stream=False,
-            **{k: v for k, v in kwargs.items() if k not in ['model', 'max_tokens', 'temperature']}
+            **{k: v for k, v in kwargs.items() if k not in ['model', 'max_tokens', 'temperature', 'tools', 'tool_choice']}
         )
 
         duration = asyncio.get_event_loop().time() - start_time
@@ -296,15 +301,20 @@ async def chat_stream(self,messages: List[Message],callbacks: Optional[List[Base
         # Trigger on_llm_start callback
         await callback_manager.on_llm_start(run_id=run_id,messages=messages,model=model,provider=self.get_provider_name())
 
+        tools = kwargs.get('tools')
+        tool_choice = kwargs.get('tool_choice', 'auto')
+
         stream = await self.client.chat.completions.create(
             model=model,
             messages=openai_messages,
             max_tokens=max_tokens,
             temperature=temperature,
+            tools=tools,
+            tool_choice=tool_choice,
             stream=True,
             stream_options={"include_usage": True},  # Request usage stats
             **{k: v for k, v in kwargs.items()
-               if k not in ['model', 'max_tokens', 'temperature', 'callbacks']}
+               if k not in ['model', 'max_tokens', 'temperature', 'callbacks', 'tools', 'tool_choice']}
         )
         # Process streaming response
         full_content = ""
@@ -557,4 +567,4 @@ async def _handle_error(self, error: Exception) -> None:
         elif "timeout" in error_str or "connection" in error_str:
             raise NetworkError(provider_name, "Network error", original_error=error)
         else:
-            raise ProviderError(provider_name, f"Request failed: {str(error)}", original_error=error)
+            raise ProviderError(provider_name, f"Request failed: {str(error)}", original_error=error)
```
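The provider fix follows a standard kwargs-forwarding pattern: pass the explicitly handled parameters once, then splat the remaining kwargs with the handled keys filtered out so nothing is sent twice (a duplicate keyword argument would raise a `TypeError`). A standalone illustration of the pattern, with no OpenAI client involved and a hypothetical `build_request` helper:

```python
HANDLED = {"model", "max_tokens", "temperature", "tools", "tool_choice"}

def build_request(**kwargs) -> dict:
    """Merge explicitly handled parameters with passthrough kwargs,
    filtering the handled keys so no key is supplied twice."""
    request = {
        "model": kwargs.get("model", "default-model"),
        "tools": kwargs.get("tools"),
        "tool_choice": kwargs.get("tool_choice", "auto"),
    }
    # Forward everything else untouched (e.g. top_p, seed, stop)
    request.update({k: v for k, v in kwargs.items() if k not in HANDLED})
    return request

req = build_request(model="demo", tool_choice="required", top_p=0.9)
print(req)
```

The original bug was exactly the failure mode this guards against: `tools` and `tool_choice` arrived via kwargs but were never named in the filter list, so they were silently dropped from the request.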

spoon_ai/prompts/spoon_react.py

Lines changed: 6 additions & 15 deletions

```diff
@@ -1,20 +1,11 @@
 SYSTEM_PROMPT = "You are Spoon AI, an all-capable AI agent in Neo blockchain. aimed at solving any task presented by the user. You have various tools at your disposal that you can call upon to efficiently complete complex requests. Whether it's programming, information retrieval, file processing, or web browsing, you can handle it all."
 
-NEXT_STEP_PROMPT = """You can interact with the Neo blockchain using the following tools to obtain and analyze blockchain data:
+NEXT_STEP_PROMPT_TEMPLATE = """You can interact with the Neo blockchain and broader crypto markets using the available tools below:
+{tool_list}
 
-PredictPrice: Predict token price trends, analyze market movements, and help users make more informed investment decisions.
+Pick tools by matching the user's request to the tool names/description keywords (e.g., price/quote/market data → tools mentioning price or market; holders/distribution → holder tools; liquidity/pool → liquidity tools; history/ohlcv/trend → history/indicator tools). If multiple tools fit, pick the smallest set that answers the question. Ask briefly for missing required parameters before calling.
 
-TokenHolders: Query information about holders of specific tokens, understand token distribution and major holders.
+If any tool can reasonably answer the request, you MUST call at least one tool before giving a final answer. Only skip tool calls when no tool is relevant.
 
-TradingHistory: Retrieve trading history records of tokens, analyze trading patterns and market activities.
-
-UniswapLiquidity: Check liquidity pool information on Uniswap, understand token liquidity status and trading depth.
-
-WalletAnalysis: Analyze wallet address activities and holdings, understand user trading behaviors and asset distribution.
-
-Based on user needs, proactively select the most appropriate tool or combination of tools. For complex tasks, you can break down the problem and use different tools step by step to solve it. After using each tool, clearly explain the execution results and suggest the next steps.
-
-Always maintain a helpful, informative tone throughout the interaction. If you encounter any limitations or need more details, clearly communicate this to the user.
-
-Important: Each time you call a tool, you must provide clear content explaining why you are making this call and how it contributes to solving the user's request.
-"""
+For complex tasks, break the work into steps and summarize after each tool call. Each time you call a tool, explain why it helps and how it answers the request.
+"""
```
