Merged
2 changes: 1 addition & 1 deletion .env.example
@@ -6,7 +6,7 @@ OPENAI_API_KEY=sk-your-openai-api-key-here
ANTHROPIC_API_KEY=sk-ant-your-anthropic-api-key-here
DEEPSEEK_API_KEY=your-deepseek-api-key-here
GEMINI_API_KEY=your-gemini-api-key-here
BASE_URL=your_base_url_here
# BASE_URL= # Optional: override provider endpoint when using custom gateways

# ======= Blockchain Configuration (only for crypto operations) =======
# Wallet private key (keep this secure!)
16 changes: 9 additions & 7 deletions README.md
@@ -56,7 +56,7 @@ SpoonOS is a living, evolving agentic operating system. Its SCDF is purpose-buil

### Prerequisites

- Python 3.11+
- Python 3.12+
- pip package manager (or uv as a faster alternative)

```bash
@@ -76,6 +76,8 @@ Prefer faster install? See docs/installation.md for uv-based setup.

## 🔐 Configuration Setup

> **Note (Nov 2025):** When you import `spoon_ai` directly in Python, configuration is read from environment variables (including `.env`). The interactive CLI / `spoon-cli` tooling is what reads `config.json` and exports those values into the environment for you.
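For SDK-only usage, the env-var flow in this note can be sketched as follows. The loader below is a stdlib stand-in written for illustration; it is not SpoonOS's actual `.env` handling (which typically relies on `python-dotenv`), and `SPOON_DEMO_KEY` is a made-up variable name:

```python
import os
import tempfile

def load_dotenv_minimal(path):
    """Minimal stdlib .env loader: KEY=VALUE lines, '#' comments.
    Variables already set in the real environment win, matching
    typical dotenv behavior."""
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo with a throwaway .env file and a made-up variable name.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("SPOON_DEMO_KEY=sk-demo\n# BASE_URL is optional\n")
    env_path = fh.name

os.environ.pop("SPOON_DEMO_KEY", None)
load_dotenv_minimal(env_path)
print(os.environ["SPOON_DEMO_KEY"])
```

Once the variables are in `os.environ`, a direct `import spoon_ai` picks them up with no `config.json` involved.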

SpoonOS uses a unified configuration system that supports multiple setup methods. Choose the one that works best for your workflow:

### Method 1: Environment Variables (.env file) - Recommended
@@ -185,9 +187,9 @@ python main.py
> config
```

### Method 3: Direct config.json
### Method 3: CLI `config.json` (optional)

Create or edit `config.json` directly for advanced configurations:
For CLI workflows (including `python main.py` and `spoon-cli`), you can create or edit a `config.json` file that the CLI layer reads and then exports into environment variables. Core Python code still uses environment variables only.

```json
{
@@ -231,10 +233,10 @@ Create or edit `config.json` directly for advanced configurations:

### Configuration Priority

SpoonOS uses a hybrid configuration system:
SpoonOS uses a split configuration model:

1. **`config.json`** (Highest Priority) - Runtime configuration, can be modified via CLI
2. **`.env` file** (Fallback) - Initial setup, used to generate `config.json` if it doesn't exist
- **Core SDK (Python imports of `spoon_ai`)**: reads only environment variables (including `.env`).
- **CLI layer (main.py / spoon-cli)**: reads `config.json`, then materializes values into environment variables before invoking the SDK.
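The CLI-layer "materialize into environment variables" step can be sketched like this. `export_config_to_env` is a hypothetical helper written for illustration, not a SpoonOS API, and the key names are invented:

```python
import json
import os
import tempfile

def export_config_to_env(config_path):
    """Illustrative CLI-layer step (hypothetical helper, not a SpoonOS
    API): read config.json and export top-level string values as
    environment variables so the env-only core SDK can see them."""
    with open(config_path) as fh:
        config = json.load(fh)
    for key, value in config.items():
        if isinstance(value, str):
            # Pre-set environment variables take precedence.
            os.environ.setdefault(key.upper(), value)

# Demo: a throwaway config.json with one string and one non-string value.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    json.dump({"demo_api_key": "sk-demo", "max_steps": 10}, fh)
    config_path = fh.name

os.environ.pop("DEMO_API_KEY", None)
export_config_to_env(config_path)
print(os.environ["DEMO_API_KEY"])
```

The point of the split model is that the SDK never needs to know `config.json` exists; by the time it runs, everything it reads is already in the environment.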

### Tool Configuration

@@ -353,7 +355,7 @@ See `examples/turnkey/` for complete usage examples.

### Provider Configuration

Configure providers in your `config.json`:
In CLI workflows, configure providers in the CLI `config.json`; the CLI exports these values into environment variables before invoking the SDK. For pure SDK usage, set the corresponding environment variables directly instead:

```json
{
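For the SDK-only path, a small pre-flight check over the per-provider environment variables can be sketched as follows. The provider-to-variable mapping is taken from `.env.example` above; the helper itself is illustrative, not part of `spoon_ai`:

```python
import os

# Which environment variable each provider's API key lives in for
# SDK-only usage (names taken from .env.example above).
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
    "gemini": "GEMINI_API_KEY",
}

def missing_provider_keys(providers):
    """Return providers whose API-key variable is unset or empty,
    useful as a pre-flight check before constructing SDK objects."""
    return [p for p in providers if not os.environ.get(PROVIDER_ENV_VARS[p])]

os.environ["OPENAI_API_KEY"] = "sk-demo"
os.environ.pop("GEMINI_API_KEY", None)
print(missing_provider_keys(["openai", "gemini"]))  # -> ['gemini']
```

Failing fast here gives a clearer error than a provider call rejecting a missing key deep inside an agent run.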
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -11,7 +11,7 @@ authors = [
description = "SDK for SpoonAI tools and agents" # A brief description
readme = "README.md" # If you have a README file
# packages = ["spoon_ai"] # REMOVED: Invalid field here
requires-python = ">=3.11" # Specify supported Python version
requires-python = ">=3.12" # Specify supported Python version
classifiers = [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License", # Choose an appropriate license
65 changes: 59 additions & 6 deletions spoon_ai/agents/spoon_react.py
@@ -4,11 +4,11 @@
SSETransport, WSTransport, NpxStdioTransport,
FastMCPStdioTransport, UvxStdioTransport, StdioTransport)
from fastmcp.client import Client as MCPClient
from pydantic import Field
from pydantic import Field, AliasChoices, model_validator
import logging

from spoon_ai.chat import ChatBot
from spoon_ai.prompts.spoon_react import NEXT_STEP_PROMPT, SYSTEM_PROMPT
from spoon_ai.prompts.spoon_react import NEXT_STEP_PROMPT_TEMPLATE, SYSTEM_PROMPT
from spoon_ai.tools import ToolManager


@@ -39,13 +39,16 @@ class SpoonReactAI(ToolCallAgent):
name: str = "spoon_react"
    description: str = "A smart AI agent on the Neo blockchain"

system_prompt: str = SYSTEM_PROMPT
next_step_prompt: str = NEXT_STEP_PROMPT
system_prompt: Optional[str] = None
next_step_prompt: Optional[str] = None

max_steps: int = 10
tool_choice: str = "auto"
tool_choice: str = "required"

available_tools: ToolManager = Field(default_factory=lambda: ToolManager([]))
available_tools: ToolManager = Field(
default_factory=lambda: ToolManager([]),
validation_alias=AliasChoices("available_tools", "avaliable_tools", "tools"),
)
llm: ChatBot = Field(default_factory=create_configured_chatbot)

mcp_transport: Union[str, WSTransport, SSETransport, PythonStdioTransport, NpxStdioTransport, FastMCPTransport, FastMCPStdioTransport, UvxStdioTransport, StdioTransport] = Field(default="mcp_server")
@@ -56,7 +59,52 @@ def __init__(self, **kwargs):
"""Initialize SpoonReactAI with both ToolCallAgent and MCPClientMixin initialization"""
# Call parent class initializers
ToolCallAgent.__init__(self, **kwargs)
# Normalize available_tools input (list -> ToolManager)
if isinstance(getattr(self, "available_tools", None), list):
self.available_tools = ToolManager(self.available_tools)
if self.available_tools is None:
self.available_tools = ToolManager([])
self._x402_tools_initialized = False
self._refresh_prompts()

@model_validator(mode="before")
@classmethod
def _coerce_tools(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Allow passing `tools` or `available_tools` as a list; wrap into ToolManager."""
tools_input = values.get("tools", None)
avail_input = values.get("available_tools", None) or values.get("avaliable_tools", None)

def wrap(val):
if isinstance(val, ToolManager):
return val
if isinstance(val, list):
return ToolManager(val)
return val

if tools_input is not None:
values["available_tools"] = wrap(tools_input)
elif avail_input is not None:
values["available_tools"] = wrap(avail_input)

return values

def _build_tool_list(self) -> str:
"""Return bullet list of available tools names and descriptions."""
if not getattr(self, "available_tools", None) or not getattr(self.available_tools, "tool_map", None):
return "- (no tools loaded)"
lines = []
for tool in self.available_tools.tool_map.values():
desc = getattr(tool, "description", "") or ""
lines.append(f"- {getattr(tool, 'name', 'unknown')}: {desc}")
return "\n".join(lines)

def _refresh_prompts(self) -> None:
"""Refresh system and next-step prompts dynamically from current tools."""
tool_list = self._build_tool_list()
self.system_prompt = f"{SYSTEM_PROMPT}\n\nAvailable tools:\n{tool_list}"
self.next_step_prompt = NEXT_STEP_PROMPT_TEMPLATE.format(
tool_list=tool_list,
)

async def initialize(self, __context: Any = None):
"""Initialize async components and subscribe to topics"""
@@ -109,3 +157,8 @@ async def _ensure_x402_tools(self) -> None:
            self.available_tools.add_tool(X402PaywalledRequestTool(service=service))

self._x402_tools_initialized = True

async def run(self, request: Optional[str] = None) -> str:
"""Ensure prompts reflect current tools before running."""
self._refresh_prompts()
return await super().run(request)
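The `_build_tool_list` / `_refresh_prompts` pattern above can be sketched standalone. The `Tool` dataclass and template below are simplified stand-ins for the `spoon_ai` classes, written only to show the shape of the dynamic prompt assembly:

```python
from dataclasses import dataclass

@dataclass
class Tool:
    """Simplified stand-in for a spoon_ai tool: just a name and description."""
    name: str
    description: str

def build_tool_list(tools):
    """Mirror of _build_tool_list: a bullet list of tool names and
    descriptions, with a placeholder when no tools are loaded."""
    if not tools:
        return "- (no tools loaded)"
    return "\n".join(f"- {t.name}: {t.description}" for t in tools)

# Stand-in for NEXT_STEP_PROMPT_TEMPLATE.
NEXT_STEP_TEMPLATE = "Use the available tools below:\n{tool_list}"

tools = [Tool("predict_price", "Predict token price trends")]
prompt = NEXT_STEP_TEMPLATE.format(tool_list=build_tool_list(tools))
print(prompt.splitlines()[1])  # -> - predict_price: Predict token price trends
```

Regenerating the prompt on every `run()` is what lets tools added after construction (e.g. the x402 tools) show up in the model's instructions.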
1 change: 1 addition & 0 deletions spoon_ai/chat.py
@@ -539,6 +539,7 @@ async def ask_tool(self, messages: List[Union[dict, Message]], system_msg: Optio
messages=processed_messages,
tools=tools or [],
provider=self.llm_provider,
tool_choice=tool_choice,
**kwargs
)

16 changes: 13 additions & 3 deletions spoon_ai/llm/providers/openai_compatible_provider.py
@@ -258,13 +258,18 @@ async def chat(self, messages: List[Message], **kwargs) -> LLMResponse:
max_tokens = kwargs.get('max_tokens', self.max_tokens)
temperature = kwargs.get('temperature', self.temperature)

tools = kwargs.get('tools')
tool_choice = kwargs.get('tool_choice', 'auto')

response = await self.client.chat.completions.create(
model=model,
messages=openai_messages,
max_tokens=max_tokens,
temperature=temperature,
tools=tools,
tool_choice=tool_choice,
stream=False,
**{k: v for k, v in kwargs.items() if k not in ['model', 'max_tokens', 'temperature']}
**{k: v for k, v in kwargs.items() if k not in ['model', 'max_tokens', 'temperature', 'tools', 'tool_choice']}
)

duration = asyncio.get_event_loop().time() - start_time
@@ -296,15 +301,20 @@ async def chat_stream(self,messages: List[Message],callbacks: Optional[List[Base
# Trigger on_llm_start callback
await callback_manager.on_llm_start(run_id=run_id,messages=messages,model=model,provider=self.get_provider_name())

tools = kwargs.get('tools')
tool_choice = kwargs.get('tool_choice', 'auto')

stream = await self.client.chat.completions.create(
model=model,
messages=openai_messages,
max_tokens=max_tokens,
temperature=temperature,
tools=tools,
tool_choice=tool_choice,
stream=True,
stream_options={"include_usage": True}, # Request usage stats
**{k: v for k, v in kwargs.items()
if k not in ['model', 'max_tokens', 'temperature', 'callbacks']}
if k not in ['model', 'max_tokens', 'temperature', 'callbacks', 'tools', 'tool_choice']}
)
# Process streaming response
full_content = ""
@@ -528,4 +538,4 @@ async def _handle_error(self, error: Exception) -> None:
elif "timeout" in error_str or "connection" in error_str:
raise NetworkError(provider_name, "Network error", original_error=error)
else:
raise ProviderError(provider_name, f"Request failed: {str(error)}", original_error=error)
raise ProviderError(provider_name, f"Request failed: {str(error)}", original_error=error)
21 changes: 6 additions & 15 deletions spoon_ai/prompts/spoon_react.py
@@ -1,20 +1,11 @@
SYSTEM_PROMPT = "You are Spoon AI, an all-capable AI agent on the Neo blockchain, aimed at solving any task presented by the user. You have various tools at your disposal that you can call upon to efficiently complete complex requests. Whether it's programming, information retrieval, file processing, or web browsing, you can handle it all."

NEXT_STEP_PROMPT = """You can interact with the Neo blockchain using the following tools to obtain and analyze blockchain data:
NEXT_STEP_PROMPT_TEMPLATE = """You can interact with the Neo blockchain and broader crypto markets using the available tools below:
{tool_list}

PredictPrice: Predict token price trends, analyze market movements, and help users make more informed investment decisions.
Pick tools by matching the user's request to the tool names/description keywords (e.g., price/quote/market data → tools mentioning price or market; holders/distribution → holder tools; liquidity/pool → liquidity tools; history/ohlcv/trend → history/indicator tools). If multiple tools fit, pick the smallest set that answers the question. Ask briefly for missing required parameters before calling.

TokenHolders: Query information about holders of specific tokens, understand token distribution and major holders.
If any tool can reasonably answer the request, you MUST call at least one tool before giving a final answer. Only skip tool calls when no tool is relevant.

TradingHistory: Retrieve trading history records of tokens, analyze trading patterns and market activities.

UniswapLiquidity: Check liquidity pool information on Uniswap, understand token liquidity status and trading depth.

WalletAnalysis: Analyze wallet address activities and holdings, understand user trading behaviors and asset distribution.

Based on user needs, proactively select the most appropriate tool or combination of tools. For complex tasks, you can break down the problem and use different tools step by step to solve it. After using each tool, clearly explain the execution results and suggest the next steps.

Always maintain a helpful, informative tone throughout the interaction. If you encounter any limitations or need more details, clearly communicate this to the user.

Important: Each time you call a tool, you must provide clear content explaining why you are making this call and how it contributes to solving the user's request.
"""
For complex tasks, break the work into steps and summarize after each tool call. Each time you call a tool, explain why it helps and how it answers the request.
"""