diff --git a/README.md b/README.md
index 56feb847..9fdf91f0 100644
--- a/README.md
+++ b/README.md
@@ -86,6 +86,8 @@ cp examples/sgr_deep_research/config.yaml.example examples/sgr_deep_research/con
 
 ```bash
 sgr --config-file examples/sgr_deep_research/config.yaml
+# or use short option
+sgr -c examples/sgr_deep_research/config.yaml
 ```
 
 > **Note:** You can also run the server directly with Python:
@@ -94,6 +96,32 @@ sgr --config-file examples/sgr_deep_research/config.yaml
 > python -m sgr_agent_core.server --config-file examples/sgr_deep_research/config.yaml
 > ```
 
+### Using the CLI Tool (`sgrsh`)
+
+For interactive command-line usage, you can use the `sgrsh` utility:
+
+```bash
+# Single query mode
+sgrsh "Find the current Bitcoin price"
+
+# With agent selection (e.g. sgr_agent, dialog_agent)
+sgrsh --agent sgr_agent "What is AI?"
+
+# With custom config file
+sgrsh -c config.yaml -a sgr_agent "Your query"
+
+# Interactive chat mode (no query argument)
+sgrsh
+sgrsh -a sgr_agent
+```
+
+The `sgrsh` command:
+
+- Automatically looks for `config.yaml` in the current directory
+- Supports interactive chat mode for multiple queries
+- Handles clarification and dialog (intermediate results) requests from agents
+- Works with any agent defined in your configuration (e.g. `sgr_agent`, `dialog_agent`)
+
 For more examples and detailed usage instructions, see the [examples/](examples/) directory.
 
 ## Benchmarking
diff --git a/docs/en/getting-started/index.md b/docs/en/getting-started/index.md
index 0f7fb085..413a7cc3 100644
--- a/docs/en/getting-started/index.md
+++ b/docs/en/getting-started/index.md
@@ -57,10 +57,61 @@ pip install sgr-agent-core
 
 See the [Installation Guide](installation.md) for detailed instructions and the [Using as Library](../framework/first-steps.md) guide to get started.
 
-### Next Steps +### CLI Tool (`sgrsh`) -- **[Using as Library](../framework/first-steps.md)** — Learn how to use SGR Agent Core as a Python library -- **[API Server Quick Start](../sgr-api/SGR-Quick-Start.md)** — Get started with the REST API service +After installation, you can use the `sgrsh` command-line tool for interactive agent usage: + +```bash +# Single query mode +sgrsh "Find the current Bitcoin price" + +# With agent selection +sgrsh --agent sgr_agent "What is AI?" + +# With custom config file +sgrsh -c config.yaml -a sgr_agent "Your query" + +# Interactive chat mode (no query argument) +sgrsh +sgrsh -a sgr_agent +``` + +The `sgrsh` command: +- Automatically looks for `config.yaml` in the current directory +- Supports interactive chat mode for multiple queries +- Handles clarification requests from agents interactively +- Works with any agent defined in your configuration + +### Using as Library + +```python +import asyncio +from sgr_agent_core import AgentDefinition, AgentFactory +from sgr_agent_core.agents import SGRToolCallingAgent +import sgr_agent_core.tools as tools + +async def main(): + agent_def = AgentDefinition( + name="my_agent", + base_class=SGRToolCallingAgent, + tools=[tools.GeneratePlanTool, tools.FinalAnswerTool], + llm={ + "api_key": "your-api-key", + "base_url": "https://api.openai.com/v1", + }, + ) + + agent = await AgentFactory.create( + agent_def=agent_def, + task_messages=[{"role": "user", "content": "Research AI trends"}], + ) + + result = await agent.execute() + print(result) + +if __name__ == "__main__": + asyncio.run(main()) +``` ## Documentation diff --git a/docs/en/getting-started/installation.md b/docs/en/getting-started/installation.md index ef316152..628bb41f 100644 --- a/docs/en/getting-started/installation.md +++ b/docs/en/getting-started/installation.md @@ -40,10 +40,17 @@ After installation, verify that the package is correctly installed: python -c "import sgr_agent_core; print(sgr_agent_core.__version__)" ``` -You 
should also be able to use the `sgr` command-line utility: +You should also be able to use the command-line utilities: ```bash +# API server command sgr --help +# or with short option +sgr -c config.yaml + +# Interactive CLI command +sgrsh --help +sgrsh "Your query here" ``` ## Installation via Docker diff --git a/docs/en/sgr-api/SGR-Summary-table.md b/docs/en/sgr-api/SGR-Summary-table.md index 2cc4db28..82617ad5 100644 --- a/docs/en/sgr-api/SGR-Summary-table.md +++ b/docs/en/sgr-api/SGR-Summary-table.md @@ -5,3 +5,4 @@ This table compares the available agent types in SGR Agent Core, showing their i | [SGRAgent](https://github.com/vamplabai/sgr-agent-core/blob/main/sgr_agent_core/agents/sgr_agent.py) | `sgr_agent` | Structured Output | ❌ Built into schema | 6 basic | 1 | SO Union Type | | [ToolCallingAgent](https://github.com/vamplabai/sgr-agent-core/blob/main/sgr_agent_core/agents/tool_calling_agent.py) | `tool_calling_agent` | ❌ Absent | ❌ Absent | 6 basic | 1 | FC "required" | | [SGRToolCallingAgent](https://github.com/vamplabai/sgr-agent-core/blob/main/sgr_agent_core/agents/sgr_tool_calling_agent.py) | `sgr_tool_calling_agent` | FC Tool enforced | ✅ First step FC | 7 (6 + ReasoningTool) | 2 | FC → FC TOP AGENT | +| [DialogAgent](https://github.com/vamplabai/sgr-agent-core/blob/main/sgr_agent_core/agents/dialog_agent.py) | `dialog_agent` | Same as SGRToolCallingAgent | ✅ First step FC | 8 (+ AnswerTool) | 2 | FC → FC Long dialogs | diff --git a/docs/ru/getting-started/index.md b/docs/ru/getting-started/index.md index fe7eeea5..7c94167e 100644 --- a/docs/ru/getting-started/index.md +++ b/docs/ru/getting-started/index.md @@ -57,10 +57,61 @@ pip install sgr-agent-core См. [Руководство по установке](installation.md) для подробных инструкций и [Использование как библиотека](../framework/first-steps.md) для начала работы. 
-### Следующие шаги +### CLI утилита (`sgrsh`) -- **[Использование как библиотека](../framework/first-steps.md)** — Узнайте, как использовать SGR Agent Core как Python библиотеку -- **[Быстрый старт API сервера](../sgr-api/SGR-Quick-Start.md)** — Начните работу с REST API сервисом +После установки вы можете использовать утилиту командной строки `sgrsh` для интерактивной работы с агентами: + +```bash +# Режим одного запроса +sgrsh "Найди текущую цену биткоина" + +# С выбором агента +sgrsh --agent sgr_agent "Что такое AI?" + +# С указанием файла конфигурации +sgrsh -c config.yaml -a sgr_agent "Ваш запрос" + +# Интерактивный режим чата (без аргумента запроса) +sgrsh +sgrsh -a sgr_agent +``` + +Команда `sgrsh`: +- Автоматически ищет `config.yaml` в текущей директории +- Поддерживает интерактивный режим чата для множественных запросов +- Обрабатывает запросы на уточнение от агентов интерактивно +- Работает с любым агентом, определённым в вашей конфигурации + +### Использование как библиотека + +```python +import asyncio +from sgr_agent_core import AgentDefinition, AgentFactory +from sgr_agent_core.agents import SGRToolCallingAgent +import sgr_agent_core.tools as tools + +async def main(): + agent_def = AgentDefinition( + name="my_agent", + base_class=SGRToolCallingAgent, + tools=[tools.GeneratePlanTool, tools.FinalAnswerTool], + llm={ + "api_key": "your-api-key", + "base_url": "https://api.openai.com/v1", + }, + ) + + agent = await AgentFactory.create( + agent_def=agent_def, + task_messages=[{"role": "user", "content": "Исследуй тренды в AI"}], + ) + + result = await agent.execute() + print(result) + +if __name__ == "__main__": + asyncio.run(main()) +``` ## Документация diff --git a/docs/ru/getting-started/installation.md b/docs/ru/getting-started/installation.md index f8d7e9a8..628aa34d 100644 --- a/docs/ru/getting-started/installation.md +++ b/docs/ru/getting-started/installation.md @@ -40,10 +40,17 @@ pip install sgr-agent-core[docs] python -c "import sgr_agent_core; 
print(sgr_agent_core.__version__)" ``` -Также вы должны иметь возможность использовать утилиту командной строки `sgr`: +Также вы должны иметь возможность использовать утилиты командной строки: ```bash +# Команда API сервера sgr --help +# или с коротким параметром +sgr -c config.yaml + +# Интерактивная CLI команда +sgrsh --help +sgrsh "Ваш запрос здесь" ``` ## Установка через Docker diff --git a/docs/ru/sgr-api/SGR-Summary-table.md b/docs/ru/sgr-api/SGR-Summary-table.md index 68916378..35dacc35 100644 --- a/docs/ru/sgr-api/SGR-Summary-table.md +++ b/docs/ru/sgr-api/SGR-Summary-table.md @@ -5,3 +5,4 @@ | [SGRAgent](https://github.com/vamplabai/sgr-agent-core/blob/main/sgr_agent_core/agents/sgr_agent.py) | `sgr_agent` | Structured Output | ❌ Встроен в схему | 6 базовых | 1 | SO Union Type | | [ToolCallingAgent](https://github.com/vamplabai/sgr-agent-core/blob/main/sgr_agent_core/agents/tool_calling_agent.py) | `tool_calling_agent` | ❌ Отсутствует | ❌ Отсутствует | 6 базовых | 1 | FC "required" | | [SGRToolCallingAgent](https://github.com/vamplabai/sgr-agent-core/blob/main/sgr_agent_core/agents/sgr_tool_calling_agent.py) | `sgr_tool_calling_agent` | FC Tool принудительно | ✅ Первый шаг FC | 7 (6 + ReasoningTool) | 2 | FC → FC ЛУЧШИЙ АГЕНТ | +| [DialogAgent](https://github.com/vamplabai/sgr-agent-core/blob/main/sgr_agent_core/agents/dialog_agent.py) | `dialog_agent` | Как SGRToolCallingAgent | ✅ Первый шаг FC | 8 (+ AnswerTool) | 2 | FC → FC Длинные диалоги | diff --git a/examples/sgr_deep_research/README.md b/examples/sgr_deep_research/README.md index 14d96704..88e109c1 100644 --- a/examples/sgr_deep_research/README.md +++ b/examples/sgr_deep_research/README.md @@ -9,6 +9,7 @@ SGR Deep Research contains research agent definitions and configuration files fo - **SGR Agent** - Schema-Guided Reasoning agent for structured research - **Tool Calling Agent** - Function calling agent for research tasks - **SGR Tool Calling Agent** - Hybrid SGR + function calling agent +- 
**Dialog Agent** - Dialog agent with intermediate results and long conversations All agents include: diff --git a/examples/sgr_deep_research/config.yaml.example b/examples/sgr_deep_research/config.yaml.example index 8cf8c0ac..b3fbf645 100644 --- a/examples/sgr_deep_research/config.yaml.example +++ b/examples/sgr_deep_research/config.yaml.example @@ -5,7 +5,7 @@ llm: api_key: "your-openai-api-key-here" # Your OpenAI API key base_url: "https://api.openai.com/v1" # API base URL - model: "gpt-4o-mini" # Model name + model: "gpt-4.1-mini" # Model name max_tokens: 8000 # Max output tokens temperature: 0.4 # Temperature (0.0-1.0) # proxy: "socks5://127.0.0.1:1081" # Optional proxy (socks5:// or http://) @@ -52,6 +52,8 @@ tools: # base_class defaults to sgr_agent_core.tools.AdaptPlanTool reasoning_tool: # base_class defaults to sgr_agent_core.tools.ReasoningTool + answer_tool: + # base_class defaults to sgr_agent_core.tools.AnswerTool # Agent Definitions agents: @@ -59,7 +61,7 @@ agents: sgr_agent: base_class: "agents.ResearchSGRAgent" llm: - model: "gpt-4o-mini" + model: "gpt-4.1-mini" temperature: 0.4 tools: - "web_search_tool" @@ -74,7 +76,7 @@ agents: tool_calling_agent: base_class: "agents.ResearchToolCallingAgent" llm: - model: "gpt-4o-mini" + model: "gpt-4.1-mini" temperature: 0.4 tools: - "web_search_tool" @@ -89,7 +91,7 @@ agents: sgr_tool_calling_agent: base_class: "agents.ResearchSGRToolCallingAgent" llm: - model: "gpt-4o-mini" + model: "gpt-4.1-mini" temperature: 0.4 tools: - "web_search_tool" @@ -100,3 +102,17 @@ agents: - "reasoning_tool" - "generate_plan_tool" - "adapt_plan_tool" + + # Dialog Agent for research (intermediate results, long conversations) + dialog_agent: + base_class: "agents.ResearchDialogAgent" + llm: + model: "gpt-4.1-mini" + temperature: 0.4 + tools: + - "web_search_tool" + - "extract_page_content_tool" + - "reasoning_tool" + - "answer_tool" + - "generate_plan_tool" + - "adapt_plan_tool" diff --git 
a/examples/sgr_deep_research/definitions.py b/examples/sgr_deep_research/definitions.py index ca361d82..59a8e7e7 100644 --- a/examples/sgr_deep_research/definitions.py +++ b/examples/sgr_deep_research/definitions.py @@ -8,6 +8,7 @@ import sgr_agent_core.tools as tools from examples.sgr_deep_research.agents import ( + ResearchDialogAgent, ResearchSGRAgent, ResearchSGRToolCallingAgent, ResearchToolCallingAgent, @@ -50,5 +51,11 @@ def get_research_agents_definitions() -> dict[str, AgentDefinition]: tools=DEFAULT_TOOLKIT, prompts=PromptsConfig(system_prompt_file=Path("sgr_agent_core/prompts/research_system_prompt.txt")), ), + AgentDefinition( + name="research_dialog_agent", + base_class=ResearchDialogAgent, + tools=DEFAULT_TOOLKIT, + prompts=PromptsConfig(system_prompt_file=Path("sgr_agent_core/prompts/research_system_prompt.txt")), + ), ] return {agent.name: agent for agent in agents} diff --git a/pyproject.toml b/pyproject.toml index 4fc27ad3..9d7b900d 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -63,6 +63,7 @@ Documentation = "https://vamplabai.github.io/sgr-agent-core/" [project.scripts] sgr = "sgr_agent_core.server.__main__:main" +sgrsh = "sgr_agent_core.cli.__main__:main" [project.optional-dependencies] dev = [ diff --git a/pytest.ini b/pytest.ini index 2c9bb221..a105d94d 100644 --- a/pytest.ini +++ b/pytest.ini @@ -1,4 +1,4 @@ -[tool:pytest] +[pytest] testpaths = tests python_files = test_*.py python_classes = Test* @@ -7,7 +7,9 @@ addopts = -v --tb=short --strict-markers + -m "not e2e" markers = slow: marks tests as slow (deselect with '-m "not slow"') integration: marks tests as integration tests unit: marks tests as unit tests + e2e: end-to-end tests (run explicitly with pytest -m e2e) diff --git a/sgr_agent_core/agents/__init__.py b/sgr_agent_core/agents/__init__.py index 3c8e9bfd..63a96f27 100644 --- a/sgr_agent_core/agents/__init__.py +++ b/sgr_agent_core/agents/__init__.py @@ -1,11 +1,13 @@ """Agents module for SGR Agent Core.""" +from 
sgr_agent_core.agents.dialog_agent import DialogAgent from sgr_agent_core.agents.iron_agent import IronAgent from sgr_agent_core.agents.sgr_agent import SGRAgent from sgr_agent_core.agents.sgr_tool_calling_agent import SGRToolCallingAgent from sgr_agent_core.agents.tool_calling_agent import ToolCallingAgent __all__ = [ + "DialogAgent", "IronAgent", "SGRAgent", "SGRToolCallingAgent", diff --git a/sgr_agent_core/agents/dialog_agent.py b/sgr_agent_core/agents/dialog_agent.py new file mode 100644 index 00000000..dab181ef --- /dev/null +++ b/sgr_agent_core/agents/dialog_agent.py @@ -0,0 +1,75 @@ +"""Dialog agent for long-running conversations with intermediate results.""" + +from typing import Type + +from openai import AsyncOpenAI + +from sgr_agent_core.agent_definition import AgentConfig +from sgr_agent_core.agents.sgr_tool_calling_agent import SGRToolCallingAgent +from sgr_agent_core.models import AgentStatesEnum +from sgr_agent_core.tools import AnswerTool, BaseTool, ClarificationTool +from sgr_agent_core.tools.answer_tool import PASS_TURN_TO_USER_KEY + + +class DialogAgent(SGRToolCallingAgent): + """Agent specialized for dialog interactions with intermediate results. + + Uses AnswerTool to share intermediate results and maintain + conversation flow, keeping the agent available for further + interactions. Supports long dialogs with full conversation history. + + Overrides _execution_step to add _after_action_phase (not in + BaseAgent): tools can signal pass_turn_to_user via context; + ClarificationTool pauses for user input. 
+    """
+
+    name: str = "dialog_agent"
+
+    def __init__(
+        self,
+        task_messages: list,
+        openai_client: AsyncOpenAI,
+        agent_config: AgentConfig,
+        toolkit: list[Type[BaseTool]],
+        def_name: str | None = None,
+        **kwargs,
+    ):
+        answer_toolkit = [AnswerTool]
+        merged_toolkit = answer_toolkit + [t for t in toolkit if t is not AnswerTool]
+        super().__init__(
+            task_messages=task_messages,
+            openai_client=openai_client,
+            agent_config=agent_config,
+            toolkit=merged_toolkit,
+            def_name=def_name,
+            **kwargs,
+        )
+
+    async def _execution_step(self):
+        """Run one step and handle after-action wait (ClarificationTool /
+        pass_turn_to_user)."""
+        reasoning = await self._reasoning_phase()
+        self._context.current_step_reasoning = reasoning
+        action_tool = await self._select_action_phase(reasoning)
+        result = await self._action_phase(action_tool)
+        await self._after_action_phase(action_tool, result)
+
+    async def _after_action_phase(self, action_tool: BaseTool, result: str) -> None:
+        """Pause for the user on ClarificationTool, or when a tool set
+        pass_turn_to_user (e.g. 
AnswerTool).""" + if isinstance(action_tool, ClarificationTool): + self._context.execution_result = result + self.logger.info("\n⏸️ Research paused - please answer questions") + self._context.state = AgentStatesEnum.WAITING_FOR_CLARIFICATION + self.streaming_generator.finish() + self._context.clarification_received.clear() + await self._context.clarification_received.wait() + return + if self._context.custom_context and self._context.custom_context.get(PASS_TURN_TO_USER_KEY): + self._context.custom_context[PASS_TURN_TO_USER_KEY] = False + self._context.execution_result = result + self.logger.info("\n💬 Dialog shared - agent waiting for response") + self._context.state = AgentStatesEnum.WAITING_FOR_CLARIFICATION + self.streaming_generator.finish(result) + self._context.clarification_received.clear() + await self._context.clarification_received.wait() diff --git a/sgr_agent_core/cli/__init__.py b/sgr_agent_core/cli/__init__.py new file mode 100644 index 00000000..f1664e7e --- /dev/null +++ b/sgr_agent_core/cli/__init__.py @@ -0,0 +1 @@ +"""CLI commands for SGR Agent Core.""" diff --git a/sgr_agent_core/cli/__main__.py b/sgr_agent_core/cli/__main__.py new file mode 100644 index 00000000..c611497a --- /dev/null +++ b/sgr_agent_core/cli/__main__.py @@ -0,0 +1,14 @@ +"""Main entry point for sgrsh CLI command.""" + +import asyncio + +from sgr_agent_core.cli.sgrsh import main as async_main + + +def main(): + """Synchronous entry point for sgrsh command.""" + asyncio.run(async_main()) + + +if __name__ == "__main__": + main() diff --git a/sgr_agent_core/cli/sgrsh.py b/sgr_agent_core/cli/sgrsh.py new file mode 100644 index 00000000..770b5b17 --- /dev/null +++ b/sgr_agent_core/cli/sgrsh.py @@ -0,0 +1,271 @@ +#!/usr/bin/env python3 +"""SGR Shell - Interactive CLI for SGR agents. 
+
+Usage:
+    sgrsh "Your query here"
+    sgrsh --agent sgr_agent "Your query here"
+    sgrsh --config-file config.yaml --agent sgr_agent
+    sgrsh -c config.yaml -a sgr_agent
+"""
+
+import argparse
+import asyncio
+import logging
+import sys
+from pathlib import Path
+from typing import TYPE_CHECKING
+
+from sgr_agent_core.agent_config import GlobalConfig
+from sgr_agent_core.agent_factory import AgentFactory
+from sgr_agent_core.models import AgentStatesEnum
+
+if TYPE_CHECKING:
+    from sgr_agent_core.base_agent import BaseAgent
+
+logger = logging.getLogger(__name__)
+
+
+def _read_user_input(prompt: str) -> str:
+    """Read user input from buffer and decode as UTF-8 to avoid losing input on
+    decode errors.
+
+    Using input() can consume the line and then raise
+    UnicodeDecodeError, so the next readline() would return the
+    following (often empty) line. Always reading from stdin.buffer and
+    decoding with errors='replace' ensures we never lose user input.
+    """
+    sys.stdout.write(prompt)
+    sys.stdout.flush()
+    line = sys.stdin.buffer.readline()
+    return line.decode("utf-8", errors="replace").strip()
+
+
+def find_config_file(config_file: str | None) -> Path:
+    """Find the config file: explicit path or config.yaml in current directory.
+
+    Args:
+        config_file: Optional explicit config file path
+
+    Returns:
+        Resolved config file path; raises FileNotFoundError if it does not exist
+    """
+    path = Path(config_file) if config_file else Path.cwd() / "config.yaml"
+    if path.exists():
+        return path.resolve()
+    raise FileNotFoundError("Config file not found")
+
+
+async def run_agent(agent: "BaseAgent") -> str | None:
+    """Run one agent to completion; handle clarification prompts while it runs.
+
+    One agent = one turn. This function does not create or switch agents.
+    It only waits for the given agent to finish, and when the agent calls
+    ClarificationTool/AnswerTool, reads user input and feeds it back. 
+ + Args: + agent: Agent instance to run (single turn) + + Returns: + Final result or None + """ + execution_task = asyncio.create_task(agent.execute()) + + try: + # Wait for this agent to finish; only react when it asks for clarification + while not execution_task.done(): + if agent._context.state == AgentStatesEnum.WAITING_FOR_CLARIFICATION: + if agent._context.execution_result: + print("\n" + agent._context.execution_result) + print() + + try: + user_input = _read_user_input("You: ") + except (KeyboardInterrupt, EOFError): + # User pressed Ctrl+C or EOF during input + print("\n\n⚠️ Interrupted by user") + await agent.cancel() + return None + + if user_input: + await agent.provide_clarification([{"role": "user", "content": user_input}]) + else: + # Empty input - cancel execution + await agent.cancel() + return None + + await asyncio.sleep(0.1) + + # Get final result + try: + result = await execution_task + return result + except asyncio.CancelledError: + return None + except Exception as e: + logger.error(f"Agent execution error: {e}") + return None + except KeyboardInterrupt: + # User pressed Ctrl+C during execution + print("\n\n⚠️ Interrupted by user") + await agent.cancel() + return None + + +async def chat_loop(agent_def_name: str, config: GlobalConfig): + """Interactive session: one short-lived agent per user message, shared history. + + Model: 1 agent = 1 turn. Each user message creates a new agent with + conversation_history; that agent runs to completion and exits. The result + is appended to history; next message gets a fresh agent with full context. + No agent reuse or switching mid-session. 
+
+    Args:
+        agent_def_name: Name of agent definition
+        config: GlobalConfig instance
+    """
+    agent_def = config.agents.get(agent_def_name)
+    if agent_def is None:
+        print(f"❌ Agent '{agent_def_name}' not found in config")
+        print(f"Available agents: {', '.join(config.agents.keys())}")
+        sys.exit(1)
+
+    print(f"✅ Using agent: {agent_def_name}")
+    print("Type 'quit', 'exit' or 'q' to end the session (or press Ctrl+C)\n")
+
+    conversation_history: list[dict] = []
+
+    try:
+        while True:
+            try:
+                user_input = _read_user_input("You: ")
+            except (KeyboardInterrupt, EOFError):
+                # User pressed Ctrl+C or EOF during input
+                print("\n\n👋 Goodbye!")
+                break
+
+            if user_input.lower() in ("quit", "exit", "q"):
+                break
+
+            if not user_input:
+                continue
+
+            conversation_history.append({"role": "user", "content": user_input})
+            agent = await AgentFactory.create(agent_def, task_messages=conversation_history)
+            result = await run_agent(agent)
+
+            if result:
+                print(f"\nAgent: {result}\n")
+                sys.stdout.flush()
+                # Add agent response to history
+                conversation_history.append({"role": "assistant", "content": result})
+            else:
+                print("\nAgent: No response received\n")
+                sys.stdout.flush()
+    except KeyboardInterrupt:
+        # User pressed Ctrl+C during agent execution
+        print("\n\n👋 Goodbye!")
+
+
+async def main():
+    """Main entry point for sgrsh command."""
+    parser = argparse.ArgumentParser(
+        description="SGR Shell - Interactive CLI for SGR agents",
+        formatter_class=argparse.RawDescriptionHelpFormatter,
+        epilog="""
+Examples:
+    sgrsh "Find the current Bitcoin price"
+    sgrsh --agent sgr_agent "What is AI?" 
+ sgrsh -c config.yaml -a sgr_agent + """, + ) + parser.add_argument( + "-c", + "--config-file", + type=str, + default=None, + help="Path to config.yaml file (default: looks for config.yaml in current directory)", + ) + parser.add_argument( + "-a", + "--agent", + type=str, + default=None, + help="Agent name to use (default: first agent in config)", + ) + parser.add_argument( + "query", + nargs="*", + help="Initial query (optional - if not provided, starts interactive chat)", + ) + + args = parser.parse_args() + + # Setup minimal logging + logging.basicConfig( + level=logging.WARNING, + format="%(message)s", + ) + + # Find config file + try: + config_path = find_config_file(args.config_file) + except FileNotFoundError: + print("❌ Config file not found") + if args.config_file: + print(f" Specified path: {args.config_file}") + else: + print(" Looking for: config.yaml in current directory") + sys.exit(1) + + # Load configuration + try: + config = GlobalConfig.from_yaml(str(config_path)) + except Exception as e: + print(f"❌ Failed to load config: {e}") + sys.exit(1) + + # Get agent name (default: dialog_agent if present, else first in config) + agent_name = args.agent + if agent_name is None: + if not config.agents: + print("❌ No agents found in config") + sys.exit(1) + agent_name = "dialog_agent" if "dialog_agent" in config.agents else list(config.agents.keys())[0] + if len(config.agents) > 1: + print(f"ℹ️ Using agent: {agent_name}") + print(f" Available agents: {', '.join(config.agents.keys())}") + + # Check if query provided + query = " ".join(args.query) if args.query else None + + if query: + # Single query mode + agent_def = config.agents.get(agent_name) + if agent_def is None: + print(f"❌ Agent '{agent_name}' not found in config") + print(f"Available agents: {', '.join(config.agents.keys())}") + sys.exit(1) + + # Create agent + task_messages = [{"role": "user", "content": query}] + agent = await AgentFactory.create(agent_def, task_messages) + + # Run agent + 
result = await run_agent(agent) + + if result: + print(f"\n{result}") + else: + print("\nNo response received") + else: + # Interactive chat mode + await chat_loop(agent_name, config) + + +if __name__ == "__main__": + try: + asyncio.run(main()) + except KeyboardInterrupt: + # User pressed Ctrl+C - exit gracefully + print("\n\n👋 Goodbye!") + sys.exit(0) diff --git a/sgr_agent_core/server/__main__.py b/sgr_agent_core/server/__main__.py index c34a704f..95fd2d95 100644 --- a/sgr_agent_core/server/__main__.py +++ b/sgr_agent_core/server/__main__.py @@ -47,14 +47,14 @@ def load_config(config_file: str, agents_file: str | None = None) -> GlobalConfi def main(): - """Start FastAPI server.""" - args = ServerConfig() + """Start FastAPI server. - setup_logging(args.logging_file) - - load_config(args.config_file, args.agents_file) - - uvicorn.run(app, host=args.host, port=args.port, log_level="info") + Config from ServerConfig (env + CLI, see settings.py). + """ + server_config = ServerConfig() + setup_logging(server_config.logging_file) + load_config(server_config.config_file, server_config.agents_file) + uvicorn.run(app, host=server_config.host, port=server_config.port, log_level="info") if __name__ == "__main__": diff --git a/sgr_agent_core/server/settings.py b/sgr_agent_core/server/settings.py index f26d6a7c..b3ba7d25 100644 --- a/sgr_agent_core/server/settings.py +++ b/sgr_agent_core/server/settings.py @@ -4,19 +4,42 @@ from pathlib import Path import yaml -from pydantic import Field +from pydantic import AliasChoices, Field from pydantic_settings import BaseSettings, SettingsConfigDict logger = logging.getLogger(__name__) class ServerConfig(BaseSettings): + """Server configuration with env and CLI support. + + Short aliases: -c, -l, -a, -p. 
+ """ + model_config = SettingsConfigDict(cli_parse_args=True, cli_kebab_case=True) - logging_file: str = Field(default="logging_config.yaml", description="Logging configuration file path") - config_file: str = Field(default="config.yaml", description="sgr core configuration file path") - agents_file: str | None = Field(default=None, description="Optional agents definitions file path") + logging_file: str = Field( + default="logging_config.yaml", + description="Logging configuration file path", + validation_alias=AliasChoices("l", "logging-file", "logging_file"), + ) + config_file: str = Field( + default="config.yaml", + description="SGR core configuration file path", + validation_alias=AliasChoices("c", "config-file", "config_file"), + ) + agents_file: str | None = Field( + default=None, + description="Optional agents definitions file path", + validation_alias=AliasChoices("a", "agents-file", "agents_file"), + ) host: str = Field(default="0.0.0.0", description="Host to listen on") - port: int = Field(default=8010, gt=0, le=65535, description="Port to listen on") + port: int = Field( + default=8010, + gt=0, + le=65535, + description="Port to listen on", + validation_alias=AliasChoices("p", "port"), + ) def setup_logging(logging_file: str) -> None: diff --git a/sgr_agent_core/tools/__init__.py b/sgr_agent_core/tools/__init__.py index 3f93988f..786e5182 100644 --- a/sgr_agent_core/tools/__init__.py +++ b/sgr_agent_core/tools/__init__.py @@ -5,6 +5,7 @@ ToolNameSelectorStub, ) from sgr_agent_core.tools.adapt_plan_tool import AdaptPlanTool +from sgr_agent_core.tools.answer_tool import AnswerTool from sgr_agent_core.tools.clarification_tool import ClarificationTool from sgr_agent_core.tools.create_report_tool import CreateReportTool from sgr_agent_core.tools.extract_page_content_tool import ExtractPageContentTool @@ -27,6 +28,7 @@ "ExtractPageContentTool", "AdaptPlanTool", "CreateReportTool", + "AnswerTool", "FinalAnswerTool", "ReasoningTool", # Tool lists diff --git 
a/sgr_agent_core/tools/answer_tool.py b/sgr_agent_core/tools/answer_tool.py new file mode 100644 index 00000000..9a4995c6 --- /dev/null +++ b/sgr_agent_core/tools/answer_tool.py @@ -0,0 +1,49 @@ +"""Answer tool for sharing intermediate results and keeping agent available for +further interaction.""" + +from __future__ import annotations + +from typing import TYPE_CHECKING + +from pydantic import Field + +from sgr_agent_core.base_tool import BaseTool + +if TYPE_CHECKING: + from sgr_agent_core.agent_definition import AgentConfig + from sgr_agent_core.models import AgentContext + +# Key in context.custom_context to signal "pass turn to user" (used by DialogAgent) +PASS_TURN_TO_USER_KEY = "pass_turn_to_user" + + +class AnswerTool(BaseTool): + """Share intermediate results and keep agent available for further + interaction. + + Use this tool to share progress updates, partial findings, or intermediate + results with the user while keeping the agent active for continued conversation. + Keep all fields concise - brief reasoning and clear intermediate result. 
+ """ + + reasoning: str = Field( + description="Why this intermediate result is being shared (1-2 sentences MAX)", + max_length=200, + ) + intermediate_result: str = Field( + description="The intermediate result or progress update to share with the user (clear and informative)", + min_length=10, + max_length=2000, + ) + continue_research: bool = Field( + default=True, + description="Whether to continue research after sharing this result (default: True)", + ) + + async def __call__(self, context: AgentContext, config: AgentConfig, **_) -> str: + """Return the intermediate result and signal agent to pass turn to + user.""" + if context.custom_context is None: + context.custom_context = {} + context.custom_context[PASS_TURN_TO_USER_KEY] = True + return self.intermediate_result diff --git a/tests/test_agent_config_integration.py b/tests/test_agent_config_integration.py index 3af478c3..690040dc 100644 --- a/tests/test_agent_config_integration.py +++ b/tests/test_agent_config_integration.py @@ -128,6 +128,18 @@ def test_server_config_defaults(self): assert config.host == "0.0.0.0" assert config.port == 8010 + def test_server_config_from_cli_short_aliases(self): + """Test that ServerConfig accepts short CLI aliases (-c, -l, -a, + -p).""" + original_argv = sys.argv + try: + sys.argv = ["prog", "-c", "my_config.yaml", "-p", "9000"] + config = ServerConfig() + assert config.config_file == "my_config.yaml" + assert config.port == 9000 + finally: + sys.argv = original_argv + def test_server_config_from_environment(self): """Test that ServerConfig reads from environment variables.""" original_argv = sys.argv diff --git a/tests/test_agent_e2e.py b/tests/test_agent_e2e.py index 7e4d5344..d831c604 100644 --- a/tests/test_agent_e2e.py +++ b/tests/test_agent_e2e.py @@ -1,4 +1,7 @@ -"""End-to-end tests for agent execution workflow.""" +"""End-to-end tests for agent execution workflow. 
+ +Run explicitly: pytest -m e2e +""" from typing import Type from unittest.mock import Mock @@ -14,6 +17,8 @@ from sgr_agent_core.next_step_tool import NextStepToolsBuilder from sgr_agent_core.tools import AdaptPlanTool, FinalAnswerTool, ReasoningTool +pytestmark = pytest.mark.e2e + class MockStream: """Mock OpenAI stream object that emulates OpenAI streaming API. diff --git a/tests/test_agent_factory.py b/tests/test_agent_factory.py index 75a08ceb..a7c79420 100644 --- a/tests/test_agent_factory.py +++ b/tests/test_agent_factory.py @@ -19,6 +19,7 @@ ) from sgr_agent_core.agent_factory import AgentFactory from sgr_agent_core.agents import ( + DialogAgent, SGRAgent, SGRToolCallingAgent, ToolCallingAgent, @@ -89,6 +90,7 @@ async def test_create_all_agent_types(self): ): task = "Universal test task" agent_classes = [ + DialogAgent, SGRAgent, SGRToolCallingAgent, ToolCallingAgent, diff --git a/tests/test_cli.py b/tests/test_cli.py new file mode 100644 index 00000000..f0b8b0ee --- /dev/null +++ b/tests/test_cli.py @@ -0,0 +1,355 @@ +"""Tests for CLI command sgrsh.""" + +import asyncio +import sys +from unittest.mock import AsyncMock, Mock, patch + +import pytest + +from sgr_agent_core.cli.sgrsh import chat_loop, find_config_file, main, run_agent +from sgr_agent_core.models import AgentStatesEnum + + +class TestFindConfigFile: + """Test find_config_file function.""" + + def test_find_config_file_explicit_path_exists(self, tmp_path): + """Test finding config file with explicit path.""" + config_file = tmp_path / "test_config.yaml" + config_file.write_text("test: config") + + result = find_config_file(str(config_file)) + assert result == config_file.resolve() + + def test_find_config_file_explicit_path_not_exists(self, tmp_path): + """Test finding config file with explicit non-existent path raises.""" + config_file = tmp_path / "nonexistent.yaml" + + with pytest.raises(FileNotFoundError, match="Config file not found"): + find_config_file(str(config_file)) + + def 
test_find_config_file_current_directory(self, tmp_path, monkeypatch):
+        """Test finding config.yaml in the current directory."""
+        monkeypatch.chdir(tmp_path)
+        config_file = tmp_path / "config.yaml"
+        config_file.write_text("test: config")
+
+        result = find_config_file(None)
+        assert result == config_file.resolve()
+
+    def test_find_config_file_not_found(self, tmp_path, monkeypatch):
+        """Test that a missing config.yaml in the current directory raises."""
+        monkeypatch.chdir(tmp_path)
+
+        with pytest.raises(FileNotFoundError, match="Config file not found"):
+            find_config_file(None)
+
+
+class TestRunAgent:
+    """Test run_agent function."""
+
+    @pytest.mark.asyncio
+    async def test_run_agent_success(self):
+        """Test successful agent execution."""
+        mock_agent = Mock()
+        mock_agent.execute = AsyncMock(return_value="Test result")
+        mock_agent._context = Mock()
+        mock_agent._context.state = AgentStatesEnum.COMPLETED
+        mock_agent.log = []
+
+        result = await run_agent(mock_agent)
+
+        assert result == "Test result"
+        mock_agent.execute.assert_called_once()
+
+    @pytest.mark.asyncio
+    async def test_run_agent_with_clarification(self, monkeypatch):
+        """Test agent execution with clarification request."""
+        mock_agent = Mock()
+        mock_agent._context = Mock()
+        mock_agent._context.state = AgentStatesEnum.WAITING_FOR_CLARIFICATION
+        mock_agent._context.execution_result = "Question 1?\nQuestion 2?"
+        mock_agent.log = []
+        mock_agent.provide_clarification = AsyncMock()
+        mock_agent.cancel = AsyncMock()
+
+        # Track whether clarification was provided
+        clarification_provided = False
+
+        # Mock execute to simulate waiting for clarification
+        async def mock_execute():
+            nonlocal clarification_provided
+            # First call - waiting for clarification
+            if mock_agent._context.state == AgentStatesEnum.WAITING_FOR_CLARIFICATION:
+                await asyncio.sleep(0.1)  # Simulate waiting
+                # Once clarification is provided, return the result
+                if clarification_provided:
+                    return "Final result"
+                # Otherwise wait a bit longer before returning
+                await asyncio.sleep(0.1)
+            return "Final result"
+
+        mock_agent.execute = AsyncMock(side_effect=mock_execute)
+
+        # Mock user input - return the answer once (patch _read_user_input, not input)
+        user_input_called = False
+
+        def mock_read_user_input(prompt: str) -> str:
+            nonlocal user_input_called, clarification_provided
+            if not user_input_called:
+                user_input_called = True
+                asyncio.create_task(mock_agent.provide_clarification([{"role": "user", "content": "User answer"}]))
+                mock_agent._context.state = AgentStatesEnum.COMPLETED
+                clarification_provided = True
+                return "User answer"
+            return ""
+
+        monkeypatch.setattr("sgr_agent_core.cli.sgrsh._read_user_input", mock_read_user_input)
+
+        result = await run_agent(mock_agent)
+
+        # Should eventually get the result after clarification
+        assert result == "Final result" or result is None  # Allow None in case of timing races
+
+    @pytest.mark.asyncio
+    async def test_run_agent_cancel_on_empty_input(self, monkeypatch):
+        """Test agent cancellation on empty clarification input."""
+        mock_agent = Mock()
+        mock_agent._context = Mock()
+        mock_agent._context.state = AgentStatesEnum.WAITING_FOR_CLARIFICATION
+        mock_agent._context.execution_result = "Question?"
+ mock_agent.log = [] + mock_agent.provide_clarification = AsyncMock() + mock_agent.cancel = AsyncMock() + + async def mock_execute(): + await asyncio.sleep(0.1) + return "Result" + + mock_agent.execute = AsyncMock(side_effect=mock_execute) + + # Mock empty user input (patch _read_user_input, not input) + monkeypatch.setattr("sgr_agent_core.cli.sgrsh._read_user_input", lambda _: "") + + result = await run_agent(mock_agent) + + assert result is None + mock_agent.cancel.assert_called_once() + + @pytest.mark.asyncio + async def test_run_agent_execution_error(self): + """Test agent execution error handling.""" + mock_agent = Mock() + mock_agent._context = Mock() + mock_agent._context.state = AgentStatesEnum.COMPLETED + mock_agent.log = [] + mock_agent.execute = AsyncMock(side_effect=Exception("Test error")) + + result = await run_agent(mock_agent) + + assert result is None + + @pytest.mark.asyncio + async def test_run_agent_keyboard_interrupt_during_input(self, monkeypatch): + """Test agent cancellation on KeyboardInterrupt during user input.""" + mock_agent = Mock() + mock_agent._context = Mock() + mock_agent._context.state = AgentStatesEnum.WAITING_FOR_CLARIFICATION + mock_agent._context.execution_result = "Question?" 
+ mock_agent.log = [] + mock_agent.provide_clarification = AsyncMock() + mock_agent.cancel = AsyncMock() + + async def mock_execute(): + await asyncio.sleep(0.1) + return "Result" + + mock_agent.execute = AsyncMock(side_effect=mock_execute) + + # Mock _read_user_input to raise KeyboardInterrupt + def mock_read_user_input_raise_interrupt(_: str) -> str: + raise KeyboardInterrupt() + + monkeypatch.setattr("sgr_agent_core.cli.sgrsh._read_user_input", mock_read_user_input_raise_interrupt) + + result = await run_agent(mock_agent) + + assert result is None + mock_agent.cancel.assert_called_once() + + +class TestChatLoopMultipleRequests: + """Test chat_loop with multiple user requests (conversation history).""" + + @pytest.mark.asyncio + async def test_chat_loop_multiple_requests_then_quit(self, monkeypatch): + """Test that multiple requests are sent to the agent and history + grows.""" + inputs = iter(["First request", "Second request", "quit"]) + + def mock_read_user_input(prompt: str) -> str: + return next(inputs) + + create_calls = [] + + async def mock_create(agent_def, *, task_messages): + create_calls.append(list(task_messages)) + mock_agent = Mock() + # Return different result per call: first request -> first response, etc. 
+ if len(create_calls) == 1: + mock_agent.execute = AsyncMock(return_value="First response") + else: + mock_agent.execute = AsyncMock(return_value="Second response") + mock_agent._context = Mock() + mock_agent._context.state = AgentStatesEnum.COMPLETED + mock_agent.log = [] + return mock_agent + + mock_config = Mock() + mock_config.agents = {"test_agent": Mock()} + + with ( + patch("sgr_agent_core.cli.sgrsh._read_user_input", side_effect=mock_read_user_input), + patch("sgr_agent_core.cli.sgrsh.AgentFactory") as mock_factory, + ): + mock_factory.create = mock_create + await chat_loop("test_agent", mock_config) + + assert len(create_calls) == 2 + assert create_calls[0] == [{"role": "user", "content": "First request"}] + assert create_calls[1] == [ + {"role": "user", "content": "First request"}, + {"role": "assistant", "content": "First response"}, + {"role": "user", "content": "Second request"}, + ] + + +class TestMain: + """Test main CLI function.""" + + @pytest.mark.asyncio + async def test_main_with_query(self, tmp_path, monkeypatch): + """Test main function with query argument.""" + monkeypatch.chdir(tmp_path) + config_file = tmp_path / "config.yaml" + config_file.write_text( + """ +llm: + api_key: "test-key" + base_url: "https://api.test.com/v1" + model: "test-model" + +agents: + test_agent: + base_class: "sgr_agent_core.agents.sgr_agent.SGRAgent" + tools: + - "final_answer_tool" +""" + ) + + with ( + patch("sgr_agent_core.cli.sgrsh.GlobalConfig") as mock_config_class, + patch("sgr_agent_core.cli.sgrsh.AgentFactory") as mock_factory, + ): + mock_config = Mock() + mock_config.agents = {"test_agent": Mock()} + mock_config_class.from_yaml.return_value = mock_config + + mock_agent = Mock() + mock_agent.execute = AsyncMock(return_value="Test result") + mock_agent._context = Mock() + mock_agent._context.state = AgentStatesEnum.COMPLETED + mock_agent.log = [] + mock_factory.create = AsyncMock(return_value=mock_agent) + + # Mock sys.argv + original_argv = sys.argv + 
sys.argv = ["sgrsh", "Test query"] + + try: + await main() + except SystemExit: + pass + finally: + sys.argv = original_argv + + mock_factory.create.assert_called_once() + + @pytest.mark.asyncio + async def test_main_no_config_file(self, tmp_path, monkeypatch, capsys): + """Test main function when config file not found.""" + monkeypatch.chdir(tmp_path) + + original_argv = sys.argv + sys.argv = ["sgrsh", "Test query"] + + try: + await main() + except SystemExit as e: + assert e.code == 1 + finally: + sys.argv = original_argv + + captured = capsys.readouterr() + assert "Config file not found" in captured.out + + @pytest.mark.asyncio + async def test_main_with_agent_option(self, tmp_path, monkeypatch): + """Test main function with --agent option.""" + monkeypatch.chdir(tmp_path) + config_file = tmp_path / "config.yaml" + config_file.write_text( + """ +llm: + api_key: "test-key" + base_url: "https://api.test.com/v1" + model: "test-model" + +agents: + agent1: + base_class: "sgr_agent_core.agents.sgr_agent.SGRAgent" + tools: [] + agent2: + base_class: "sgr_agent_core.agents.sgr_agent.SGRAgent" + tools: [] +""" + ) + + with ( + patch("sgr_agent_core.cli.sgrsh.GlobalConfig") as mock_config_class, + patch("sgr_agent_core.cli.sgrsh.AgentFactory") as mock_factory, + ): + mock_config = Mock() + mock_config.agents = { + "agent1": Mock(), + "agent2": Mock(), + } + mock_config_class.from_yaml.return_value = mock_config + + mock_agent = Mock() + mock_agent.execute = AsyncMock(return_value="Test result") + mock_agent._context = Mock() + mock_agent._context.state = AgentStatesEnum.COMPLETED + mock_agent.log = [] + mock_factory.create = AsyncMock(return_value=mock_agent) + + original_argv = sys.argv + sys.argv = ["sgrsh", "--agent", "agent2", "Test query"] + + try: + await main() + except SystemExit: + pass + finally: + sys.argv = original_argv + + # Check that agent2 was used + assert mock_factory.create.called + # Check that correct agent was passed + # AgentFactory.create is 
called with agent_def as first positional argument
+        call_args = mock_factory.create.call_args
+        # Check first positional argument (agent_def)
+        if call_args.args:
+            assert call_args.args[0] == mock_config.agents["agent2"]
+        elif call_args.kwargs:
+            assert call_args.kwargs.get("agent_def") == mock_config.agents["agent2"]
diff --git a/tests/test_dialog_agent.py b/tests/test_dialog_agent.py
new file mode 100644
index 00000000..fbb78771
--- /dev/null
+++ b/tests/test_dialog_agent.py
@@ -0,0 +1,134 @@
+"""Tests for DialogAgent and dialog flow."""
+
+from unittest.mock import MagicMock, patch
+
+import pytest
+
+from sgr_agent_core.agent_definition import (
+    AgentDefinition,
+    ExecutionConfig,
+    LLMConfig,
+    PromptsConfig,
+)
+from sgr_agent_core.agent_factory import AgentFactory
+from sgr_agent_core.agents import DialogAgent
+from sgr_agent_core.models import AgentStatesEnum
+from sgr_agent_core.tools import AnswerTool
+from sgr_agent_core.tools.answer_tool import PASS_TURN_TO_USER_KEY
+
+
+def mock_global_config():
+    """Create a mock GlobalConfig for tests."""
+    mock_config = MagicMock()
+    mock_config.llm = LLMConfig(api_key="default-key", base_url="https://api.openai.com/v1")
+    mock_config.prompts = PromptsConfig(
+        system_prompt_str="Default system prompt",
+        initial_user_request_str="Default initial request",
+        clarification_response_str="Default clarification response",
+    )
+    mock_config.execution = ExecutionConfig()
+    mock_config.search = None
+    mock_mcp = MagicMock()
+    mock_mcp.model_copy.return_value = mock_mcp
+    mock_mcp.model_dump.return_value = {}
+    mock_config.mcp = mock_mcp
+    return patch("sgr_agent_core.agent_config.GlobalConfig", return_value=mock_config)
+
+
+class TestDialogAgentCreation:
+    """Test DialogAgent creation and toolkit."""
+
+    @pytest.mark.asyncio
+    async def test_create_dialog_agent_from_definition(self):
+        """Test creating DialogAgent from AgentDefinition."""
+        with (
+            patch("sgr_agent_core.agent_factory.MCP2ToolConverter.build_tools_from_mcp", return_value=[]),
+            mock_global_config(),
+        ):
+            agent_def = AgentDefinition(
+                name="dialog_agent",
+                base_class=DialogAgent,
+                tools=["reasoningtool"],
+                llm={"api_key": "test-key", "base_url": "https://api.openai.com/v1"},
+                prompts={
+                    "system_prompt_str": "Test system prompt",
+                    "initial_user_request_str": "Test initial request",
+                    "clarification_response_str": "Test clarification response",
+                },
+                execution={},
+            )
+            agent = await AgentFactory.create(agent_def, task_messages=[{"role": "user", "content": "Test task"}])
+
+        assert isinstance(agent, DialogAgent)
+        assert agent.name == "dialog_agent"
+        assert AnswerTool in agent.toolkit
+        # AnswerTool should be first, then the other tools from the config
+        assert agent.toolkit[0] is AnswerTool
+
+    @pytest.mark.asyncio
+    async def test_dialog_agent_includes_tools_from_registry(self):
+        """Test that DialogAgent merges AnswerTool with tools from the definition."""
+        with (
+            patch("sgr_agent_core.agent_factory.MCP2ToolConverter.build_tools_from_mcp", return_value=[]),
+            mock_global_config(),
+        ):
+            agent_def = AgentDefinition(
+                name="dialog_agent",
+                base_class=DialogAgent,
+                tools=["reasoningtool", "finalanswertool"],
+                llm={"api_key": "test-key", "base_url": "https://api.openai.com/v1"},
+                prompts={
+                    "system_prompt_str": "Test",
+                    "initial_user_request_str": "Test",
+                    "clarification_response_str": "Test",
+                },
+                execution={},
+            )
+            agent = await AgentFactory.create(agent_def, task_messages=[{"role": "user", "content": "Test"}])
+
+        assert AnswerTool in agent.toolkit
+        assert len(agent.toolkit) >= 2
+
+
+class TestDialogAgentAfterActionPhase:
+    """Test _after_action_phase hook for AnswerTool."""
+
+    @pytest.mark.asyncio
+    async def test_after_action_phase_waits_for_answer_tool(self):
+        """Test that after AnswerTool execution the agent sets
+        WAITING_FOR_CLARIFICATION and waits."""
+        import asyncio
+
+        with (
+            patch("sgr_agent_core.agent_factory.MCP2ToolConverter.build_tools_from_mcp", return_value=[]),
+            mock_global_config(),
+        ):
+            agent_def = AgentDefinition(
+                name="dialog_agent",
+                base_class=DialogAgent,
+                tools=["reasoningtool", "answertool"],
+                llm={"api_key": "test-key", "base_url": "https://api.openai.com/v1"},
+                prompts={
+                    "system_prompt_str": "Test",
+                    "initial_user_request_str": "Test",
+                    "clarification_response_str": "Test",
+                },
+                execution=ExecutionConfig(max_iterations=5),
+            )
+            agent = await AgentFactory.create(agent_def, task_messages=[{"role": "user", "content": "Hello"}])
+        agent._context.custom_context = {PASS_TURN_TO_USER_KEY: True}
+
+        tool = AnswerTool(
+            reasoning="Sharing progress",
+            intermediate_result="Here is an update.",
+        )
+
+        async def release_wait():
+            await asyncio.sleep(0.05)
+            agent._context.clarification_received.set()
+
+        waiter = asyncio.create_task(agent._after_action_phase(tool, "Here is an update."))
+        releaser = asyncio.create_task(release_wait())  # keep a reference so the task is not garbage-collected
+        await waiter
+
+        assert agent._context.state == AgentStatesEnum.WAITING_FOR_CLARIFICATION
diff --git a/tests/test_tools.py b/tests/test_tools.py
index 5beb0e23..8585b053 100644
--- a/tests/test_tools.py
+++ b/tests/test_tools.py
@@ -12,6 +12,7 @@
 from sgr_agent_core.agent_definition import SearchConfig
 from sgr_agent_core.tools import (
     AdaptPlanTool,
+    AnswerTool,
     ClarificationTool,
     CreateReportTool,
     ExtractPageContentTool,
@@ -116,6 +117,32 @@ def test_create_report_tool_initialization(self):
         assert tool.tool_name == "createreporttool"
         assert tool.title == "Test Report"
 
+    def test_answer_tool_initialization(self):
+        """Test AnswerTool initialization."""
+        tool = AnswerTool(
+            reasoning="Sharing progress",
+            intermediate_result="Found 3 relevant sources so far.",
+            continue_research=True,
+        )
+        assert tool.tool_name == "answertool"
+        assert tool.reasoning == "Sharing progress"
+        assert tool.intermediate_result == "Found 3 relevant sources so far."
+ assert tool.continue_research is True + + +class TestAnswerToolExecution: + """Tests for AnswerTool execution.""" + + @pytest.mark.asyncio + async def test_answer_tool_returns_intermediate_result(self): + """Test AnswerTool __call__ returns intermediate_result.""" + tool = AnswerTool( + reasoning="Progress update", + intermediate_result="Partial findings: X and Y.", + ) + result = await tool(MagicMock(), MagicMock()) + assert result == "Partial findings: X and Y." + class TestToolsConfigReading: """Test that tools that need config can read it correctly."""