267 changes: 258 additions & 9 deletions Cargo.lock

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions Cargo.toml
@@ -4,6 +4,7 @@ members = [
"llm_client",
"llm_prompts",
"logging",
"agent_repl",
]
resolver = "2"

23 changes: 23 additions & 0 deletions agent_repl/Cargo.toml
@@ -0,0 +1,23 @@
[package]
name = "agent_repl"
version = "0.1.0"
edition = "2021"

[dependencies]
tokio = { version = "1.28", features = ["full"] }
clap = { version = "4.3", features = ["derive"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
anyhow = "1.0"
colored = "2.0"
rustyline = "12.0"
reqwest = { version = "0.11", features = ["json"] }
async-trait = "0.1"
futures = "0.3"
regex = "1.8"
llm_client = { path = "../llm_client" }
sidecar = { path = "../sidecar" }
uuid = { version = "1.3", features = ["v4", "serde"] }
either = "1.8"
log = "0.4"
env_logger = "0.10"
129 changes: 129 additions & 0 deletions agent_repl/README.md
@@ -0,0 +1,129 @@
# Agent REPL

A REPL-like CLI tool for interacting with an AI agent that can analyze and modify code repositories.

## Features

- Point the agent to any repository
- Run queries against the repository
- Watch the agent's thought process and tool usage in real-time
- Track token usage
- Monitor files opened and edited
- Provide feedback to the agent
- Stop the agent at any point
- Set timeout for agent operations
- Select different LLM models

## Installation

```bash
cargo build --release
```

The binary will be available at `target/release/agent_repl`.

## Usage

```bash
# Run with a repository path
agent_repl --repo-path /path/to/repository --api-key your_api_key --timeout 300 --model claude-sonnet

# Or set these values in the REPL
agent_repl
```

## REPL Commands

- `repo <path>` - Set the repository path
- `key <api_key>` - Set the default API key
- `openrouter_key <api_key>` - Set the OpenRouter API key
- `anthropic_key <api_key>` - Set the Anthropic API key
- `timeout <seconds>` - Set the timeout in seconds (default: 300)
- `model <model_name>` - Set the LLM model to use
- `run <query>` - Run the agent with the given query
- `stop` - Stop the agent
- `feedback <message>` - Provide feedback to the agent
- `status` - Show the current agent status
- `help` - Show the help message
- `exit` - Exit the REPL
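
A minimal sketch of how this command set could be dispatched inside the REPL loop (illustrative only; `Command` and `parse_command` are hypothetical names, not code from this crate):

```rust
// Hypothetical command parser for the REPL prompt; names are illustrative only.
enum Command {
    Repo(String),
    Key(String),
    OpenRouterKey(String),
    AnthropicKey(String),
    Timeout(u64),
    Model(String),
    Run(String),
    Stop,
    Feedback(String),
    Status,
    Help,
    Exit,
}

fn parse_command(line: &str) -> Option<Command> {
    let line = line.trim();
    // Split into the command word and the rest of the line (if any).
    let (cmd, rest) = match line.split_once(' ') {
        Some((c, r)) => (c, r.trim()),
        None => (line, ""),
    };
    match cmd {
        "repo" => Some(Command::Repo(rest.to_string())),
        "key" => Some(Command::Key(rest.to_string())),
        "openrouter_key" => Some(Command::OpenRouterKey(rest.to_string())),
        "anthropic_key" => Some(Command::AnthropicKey(rest.to_string())),
        "timeout" => rest.parse().ok().map(Command::Timeout),
        "model" => Some(Command::Model(rest.to_string())),
        "run" => Some(Command::Run(rest.to_string())),
        "stop" => Some(Command::Stop),
        "feedback" => Some(Command::Feedback(rest.to_string())),
        "status" => Some(Command::Status),
        "help" => Some(Command::Help),
        "exit" => Some(Command::Exit),
        _ => None,
    }
}

fn main() {
    // Unknown input falls through to None, so the REPL can print the help message.
    assert!(matches!(parse_command("timeout 600"), Some(Command::Timeout(600))));
    assert!(parse_command("frobnicate").is_none());
}
```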

## API Keys

The agent supports different API keys for different LLM providers:

- **Default API Key**: Used for OpenAI models (GPT-4, GPT-4o)
- **OpenRouter API Key**: Used for models accessed through OpenRouter
- **Anthropic API Key**: Used for Claude models (Claude Sonnet, Claude Haiku, Claude Opus)

You can set these API keys in several ways:
1. As command-line arguments: `--api-key`, `--openrouter-api-key`, `--anthropic-api-key`
2. As environment variables: `LLM_API_KEY`, `OPENROUTER_API_KEY`, `ANTHROPIC_API_KEY`
3. In the REPL: `key`, `openrouter_key`, `anthropic_key` commands

The agent will automatically use the appropriate API key based on the selected model.
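
As a rough sketch of that selection rule, assuming a simple prefix check on the model name (the `Keys` type and the routing below are illustrative, not taken from this crate):

```rust
// Illustrative key routing; the real agent_repl / llm_client wiring may differ.
use std::env;

struct Keys {
    default: Option<String>,    // --api-key or LLM_API_KEY
    openrouter: Option<String>, // --openrouter-api-key or OPENROUTER_API_KEY
    anthropic: Option<String>,  // --anthropic-api-key or ANTHROPIC_API_KEY
}

impl Keys {
    fn from_env() -> Self {
        Keys {
            default: env::var("LLM_API_KEY").ok(),
            openrouter: env::var("OPENROUTER_API_KEY").ok(),
            anthropic: env::var("ANTHROPIC_API_KEY").ok(),
        }
    }

    // Pick a key based on the selected model name, falling back to the default key.
    fn for_model(&self, model: &str) -> Option<&str> {
        let preferred = if model.starts_with("claude") {
            self.anthropic.as_deref()
        } else if model.starts_with("gpt") {
            self.default.as_deref()
        } else {
            self.openrouter.as_deref()
        };
        preferred.or(self.default.as_deref())
    }
}

fn main() {
    let keys = Keys::from_env();
    println!(
        "key for claude-sonnet present: {}",
        keys.for_model("claude-sonnet").is_some()
    );
}
```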

## Available Models

- `claude-sonnet` - Claude Sonnet model from Anthropic
- `claude-haiku` - Claude Haiku model from Anthropic
- `claude-opus` - Claude Opus model from Anthropic
- `gpt-4` - GPT-4 model from OpenAI
- `gpt-4o` - GPT-4o model from OpenAI
- `gemini-pro` - Gemini Pro model from Google
- Custom model names can also be provided

## Example

```
$ agent_repl
Welcome to the Agent REPL!
Type 'help' for a list of commands, 'exit' to quit
agent> repo /path/to/repository
Repository path set to: /path/to/repository
agent> key your_api_key
API key set
agent> timeout 600
Timeout set to: 600s
agent> model claude-sonnet
LLM model set to: claude-sonnet
agent> run Add error handling to the main function
Using tool: ListFiles
Thinking: I need to understand the repository structure first. Let me list the files.
Tool result: /path/to/repository/src/main.rs
/path/to/repository/src/lib.rs
/path/to/repository/Cargo.toml

Using tool: SearchFileContentWithRegex
Thinking: Now I need to find files that might be relevant to the query.
Tool result: /path/to/repository/src/main.rs:10: fn main() {
/path/to/repository/src/main.rs:11: let result = do_something();
/path/to/repository/src/main.rs:12: println!("Result: {}", result);
/path/to/repository/src/main.rs:13: }

Token usage: 300 tokens (total: 300)
...
```

## How It Works

The agent follows these steps:

1. Analyzes the repository structure
2. Identifies relevant files
3. Reads and understands the code
4. Makes necessary changes
5. Verifies the changes work as expected
6. Provides a summary of what was done

This process mirrors the agent loop in the sidecar codebase (sketched below), where the agent repeatedly:
- Selects the next tool to use
- Executes the tool
- Processes the result
- Continues until the task is complete
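
A compressed skeleton of that select → execute → process cycle; the `Agent` trait and its methods here are stand-ins, not sidecar's actual types:

```rust
// Minimal agent-loop skeleton; everything here is illustrative.
enum StepOutcome {
    Continue,
    Done(String),
}

trait Agent {
    fn select_tool(&mut self) -> String;               // pick the next tool
    fn execute_tool(&mut self, tool: &str) -> String;  // run it against the repo
    fn process_result(&mut self, result: &str) -> StepOutcome;
}

fn run_agent(agent: &mut dyn Agent, max_steps: usize) -> Option<String> {
    for _ in 0..max_steps {
        let tool = agent.select_tool();
        let result = agent.execute_tool(&tool);
        match agent.process_result(&result) {
            StepOutcome::Continue => continue,
            StepOutcome::Done(summary) => return Some(summary),
        }
    }
    None // step budget exhausted without the agent declaring completion
}

struct OneShot; // toy agent that finishes after a single step

impl Agent for OneShot {
    fn select_tool(&mut self) -> String { "ListFiles".into() }
    fn execute_tool(&mut self, _tool: &str) -> String { "src/main.rs".into() }
    fn process_result(&mut self, result: &str) -> StepOutcome {
        StepOutcome::Done(format!("saw {}", result))
    }
}

fn main() {
    println!("{:?}", run_agent(&mut OneShot, 10));
}
```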

## Implementation Details

This tool integrates with the sidecar codebase to leverage its agent loop implementation:

- Uses the `LLMBroker` from the llm_client crate to interact with various LLM providers
- Uses the `ToolType` enum from the sidecar crate to ensure compatibility
- Implements timeout settings to prevent the agent from running indefinitely (see the sketch after this list)
- Supports multiple LLM models through a unified interface
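
Since tokio is already a dependency, the timeout can be sketched with `tokio::time::timeout`; the `run_agent_session` future below is a placeholder for the actual agent run, not a function from this crate:

```rust
use std::time::Duration;
use tokio::time::timeout;

// Placeholder for the actual agent session future.
async fn run_agent_session(query: &str) -> anyhow::Result<String> {
    Ok(format!("finished: {}", query))
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let limit = Duration::from_secs(300); // --timeout, default 300s
    match timeout(limit, run_agent_session("Add error handling")).await {
        Ok(result) => println!("{}", result?),
        Err(_) => eprintln!("agent timed out after {}s", limit.as_secs()),
    }
    Ok(())
}
```
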
38 changes: 38 additions & 0 deletions agent_repl/build.sh
@@ -0,0 +1,38 @@
#!/bin/bash

# Build the agent REPL
echo "Building agent_repl..."
cargo build --release

# Check if the build was successful
if [ $? -eq 0 ]; then
    echo "Build successful!"
    echo "The binary is available at target/release/agent_repl"
    echo ""
    echo "To run the agent REPL, use:"
    echo "./target/release/agent_repl"
    echo ""
    echo "Or with arguments:"
    echo "./target/release/agent_repl --repo-path /path/to/repository --api-key your_api_key --openrouter-api-key your_openrouter_api_key --anthropic-api-key your_anthropic_api_key --timeout 300 --model claude-sonnet"
    echo ""
    echo "Available models:"
    echo " - claude-sonnet"
    echo " - claude-haiku"
    echo " - claude-opus"
    echo " - gpt-4"
    echo " - gpt-4o"
    echo " - gemini-pro"
    echo " - [custom model name]"
    echo ""
    echo "API Keys:"
    echo " - Default API Key (--api-key): Used for OpenAI models"
    echo " - OpenRouter API Key (--openrouter-api-key): Used for models accessed through OpenRouter"
    echo " - Anthropic API Key (--anthropic-api-key): Used for Claude models"
    echo ""
    echo "You can also set API keys using environment variables:"
    echo " - LLM_API_KEY: Default API key"
    echo " - OPENROUTER_API_KEY: OpenRouter API key"
    echo " - ANTHROPIC_API_KEY: Anthropic API key"
else
    echo "Build failed!"
    exit 1
fi