47 changes: 36 additions & 11 deletions README.md
@@ -1,11 +1,39 @@
# Devin.cursorrules
# Transform your $20 Cursor into a Devin-like AI Assistant

Transform your $20 Cursor/Windsurf into a Devin-like experience in one minute! This repository contains configuration files and tools that enhance your Cursor or Windsurf IDE with advanced agentic AI capabilities similar to Devin, including:
This repository gives you everything needed to supercharge your Cursor or Windsurf IDE with **advanced** agentic AI capabilities similar to the $500/month Devin—but at a fraction of the cost. In under a minute, you'll gain:

- Process planning and self-evolution
- Extended tool usage (web browsing, search, LLM-powered analysis)
- Automated execution (for Windsurf in Docker containers)
* Automated planning and self-evolution, so your AI "thinks before it acts" and learns from mistakes
* Extended tool usage, including web browsing, search engine queries, and LLM-driven text/image analysis
* [Experimental] Multi-agent collaboration, with o1 doing the planning and regular Claude/GPT-4o doing the execution

## Why This Matters

Devin impressed many by acting like an intern who writes its own plan, updates that plan as it progresses, and even evolves based on your feedback. But you don't need Devin's $500/month subscription to get most of that functionality. By customizing the .cursorrules file and adding a few Python scripts, you'll unlock the same advanced features inside Cursor.

## Key Highlights

1. Easy Setup

Copy the provided config files into your project folder. Cursor users only need the .cursorrules file. It takes about a minute, and you'll see the difference immediately.

2. Planner-Executor Multi-Agent (Experimental)

Our new [multi-agent branch](https://github.com/grapeot/devin.cursorrules/tree/multi-agent) introduces a high-level Planner (powered by o1) that coordinates complex tasks, and an Executor (powered by Claude/GPT) that implements step-by-step actions. This two-agent approach drastically improves solution quality, cross-checking, and iteration speed.
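
To make the division of labor concrete, here is a minimal sketch of a planner-executor loop built on this repository's `query_llm` helper. It illustrates the pattern only; the prompts, the single review pass, and the provider choices are invented for this example and are not the actual logic in the multi-agent branch.

```python
# Illustrative planner-executor sketch, not the multi-agent branch's code.
# Assumes query_llm(prompt, provider=...) from tools/llm_api.py and configured API keys.
from tools.llm_api import query_llm

def solve(task: str) -> str:
    # Planner: break the task into small, verifiable steps.
    plan = query_llm(
        f"Break this task into numbered, concrete steps:\n{task}",
        provider="openai",  # the multi-agent branch uses o1 for planning
    )

    # Executor: carry out each step and collect the results.
    results = []
    for step in plan.splitlines():
        if step.strip():
            results.append(query_llm(
                f"Task: {task}\nStep to execute: {step}\nReturn only the result.",
                provider="anthropic",  # Claude/GPT-4o play the executor role
            ))

    # Planner reviews the executor's work before declaring the task done.
    return query_llm(
        f"Task: {task}\nStep results:\n" + "\n".join(results)
        + "\nSummarize the outcome and flag any step that needs rework.",
        provider="openai",
    )
```

In the real branch the Planner also re-plans based on the Executor's feedback; the single review pass above only stands in for that loop.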

3. Extended Toolset

Includes:

* Web scraping (Playwright)
* Search engine integration (DuckDuckGo)
* LLM-powered analysis

The AI automatically decides how and when to use them (just like Devin).
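
For a sense of what the toolset does under the hood, the sketch below chains the three capabilities by hand: a DuckDuckGo search, a Playwright page fetch, and an LLM summary. It is a simplified stand-in that assumes the `duckduckgo_search` and `playwright` packages; the repository's own tool scripts are not shown in this diff, so this is not their exact code, and in normal use the AI invokes the tools itself.

```python
# Simplified illustration of the search -> scrape -> analyze pipeline.
from duckduckgo_search import DDGS               # assumed search backend
from playwright.sync_api import sync_playwright  # assumed scraping backend
from tools.llm_api import query_llm

def research(topic: str) -> str:
    # 1. Search: take the top DuckDuckGo hit for the topic.
    hits = DDGS().text(topic, max_results=3)
    url = hits[0]["href"]

    # 2. Scrape: render the page in headless Chromium and pull its visible text.
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        text = page.inner_text("body")
        browser.close()

    # 3. Analyze: let the LLM condense what was scraped.
    return query_llm(
        f"Summarize the key points of this page about {topic}:\n{text[:4000]}",
        provider="anthropic",
    )
```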

4. Self-Evolution

Whenever you correct the AI, it can update its "lessons learned" in .cursorrules. Over time, it accumulates project-specific knowledge and gets smarter with each iteration, turning it into a partner that is both coachable and worth coaching.

## Usage

1. Copy all files from this repository to your project folder
@@ -100,11 +128,6 @@ python -m playwright install chromium
- Search engine integration (DuckDuckGo)
- LLM-powered text analysis
- Process planning and self-reflection capabilities
- Token and cost tracking for LLM API calls
- Supports OpenAI (o1, gpt-4o) and Anthropic (Claude-3.5) models
- Tracks token usage, costs, and thinking time
- Provides session-based tracking with detailed statistics
- Command-line interface for viewing usage statistics

## Testing

@@ -115,9 +138,11 @@ The project includes comprehensive unit tests for all tools. To run the tests:
source venv/bin/activate # On Windows: .\venv\Scripts\activate

# Run all tests
PYTHONPATH=. python -m unittest discover tests/
PYTHONPATH=. pytest -v tests/
```

Note: Use the `-v` flag to see detailed test output, including why tests were skipped (e.g., missing API keys).

The test suite includes:
- Search engine tests (DuckDuckGo integration)
- Web scraper tests (Playwright-based scraping)
9 changes: 4 additions & 5 deletions requirements.txt
@@ -21,17 +21,16 @@ google-generativeai
# gRPC, for Google Generative AI preventing WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
grpcio==1.60.1

# Financial data and visualization
# Data processing and visualization
yfinance>=0.2.36
pandas>=2.1.4
matplotlib>=3.8.2
seaborn>=0.13.1

# UUID
uuid

# Tabulate for pretty-printing tables
tabulate

# Added from the code block
# Utilities
aiohttp==3.9.3
requests>=2.28.0
uuid
74 changes: 74 additions & 0 deletions tests/test_llm_api_live.py
@@ -0,0 +1,74 @@
import unittest
import os
from tools.llm_api import query_llm, load_environment
from tests.test_utils import (
requires_openai,
requires_anthropic,
requires_azure,
requires_deepseek,
requires_gemini
)
import pytest

class TestLLMAPILive(unittest.TestCase):
def setUp(self):
self.original_env = dict(os.environ)
load_environment() # Load environment variables from .env files

def tearDown(self):
os.environ.clear()
os.environ.update(self.original_env)

def _test_llm_response(self, provider: str, response: str):
"""Helper to test LLM response with common assertions"""
self.assertIsNotNone(response, f"Response from {provider} was None")
self.assertIsInstance(response, str, f"Response from {provider} was not a string")
self.assertTrue(len(response) > 0, f"Response from {provider} was empty")

@requires_openai
def test_openai_live(self):
"""Live test of OpenAI integration"""
try:
response = query_llm("Say 'test'", provider="openai")
self._test_llm_response("OpenAI", response)
except Exception as e:
pytest.skip(f"OpenAI API error: {str(e)}")

@requires_anthropic
def test_anthropic_live(self):
"""Live test of Anthropic integration"""
try:
response = query_llm("Say 'test'", provider="anthropic")
self._test_llm_response("Anthropic", response)
except Exception as e:
pytest.skip(f"Anthropic API error: {str(e)}")

@requires_azure
def test_azure_live(self):
"""Live test of Azure OpenAI integration"""
try:
response = query_llm("Say 'test'", provider="azure")
self._test_llm_response("Azure", response)
except Exception as e:
pytest.skip(f"Azure API error: {str(e)}")

@requires_deepseek
def test_deepseek_live(self):
"""Live test of DeepSeek integration"""
try:
response = query_llm("Say 'test'", provider="deepseek")
self._test_llm_response("DeepSeek", response)
except Exception as e:
pytest.skip(f"DeepSeek API error: {str(e)}")

@requires_gemini
def test_gemini_live(self):
"""Live test of Gemini integration"""
try:
response = query_llm("Say 'test'", provider="gemini")
self._test_llm_response("Gemini", response)
except Exception as e:
pytest.skip(f"Gemini API error: {str(e)}")

if __name__ == '__main__':
unittest.main()
68 changes: 68 additions & 0 deletions tests/test_utils.py
@@ -0,0 +1,68 @@
import os
import pytest
from tools.llm_api import load_environment

# Load environment at module level to ensure it's available for skip checks
load_environment()

# Example values from .env.example that indicate unconfigured keys
EXAMPLE_VALUES = {
'OPENAI_API_KEY': 'your_openai_api_key_here',
'ANTHROPIC_API_KEY': 'your_anthropic_api_key_here',
'DEEPSEEK_API_KEY': 'your_deepseek_api_key_here',
'GOOGLE_API_KEY': 'your_google_api_key_here',
'AZURE_OPENAI_API_KEY': 'your_azure_openai_api_key_here',
'AZURE_OPENAI_MODEL_DEPLOYMENT': 'gpt-4o-ms'
}

def get_skip_reason(env_var: str) -> str:
"""Get a descriptive reason why the test was skipped"""
value = os.getenv(env_var, '').strip()
if not value:
return f"{env_var} is not set in environment"
if value == EXAMPLE_VALUES.get(env_var, ''):
return f"{env_var} is still set to example value: {value}"
return f"{env_var} is not properly configured"

def is_unconfigured(env_var: str) -> bool:
"""Check if an environment variable is unset or set to its example value"""
value = os.getenv(env_var, '').strip()
return not value or value == EXAMPLE_VALUES.get(env_var, '')

def requires_openai(func):
return pytest.mark.skipif(
is_unconfigured('OPENAI_API_KEY'),
reason=get_skip_reason('OPENAI_API_KEY')
)(func)

def requires_anthropic(func):
return pytest.mark.skipif(
is_unconfigured('ANTHROPIC_API_KEY'),
reason=get_skip_reason('ANTHROPIC_API_KEY')
)(func)

def requires_azure(func):
key_reason = get_skip_reason('AZURE_OPENAI_API_KEY')
deploy_reason = get_skip_reason('AZURE_OPENAI_MODEL_DEPLOYMENT')
return pytest.mark.skipif(
is_unconfigured('AZURE_OPENAI_API_KEY') or is_unconfigured('AZURE_OPENAI_MODEL_DEPLOYMENT'),
reason=f"Azure OpenAI not configured: {key_reason} and {deploy_reason}"
)(func)

def requires_deepseek(func):
return pytest.mark.skipif(
is_unconfigured('DEEPSEEK_API_KEY'),
reason=get_skip_reason('DEEPSEEK_API_KEY')
)(func)

def requires_gemini(func):
return pytest.mark.skipif(
is_unconfigured('GOOGLE_API_KEY'),
reason=get_skip_reason('GOOGLE_API_KEY')
)(func)

def requires_openai_o1(func):
return pytest.mark.skipif(
is_unconfigured('OPENAI_API_KEY'),
reason=get_skip_reason('OPENAI_API_KEY')
)(func)