An open-source AI-powered code review system inspired by CodeRabbit. Open Rabbit automatically reviews pull requests, provides actionable feedback, and learns from user interactions to improve over time.
- Automated PR Reviews: Automatically reviews pull requests when opened or updated
- Multi-Agent Architecture: Supervisor-orchestrated pipeline with specialized agents
- Knowledge Base Integration: Learns from user feedback to improve future reviews
- Multiple LLM Support: Works with OpenAI, Anthropic Claude, and OpenRouter
- Static Analysis: AST-based code parsing, security scanning, and complexity detection
- E2B Sandbox Execution: Isolated cloud environments for secure code analysis
- Web Search / Package Intelligence: Real-time search for breaking changes, deprecations, and CVEs
- Observability & Evaluations: Langfuse tracing with LLM-as-a-Judge and custom evaluators
- GitHub Integration: Seamless integration via GitHub App
Open Rabbit uses a multi-agent architecture for comprehensive code review:
| Component | Description |
|---|---|
| Bot | Probot-based GitHub App that handles PR events and webhook integration |
| Backend | FastAPI server with multi-agent orchestration system |
| Knowledge Base | Elasticsearch-powered semantic search for storing and retrieving learnings |
| Database | PostgreSQL for persistent storage and checkpointing |
| Redis | Task queue and caching layer |
```
┌─────────────────────────────────────────────────────────────┐
│                      SUPERVISOR AGENT                       │
│   - Orchestrates review pipeline                            │
│   - Manages agent coordination                              │
│   - Aggregates and filters results                          │
└─────────────────────────────────────────────────────────────┘
         │                    │                    │
         ▼                    ▼                    ▼
┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐
│  PARSER AGENT   │  │  REVIEW AGENT   │  │ UNIT TEST AGENT │
│                 │  │                 │  │                 │
│ - AST Analysis  │  │ - LLM Review    │  │ - Test Gen      │
│ - Security Scan │  │ - KB Context    │  │ - Coverage      │
│ - Complexity    │  │ - Suggestions   │  │                 │
└─────────────────┘  └─────────────────┘  └─────────────────┘
```
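The fan-out above can be sketched in a few lines of Python. This is an illustrative stub, not the actual implementation: every function, field, and intent value here is hypothetical. It shows the shape of the orchestration — parser and review agents run concurrently, while the unit test agent only runs when the parsed intent asks for it.

```python
import asyncio

async def parser_agent(files):
    # Stub for AST analysis, security scanning, and complexity detection
    return {"issues": [f"unused import in {f}" for f in files]}

async def review_agent(files, kb_context):
    # Stub for the LLM review enriched with knowledge-base learnings
    return {"comments": [f"consider simplifying {f}" for f in files], "kb": kb_context}

async def unit_test_agent(files):
    # Stub for test generation; gated on the parsed intent below
    return {"tests": [f"test_{f}" for f in files]}

async def supervisor(files, kb_context, intent):
    # Fan out to the parser and review agents concurrently
    parser_out, review_out = await asyncio.gather(
        parser_agent(files), review_agent(files, kb_context)
    )
    # Generate unit tests only when the intent requests them
    test_out = await unit_test_agent(files) if "tests" in intent else None
    return {"parser": parser_out, "review": review_out, "tests": test_out}

result = asyncio.run(
    supervisor(["app.py"], kb_context=["prefer httpx"], intent=["review"])
)
```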
The following diagram illustrates the complete workflow when a pull request is reviewed:
```mermaid
sequenceDiagram
    autonumber
    participant GH as GitHub
    participant Bot as Bot Service
    participant API as Backend API
    participant Sup as Supervisor Agent
    participant LF as Langfuse
    participant KB as Knowledge Base
    participant WebSearch as Web Search
    participant Sandbox as E2B Sandbox
    participant Parser as Parser Agent
    participant Review as Review Agent
    participant Test as Unit Test Agent
    participant Agg as Result Aggregator
    participant Eval as Evaluators

    %% PR Event Trigger
    GH->>Bot: Webhook (PR opened/updated)
    Bot->>Bot: Validate payload
    Bot->>API: POST /review (ReviewRequest)
    API->>Sup: run(request, session_id)

    %% Langfuse Trace Creation
    Sup->>LF: Create trace (session, metadata)
    LF-->>Sup: trace_id

    %% Intent Parsing
    Sup->>Sup: Parse intent from request

    %% Sandbox Setup
    Sup->>Sandbox: Create isolated environment
    Sandbox-->>Sup: sandbox_id, repo_path
    Sup->>Sandbox: Clone repository
    Sandbox-->>Sup: Clone complete

    %% Knowledge Base Fetch
    Sup->>KB: Fetch learnings for PR context
    KB-->>Sup: KBContext (past learnings, patterns)

    %% Parser Agent Execution
    Sup->>Parser: Analyze files (with sandbox access)
    Parser->>Sandbox: Read file contents
    Sandbox-->>Parser: File data
    Parser->>Parser: AST analysis
    Parser->>Parser: Security scanning
    Parser->>Parser: Complexity detection
    Parser-->>Sup: ParserOutput (issues, metrics)

    %% Review Agent Execution
    Sup->>Review: Review code (with KB context)
    Review->>Review: LLM-based review
    Review->>Review: Apply KB learnings
    Review->>WebSearch: Check package breaking changes
    WebSearch-->>Review: Package intelligence
    Review-->>Sup: ReviewOutput (comments, suggestions)

    %% Conditional: Unit Test Generation
    alt Intent includes test generation
        Sup->>Test: Generate unit tests
        Test->>Sandbox: Analyze test coverage
        Sandbox-->>Test: Coverage data
        Test->>Test: Generate test code
        Test-->>Sup: TestOutput (test files)
    end

    %% Result Aggregation
    Sup->>Agg: Aggregate all results
    Agg->>Agg: Merge parser findings
    Agg->>Agg: Merge review comments
    Agg->>Agg: Filter duplicates
    Agg->>Agg: Prioritize issues
    Agg-->>Sup: SupervisorOutput

    %% Quality Evaluation
    Sup->>Eval: Run evaluators on output
    Eval->>Eval: Response quality check
    Eval->>Eval: Code review quality check
    Eval->>LF: Log scores to trace
    Eval-->>Sup: EvalScores

    %% Sandbox Cleanup
    Sup->>Sandbox: Kill sandbox
    Sandbox-->>Sup: Cleanup complete

    %% Flush Traces
    Sup->>LF: Flush trace data

    %% Response Chain
    Sup-->>API: SupervisorOutput
    API-->>Bot: Review results
    Bot->>GH: Post PR comments
    Bot->>GH: Create review summary
```
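The result aggregation step (merge parser findings with review comments, filter duplicates, prioritize issues) can be sketched as follows. This is a hedged illustration, not the project's aggregator: the field names (`file`, `line`, `message`, `severity`) and the severity ranking are assumptions.

```python
# Assumed severity ordering; lower rank sorts first
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def aggregate(parser_findings, review_comments):
    """Merge findings, drop duplicates on (file, line, message), sort by severity."""
    seen, unique = set(), []
    for item in parser_findings + review_comments:
        key = (item["file"], item["line"], item["message"])
        if key not in seen:
            seen.add(key)
            unique.append(item)
    return sorted(unique, key=lambda i: SEVERITY_RANK.get(i["severity"], 99))

issues = aggregate(
    [{"file": "app.py", "line": 3, "message": "hardcoded secret", "severity": "critical"}],
    [
        # Same finding surfaced by the review agent — collapses as a duplicate
        {"file": "app.py", "line": 3, "message": "hardcoded secret", "severity": "critical"},
        {"file": "app.py", "line": 9, "message": "missing docstring", "severity": "low"},
    ],
)
```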
When users react to review comments, Open Rabbit learns from the feedback:
```mermaid
sequenceDiagram
    autonumber
    participant User as Developer
    participant GH as GitHub
    participant Bot as Bot Service
    participant API as Backend API
    participant Feedback as Feedback Agent
    participant KB as Knowledge Base

    User->>GH: React to comment (thumbs up/down)
    GH->>Bot: Webhook (issue_comment reaction)
    Bot->>API: POST /feedback
    API->>Feedback: Process feedback
    Feedback->>Feedback: Extract context
    Feedback->>Feedback: Determine sentiment
    Feedback->>Feedback: Generate learning

    alt Positive feedback
        Feedback->>KB: Store as positive pattern
        KB-->>Feedback: Learning stored
    else Negative feedback
        Feedback->>KB: Store as anti-pattern
        KB-->>Feedback: Learning stored
    end

    Feedback-->>API: FeedbackResult
    API-->>Bot: Acknowledgment
```
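The sentiment branch in the feedback flow can be sketched as a single mapping step. This is a hypothetical illustration: the function name, field names, and reaction encoding (`"+1"`/`"-1"`, matching GitHub's reaction content values) are assumptions, not the Feedback Agent's real schema.

```python
def feedback_to_learning(comment_text: str, reaction: str) -> dict:
    """Turn a comment reaction into a knowledge-base learning record."""
    sentiment = "positive" if reaction == "+1" else "negative"
    return {
        "learning": comment_text,
        "sentiment": sentiment,
        # Anti-patterns teach the reviewer what NOT to flag again
        "kind": "pattern" if sentiment == "positive" else "anti-pattern",
    }

learning = feedback_to_learning("Avoid bare except clauses", "+1")
```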
Prerequisites:

- Docker and Docker Compose
- Node.js 18+
- Python 3.11+ & UV
- GitHub App credentials

Clone the repository and start the infrastructure services:

```bash
git clone https://github.com/JagjeevanAK/open-rabbit.git
cd open-rabbit
docker compose up -d
```

This starts:

- PostgreSQL (port 5432)
- Redis (port 6379)
- Elasticsearch (port 9200)
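As an optional sanity check (a sketch, not part of the project), you can verify the containers are listening on their default ports before continuing:

```python
import socket

# Default ports from the list above
SERVICES = {"PostgreSQL": 5432, "Redis": 6379, "Elasticsearch": 9200}

def is_listening(port: int, host: str = "localhost", timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

status = {name: is_listening(port) for name, port in SERVICES.items()}
```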
Set up the backend:

```bash
cd backend
cp .env.example .env
# Edit .env with your configuration

# Install dependencies
uv sync

# Run migrations
uv run alembic upgrade head

# Start the server
uv run uvicorn main:app --port 8080
```

Set up the bot:

```bash
cd bot
cp .env.example .env
# Edit .env with your GitHub App credentials

# Install dependencies
npm install

# Start the bot
npm start
```

Set up the knowledge base:

```bash
cd knowledge-base
cp .env.example .env
# Edit .env with your OpenAI API key

# Install dependencies
uv sync

# Start the service
uv run uvicorn app:app --port 8000
```

Backend configuration (`backend/.env`):

```env
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/openrabbit
REDIS_URL=redis://localhost:6379/0
LLM_PROVIDER=openai  # openai, anthropic, openrouter
OPENAI_API_KEY=sk-...
KB_ENABLED=true
KNOWLEDGE_BASE_URL=http://localhost:8000
```

Bot configuration (`bot/.env`):

```env
APP_ID=your-github-app-id
PRIVATE_KEY_PATH=./private-key.pem
WEBHOOK_SECRET=your-webhook-secret
BACKEND_URL=http://localhost:8080
```

Knowledge base configuration (`knowledge-base/.env`):

```env
OPENAI_API_KEY=sk-...
ELASTICSEARCH_URL=http://localhost:9200
```

To enable Langfuse observability, add to `backend/.env`:

```env
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_HOST=https://cloud.langfuse.com
LANGFUSE_ENABLED=true
```

Open Rabbit automatically reviews PRs when:
- A new PR is opened
- New commits are pushed to a PR
Comment on a PR with:

```
/review
```

Comment on an issue with:

```
/create-unit-test
```
React to review comments to help Open Rabbit learn:
- 👍 Helpful suggestion
- 👎 Not helpful / false positive
- Reply with corrections for the AI to learn from
| Endpoint | Method | Description |
|---|---|---|
| `/bot/health` | GET | Health check |
| `/bot/review` | POST | Trigger manual review |
| `/bot/task-status/{id}` | GET | Get task status |
| `/bot/tasks` | GET | List all tasks |
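A review triggered through this API is asynchronous: you kick it off with a POST, then poll the task status. The sketch below is a hypothetical client, using only the standard library; the bot's address (port 3000, Probot's default) and the request body fields are assumptions, not the real schema.

```python
import json
import urllib.request

BOT_URL = "http://localhost:3000"  # assumed bot address

def trigger_review(owner: str, repo: str, pr_number: int) -> dict:
    """POST /bot/review with a minimal (assumed) payload.

    Calling this requires the bot service to be running locally.
    """
    body = json.dumps(
        {"owner": owner, "repo": repo, "pr_number": pr_number}
    ).encode()
    req = urllib.request.Request(
        f"{BOT_URL}/bot/review",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def task_status_url(task_id: str) -> str:
    """Build the polling URL for GET /bot/task-status/{id}."""
    return f"{BOT_URL}/bot/task-status/{task_id}"
```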
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| `/learnings` | POST | Add new learning |
| `/learnings/search` | GET | Search learnings |
| `/learnings/pr-context` | POST | Get PR-relevant learnings |
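A minimal client sketch for these endpoints, again using only the standard library. The payload field names and query parameters are illustrative assumptions, not the service's actual schema.

```python
import urllib.parse

KB_URL = "http://localhost:8000"  # knowledge-base service from the setup above

def learning_payload(text: str, sentiment: str, repo: str) -> dict:
    """Shape of a POST /learnings body (assumed fields)."""
    return {"learning": text, "sentiment": sentiment, "repo": repo}

def search_url(query: str, limit: int = 5) -> str:
    """Build a GET /learnings/search URL (assumed parameter names)."""
    qs = urllib.parse.urlencode({"q": query, "limit": limit})
    return f"{KB_URL}/learnings/search?{qs}"
```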
```
open-rabbit/
├── backend/                # FastAPI backend server
│   ├── agent/              # Multi-agent system
│   │   ├── supervisor/     # Orchestration layer
│   │   ├── subagents/      # Specialized agents
│   │   ├── evaluators/     # Quality evaluators (LLM-as-a-Judge + custom)
│   │   ├── schemas/        # Agent-specific Pydantic models
│   │   └── services/       # External integrations
│   ├── db/                 # Database models & CRUD
│   ├── schemas/            # Unified Pydantic schemas (API + DB)
│   ├── routes/             # API endpoints
│   └── services/           # Business logic
├── bot/                    # Probot GitHub App
│   └── src/                # TypeScript source
├── knowledge-base/         # Elasticsearch KB service
├── public/                 # Static assets
└── docker-compose.yml      # Infrastructure setup
```
Run the tests:

```bash
# Backend tests
cd backend
uv run pytest

# Bot tests
cd bot
npm test
```

Lint and format:

```bash
# Backend
cd backend
uv run ruff check .
uv run ruff format .

# Bot
cd bot
npm run lint
```

To contribute:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'feat: add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is open source and available under the MIT License.
- Inspired by CodeRabbit
- Built with Probot, FastAPI, and LangChain

