# PR Review CLI

Offline-first pull request review CLI tool using a local LLM.
A command-line tool that uses local Large Language Models (LLMs) to automatically review your code changes. Everything runs offline on your machine; no data leaves your computer, ensuring complete privacy and security.
PR Review CLI is a privacy-focused code review tool that leverages local LLMs to analyze git diffs and provide intelligent code review feedback. It integrates seamlessly with your development workflow, supporting both local usage and CI/CD pipelines.
- Offline-first: All analysis happens locally by default; no data is transmitted to external services
- AI-Powered Reviews: Uses local LLMs (Ollama, vLLM, llama.cpp) for intelligent code analysis
- Detailed Reports: Generates comprehensive reviews with severity levels, categories, and actionable suggestions
- Flexible Configuration: Customize LLM settings, review parameters, and output formats
- CI/CD Ready: Integrates with GitHub Actions and other CI/CD platforms
- Docker Support: Run in containers with pre-configured setups
- Multiple Formats: Output in JSON, Markdown, or terminal-friendly text
Reviews code changes between git branches using a local LLM. It analyzes diffs, finds potential issues (bugs, security vulnerabilities, code quality issues), and provides structured feedback in a format you can use in CI/CD or locally.
## Table of contents

- Installation
- Quick start
- Configuration
- Commands
- Local UI (optional)
- LLM providers
- Docker
- CI/CD
- Security
- Troubleshooting
- Development
## Installation

Install globally from npm:

```bash
npm install -g pull-request-reviewer-ai
```

Or build from source:

```bash
git clone https://github.com/marcinkantyka/pull-request-reviewer-ai.git
cd pull-request-reviewer-ai
npm install
npm run build
```

After building, use `node dist/cli/index.js` instead of `pr-review`, or run `npm link` to install it globally.
## Quick start

You'll need a local LLM server running. Ollama is the easiest option:

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a code review model
ollama pull deepseek-coder:6.7b
```

Then run a review:

```bash
# Review current branch against main
pr-review review --base main

# Compare two specific branches
pr-review compare feature-branch main

# Save results to a file
pr-review compare feature-branch main --format json --output review.json
```

That's all you need. The tool connects to your local LLM (defaults to `http://localhost:11434`) and analyzes your changes.
Here's what a typical review looks like:

```text
────────────────────────────────────────────────────────────────────────────────
Code Review Report
────────────────────────────────────────────────────────────────────────────────
Generated: 2026-02-01T23:17:28.119Z
Source Branch: fix/repo_rename
Target Branch: main
Model: qwen2.5-coder:7b
Duration: 30349ms

Summary
─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
Files Reviewed: 8
Total Issues: 4
Issues by Severity:
  Medium: 1
  Info: 3
Score: 9.3/10

Issues by File
─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─

File: .github/workflows/ci.yml (yaml) +3 -3
[INFO] [maintainability]:88
The Docker image name has been changed to 'pull-request-reviewer-ai:test' but the associated GitHub Secrets should also be updated for consistency.
Suggestion: Update ${ secrets.DOCKER_USERNAME } in .github/workflows/release.yml to match the new image name.

File: QUICKSTART.md (markdown) +2 -2
[INFO] [style]:14
The URL has been changed from 'pr-review-cli' to 'pull-request-reviewer-ai'. Ensure this change is intentional and that the repository name and description are updated accordingly.
Suggestion: Verify that the new repository name and description are accurate and update them if necessary.

File: examples/ci-integration.yml (yaml) +1 -1
[INFO] [maintainability]:35
The tool name has changed from 'pr-review-cli' to 'pull-request-reviewer-ai'. Ensure that the new tool is compatible with the existing workflow.
Suggestion: Verify compatibility and update any documentation if necessary.

File: src/cli/index.ts (typescript) +17 -1
[MEDIUM] [maintainability]:17
The version is hardcoded and falls back to '1.1.0' if the package.json cannot be read.
Suggestion: Consider using a default value or log an error message when the version cannot be determined.
────────────────────────────────────────────────────────────────────────────────
```
The report includes:
- Summary: Overview of files reviewed, total issues, and score
- Issues by Severity: Breakdown of critical, high, medium, low, and info issues
- Issues by File: Detailed findings for each file with line numbers, severity, category, and suggestions
## Configuration

This tool reads configuration from (in order):

1. CLI flag `--config /path/to/config.yml`
2. Project root config files: `pr-review.config.json`, `pr-review.config.yaml`, `pr-review.config.yml`, `pr-review.config.js`, `pr-review.config.ts`, `.pr-reviewrc`, `.pr-reviewrc.json`, `.pr-reviewrc.yaml`, `.pr-reviewrc.yml`, or `package.json` (`pr-review` key)
3. Environment variables (override file values)
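For example, the same settings could live under a `pr-review` key in `package.json`. This is a sketch; the nested key names are assumed to mirror the YAML config shown further below:

```json
{
  "name": "my-app",
  "pr-review": {
    "llm": {
      "endpoint": "http://localhost:11434",
      "model": "deepseek-coder:6.7b"
    }
  }
}
```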
To make runs repeatable and keep everything local:

- Set `temperature: 0`
- Set `seed` to a fixed number
- Use `changeSummaryMode: deterministic`
- Keep `NETWORK_ALLOWED_HOSTS` to localhost or internal Docker hostnames
Example `pr-review.config.yml`:

```yaml
llm:
  endpoint: 'http://localhost:11434' # Ollama default
  provider: 'ollama' # Options: ollama, vllm, llamacpp, openai-compatible, mock
  model: 'deepseek-coder:6.7b'
  temperature: 0
  topP: 1
  timeout: 60000 # Milliseconds
  maxTokens: 2048
  apiKey: '' # Optional, for secured endpoints
  seed: 42 # Deterministic outputs
  retries: 3
  retryDelay: 1000

network:
  allowedHosts:
    - 'localhost'
    - '127.0.0.1'
    - '::1'

review:
  maxFiles: 50
  maxLinesPerFile: 1000
  excludePatterns:
    - '*.lock'
    - '*.min.js'
    - '*.min.css'
    - 'node_modules/**'
    - 'dist/**'
    - 'build/**'
    - '.git/**'
  includeAllFiles: false # Set true to ignore exclude patterns and size limits
  changeSummaryMode: deterministic # deterministic | llm
  projectContext: '' # Optional: domain rules or architecture notes to guide reviews

  # Context-aware review options
  contextAware: true # Enable multi-file context review
  groupByDirectory: true # Group files in same directory
  groupByFeature: true # Group files by feature/module
  maxGroupSize: 5 # Maximum files per group
  directoryDepth: 2 # Directory levels for grouping
  concurrency: 3 # Parallel review groups

output:
  defaultFormat: 'text'
  colorize: true
  showDiff: false

git:
  diffContext: 3
  maxDiffSize: 10485760 # 10MB

server:
  host: '127.0.0.1'
  port: 0 # 0 = random free port
```

You can also use environment variables instead of a config file:
```bash
export LLM_ENDPOINT=http://localhost:11434
export LLM_MODEL=deepseek-coder:6.7b
export LLM_PROVIDER=ollama
export LLM_API_KEY=your-key-here # Optional
export LLM_TIMEOUT=60000
export LLM_SEED=42 # Optional
export LLM_TOP_P=1
export LLM_MAX_TOKENS=2048
export REVIEW_PROJECT_CONTEXT="Explain domain constraints or key rules"
export NETWORK_ALLOWED_HOSTS=ollama,localhost,127.0.0.1,::1
export UI_HOST=127.0.0.1
export UI_PORT=47831
```

Run `pr-review config init` to generate a default config file with all the available options.
`review.projectContext` (or `REVIEW_PROJECT_CONTEXT`) lets you inject short, project-specific rules that the model must respect. This reduces false positives and aligns findings with how your system actually works.
Use it to document architecture constraints, infrastructure protections, or known tradeoffs. Example:
```yaml
review:
  projectContext: >
    Do not flag exposed ports as security issues; the service is private behind an ALB.
    Authentication is handled by the gateway, not in this repo.
```

Or with environment variables:

```bash
export REVIEW_PROJECT_CONTEXT="Do not flag exposed ports as security issues; the service is private behind an ALB."
```

Set deterministic output:

- `llm.temperature: 0`
- `llm.seed: <fixed number>`
- `llm.topP: 1`
- `review.changeSummaryMode: deterministic`
Review every file (ignore filters):

- `review.includeAllFiles: true`

Use a different config file:

```bash
pr-review review --config /path/to/config.yml --base main
```

Allow internal Docker hostnames:

- Add hostnames to `network.allowedHosts` or set `NETWORK_ALLOWED_HOSTS`
## Commands

Get help for any command:

```bash
pr-review --help # Show general help
pr-review review --help # Show help for review command
pr-review compare --help # Show help for compare command
pr-review config --help # Show help for config command
```

Review the current branch against a base branch:

```bash
pr-review review --base main
pr-review review --base develop --format json --output review.json
```

Compare two specific branches:

```bash
pr-review compare feature-branch main
pr-review compare feature-branch main --severity high --max-files 20
```

Manage configuration:

```bash
pr-review config init # Create default config file
pr-review config init --output custom.yml # Create config with custom name
pr-review config get llm.endpoint # Get specific config value
pr-review config list # List all configuration
```
Global options:

- `--server` - Start the local UI server
- `--host <host>` - UI server host (default: 127.0.0.1)
- `--port <port>` - UI server port (default: 47831; set 0 for random)

Common options available for both review and compare commands:

- `--repo-path <path>` - Repository path (default: current working directory)
- `--format <json|md|text>` - Output format (default: text)
- `--output <file>` - Save to file instead of stdout
- `--config <file>` - Path to custom config file
- `--severity <all|high|critical>` - Filter by severity level (default: all)
- `--files <pattern>` - File pattern to review (glob pattern)
- `--max-files <number>` - Limit number of files to review
- `--timeout <seconds>` - LLM timeout in seconds (default: 60)
- `--verbose` - Show detailed logs
- `--no-color` - Disable colored output
- `--exit-code` - Exit with code 1 if issues found (useful for CI)

The `review` command also accepts:

- `--base <branch>` - Base branch to compare against (default: main)

The `compare` command takes two required arguments:

- `<source-branch>` - Source branch to review
- `<target-branch>` - Target branch to compare against
## Local UI (optional)

Start a local web UI without changing the default CLI workflow:

```bash
pr-review --server
```

Default port is 47831. If that port is busy, the server falls back to a random free port. You can always force a random port with `--port 0`. The UI is local-only by default. `--server` cannot be combined with other commands.
In UI mode, only JSON/YAML config files are accepted.
The UI compares committed branch diffs. Uncommitted working tree changes are not included.
If you use Ollama or another local provider, the UI can load available models and detect local servers automatically.
You can also generate a full pr-review.config.yml template from the UI (Advanced panel).
To use the UI:

1. Run `pr-review --server` and open the URL printed in the console.
2. Pick your repository path (or use the current directory).
3. Choose a base branch (default is `main`).
4. (Optional) Load models and pick an LLM model.
5. Click Run Review to generate results.

Tip: Use Advanced to set project-specific context and other overrides for a single run.
## LLM providers

Works with any OpenAI-compatible API. Tested with:

- Ollama (recommended) - `http://localhost:11434`
- vLLM - `http://localhost:8000`
- llama.cpp server - `http://localhost:8080`
- LM Studio / LocalAI - `http://localhost:1234`

Set the provider and endpoint in your config or via environment variables. For OpenAI-compatible servers, use the base URL (the tool will call `/v1/models` and `/v1/chat/completions`).
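The probe route for checking that a server is up depends on the provider. A small sketch of that mapping (illustrative; Ollama's `/api/tags` route is part of its documented API, and the other providers listed above are assumed to expose the OpenAI-compatible `/v1/models`):

```javascript
// Build a health-check URL for a local LLM endpoint before running a review.
// Ollama exposes /api/tags natively; the other providers are assumed to be
// OpenAI-compatible and expose /v1/models.
function healthCheckUrl(provider, endpoint) {
  const base = endpoint.replace(/\/+$/, ""); // strip trailing slashes
  return provider === "ollama" ? `${base}/api/tags` : `${base}/v1/models`;
}

// Usage sketch:
// fetch(healthCheckUrl("ollama", "http://localhost:11434"))
//   .then((res) => console.log(res.ok ? "LLM server is up" : "unhealthy"));
```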
## Docker

The easiest way to get started with Docker:

```bash
cd docker
./start.sh
```

The script handles everything for you:
- Downloads models if needed (only the first time)
- Starts services with secure internal network (no internet access)
- Blocks all outbound traffic for security
Build and run the image directly:

```bash
# Build
docker build -f docker/Dockerfile -t pull-request-reviewer-ai .

# Run standalone
docker run --rm \
  -v $(pwd):/repos:ro \
  -e LLM_ENDPOINT=http://host.docker.internal:11434 \
  pull-request-reviewer-ai compare feature main
```

For a complete setup with Ollama and secure network isolation:

```bash
cd docker
docker-compose up
# or
docker compose up
```

The compose setup uses `internal: true` network mode, which completely blocks internet access from containers. Models are automatically downloaded on first run via `start.sh`. Check out `docker/README.md` for more details.
## CI/CD

See `examples/ci-integration.yml` for a complete example. Here's a basic setup:

```yaml
name: Code Review

on:
  pull_request:
    branches: [main, develop]

jobs:
  review:
    runs-on: ubuntu-latest

    services:
      ollama:
        image: ollama/ollama:latest
        ports:
          - 11434:11434
        options: >-
          --health-cmd "curl -f http://localhost:11434/api/tags || exit 1"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Full history for git operations

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install PR Review CLI
        run: npm install -g pull-request-reviewer-ai

      - name: Wait for Ollama
        run: |
          until curl -f http://localhost:11434/api/tags; do
            echo "Waiting for Ollama..."
            sleep 2
          done

      - name: Pull Ollama model
        run: |
          curl http://localhost:11434/api/pull -d '{"name": "deepseek-coder:1.3b"}'
        timeout-minutes: 10

      - name: Run code review
        run: |
          pr-review compare ${{ github.head_ref }} ${{ github.base_ref }} \
            --format json \
            --output review.json \
            --exit-code
        env:
          LLM_ENDPOINT: http://localhost:11434
          LLM_MODEL: deepseek-coder:1.3b
```
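With `--format json --output review.json --exit-code` you can also gate the build with your own policy. A sketch of a custom gate script, assuming the JSON report exposes an `issues` array with per-issue `severity` fields (these field names are assumptions based on the sample report above, not a documented schema):

```javascript
// ci-gate.js (hypothetical filename) — fail the build if the review contains
// findings at or above a threshold. NOTE: the `issues` / `severity` field
// names are assumptions, not a documented schema of this tool.
const ORDER = ["info", "low", "medium", "high", "critical"];

// Count issues whose severity is at or above `threshold`.
function countAtOrAbove(report, threshold) {
  const min = ORDER.indexOf(threshold);
  return (report.issues || []).filter(
    (issue) => ORDER.indexOf(String(issue.severity).toLowerCase()) >= min
  ).length;
}

// Load a saved report and set a non-zero exit code on blocking findings.
function gate(path = "review.json", threshold = "high") {
  const fs = require("fs");
  const report = JSON.parse(fs.readFileSync(path, "utf8"));
  const blocking = countAtOrAbove(report, threshold);
  console.log(`Blocking issues (>= ${threshold}): ${blocking}`);
  process.exitCode = blocking > 0 ? 1 : 0;
}
```

You would run it as an extra CI step after the review step, e.g. `node ci-gate.js`.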
#### Minimal offline-safe example
This version explicitly restricts outbound connections to the local Ollama service.
```yaml
name: Offline-Safe Review

on:
  pull_request:
    branches: [main]

jobs:
  review:
    runs-on: ubuntu-latest

    services:
      ollama:
        image: ollama/ollama:latest
        ports:
          - 11434:11434

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - run: npm install -g pull-request-reviewer-ai

      - run: pr-review compare ${{ github.head_ref }} ${{ github.base_ref }} --format md
        env:
          LLM_ENDPOINT: http://localhost:11434
          LLM_MODEL: deepseek-coder:1.3b
          NETWORK_ALLOWED_HOSTS: localhost,127.0.0.1,::1
```

For a more complete example with PR comments, see `examples/ci-integration.yml`.
## Security

The tool only allows connections to localhost by default. If you try to connect to an external host, it will be blocked unless you explicitly allow it in `NETWORK_ALLOWED_HOSTS`. Everything runs locally on your machine; your code never leaves your computer unless you configure a non-local endpoint.
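Conceptually, the allowlist works like a host check on the endpoint URL before any request is made. A minimal sketch of the idea (illustrative only; this is not the tool's actual implementation):

```javascript
// Illustrative sketch of a NETWORK_ALLOWED_HOSTS-style check; not the tool's
// actual implementation.
function isHostAllowed(urlString, allowedHosts = ["localhost", "127.0.0.1", "::1"]) {
  const { hostname } = new URL(urlString);
  // The WHATWG URL API keeps brackets around IPv6 literals, e.g. "http://[::1]:11434".
  const host = hostname.replace(/^\[|\]$/g, "");
  return allowedHosts.includes(host);
}
```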
The local UI server only accepts loopback connections. Remote network requests are blocked even if you bind to 0.0.0.0.
## Troubleshooting

#### LLM provider is not available

If you see this error, make sure your LLM server is actually running. For Ollama, try:

```bash
ollama serve
curl http://localhost:11434/api/tags
```

#### Timeout errors

If reviews are timing out, you can increase the timeout. Either use the CLI flag:

```bash
pr-review compare feature main --timeout 120
```

Or set it in your config file (timeout is in milliseconds):

```yaml
llm:
  timeout: 120000 # 2 minutes
```

#### Network security errors

The tool only allows localhost connections by default. Make sure your endpoint uses `localhost`, `127.0.0.1`, or `::1`. Any other hostname will be rejected unless you add it to `network.allowedHosts` (or `NETWORK_ALLOWED_HOSTS`).
## Development

```bash
# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test

# Lint
npm run lint

# Type check
npm run typecheck
```

We welcome contributions! Please see our Contributing Guide for details on how to contribute to this project.
## License

MIT