We're thrilled that you're interested in contributing to OpenAlgo! This guide will help you get started, whether you're fixing a bug, adding a new broker, improving documentation, or building new features.
Below you'll find everything you need to set up OpenAlgo on your computer and start contributing.
OpenAlgo is built by traders, for traders. We believe in democratizing algorithmic trading by providing a broker-agnostic, open-source platform that puts control back in the hands of traders. Every contribution, no matter how small, helps us achieve this mission.
- Technology Stack
- Development Setup
- Local Development
- Project Structure
- Development Workflow
- Contributing Guidelines
- Testing
- Adding a New Broker
- Frontend Development
- Documentation
- Best Practices
- Getting Help
OpenAlgo uses a Python Flask backend with a React 19 single-page application frontend.
- Python 3.12+ - Core programming language
- uv - Fast Python package manager (replaces pip/venv)
- Flask 3.1+ - Lightweight web framework
- Flask-RESTX - RESTful API with auto-generated Swagger documentation
- SQLAlchemy 2.0+ - Database ORM for data persistence
- Flask-SocketIO 5.6+ - Real-time WebSocket connections for live updates
- Flask-Login - User session management and authentication
- Flask-WTF - Form validation and CSRF protection
- Ruff - Fast Python linter and formatter
- React 19 - Component-based UI library
- TypeScript 5.9+ - Type-safe JavaScript
- Vite 7+ - Fast build tool and dev server
- TailwindCSS 4 - Utility-first CSS framework
- shadcn/ui (Radix UI) - Accessible component primitives
- TanStack Query 5 - Server state management
- Zustand 5 - Client state management
- React Router 7 - Client-side routing
- Plotly.js / Lightweight Charts - Data visualization
- Socket.IO Client - Real-time communication
- Biome.js - Fast linter and formatter
- Vitest - Unit testing framework
- Playwright - End-to-end testing
- pandas 2.3+ - Data manipulation and analysis
- numpy 2.0+ - Numerical computing
- DuckDB - Historical market data storage
- httpx - Modern HTTP client with HTTP/2 support
- websockets 15.0+ - WebSocket client and server
- pyzmq - ZeroMQ for high-performance message queue
- APScheduler - Background task scheduling
- scipy / py_vollib / numba - Options analytics and Greeks
- argon2-cffi - Secure password hashing
- cryptography - Token encryption
- Flask-Limiter - Rate limiting
- Flask-CORS - CORS protection
Important: You will need Python 3.12+, Node.js 20/22/24, and the uv package manager.
Before you begin, make sure you have the following installed:
- Python 3.12+ - Download Python
- Node.js 20, 22, or 24 - Download Node.js
- Git - Download Git
- Code Editor - VS Code recommended with extensions:
- Python
- Pylance
- Biome
- Tailwind CSS IntelliSense
- Basic Knowledge of Flask and React
# Clone the repository
git clone https://github.com/marketcalls/openalgo.git
cd openalgo
# Install uv package manager (if not already installed)
pip install uv
# Sync Python dependencies (uv handles virtualenv automatically)
uv sync
# Build React frontend (required before first run)
cd frontend
npm install
npm run build
cd ..

Important:
Always use uv run to run Python commands. Never use global Python or manually manage virtual environments. The uv tool automatically creates and manages a .venv for the project.
# Copy the sample environment file
cp .sample.env .env
# Generate secure random keys for APP_KEY and API_KEY_PEPPER:
uv run python -c "import secrets; print(secrets.token_hex(32))"
# Edit .env and update:
# 1. APP_KEY (paste generated key)
# 2. API_KEY_PEPPER (paste another generated key)
# 3. VALID_BROKERS (comma-separated list of brokers to enable)
# 4. Broker API credentials

Note:
Static IP whitelisting: Many Indian brokers require you to whitelist a static IP address when generating API keys and secrets. If you are developing locally, you may need to whitelist your public IP. For cloud/VPS deployments, use the server's static IP. Check your broker's API documentation for specific requirements.
# Development mode (auto-reloads on backend code changes)
uv run app.py
# Application will be available at http://127.0.0.1:5000

For the best development experience when working on the frontend, use two terminals:
Terminal 1 - React Dev Server (hot reload):
cd frontend
npm run dev
# Frontend dev server at http://localhost:5173 with hot module replacement

Terminal 2 - Flask Backend:
uv run app.py
# Backend API at http://127.0.0.1:5000

Note: The React dev server proxies API requests to the Flask backend. For production testing, build the frontend with npm run build and access everything through Flask at port 5000.
# Run with Gunicorn
uv run gunicorn --worker-class eventlet -w 1 app:app
# IMPORTANT: Use -w 1 (one worker) for WebSocket compatibility

- Access the application: Navigate to http://127.0.0.1:5000
- Setup account: Go to http://127.0.0.1:5000/setup
- Create admin user: Fill in the setup form
- Login: Use your credentials to access the dashboard
- Configure broker: Navigate to Settings and set up your broker
- Main app: http://127.0.0.1:5000
- React frontend: http://127.0.0.1:5000/react
- Swagger API docs: http://127.0.0.1:5000/api/docs
- API Analyzer: http://127.0.0.1:5000/analyzer
Understanding the codebase structure will help you contribute effectively:
openalgo/
├── app.py # Main Flask application entry point
├── pyproject.toml # Python dependencies & tool config (uv/ruff/pytest)
├── frontend/ # React 19 SPA (TypeScript + Vite)
│ ├── src/
│ │ ├── components/ # React components (shadcn/ui based)
│ │ ├── pages/ # Route-level page components
│ │ ├── hooks/ # Custom React hooks
│ │ ├── api/ # API client functions
│ │ ├── stores/ # Zustand state stores
│ │ ├── lib/ # Utility functions
│ │ └── App.tsx # Root component with routing
│ ├── package.json # Node.js dependencies
│ ├── biome.json # Biome linter/formatter config
│ ├── tsconfig.json # TypeScript configuration
│ ├── vite.config.ts # Vite build configuration
│ └── dist/ # Production build output (gitignored)
├── blueprints/ # Flask blueprints for web routes
│ ├── auth.py # Authentication routes
│ ├── react_app.py # Serves React SPA from frontend/dist/
│ └── ...
├── broker/ # Broker integrations (24+ brokers)
│ ├── zerodha/ # Reference implementation
│ ├── dhan/ # Modern API design
│ ├── angel/ # AngelOne integration
│ └── .../ # Each broker follows standardized structure
├── restx_api/ # REST API endpoints (/api/v1/)
├── services/ # Business logic layer
├── database/ # SQLAlchemy models and database utilities
├── utils/ # Shared utilities and helpers
├── websocket_proxy/ # Unified WebSocket server (port 8765)
├── test/ # Python test files
├── strategies/ # Trading strategy examples
├── db/ # SQLite/DuckDB database files
└── .env # Environment config (create from .sample.env)
- frontend/: React 19 SPA with TypeScript, built with Vite and served by Flask via blueprints/react_app.py
- broker/: Each subdirectory contains a complete broker integration with api/, database/, mapping/, streaming/, and plugin.json
- restx_api/: RESTful API endpoints with automatic Swagger documentation at /api/docs
- blueprints/: Flask route handlers for UI pages and webhooks
- services/: Business logic separated from route handlers
- websocket_proxy/: Real-time market data streaming via unified WebSocket proxy
- database/: 5 separate databases for isolation (main, logs, latency, sandbox, historify)
# Fork the repository on GitHub (click Fork button)
# Clone your fork
git clone https://github.com/YOUR_USERNAME/openalgo.git
cd openalgo
# Add upstream remote
git remote add upstream https://github.com/marketcalls/openalgo.git
# Verify remotes
git remote -v

Important: Disable GitHub Actions on Your Fork
After forking, go to your fork's Settings → Actions → General (
https://github.com/YOUR_USERNAME/openalgo/settings/actions) and select "Disable actions" under Actions permissions. This prevents CI workflows (frontend builds, Docker pushes) from running on your fork unnecessarily — those workflows are only meant to run on the upstream repository.
The /frontend/dist directory is gitignored and not tracked in the repository. CI automatically builds the frontend when changes are merged to main.
How it works:
- PRs are tested with a fresh frontend build (but not committed)
- When merged to main, CI automatically:
- Builds the frontend (cd frontend && npm run build)
- Pushes Docker image to Docker Hub
For Contributors:
- Build locally for development: cd frontend && npm install && npm run build
- Do NOT commit frontend/dist/ — it is gitignored
- Focus on source code changes — CI handles production builds
# Update your main branch
git checkout main
git pull upstream main
# Create a new branch for your feature
# Branch naming convention:
# - feature/feature-name : New features
# - bugfix/bug-name : Bug fixes
# - docs/doc-name : Documentation
# - refactor/refactor-name : Code refactoring
git checkout -b feature/your-feature-name

Follow these guidelines while developing:
- Follow PEP 8 style guide
- Use 4 spaces for indentation
- Maximum 100 characters line length (configured in Ruff)
- Imports: Standard library → Third-party → Local
- Use Google-style docstrings
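Put together, a module following these conventions might look like this sketch (the module and function names are illustrative, not from the OpenAlgo codebase):

```python
# Standard library imports first
from datetime import datetime

# Third-party imports second, e.g.:
# import httpx
# from flask import Blueprint

# Local imports last, e.g.:
# from utils.logging import get_logger  # hypothetical module


def build_log_line(event: str) -> str:
    """Format a timestamped log line.

    Args:
        event (str): Short event description.

    Returns:
        str: ISO-timestamped log line.
    """
    return f"{datetime.now().isoformat()} {event}"
```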
Run the linter:
# Check Python code
uv run ruff check .
# Auto-fix issues
uv run ruff check --fix .
# Format code
uv run ruff format .

- Follow Biome.js rules (configured in frontend/biome.json)
- Use functional components with hooks
- Component files use PascalCase: MyComponent.tsx
- Use TanStack Query for server state, Zustand for client state
Run the linter:
cd frontend
# Lint code
npm run lint
# Format code
npm run format
# Lint + format in one command
npm run check

We follow the Conventional Commits specification:

- feat: New features
- fix: Bug fixes
- docs: Documentation changes
- style: Code style changes (formatting, no logic change)
- refactor: Code refactoring
- test: Adding or updating tests
- chore: Maintenance tasks
Examples:
git commit -m "feat: add Groww broker integration"
git commit -m "fix: correct margin calculation for options"
git commit -m "docs: update WebSocket setup instructions"
git commit -m "refactor: optimize order processing pipeline"

# Run Python tests
uv run pytest test/ -v
# Run React tests
cd frontend
npm test
# Run end-to-end tests
npm run e2e
# Manual testing:
# 1. Web UI: http://127.0.0.1:5000
# 2. React UI: http://127.0.0.1:5000/react
# 3. API Docs: http://127.0.0.1:5000/api/docs
# 4. API Analyzer: http://127.0.0.1:5000/analyzer

- Application starts without errors (uv run app.py)
- All existing features still work
- New feature works as expected
- Python tests pass (uv run pytest test/ -v)
- Frontend tests pass (cd frontend && npm test)
- No TypeScript errors (cd frontend && npm run build)
- No linting errors (Ruff for Python, Biome for frontend)
- API endpoints return correct responses
- WebSocket connections work (if applicable)
# Add your changes
git add .
# Commit with conventional commit message
git commit -m "feat: add your feature description"
# Push to your fork
git push origin feature/your-feature-name

- Go to your fork on GitHub
- Click "Compare & pull request"
- Fill out the PR template:
- Title: Clear, descriptive title
- Description: What does this PR do?
- Related Issues: Link related issues (e.g., "Closes #123")
- Screenshots: For UI changes, include before/after screenshots
- Testing: Describe how you tested the changes
- Checklist: Complete the PR checklist
OpenAlgo follows a strict incremental contribution standard. We require all contributions to be submitted as:
- One feature per pull request, OR
- One fix per pull request
Why this matters:
OpenAlgo supports a growing list of brokers, and every change must be validated across this broad surface area. Large integrations submitted in a single PR require extensive manual testing and verification that is not practical for the maintainers to review all at once.
Additionally, many contributions today are developed with AI assistance, which can accelerate development substantially but also increases the need for careful human review, testing, and incremental verification before acceptance into a shared upstream project.
What this means in practice:
- Break large features into small, self-contained pull requests
- Each PR should be independently reviewable and testable
- Submit them sequentially — wait for one to be reviewed before sending the next
- Large monolithic PRs or full-project integrations will not be accepted in their current form
- Exception — New broker integrations may be submitted as a single PR since they are self-contained within their own broker/ directory and don't modify core platform code
If you have a large integration or project built on OpenAlgo:
We appreciate and encourage projects built on top of OpenAlgo (it's why we're open-source!). However, we cannot merge large codebases as a single contribution. Instead, extract individual improvements, fixes, or self-contained features and submit them separately. This gives each contribution a much better chance of being reviewed and accepted.
Great ways to get started:
- Documentation
- Fix typos in README or docs
- Improve installation instructions
- Add examples and tutorials
- Bug Fixes
- Check issues labeled "good first issue"
- Fix minor bugs and edge cases
- Improve error messages
- UI Improvements
- Enhance React components
- Improve mobile responsiveness
- Add loading states and animations
- Fix layout issues
- Examples
- Add strategy examples in /strategies
- Create tutorial notebooks
- Document common use cases
More advanced contributions:
- New Broker Integration
- Add support for new brokers
- Complete implementation guide in next section
- Requires understanding of broker APIs
- API Endpoints
- Implement new trading features
- Enhance existing endpoints
- Add new data sources
- React Frontend Features
- Build new pages or components
- Add data visualizations with Plotly/Lightweight Charts
- Improve real-time updates via Socket.IO
- Performance Optimization
- Optimize database queries
- Improve caching strategies
- Reduce API latency
- WebSocket Features
- Add new streaming capabilities
- Improve real-time performance
- Add broker WebSocket adapters
- Testing
- Write Vitest unit tests for React components
- Write Playwright end-to-end tests
- Write pytest tests for backend services
- Improve test coverage
- Security Enhancements
- Audit security vulnerabilities
- Improve authentication
- Enhance encryption
# Run all tests
uv run pytest test/ -v
# Run specific test file
uv run pytest test/test_broker.py -v
# Run single test function
uv run pytest test/test_broker.py::test_function_name -v
# Run tests with coverage
uv run pytest test/ --cov

cd frontend
# Run unit tests (watch mode)
npm test
# Run tests once
npm run test:run
# Run tests with coverage
npm run test:coverage
# Run accessibility tests
npm run test:a11y
# Run end-to-end tests (Playwright)
npm run e2e
# Run e2e tests with UI
npm run e2e:ui

# test/test_feature.py
import pytest

def test_feature():
    """Test your feature here."""
    result = some_function()  # replace with your code under test
    assert result == expected_value

// frontend/src/components/__tests__/MyComponent.test.tsx
import { render, screen } from '@testing-library/react';
import { describe, it, expect } from 'vitest';
import { MyComponent } from '../MyComponent';
describe('MyComponent', () => {
  it('renders correctly', () => {
    render(<MyComponent />);
    expect(screen.getByText('Expected Text')).toBeInTheDocument();
  });
});

// frontend/e2e/my-feature.spec.ts
import { test, expect } from '@playwright/test';
test('feature works end to end', async ({ page }) => {
  await page.goto('/react');
  await expect(page.getByText('Dashboard')).toBeVisible();
});

One of the most valuable contributions is adding support for new brokers. Here's a comprehensive guide:
Create a new directory under /broker/your_broker_name/:
broker/your_broker_name/
├── api/
│ ├── auth_api.py # Authentication and session management
│ ├── order_api.py # Order placement, modification, cancellation
│ ├── data.py # Market data, quotes, historical data
│ └── funds.py # Account balance and margin
├── database/
│ └── master_contract_db.py # Symbol master contract management
├── mapping/
│ ├── order_data.py # Transform OpenAlgo format to broker format
│ └── transform_data.py # General data transformations
├── streaming/
│ └── broker_adapter.py # WebSocket adapter for live data
└── plugin.json # Broker configuration metadata
"""Authentication module for BrokerName."""

def authenticate_broker(data):
    """Authenticate user with broker.

    Args:
        data (dict): Authentication credentials

    Returns:
        dict: Authentication response with status and token
    """
    pass

def get_auth_token():
    """Retrieve stored authentication token.

    Returns:
        str: Active auth token or None
    """
    pass

"""Order management module for BrokerName."""
def place_order_api(data):
    """Place a new order with the broker."""
    pass

def modify_order_api(data):
    """Modify an existing order."""
    pass

def cancel_order_api(order_id):
    """Cancel an order."""
    pass

def get_order_book():
    """Get all orders for the day."""
    pass

def get_trade_book():
    """Get all executed trades."""
    pass

def get_positions():
    """Get current open positions."""
    pass

def get_holdings():
    """Get demat holdings."""
    pass

"""Market data module for BrokerName."""
def get_quotes(symbols):
    """Get real-time quotes for symbols."""
    pass

def get_market_depth(symbol):
    """Get market depth/order book."""
    pass

def get_historical_data(symbol, interval, start_date, end_date):
    """Get historical OHLC data."""
    pass

{
  "broker_name": "brokername",
  "display_name": "Broker Name",
  "version": "1.0.0",
  "auth_type": "oauth2",
  "api_base_url": "https://api.broker.com",
  "features": {
    "place_order": true,
    "modify_order": true,
    "cancel_order": true,
    "websocket": true,
    "market_depth": true,
    "historical_data": true
  }
}

- Add broker to VALID_BROKERS in .env
- Configure broker credentials in .env
- Test authentication flow
- Test each API endpoint via Swagger UI at /api/docs
- Test WebSocket streaming (if supported)
- Validate error handling
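As a quick sanity check before manual testing, you can verify that plugin.json declares the expected top-level keys. A minimal sketch — the required-key set below is inferred from the sample plugin.json shown earlier and may differ from what OpenAlgo's loader actually enforces:

```python
import json

# Inferred from the sample plugin.json earlier in this guide;
# OpenAlgo's actual plugin loader may check different fields.
REQUIRED_KEYS = {
    "broker_name", "display_name", "version",
    "auth_type", "api_base_url", "features",
}


def missing_plugin_keys(raw: str) -> list:
    """Return the sorted list of required keys absent from a plugin.json payload."""
    plugin = json.loads(raw)
    return sorted(REQUIRED_KEYS - set(plugin))


incomplete = '{"broker_name": "brokername", "version": "1.0.0"}'
print(missing_plugin_keys(incomplete))
# ['api_base_url', 'auth_type', 'display_name', 'features']
```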
Study existing broker implementations:
- /broker/zerodha/ - Most complete implementation
- /broker/dhan/ - Modern API design
- /broker/angel/ - WebSocket streaming
The frontend is a React 19 SPA located in /frontend/. It is built with Vite and served by Flask in production via blueprints/react_app.py.
cd frontend
# Start Vite dev server with hot reload
npm run dev
# Available at http://localhost:5173
# Build for production
npm run build
# Output goes to frontend/dist/

OpenAlgo uses shadcn/ui built on Radix UI primitives with Tailwind CSS:
import { Button } from '@/components/ui/button';
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card';
function PortfolioCard() {
  return (
    <Card>
      <CardHeader>
        <CardTitle>Portfolio Value</CardTitle>
      </CardHeader>
      <CardContent>
        <p className="text-2xl font-bold">₹1,25,000</p>
      </CardContent>
    </Card>
  );
}

import { useQuery } from '@tanstack/react-query';
function Positions() {
  const { data, isLoading, error } = useQuery({
    queryKey: ['positions'],
    queryFn: () => api.getPositions(),
  });

  if (isLoading) return <div>Loading...</div>;
  // render positions...
}

import { create } from 'zustand';
interface AppState {
  selectedBroker: string;
  setSelectedBroker: (broker: string) => void;
}

const useAppStore = create<AppState>((set) => ({
  selectedBroker: '',
  setSelectedBroker: (broker) => set({ selectedBroker: broker }),
}));

Use Tailwind utility classes directly. Always use responsive and theme-aware patterns:
{/* Responsive grid */}
<div className="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-4">
  <div>Column 1</div>
  <div>Column 2</div>
  <div>Column 3</div>
</div>

{/* Use CSS variables for theme colors — adapts to light/dark mode */}
<div className="bg-background text-foreground">
  Automatically adapts to theme
</div>

cd frontend
# Lint
npm run lint
# Format
npm run format
# Both (with auto-fix)
npm run check

- Python Docstrings - Use Google-style:

def place_order(symbol, quantity, price, order_type):
    """Place a trading order.

    Args:
        symbol (str): Trading symbol in OpenAlgo format
        quantity (int): Number of shares/contracts
        price (float): Order price (0 for market orders)
        order_type (str): Order type ('MARKET', 'LIMIT', 'SL')

    Returns:
        dict: Order response with order_id and status

    Raises:
        ValueError: If invalid order_type provided
    """
    pass
- TypeScript - Use JSDoc where types alone aren't sufficient:

/**
 * Fetches positions for the current user.
 * Requires active broker authentication.
 */
async function getPositions(): Promise<Position[]> {
  // ...
}
- API Documentation - Use Flask-RESTX decorators:

@api.route('/placeorder')
class PlaceOrder(Resource):
    @api.doc(description='Place a new order')
    @api.expect(order_model)
    @api.marshal_with(order_response_model)
    def post(self):
        """Place a trading order."""
        pass
- Never commit sensitive data

# Bad - Never do this!
API_KEY = 'abc123xyz'

# Good - Use environment variables
import os
API_KEY = os.getenv('BROKER_API_KEY')
- Validate all inputs at system boundaries

def place_order(data):
    if data.get('quantity', 0) <= 0:
        raise ValueError('Quantity must be positive')
    valid_types = ['MARKET', 'LIMIT', 'SL', 'SLM']
    if data.get('order_type') not in valid_types:
        raise ValueError('Invalid order type')
- Use parameterized queries (SQLAlchemy ORM)

# Bad - SQL injection vulnerability!
query = f"SELECT * FROM orders WHERE user_id = {user_id}"

# Good - SQLAlchemy ORM
orders = Order.query.filter_by(user_id=user_id).all()
- Follow OWASP guidelines
- Enable CSRF protection (already configured)
- Use HTTPS in production
- Rate limiting is configured per endpoint
- Sanitize user inputs
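For the last point, the standard library is often enough when echoing user-supplied text back into HTML — a minimal sketch:

```python
from html import escape


def sanitize_for_html(user_input: str) -> str:
    """Escape HTML metacharacters so user-supplied text cannot inject markup."""
    return escape(user_input, quote=True)


print(sanitize_for_html('<script>alert("x")</script>'))
# &lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;
```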
- Optimize database queries

# Bad - N+1 query problem
for user in users:
    orders = Order.query.filter_by(user_id=user.id).all()

# Good - Use eager loading
from sqlalchemy.orm import joinedload
users = User.query.options(joinedload(User.orders)).all()
- Use caching

from cachetools import TTLCache

symbol_cache = TTLCache(maxsize=1000, ttl=300)

def get_symbol_info(symbol):
    if symbol in symbol_cache:
        return symbol_cache[symbol]
    info = fetch_symbol_from_db(symbol)
    symbol_cache[symbol] = info
    return info
- Minimize API calls — use batch endpoints

# Bad - Multiple API calls
for symbol in symbols:
    quote = broker.get_quote(symbol)

# Good - Batch API call
quotes = broker.get_quotes_batch(symbols)
- Write self-documenting code

# Bad
def calc(s, q, p):
    return s * q * p * 0.1

# Good
MARGIN_RATE = 0.1

def calculate_margin(price, quantity, lot_size):
    return price * quantity * lot_size * MARGIN_RATE
- Keep functions small and focused
- Return consistent JSON responses from API endpoints
return {
    'status': 'success' | 'error',
    'message': 'Human-readable message',
    'data': {...}  # Optional payload
}
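One way to keep endpoints honest is a small helper that builds this envelope and rejects anything outside the success/error contract (a sketch, not an existing OpenAlgo utility):

```python
def api_response(status: str, message: str, data=None) -> dict:
    """Build the standard response envelope.

    Raises:
        ValueError: If status is not 'success' or 'error'.
    """
    if status not in ("success", "error"):
        raise ValueError("status must be 'success' or 'error'")
    payload = {"status": status, "message": message}
    if data is not None:
        payload["data"] = data  # optional payload
    return payload


print(api_response("success", "Order placed", {"order_id": "24100100001"}))
```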
# Ensure correct Node.js version (20, 22, or 24)
node --version
# Clean install
cd frontend
rm -rf node_modules
npm install
npm run build
# Check for TypeScript errors
npx tsc --noEmit

# Sync dependencies with uv
uv sync
# If issues persist, recreate the environment
rm -rf .venv
uv sync

# Check WebSocket configuration in .env:
WEBSOCKET_HOST='127.0.0.1'
WEBSOCKET_PORT='8765'
# Ensure only one worker with Gunicorn:
uv run gunicorn --worker-class eventlet -w 1 app:app
# Check firewall settings for port 8765

# SQLite doesn't handle high concurrency well
# Close all connections and restart the app
uv run app.py

- Discord: Join our Discord server for real-time help
- GitHub Discussions: Ask questions in GitHub Discussions
- Documentation: Check docs.openalgo.in
- GitHub Issues: Report bugs in Issues
- Search existing issues — your question might already be answered
- Check documentation — review docs at docs.openalgo.in
- Review error logs — include error messages when asking for help
- Provide context — share your environment (OS, Python version, Node version, broker)
When asking for help, include:
- Clear description of the problem
- Steps to reproduce the issue
- Expected behavior vs actual behavior
- Error messages (full stack trace)
- Environment details:
- OS and version
- Python version (python --version)
- Node.js version (node --version)
- OpenAlgo version
- Broker being used
After submitting your pull request:
- Automated Checks
- CI will build the frontend and run linting
- Ensure all checks pass before requesting review
- Review Feedback
- Address reviewer comments promptly
- Ask questions if feedback is unclear
- Make requested changes in new commits
- Updates
- Push additional commits to your branch
- No need to create a new PR
- Approval & Merge
- Once approved, maintainers will merge
- CI will automatically build the frontend for production
- Be Patient
- Reviews may take a few days
- Maintainers are volunteers
- Ping politely if no response after a week
We value all contributions! Contributors will be:
- Listed in contributors section on GitHub
- Mentioned in release notes for significant contributions
- Part of the OpenAlgo community on Discord
- Be Respectful - Treat everyone with respect
- Be Constructive - Provide helpful feedback
- Be Patient - Remember everyone is learning
- Be Inclusive - Welcome contributors of all skill levels
- Be Professional - Keep discussions focused on code
- Repository: github.com/marketcalls/openalgo
- Issue Tracker: github.com/marketcalls/openalgo/issues
- Documentation: docs.openalgo.in
- Discord: discord.com/invite/UPh7QPsNhP
- PyPI Package: pypi.org/project/openalgo
- YouTube: youtube.com/@openalgoHQ
- Twitter/X: @openalgoHQ
OpenAlgo is released under the AGPL v3.0 License. See the LICENSE file for details.
By contributing to OpenAlgo, you agree that your contributions will be licensed under the AGPL v3.0 License.
Thank you for contributing to OpenAlgo! Your efforts help democratize algorithmic trading and empower traders worldwide. Every line of code, documentation improvement, and bug report makes a difference.
Happy coding, and welcome to the OpenAlgo community!
Built by traders, for traders — making algo trading accessible to everyone.