8 changes: 3 additions & 5 deletions README.md
@@ -1,8 +1,6 @@
# QuantResearch
# QuantResearch_Opcode — Updated README

Live demo: [qrsopcode.netlify.app](https://qrsopcode.netlify.app/)

> **QuantResearch** — research-grade quantitative strategy starter kit with an interactive React/TypeScript frontend (cauweb), Python backtesting core, and legacy Streamlit dashboards archived under `legacy/streamlit/`.
> **QuantResearch_Opcode** — research-grade quantitative strategy starter kit with an interactive React/TypeScript frontend (cauweb), Python backtesting core, and legacy Streamlit dashboards archived under `legacy/streamlit/`.

---

@@ -210,7 +208,7 @@ The frontend expects a stable WS message contract. A suggested minimal schema (e

---

## APIs & Data flows: to be updated
## APIs & Data flows

* **Backtest flow**: Frontend POSTs to `/api/backtest` with strategy config → backend enqueues job → backend emits progress to `backtest:{job_id}` → final results stored in DB and accessible via `/api/backtest/{job_id}/results`.
* **Strategy CRUD**: REST endpoints for strategy create / update / delete, with validation in Python core.
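The README does not pin down the exact payload a worker publishes while the backtest flow above runs. As a purely hypothetical sketch of a progress message on the `backtest:{job_id}` channel (every field name here is an assumption, not the project's actual contract):

```python
import json


def make_progress_message(job_id: str, pct: float, status: str = "running") -> str:
    # Illustrative progress payload for the backtest:{job_id} channel.
    # Field names ("channel", "type", "status", "pct") are assumptions,
    # not the repository's actual WS schema.
    msg = {
        "channel": f"backtest:{job_id}",
        "type": "progress",
        "status": status,
        "pct": round(pct, 2),
    }
    return json.dumps(msg)
```

Whatever shape is ultimately chosen, keeping it versioned and documented next to the frontend keeps the WS contract stable.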
227 changes: 227 additions & 0 deletions legacy/streamlit/README.md
@@ -0,0 +1,227 @@
# QuantResearchStarter

[![Python Version](https://img.shields.io/badge/python-3.10%2B-blue)](https://www.python.org/)
[![License: MIT](https://img.shields.io/badge/license-MIT-green)](LICENSE)
[![CI](https://github.com/username/QuantResearchStarter/actions/workflows/ci.yml/badge.svg)](https://github.com/username/QuantResearchStarter/actions)

A modular, open-source quantitative research and backtesting framework built for clarity, reproducibility, and extensibility. Ideal for researchers, students, and engineers building and testing systematic strategies.

---

## Why this project

QuantResearchStarter aims to provide a clean, well-documented starting point for quantitative research and backtesting. It focuses on:

* **Readability**: idiomatic Python, type hints, and small modules you can read and change quickly.
* **Testability**: deterministic vectorized backtests with unit tests and CI.
* **Extensibility**: plug-in friendly factor & data adapters so you can try new ideas fast.

---

## Key features

* **Data management** — download market data or generate synthetic price series for experiments.
* **Factor library** — example implementations of momentum, value, size, and volatility factors.
* **Vectorized backtesting engine** — supports transaction costs, slippage, portfolio constraints, and configurable rebalancing frequencies (daily, weekly, monthly).
* **Risk & performance analytics** — returns, drawdowns, Sharpe, turnover, and other risk metrics.
* **CLI & scripts** — small tools to generate data, compute factors, and run backtests from the terminal.
* **Production-ready utilities** — type hints, tests, continuous integration, and documentation scaffolding.
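As a rough illustration of the risk & performance analytics bullet above (a sketch, not the library's actual API), a Sharpe ratio and maximum drawdown can be computed from a daily return series like this:

```python
import numpy as np
import pandas as pd


def sharpe(returns: pd.Series, periods_per_year: int = 252) -> float:
    # Annualized Sharpe ratio, assuming a zero risk-free rate.
    return np.sqrt(periods_per_year) * returns.mean() / returns.std(ddof=1)


def max_drawdown(returns: pd.Series) -> float:
    # Worst peak-to-trough decline of the compounded equity curve
    # (returned as a negative fraction, e.g. -0.05 for a 5% drawdown).
    equity = (1 + returns).cumprod()
    return (equity / equity.cummax() - 1).min()
```

The framework's own analytics module may expose different names and signatures; the math above is the standard textbook definition.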

---

## Quick start

### Requirements

* Python 3.10+
* pip

### Install locally

```bash
# Clone the repository
git clone https://github.com/username/QuantResearchStarter.git
cd QuantResearchStarter

# Install package in development mode
pip install -e .

# Install development dependencies (tests, linters, docs)
pip install -e ".[dev]"

# Optional UI dependencies
pip install streamlit plotly
```

### Quick CLI Usage

After installation, you can use the CLI in two ways:

**Option 1: Direct command (if PATH is configured)**
```bash
qrs --help
# generate synthetic sample price series
qrs generate-data -o data_sample/sample_prices.csv -s 5 -d 365
# compute example factors
qrs compute-factors -d data_sample/sample_prices.csv -f momentum -f value -o output/factors.csv
# run a backtest
qrs backtest -d data_sample/sample_prices.csv -s output/factors.csv -o output/backtest_results.json
```

**Option 2: Python module (always works)**
```bash
python -m quant_research_starter.cli --help
python -m quant_research_starter.cli generate-data -o data_sample/sample_prices.csv -s 5 -d 365
python -m quant_research_starter.cli compute-factors -d data_sample/sample_prices.csv -f momentum -f value
python -m quant_research_starter.cli backtest -d data_sample/sample_prices.csv -s output/factors.csv -o output/backtest_results.json
```

### Demo (one-line)

```bash
make demo
```

### Step-by-step demo

```bash
# generate synthetic sample price series
python -m quant_research_starter.cli generate-data -o data_sample/sample_prices.csv -s 5 -d 365

# compute example factors
python -m quant_research_starter.cli compute-factors -d data_sample/sample_prices.csv -f momentum -f value -o output/factors.csv

# run a backtest
python -m quant_research_starter.cli backtest -d data_sample/sample_prices.csv -s output/factors.csv -o output/backtest_results.json

# old layout: start the Streamlit dashboard (only if it still lives in src/)
streamlit run src/quant_research_starter/dashboard/streamlit_app.py
# new layout: the Streamlit app has been archived under legacy/
streamlit run legacy/streamlit/streamlit_app.py
```

---

## Example: small strategy (concept)

```python
from quant_research_starter.backtest import Backtester
from quant_research_starter.data import load_prices
from quant_research_starter.factors import Momentum

prices = load_prices("data_sample/sample_prices.csv")
factor = Momentum(window=63)
scores = factor.compute(prices)

bt = Backtester(prices, signals=scores, capital=1_000_000)
results = bt.run()
print(results.performance.summary())
```

### Rebalancing Frequency

The backtester supports different rebalancing frequencies to match your strategy needs:

```python
from quant_research_starter.backtest import VectorizedBacktest
# Daily rebalancing (default)
bt_daily = VectorizedBacktest(prices, signals, rebalance_freq="D")

# Weekly rebalancing (reduces turnover and transaction costs)
bt_weekly = VectorizedBacktest(prices, signals, rebalance_freq="W")

# Monthly rebalancing (lowest turnover)
bt_monthly = VectorizedBacktest(prices, signals, rebalance_freq="M")

results = bt_monthly.run()
```

Supported frequencies:
- `"D"`: Daily rebalancing (default)
- `"W"`: Weekly rebalancing (rebalances when the week changes)
- `"M"`: Monthly rebalancing (rebalances when the month changes)

> The code above is illustrative—see `examples/` for fully working notebooks and scripts.

---

## CLI reference

Run `python -m quant_research_starter.cli --help` or `python -m quant_research_starter.cli <command> --help` for full usage. Main commands include:

* `python -m quant_research_starter.cli generate-data` — create synthetic price series or download data from adapters
* `python -m quant_research_starter.cli compute-factors` — calculate and export factor scores
* `python -m quant_research_starter.cli backtest` — run the vectorized backtest and export results

**Note:** If you have the `qrs` command in your PATH, you can use `qrs` instead of `python -m quant_research_starter.cli`.

---

## Project structure (overview)

```
QuantResearchStarter/
├─ src/quant_research_starter/
│ ├─ data/ # data loaders & adapters
│ ├─ factors/ # factor implementations
│ ├─ backtest/ # backtester & portfolio logic
│ ├─ analytics/ # performance and risk metrics
│ ├─ cli/ # command line entry points
│ └─ dashboard/ # optional Streamlit dashboard
├─ examples/ # runnable notebooks & example strategies
├─ tests/ # unit + integration tests
└─ docs/ # documentation source
```

---

## Tests & CI

We include unit tests and a CI workflow (GitHub Actions). Run tests locally with:

```bash
pytest -q
```

The CI pipeline runs linting, unit tests, and builds docs on push/PR.

---

## Contributing

Contributions are very welcome. Please follow these steps:

1. Fork the repository
2. Create a feature branch
3. Add tests for new behavior
4. Open a pull request with a clear description and rationale

Please review `CONTRIBUTING.md` and the `CODE_OF_CONDUCT.md` before submitting.

---

## AI policy — short & practical

**Yes — you are allowed to use AI tools** (ChatGPT, Copilot, Codeium, etc.) to help develop, prototype, or document code in this repository.

A few friendly guidelines:

* **Be transparent** when a contribution is substantially generated by an AI assistant — add a short note in the PR or commit message (e.g., "Generated with ChatGPT; reviewed and adapted by <your-name>").
* **Review and test** all AI-generated code. Treat it as a helpful draft, not final production-quality code.
* **Follow licensing** and attribution rules for any external snippets the AI suggests. Don’t paste large verbatim copyrighted material.
* **Security & correctness**: double-check numerical logic, data handling, and anything that affects trading decisions.

This policy is intentionally permissive: we want the community to move fast while keeping quality and safety in mind.

---

## License

This project is licensed under the MIT License — see the `LICENSE` file for details.

---

## Acknowledgements

Built with inspiration from open-source quant libraries and the research community. If you use this project in papers or public work, a short citation or mention is appreciated.
85 changes: 85 additions & 0 deletions src/quant_research_starter/api/alembic/env.py
@@ -0,0 +1,85 @@
from __future__ import with_statement
import os
from logging.config import fileConfig

from sqlalchemy import engine_from_config
from sqlalchemy import pool

from alembic import context

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging. Some environments (CI or
# trimmed alembic.ini) may not include all logger sections; guard against
# that to avoid stopping migrations with a KeyError.
if config.config_file_name is not None:
    try:
        fileConfig(config.config_file_name)
    except Exception:
        # Fall back to a minimal logging configuration if the ini is missing
        # expected logger sections (e.g. 'logger_sqlalchemy'). This makes
        # migrations resilient when run in different environments.
        import logging

        logging.basicConfig(level=logging.INFO)

target_metadata = None

# Use DATABASE_URL env if provided
db_url = os.getenv("DATABASE_URL") or config.get_main_option("sqlalchemy.url")
if db_url:
    # Only set the option if we have a valid string value. Avoid setting None,
    # which causes ConfigParser type errors (option values must be strings).
    config.set_main_option("sqlalchemy.url", str(db_url))


def run_migrations_offline():
    url = config.get_main_option("sqlalchemy.url")
    context.configure(url=url, target_metadata=target_metadata, literal_binds=True)

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    # Determine whether the configured URL uses an async driver. If so,
    # create an AsyncEngine and run the migrations inside an async context
    # while delegating the actual migration steps to a sync callable via
    # `connection.run_sync`. Otherwise, fall back to the classic sync path.
    url = config.get_main_option("sqlalchemy.url")

    def _do_run_migrations(connection):
        context.configure(connection=connection, target_metadata=target_metadata)
        with context.begin_transaction():
            context.run_migrations()

    if url and url.startswith("postgresql+asyncpg"):
        # Async migration path
        import asyncio

        from sqlalchemy.ext.asyncio import create_async_engine

        async_engine = create_async_engine(url, future=True)

        async def run():
            async with async_engine.connect() as connection:
                await connection.run_sync(_do_run_migrations)

        asyncio.run(run())
    else:
        # Sync migration path (classic)
        connectable = engine_from_config(
            config.get_section(config.config_ini_section),
            prefix="sqlalchemy.",
            poolclass=pool.NullPool,
        )

        with connectable.connect() as connection:
            _do_run_migrations(connection)


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
@@ -0,0 +1,43 @@
"""initial create users and backtest_jobs tables

Revision ID: 0001_initial_create_users_and_jobs
Revises:
Create Date: 2025-11-17
"""
from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision = '0001_initial_create_users_and_jobs'
down_revision = None
branch_labels = None
depends_on = None


def upgrade():
    op.create_table(
        'users',
        sa.Column('id', sa.Integer(), primary_key=True),
        sa.Column('username', sa.String(length=128), nullable=False, unique=True, index=True),
        sa.Column('hashed_password', sa.String(length=256), nullable=False),
        sa.Column('is_active', sa.Boolean(), nullable=False, server_default=sa.text('true')),
        sa.Column('role', sa.String(length=32), nullable=False, server_default='user'),
        sa.Column('created_at', sa.DateTime(), server_default=sa.func.now()),
    )

    op.create_table(
        'backtest_jobs',
        sa.Column('id', sa.String(length=64), primary_key=True),
        sa.Column('user_id', sa.Integer(), sa.ForeignKey('users.id'), nullable=True),
        sa.Column('status', sa.String(length=32), nullable=False, server_default='queued'),
        sa.Column('params', sa.JSON(), nullable=True),
        sa.Column('result_path', sa.String(length=1024), nullable=True),
        sa.Column('created_at', sa.DateTime(), server_default=sa.func.now()),
        sa.Column('updated_at', sa.DateTime(), server_default=sa.func.now(), onupdate=sa.func.now()),
    )


def downgrade():
    # Drop in reverse dependency order: backtest_jobs references users.
    op.drop_table('backtest_jobs')
    op.drop_table('users')
9 changes: 9 additions & 0 deletions src/quant_research_starter/api/routers/__init__.py
@@ -0,0 +1,9 @@
"""API routers package."""

from fastapi import APIRouter

router = APIRouter()

from . import auth as auth_router # noqa: E402,F401
from . import backtest as backtest_router # noqa: E402,F401
from . import assets as assets_router # noqa: E402,F401