QuantRL-Lab

A Python testbed for Reinforcement Learning in finance, designed to enable researchers and developers to experiment with and evaluate RL algorithms in financial contexts. The project emphasizes modularity and configurability, allowing users to tailor the environment, data sources, and algorithmic settings to their specific needs.

Table of Contents

  • Motivation
  • Roadmap
  • Setup Guide
  • Literature Review

Motivation

Addressing the Monolithic Environment Problem

Most existing RL frameworks for finance suffer from tightly coupled, monolithic designs where action spaces, observation spaces, and reward functions are hardcoded into the environment initialization. This creates several critical limitations:

  • Limited Experimentation: Users cannot easily test different reward formulations or action spaces without extensively rewriting the environment
  • Poor Scalability: Adding new asset classes, trading strategies, or market conditions requires significant code restructuring
  • Reduced Reproducibility: Inconsistent interfaces across different environment configurations make fair comparisons difficult
  • Development Overhead: Simple modifications like testing different reward functions or adding new observation features require extensive refactoring
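
QuantRL-Lab instead treats reward, action, and observation logic as swappable strategy objects injected into the environment. The sketch below illustrates that decomposition for the reward component (class and method names are illustrative, not this repo's actual API):

from abc import ABC, abstractmethod

import numpy as np

# Illustrative interfaces only; the actual base classes in this repo may differ.
class RewardStrategy(ABC):
    @abstractmethod
    def compute(self, prev_value: float, curr_value: float) -> float: ...

class LogReturnReward(RewardStrategy):
    def compute(self, prev_value: float, curr_value: float) -> float:
        return float(np.log(curr_value / prev_value))

class DrawdownPenalizedReward(RewardStrategy):
    """A more conservative formulation: log return minus a drawdown penalty."""

    def __init__(self, penalty: float = 0.5):
        self.penalty = penalty
        self.peak = float("-inf")

    def compute(self, prev_value: float, curr_value: float) -> float:
        self.peak = max(self.peak, curr_value)
        drawdown = (self.peak - curr_value) / self.peak
        return float(np.log(curr_value / prev_value)) - self.penalty * drawdown

# The environment receives a strategy instance instead of hardcoding a reward,
# so swapping formulations requires no changes to the environment itself.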

The framework is designed around the following workflow:

  1. Flexible Data Acquisition: Aggregate market data from multiple heterogeneous sources with unified interfaces
  2. Feature Engineering: Systematic selection and analysis of technical indicators (based on vectorized backtesting) for optimal signal generation
  3. Data Processing: Enrich datasets with technical indicators and sentiment analysis from news sources
  4. Environment Configuration: Define trading environments with customizable parameters (portfolio allocation, transaction costs, slippage, observation windows)
  5. Algorithm Training & Tuning: Execute RL algorithm training with preset or configurable hyperparameters
  6. Performance Evaluation: Assess model performance and action distribution
  7. Comparative Analysis: Generate detailed performance reports

flowchart LR
    A[Data Acquisition<br/>Multiple Sources] --> B[Data Processing<br/>Technical Indicators & Sentiment]
    A -.-> C[Feature Engineering<br/>& Selection]
    C -.-> B
    B --> D[Environment Configuration<br/>Action/Reward/Observation Strategies]
    D --> E[Algorithm Training<br/>RL Policy Learning]
    E -.-> F[Hyperparameter Tuning<br/>Optuna Optimization]
    F -.-> E
    E --> G[Performance Evaluation<br/>Metrics & Action Analysis]
    G --> H[Comparative Analysis<br/>Strategy Reports]

    style A fill:#e1f5fe
    style B fill:#f3e5f5
    style C fill:#fff3e0
    style D fill:#e8f5e8
    style E fill:#fce4ec
    style F fill:#fff8e1
    style G fill:#e0f2f1
    style H fill:#f1f8e9

    classDef optional stroke-dasharray: 5 5
    class C,F optional
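
As a concrete illustration of steps 2-3, the sketch below enriches an OHLCV DataFrame with two common indicators using plain pandas (function and column names are illustrative; the repo's own loaders and indicator utilities handle this in practice):

import pandas as pd

def add_basic_indicators(df: pd.DataFrame, window: int = 14) -> pd.DataFrame:
    """Append a simple moving average and a simple-MA RSI to an OHLCV frame."""
    out = df.copy()
    out["sma"] = out["Close"].rolling(window).mean()

    # RSI from average gains vs. average losses over the window
    delta = out["Close"].diff()
    gain = delta.clip(lower=0).rolling(window).mean()
    loss = (-delta.clip(upper=0)).rolling(window).mean()
    out["rsi"] = 100 - 100 / (1 + gain / loss)
    return out.dropna()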

Example usage:

# Easily swappable strategies for experimentation
# For an in-depth example, please refer to backtesting_example.ipynb

sample_env_config = BacktestRunner.create_env_config_factory(
    train_data=train_data_df,
    test_data=test_data_df,
    action_strategy=action_strategy,
    reward_strategy=reward_strategies["conservative"],
    observation_strategy=observation_strategy,
    initial_balance=100000.0,
    transaction_cost_pct=0.001,
    window_size=20
)

runner = BacktestRunner(verbose=1)

# Single experiment
results = runner.run_single_experiment(
    SAC,          # Algorithm to use
    sample_env_config,
    # config=custom_sac_config,  # an optional input arg
    total_timesteps=50000,  # Total timesteps for training
    num_eval_episodes=3
)

BacktestRunner.inspect_single_experiment(results)

# More combinations
presets = ["default", "explorative", "conservative"]

algorithms = [PPO, A2C, SAC]

comprehensive_results = runner.run_comprehensive_backtest(
    algorithms=algorithms,
    env_configs=env_configs,
    presets=presets,
    # custom_configs=custom_configs,  # either use presets or supply your own configs
    total_timesteps=50000,
    n_envs=4,
    num_eval_episodes=3
)
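
The tuning step in the flowchart above uses Optuna. As a standalone sketch of that search loop (shown here with stable-baselines3 on a placeholder Gymnasium environment; in this repo the search would be wired through BacktestRunner and a trading environment instead):

import optuna
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

def objective(trial: optuna.Trial) -> float:
    # Sample a small search space; extend with whatever the algorithm exposes
    params = {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True),
        "gamma": trial.suggest_float("gamma", 0.9, 0.9999),
        "n_steps": trial.suggest_categorical("n_steps", [128, 256, 512]),
    }
    env = gym.make("CartPole-v1")  # stand-in for a trading environment
    model = PPO("MlpPolicy", env, verbose=0, **params)
    model.learn(total_timesteps=10_000)
    mean_reward, _ = evaluate_policy(model, env, n_eval_episodes=3)
    return float(mean_reward)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print("Best hyperparameters:", study.best_params)

The resulting study.best_params can then be fed back into a custom algorithm config for a final training run.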

For more detailed use cases, please refer to the example notebooks in the repository.


Roadmap 🔄

  • Data Source Expansion:
    • Complete integration for more (free) data sources
    • Add crypto data support
    • Add OANDA forex data support
  • Technical Indicators:
    • Add more indicators (Ichimoku, Williams %R, CCI, etc.)
  • Trading Environments:
    • (In-progress) Multi-stock trading environment with hedging pair capabilities
  • Alternative Data for consideration in observable space:
    • Fundamental data (earnings, balance sheets, income statements, cash flow)
    • Macroeconomic indicators (GDP, inflation, unemployment, interest rates)
    • Economic calendar events
    • Sector performance data

Setup Guide

  1. Clone the Repository
git clone https://github.com/whanyu1212/QuantRL-Lab.git
  2. Install Poetry for dependency management
curl -sSL https://install.python-poetry.org | python3 -
  3. Sync dependencies (this also installs the current project in development mode)
poetry install
  4. Activate the virtual environment (note that the poetry shell command is deprecated in recent Poetry versions)
poetry env activate
# a venv path will be printed in the terminal, just copy and run it
# e.g.,
source /home/codespace/.cache/pypoetry/virtualenvs/multi-agent-quant-cj6_z41n-py3.12/bin/activate
  5. Install the Jupyter kernel
# You can change the name and display name according to your preference
python -m ipykernel install --user --name multi-agent-quant --display-name "Multi Agent Quant"
  6. Set up environment variables
# Copy the example environment file
cp .env.example .env

# Open .env file and replace the placeholder values with your actual credentials
# You can use any text editor, here using VS Code
code .env

Make sure to replace all placeholder values in the .env file with your actual API keys and credentials. Never commit the .env file to version control.
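
To load these credentials at runtime, a common pattern is python-dotenv (an illustration only; the variable name below is hypothetical and should match the keys your .env actually defines):

import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env in the working directory

# "NEWS_API_KEY" is a hypothetical example key name
api_key = os.getenv("NEWS_API_KEY")
if api_key is None:
    raise RuntimeError("Missing credential - check your .env file")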


  7. Set up pre-commit hooks
# Install pre-commit
poetry add pre-commit

# Install the git hooks
pre-commit install

# Optional: run pre-commit on all files
pre-commit run --all-files

The pre-commit hooks will check for:

  • Code formatting (black)
  • Import sorting (isort)
  • Code linting (flake8)
  • Docstring formatting (docformatter)
  • Basic file checks (trailing whitespace, YAML validation, etc.)

To skip pre-commit hooks temporarily:

git commit -m "your message" --no-verify

For more details, please refer to .pre-commit-config.yaml file.


Literature Review
