FAI-RL: Foundation AI - Reinforcement Learning Library

A production-ready framework for training, inference, and evaluation of large language models using advanced reinforcement learning techniques. Built for researchers and practitioners who need a flexible, scalable solution for LLM fine-tuning.

Overview

FAI-RL provides a unified, extensible framework for fine-tuning language models with state-of-the-art algorithms:

  • 🎯 Multiple Algorithms: DPO, PPO, GRPO, and GSPO implementations, plus Supervised Fine-Tuning (SFT)
  • 🚀 Production Ready: Validated on AWS p4d instances with 8x A100 GPUs
  • 📦 Simple Configuration: YAML-based configs with CLI override support
  • ⚡ Memory Efficient: Full support for LoRA, QLoRA, and DeepSpeed ZeRO-3
  • 🔧 Highly Extensible: Custom reward functions, dataset templates, and API integrations

Table of Contents

  • 📦 Installation
  • 🔑 Authentication & Setup
  • 🚀 Quick Start
  • Supported Algorithms
  • Training Configurations
  • Key Features
  • 📁 Project Structure
  • Memory Optimization
  • 🧪 System Requirements
  • 📄 License
  • For Maintainers

📦 Installation

Install the Package

pip install --extra-index-url https://download.pytorch.org/whl/cu118 FAI-RL

Clone the Repository for Configuration Recipes

git clone https://github.com/Roblox/FAI-RL.git
cd FAI-RL

Package: https://pypi.org/project/FAI-RL/
Note: The --extra-index-url flag ensures PyTorch is installed with CUDA 11.8 support.

🔑 Authentication & Setup

Before training or using models, you'll need to authenticate with HuggingFace and optionally set up experiment tracking with Weights & Biases.

HuggingFace Authentication

Login to HuggingFace to access models and datasets:

huggingface-cli login

You'll be prompted to enter your HuggingFace access token. You can create a token at https://huggingface.co/settings/tokens.

What this enables:

  • Access to gated models and datasets (if you have permission)

Weights & Biases (Optional)

Login to Weights & Biases for experiment tracking and visualization:

wandb login

You'll be prompted to enter your W&B API key. Get your API key at https://wandb.ai/authorize.

Note: W&B integration is optional. If not logged in, training will proceed without experiment tracking.

🚀 Quick Start

Training

Train a model using any of the supported algorithms (DPO, PPO, GRPO, GSPO, SFT):

# Single GPU training with LoRA
fai-rl-train --recipe recipes/training/sft/llama3_3B_lora.yaml --num-gpus 1

# Multi-GPU training with DeepSpeed
fai-rl-train --recipe recipes/training/dpo/llama3_3B_lora.yaml --num-gpus 8

# Override parameters from CLI
fai-rl-train --recipe recipes/training/sft/llama3_3B_lora.yaml --num-gpus 4 \
  training.learning_rate=5e-5 \
  training.num_train_epochs=3

📖 Complete Training Guide →

Inference

Generate text completions from trained or base models:

# Run inference on a trained model
fai-rl-inference --recipe recipes/inference/llama3_3B.yaml

# Use debug mode for detailed logging
fai-rl-inference --recipe recipes/inference/llama3_3B.yaml --debug

📖 Complete Inference Guide →

Evaluation

Evaluate model performance on academic benchmarks (MMLU, GSM8K):

# Evaluate on MMLU benchmark
fai-rl-eval --recipe recipes/evaluation/mmlu/llama3_3B.yaml --debug

📖 Complete Evaluation Guide →

Supported Algorithms

FAI-RL implements five training algorithms for language model fine-tuning: four state-of-the-art reinforcement learning methods plus supervised fine-tuning:

  • SFT (Supervised Fine-Tuning): Direct supervised learning from labeled examples. Best for instruction tuning and foundational model fine-tuning.
  • DPO (Direct Preference Optimization): Alignment via preference learning without an explicit reward model. Best for human preference alignment and chat model training.
  • PPO (Proximal Policy Optimization): Policy gradient method with a value function and reward model. Best for complex reward functions and multi-objective optimization.
  • GRPO (Group Relative Policy Optimization): Efficient preference learning with group-based comparison. Best for reasoning tasks and competitive response generation.
  • GSPO (Group Sequence Policy Optimization): Advanced sequence-level policy optimization. Best for complex multi-step reasoning and mathematical problem-solving.

Training Configurations

All algorithms support three efficiency modes:

  • Full fine-tuning: High memory usage (baseline), fastest training. Best for small models (<3B parameters) and maximum performance.
  • LoRA: Low memory usage (~10% of full fine-tuning), fast training. Best for most use cases; balanced efficiency.
  • QLoRA: Very low memory usage (~3-4 GB for a 7B model), moderate training speed. Best for large models on consumer GPUs.

Additional features supported across all algorithms:

  • ✅ Multi-GPU training with DeepSpeed ZeRO-3
  • ✅ Gradient checkpointing for memory efficiency
  • ✅ Custom reward functions and dataset templates
  • ✅ Weights & Biases integration for experiment tracking

Key Features

🎯 Flexible Configuration System

  • YAML-based recipes with comprehensive inline documentation for all parameters
  • CLI overrides for runtime parameter changes without editing files
  • Pre-configured templates for popular models (Llama 3, Qwen 3, etc.)
  • Easy experimentation with hyperparameter tuning

🔧 Extensible Architecture

Custom Reward Functions:

  • exact_match_reward_func - Accuracy-based rewards for verifiable tasks
  • structured_xml_reward_func - Format-based rewards for structured outputs
  • Easy to add your own custom reward functions (see the sketch below)
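
As an illustration only, here is a minimal sketch of a custom reward function. It assumes a TRL-style interface (a callable that scores a batch of completion strings and returns one float per completion); the exact signature FAI-RL expects may differ, so treat the module and function names below as hypothetical:

# my_rewards.py - hypothetical module; a minimal sketch, not FAI-RL's actual API
import re
from typing import List

def contains_number_reward_func(completions: List[str], **kwargs) -> List[float]:
    """Toy reward: 1.0 if the completion contains at least one digit, else 0.0.

    Assumes a TRL-style reward signature (batch of completion strings in,
    one float per completion out). Adapt to the signature FAI-RL requires.
    """
    return [1.0 if re.search(r"\d", completion) else 0.0 for completion in completions]

How a custom function is wired into training (for example, referenced by name in a recipe) is framework-specific; see the modules under trainers/rewards/ for the exact pattern.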

Dataset Templates:

  • GSM8KTemplate - Math problem formatting with chain-of-thought
  • OpenMathInstructTemplate - Mathematical instruction formatting
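
To illustrate what a dataset template does, here is a hedged sketch of GSM8K-style formatting with chain-of-thought. The function and field names below are hypothetical and not FAI-RL's actual template interface; see trainers/templates/gsm8k_template.py for the real implementation:

# Hypothetical sketch of a GSM8K-style formatting template; not the actual
# GSM8KTemplate interface - see trainers/templates/gsm8k_template.py.
from typing import Dict

SYSTEM_PROMPT = "Solve the problem step by step, then give the final answer."

def format_gsm8k_example(example: Dict[str, str]) -> Dict[str, str]:
    """Map a raw GSM8K record ({'question', 'answer'}) to prompt/target text."""
    question = example["question"]
    answer = example["answer"]  # GSM8K answers end with '#### <final answer>'
    prompt = f"{SYSTEM_PROMPT}\n\nQuestion: {question}\nAnswer:"
    return {"prompt": prompt, "target": answer}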

Pluggable Components:

  • Extensible trainer base classes for new algorithms
  • HuggingFace Transformers and TRL integration
  • Custom dataset processing pipelines

🌐 Multi-Provider API Support

Native support for commercial LLM APIs with automatic provider detection for inference and evaluation:

Supported Providers:

  • 🤖 OpenAI (GPT-5, GPT-4.5, GPT-4.1, etc.)
  • 🧠 Google (Gemini Pro, Gemini Flash)
  • 💬 Anthropic (Claude 4.5 Sonnet, Opus, etc.)
  • 🏠 Hosted LLM (self-hosted or custom endpoints)

Configuration Example:

# OpenAI ChatGPT - provider detected from endpoint URL
inference:
  api_endpoint: "https://api.openai.com/v1/chat/completions"
  api_key: "sk-..."
  model: "gpt-4.1"  # Just the model name, no prefix needed!

# Google Gemini - provider detected from endpoint URL
inference:
  api_endpoint: "https://generativelanguage.googleapis.com/v1/models/gemini-pro:generateContent"
  api_key: "AIza..."
  model: "gemini-2.5-pro"

# Anthropic Claude - provider detected from endpoint URL
inference:
  api_endpoint: "https://api.anthropic.com/v1/messages"
  api_key: "sk-ant-..."
  model: "claude-sonnet-4-5-20250929"

# Hosted LLM - any custom or self-hosted model endpoint
inference:
  api_endpoint: "https://your-hosted-endpoint.com/v1/chat"
  api_key: "your-api-key"
  model: "your-model-name"

Customization for Custom APIs:

If your hosted LLM uses a non-OpenAI format, customize utils/hosted_llm_config.py:

  • build_hosted_llm_request() - Modify request payload format
  • parse_hosted_llm_response() - Customize response parsing
  • build_hosted_llm_headers() - Adjust authentication headers

Each function includes detailed examples and inline documentation.
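
For example, an endpoint that expects a simple prompt-based payload might be adapted roughly as follows. This is a minimal sketch under assumed signatures; the real parameter lists in utils/hosted_llm_config.py may differ:

# Sketch of possible customizations in utils/hosted_llm_config.py.
# The parameter lists below are illustrative assumptions, not the actual signatures.

def build_hosted_llm_headers(api_key):
    # Swap Bearer auth for whatever scheme your endpoint expects.
    return {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}

def build_hosted_llm_request(prompt, model, generation_params):
    # Reshape the payload to match a non-OpenAI request format.
    # generation_params is assumed to be a dict of sampling options.
    return {"model": model, "prompt": prompt, **generation_params}

def parse_hosted_llm_response(response_json):
    # Pull the generated text out of your endpoint's response schema.
    return response_json["output"]["text"]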

📁 Project Structure

FAI-RL/
├── core/                      # Core framework components
├── trainers/                  # Algorithm implementations
│   ├── rewards/               # Custom reward functions
│   │   ├── accuracy_rewards.py
│   │   └── format_rewards.py
│   └── templates/             # Dataset formatting templates
│       ├── gsm8k_template.py
│       └── openmathinstruct_template.py
├── inference/                 # Inference system
├── evaluations/               # Evaluation system
│   └── eval_datasets/         # Dataset-specific evaluation logic
│       ├── mmlu.py
│       └── gsm8k.py
├── recipes/                   # YAML configuration files
│   ├── training/              # Training recipes (sft/, dpo/, ppo/, grpo/, gspo/)
│   ├── inference/             # Inference recipes
│   └── evaluation/            # Evaluation recipes (mmlu/, gsm8k/)
├── configs/                   # DeepSpeed configurations
│   └── deepspeed/             # ZeRO-3 configs for 1/2/4/8 GPUs
├── utils/                     # Shared utilities
│   └── hosted_llm_config.py   # Custom API endpoint configuration
└── [auto-generated]
    ├── models/                # Trained model checkpoints
    ├── outputs/               # Inference and evaluation results
    └── logs/                  # Training logs

Memory Optimization

FAI-RL provides multiple techniques for efficient training of large models on limited hardware:

Optimization Techniques

  • LoRA: ~90% memory reduction, minimal speed impact. Configure with use_lora: true plus LoRA parameters.
  • QLoRA: ~95% memory reduction, moderate speed impact. Configure with load_in_4bit: true plus LoRA parameters.
  • 8-bit quantization: ~50% memory reduction, minimal speed impact. Configure with load_in_8bit: true.
  • Gradient checkpointing: ~30-50% memory reduction, roughly 20% slower training. Configure with gradient_checkpointing: true.
  • DeepSpeed ZeRO-3: Distributes memory across GPUs; speed impact varies. Enabled automatically for multi-GPU training.

Optimization Strategy

  1. Start with QLoRA if GPU memory is limited (<16GB)
  2. Use LoRA for balanced efficiency on mid-range GPUs (16-40GB)
  3. Full fine-tuning only for small models or high-end GPUs (80GB+)
  4. Enable gradient checkpointing if still encountering OOM errors
  5. Use DeepSpeed ZeRO-3 for multi-GPU setups to distribute memory load

🧪 System Requirements

Validated on Hardware

This framework has been validated on:

  • Instance: AWS EC2 p4d.24xlarge
  • GPUs: 8 x NVIDIA A100-SXM4-80GB (80GB VRAM each)
  • CPU: 96 vCPUs
  • Memory: 1152 GiB
  • Storage: 8TB NVMe SSD
  • Network: 400 Gbps

📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

For Maintainers

Publishing a New Release

  1. Update version in pyproject.toml:
[project]
name = "FAI-RL"
version = "X.Y.Z"  # Increment version
  2. Build and publish:
# Install build tools
pip install --upgrade pip build twine

# Clean previous builds
rm -rf dist/ build/ *.egg-info

# Build the package
python -m build

# Upload to PyPI (requires credentials)
python -m twine upload dist/*
