
Commit 799e9d3

Correct GRPO to Group Relative Policy Optimization (#427)
1 parent 144b6ef commit 799e9d3

2 files changed (+2 additions, −2 deletions)


docs/source/getting_started.md

Lines changed: 1 addition & 1 deletion
@@ -5,5 +5,5 @@ Welcome to TorchForge! This guide will help you get up and running with TorchFor
 TorchForge specializes in post-training techniques for large language models, including:
 
 - **Supervised Fine-Tuning (SFT)**: Adapt pre-trained models to specific tasks using labeled data
-- **Generalized Reward Policy Optimization (GRPO)**: Advanced reinforcement learning for model alignment
+- **Group Relative Policy Optimization (GRPO)**: Advanced reinforcement learning for model alignment
 - **Multi-GPU Distributed Training**: Efficient scaling across multiple GPUs and nodes

docs/source/index.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ Key Features
 ------------
 
 * **Post-Training Focus**: Specializes in techniques
-like Supervised Fine-Tuning (SFT) and Generalized Reward Policy Optimization (GRPO)
+like Supervised Fine-Tuning (SFT) and Group Relative Policy Optimization (GRPO)
 * **PyTorch Integration**: Built natively on PyTorch with
 dependencies on [PyTorch nightly](https://pytorch.org/get-started/locally/),
 [Monarch](https://meta-pytorch.org/monarch), [vLLM](https://docs.vllm.ai/en/latest/),
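
Context for the corrected name: GRPO is "group relative" because each response is scored against the other responses sampled for the same prompt, using the group's mean reward (and typically its standard deviation) as the baseline rather than a learned value function. Below is a minimal sketch of that advantage step using plain PyTorch tensors; the function name and shapes are illustrative only and are not part of the TorchForge API.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Compute GRPO-style advantages by normalizing rewards within each group.

    rewards: tensor of shape (num_prompts, group_size), one scalar reward per
    sampled response. A response's advantage is its reward relative to the
    group mean, scaled by the group standard deviation.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled responses each.
rewards = torch.tensor([[1.0, 0.0, 0.5, 0.5],
                        [0.2, 0.8, 0.8, 0.2]])
print(group_relative_advantages(rewards))
```

These per-response advantages then weight the clipped policy-gradient objective, which is what distinguishes the technique named in the corrected docs from a generic reward-based method.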
