Commit 54a70fb: Update README.md (#271)

Commit message:

* Update README.md
* remove doc links
* add

1 parent: 34b6ac7

File tree: 2 files changed (+11, -48 lines)

.gitignore

Lines changed: 1 addition & 0 deletions

```diff
@@ -199,3 +199,4 @@ assets/wheels/vllm*.whl
 # DCP artifacts
 model_state_dict/
 forge_dcp_tmp/
+demo_top_down.md
```
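This hunk adds `demo_top_down.md` alongside the existing artifact patterns in the ignore list. As a rough illustration of how such patterns apply, here is a very simplified sketch (the `is_ignored` helper is hypothetical; real gitignore matching also handles anchoring, nesting, and negation):

```python
from fnmatch import fnmatch

# Patterns from the hunk above; a trailing "/" marks a directory pattern.
PATTERNS = ["model_state_dict/", "forge_dcp_tmp/", "demo_top_down.md"]

def is_ignored(path, is_dir=False):
    """Simplified gitignore-style check (hypothetical helper, not git's algorithm)."""
    for pat in PATTERNS:
        if pat.endswith("/"):
            # Directory-only pattern: only matches when the path is a directory.
            if is_dir and fnmatch(path, pat.rstrip("/")):
                return True
        elif fnmatch(path, pat):
            return True
    return False
```

With these patterns, `is_ignored("demo_top_down.md")` is true while an unlisted file like `README.md` is not.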

README.md

Lines changed: 10 additions & 48 deletions
```diff
@@ -1,10 +1,10 @@
 # <img width="35" height="35" alt="image" src="https://github.com/user-attachments/assets/2700a971-e5d6-4036-b03f-2f89c9791609" /> Forge
 
 
-#### A PyTorch native agentic library for RL post-training and agentic development
+#### A PyTorch native agentic library for RL post-training and agentic development that lets you focus on algorithms instead of writing infra code.
 
 ## Overview
-Forge was built with one core principle in mind: researchers should write algorithms, not infrastructure. Forge introduces a “service”-centric architecture that provides the right abstractions for distributed complexity. When you need fine-grained control over placement, fault handling or communication patterns, the primitives are there. When you don’t, you can focus purely on your RL algorithm.
+The primary purpose of the Forge ecosystem is to delineate infra concerns from model concerns, thereby making RL experimentation easier. Forge delivers this by providing clear RL abstractions and one scalable implementation of these abstractions. When you need fine-grained control over placement, fault handling/redirecting training loads during a run, or communication patterns, the primitives are there. When you don’t, you can focus purely on your RL algorithm.
 
 Key features:
 - Usability for rapid research (isolating the RL loop from infrastructure)
```
```diff
@@ -18,15 +18,19 @@ Key features:
 > work. It's recommended that you signal your intention to contribute in the
 > issue tracker, either by filing a new issue or by claiming an existing one.
 
-## 📖 Documentation
+## 📖 Documentation (Coming Soon)
 
-View Forge's hosted documentation [at this link](https://meta-pytorch.org/forge/).
+View Forge's hosted documentation (coming soon)
+
+## Tutorials
+
+You can also find our notebook tutorials (coming soon)
 
 ## Installation
 
 ### Basic
 
-Forge requires the latest PyTorch nightly with Monarch, vLLM, and torchtitan. For convenience,
+Forge requires the latest PyTorch nightly with [Monarch](https://github.com/meta-pytorch/monarch), [vLLM](https://github.com/vllm-project/vllm), and [torchtitan](https://github.com/pytorch/torchtitan). For convenience,
 we have pre-packaged these dependencies as wheels in assets/wheels. (Note that the basic install script
 uses [DNF](https://docs.fedoraproject.org/en-US/quick-docs/dnf/), but could be easily extended to other Linux OS.)
 
```
````diff
@@ -40,7 +44,7 @@ conda activate forge
 
 Optional: By default, the packages installation uses conda. If user wants to install system packages on the target machine instead of conda, they can pass the `--use-sudo` to the installation script: `./script/install.sh --use-sudo`.
 
-After install, you can run the following command and should see output confirming GRPO training is running (you need a minimum 3 GPU devices).
+After install, you can run the following command and should see output confirming GRPO training is running (you need a minimum of 3 GPU devices):
 
 ```
 python -m apps.grpo.main --config apps/grpo/qwen3_1_7b.yaml
````
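The GRPO command in this hunk requires at least 3 GPU devices. One way to sanity-check the machine before launching is to count devices from `nvidia-smi --list-gpus` output; this is a sketch, and `count_gpus` is a hypothetical helper, not part of Forge:

```python
import subprocess

def count_gpus(listing=None):
    """Count GPUs in `nvidia-smi --list-gpus` output (hypothetical helper).

    Each device appears on its own line, e.g. "GPU 0: NVIDIA ... (UUID: ...)".
    If no listing is passed, invoke nvidia-smi directly.
    """
    if listing is None:
        listing = subprocess.run(
            ["nvidia-smi", "--list-gpus"],
            capture_output=True, text=True, check=True,
        ).stdout
    return sum(1 for line in listing.splitlines() if line.startswith("GPU "))

if __name__ == "__main__":
    n = count_gpus()
    assert n >= 3, f"the GRPO config expects at least 3 GPUs, found {n}"
```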
````diff
@@ -56,48 +60,6 @@ For your information, since the vLLM wheel is too large for GitHub, we uploaded
 $ gh release create v0.0.0 assets/wheels/vllm-*.whl --title "Forge Wheels v0.0.0"
 ```
 
-### Meta Internal Build (Alternative Route)
-
-1. Build uv package
-
-```bash
-curl -LsSf https://astral.sh/uv/install.sh | sh
-git clone https://github.com/pytorch-labs/forge
-cd forge
-uv sync --all-extras
-source .venv/bin/activate
-```
-
-2. Setup CUDA on local machine
-
-```bash
-# feature install if you don't have /usr/local/cuda-12.8
-feature install --persist cuda_12_9
-
-# add env variables
-export CUDA_VERSION=12.9
-export NVCC=/usr/local/cuda-$CUDA_VERSION/bin/nvcc
-export CUDA_NVCC_EXECUTABLE=/usr/local/cuda-$CUDA_VERSION/bin/nvcc
-export CUDA_HOME=/usr/local/cuda-$CUDA_VERSION
-export PATH="$CUDA_HOME/bin:$PATH"
-export CUDA_INCLUDE_DIRS=$CUDA_HOME/include
-export CUDA_CUDART_LIBRARY=$CUDA_HOME/lib64/libcudart.so
-export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
-```
-
-3. Build vllm from source
-
-```bash
-git clone https://github.com/vllm-project/vllm.git --branch v0.10.0
-cd vllm
-python use_existing_torch.py
-uv pip install -r requirements/build.txt
-uv pip install --no-build-isolation -e .
-```
-
-> [!WARNING]
-> If you add packages to the pyproject.toml, use `uv sync --inexact` so it doesn't remove Monarch and vLLM
-
 ## Quick Start
 
 To run SFT for Llama3 8B, run
````
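The `@@ -start,count +start,count @@` headers shown throughout this commit follow the unified diff format. A minimal sketch that decodes one (the `parse_hunk_header` helper is hypothetical, written here for illustration):

```python
import re

# Unified diff hunk header: "@@ -old_start[,old_count] +new_start[,new_count] @@ [context]"
_HUNK_RE = re.compile(r"@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@")

def parse_hunk_header(header):
    """Return (old_start, old_count, new_start, new_count); omitted counts default to 1."""
    m = _HUNK_RE.match(header)
    if m is None:
        raise ValueError(f"not a hunk header: {header!r}")
    o_start, o_count, n_start, n_count = m.groups()
    return int(o_start), int(o_count or 1), int(n_start), int(n_count or 1)
```

For the last README.md hunk, `parse_hunk_header("@@ -56,48 +60,6 @@ ...")` yields `(56, 48, 60, 6)`: a 48-line span (the Meta internal build instructions) collapses to 6 lines.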
