Merged
Changes from 73 commits
Commits
75 commits
f10285e
support prompts or token IDs in VLLMClient and update API request han…
qgallouedec Mar 5, 2026
7d2bb67
test
qgallouedec Mar 5, 2026
3b356ac
consistency
qgallouedec Mar 5, 2026
82c4508
fix
qgallouedec Mar 5, 2026
3ea2fcf
another fix
qgallouedec Mar 5, 2026
445f4ba
fix docstring
qgallouedec Mar 5, 2026
8c6c88d
Add support for multi-modal inputs in VLLMClient and vllm_serve
qgallouedec Mar 5, 2026
f617b2d
Merge branch 'main' into vllm-accept-token-ids
qgallouedec Mar 6, 2026
eaffd67
Merge branch 'main' into vllm-accept-token-ids
qgallouedec Mar 6, 2026
f3f6a5d
Move `rollout_func` from `_generate_single_turn` to `_generate`
qgallouedec Mar 6, 2026
d417543
fix style
qgallouedec Mar 6, 2026
4b927d6
support multi-image
qgallouedec Mar 6, 2026
029fc1f
style
qgallouedec Mar 6, 2026
20b4039
Merge branch 'vllm-accept-token-ids' into vllm-support-image-with-raw…
qgallouedec Mar 6, 2026
b8e3912
Merge branch 'vllm-support-image-with-raw-token' into move-rollout-func
qgallouedec Mar 6, 2026
07181cb
Fix handling of images in OnlineDPOTrainer to ensure proper structure…
qgallouedec Mar 7, 2026
6ff1e56
Merge branch 'main' into vllm-accept-token-ids
qgallouedec Mar 7, 2026
9f340e4
Merge branch 'vllm-accept-token-ids' into vllm-support-image-with-raw…
qgallouedec Mar 7, 2026
d138be7
Merge branch 'vllm-support-image-with-raw-token' into move-rollout-func
qgallouedec Mar 7, 2026
09128d6
Move tokenization before vLLM generation call
qgallouedec Mar 7, 2026
7fd1711
Fix deadlock issue by ensuring images are always gathered in VLLMGene…
qgallouedec Mar 7, 2026
3ab04b0
Unify tokenization across all generation backends in _generate_single…
qgallouedec Mar 7, 2026
5d6d067
Extract tokenization out of _generate_single_turn into _tokenize_prompts
qgallouedec Mar 7, 2026
b4d2c34
Enhance multimodal input handling in GRPO and RLOO trainers by adding…
qgallouedec Mar 7, 2026
4922362
style
qgallouedec Mar 7, 2026
37c48b3
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 7, 2026
0a264a2
Fix tokenization padding issue in GRPOTrainer to handle unpadded inpu…
qgallouedec Mar 7, 2026
0aa0e30
style
qgallouedec Mar 7, 2026
b490357
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 7, 2026
8fecba1
align rloo
qgallouedec Mar 7, 2026
6c093dd
style
qgallouedec Mar 7, 2026
a9a91c7
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 7, 2026
f033e63
revert doc modif
qgallouedec Mar 9, 2026
5a1f609
Merge branch 'vllm-accept-token-ids' into vllm-support-image-with-raw…
qgallouedec Mar 9, 2026
1eb3540
Merge branch 'vllm-support-image-with-raw-token' into move-rollout-func
qgallouedec Mar 9, 2026
498a564
Merge branch 'move-rollout-func' into vllm-generate-with-token-ids
qgallouedec Mar 9, 2026
be2ff99
Merge branch 'vllm-generate-with-token-ids' into unify-tokenization-g…
qgallouedec Mar 9, 2026
5df2069
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 9, 2026
d3f7971
Merge branch 'main' into vllm-support-image-with-raw-token
qgallouedec Mar 9, 2026
319d52a
simplify multimodal
qgallouedec Mar 9, 2026
d5e1906
Merge branch 'main' into vllm-support-image-with-raw-token
qgallouedec Mar 9, 2026
4ccadcf
Merge branch 'vllm-support-image-with-raw-token' into move-rollout-func
qgallouedec Mar 9, 2026
2a80df9
Merge branch 'move-rollout-func' into vllm-generate-with-token-ids
qgallouedec Mar 9, 2026
a0df552
Merge branch 'vllm-generate-with-token-ids' into unify-tokenization-g…
qgallouedec Mar 9, 2026
3350588
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 9, 2026
0558dc9
Merge branch 'main' into move-rollout-func
qgallouedec Mar 9, 2026
6ebb681
Merge branch 'move-rollout-func' into vllm-generate-with-token-ids
qgallouedec Mar 9, 2026
93640e4
Merge branch 'vllm-generate-with-token-ids' into unify-tokenization-g…
qgallouedec Mar 9, 2026
1c009b0
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 9, 2026
97a813b
Merge branch 'main' into vllm-generate-with-token-ids
qgallouedec Mar 10, 2026
83ab9bd
Merge branch 'vllm-generate-with-token-ids' into unify-tokenization-g…
qgallouedec Mar 10, 2026
408fb2e
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 10, 2026
ade2831
Merge branch 'main' into vllm-generate-with-token-ids
qgallouedec Mar 10, 2026
258e0a8
Update trl/trainer/grpo_trainer.py
qgallouedec Mar 10, 2026
ef96048
Update trl/trainer/rloo_trainer.py
qgallouedec Mar 10, 2026
0ee6495
Merge branch 'vllm-generate-with-token-ids' into unify-tokenization-g…
qgallouedec Mar 10, 2026
bb6dc69
Update trl/trainer/grpo_trainer.py
qgallouedec Mar 10, 2026
0effa0d
Update trl/trainer/rloo_trainer.py
qgallouedec Mar 10, 2026
fad1fdd
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 10, 2026
b35f250
Remove unused chat/tool configuration parameters from VLLM and RLOO t…
qgallouedec Mar 10, 2026
040e392
Update trl/generation/vllm_generation.py
qgallouedec Mar 10, 2026
ca2cae3
Update trl/trainer/rloo_trainer.py
qgallouedec Mar 10, 2026
fee553d
Merge branch 'main' into vllm-generate-with-token-ids
qgallouedec Mar 10, 2026
90df2de
Merge branch 'vllm-generate-with-token-ids' into unify-tokenization-g…
qgallouedec Mar 10, 2026
f36c0ea
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 10, 2026
fdaa90a
fix
qgallouedec Mar 10, 2026
6f10cd2
style
qgallouedec Mar 10, 2026
533c337
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 10, 2026
7e7e3b3
Merge branch 'main' into unify-tokenization-generate
qgallouedec Mar 10, 2026
31d8a0c
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 10, 2026
8b4f6af
Merge branch 'main' into extract-tokenize-prompts
qgallouedec Mar 10, 2026
81cf273
Merge branch 'main' into extract-tokenize-prompts
qgallouedec Mar 10, 2026
918686b
Remove dead code: eliminate prompt tokenization logic from GRPOTraine…
qgallouedec Mar 10, 2026
9b8de83
remove unused extra_fields from _generate_single_turn return value
qgallouedec Mar 10, 2026
6c8f55c
style
qgallouedec Mar 10, 2026
23 changes: 14 additions & 9 deletions trl/trainer/grpo_trainer.py
@@ -1211,11 +1211,8 @@ async def _run_async_funcs():
         rewards_per_func = gather(rewards_per_func)
         return rewards_per_func

-    def _generate_single_turn(self, prompts: list):
-        device = self.accelerator.device
-        mode = "train" if self.model.training else "eval"
-
-        # Tokenize prompts once, shared across all generation backends
+    def _tokenize_prompts(self, prompts: list):
+        """Tokenize prompts and extract images/multimodal fields for generation."""
         if is_conversational({"prompt": prompts[0]}):
             # Extract images from messages for VLM support
             images = []
@@ -1255,6 +1252,11 @@ def _generate_single_turn(self, prompts: list):
             prompt_ids = self.processing_class(text=prompts)["input_ids"]
             images = None
             multimodal_fields = {}
+        return prompt_ids, images, multimodal_fields
+
+    def _generate_single_turn(self, prompt_ids, images, multimodal_fields):
+        device = self.accelerator.device
+        mode = "train" if self.model.training else "eval"

         # Generate completions using either vLLM or regular generation
         if self.use_vllm:
@@ -1456,8 +1458,9 @@ async def _run_async_tools(async_coros):
                     break  # all overlong, exit tool loop

                 # Generate new completions after tool execution
-                prompt_completion_tool_ids, post_tool_ids, post_tool_logprobs = self._generate_single_turn(
-                    prompt_completion_tools
+                pct_prompt_ids, pct_images, pct_multimodal_fields = self._tokenize_prompts(prompt_completion_tools)
+                prompt_completion_tool_ids, post_tool_ids, post_tool_logprobs, _ = self._generate_single_turn(
+                    pct_prompt_ids, pct_images, pct_multimodal_fields
                 )

                 # Sanity check: from experience, this is useful to catch bugs in the chat template
@@ -1549,8 +1552,10 @@ def _generate(self, prompts: list):
             extra_fields = {k: v for k, v in output.items() if k not in required_keys}
             prompt_ids, completion_ids, logprobs = output["prompt_ids"], output["completion_ids"], output["logprobs"]
         else:
-            prompt_ids, completion_ids, logprobs = self._generate_single_turn(prompts)
-            extra_fields = {}
+            prompt_ids, images, multimodal_fields = self._tokenize_prompts(prompts)
+            prompt_ids, completion_ids, logprobs, extra_fields = self._generate_single_turn(
+                prompt_ids, images, multimodal_fields
+            )

         # Decode completions. It's important to use `parse_response` when possible, because it handles tool calls.
         if is_conversational({"prompt": prompts[0]}):
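The refactor above can be sketched in isolation. The toy class below is NOT the real TRL trainer — `ToyTrainer`, its character-level "tokenizer", and its reversed-IDs "generation" are all hypothetical stand-ins — but it shows the call sequence the PR introduces: `_tokenize_prompts` runs once up front, and `_generate_single_turn` then works purely on pre-tokenized IDs (plus images/multimodal fields), regardless of backend.

```python
# Standalone sketch of the tokenize-then-generate split (assumptions: the
# class, tokenization scheme, and dummy generation are all illustrative;
# the real trainers use a processing_class and vLLM/HF generation).

class ToyTrainer:
    def _tokenize_prompts(self, prompts):
        # Stand-in tokenization: map each character to its code point.
        prompt_ids = [[ord(c) for c in p] for p in prompts]
        images = None           # in TRL, extracted from conversational messages
        multimodal_fields = {}  # in TRL, e.g. processor outputs for VLM prompts
        return prompt_ids, images, multimodal_fields

    def _generate_single_turn(self, prompt_ids, images, multimodal_fields):
        # Backend-agnostic: consumes token IDs only, never raw text.
        completion_ids = [ids[::-1] for ids in prompt_ids]  # dummy "generation"
        return prompt_ids, completion_ids

    def _generate(self, prompts):
        # New call sequence: tokenize once, then hand IDs to the backend.
        prompt_ids, images, multimodal_fields = self._tokenize_prompts(prompts)
        return self._generate_single_turn(prompt_ids, images, multimodal_fields)


trainer = ToyTrainer()
prompt_ids, completion_ids = trainer._generate(["ab"])
print(prompt_ids, completion_ids)  # [[97, 98]] [[98, 97]]
```

Keeping tokenization in one place is what lets the multi-turn tool loop above re-enter `_generate_single_turn` with already-tokenized `prompt_completion_tools` instead of re-tokenizing inside every backend branch.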
15 changes: 9 additions & 6 deletions trl/trainer/rloo_trainer.py
@@ -885,11 +885,8 @@ async def _run_async_funcs():
         rewards_per_func = gather(rewards_per_func)
         return rewards_per_func

-    def _generate_single_turn(self, prompts: list):
-        device = self.accelerator.device
-        mode = "train" if self.model.training else "eval"
-
-        # Tokenize prompts once, shared across all generation backends
+    def _tokenize_prompts(self, prompts: list):
+        """Tokenize prompts and extract images/multimodal fields for generation."""
         if is_conversational({"prompt": prompts[0]}):
             # Extract images from messages for VLM support
             images = []
@@ -927,6 +924,11 @@ def _generate_single_turn(self, prompts: list):
             prompt_ids = self.processing_class(text=prompts)["input_ids"]
             images = None
             multimodal_fields = {}
+        return prompt_ids, images, multimodal_fields
+
+    def _generate_single_turn(self, prompt_ids, images, multimodal_fields):
+        device = self.accelerator.device
+        mode = "train" if self.model.training else "eval"

         # Generate completions using either vLLM or regular generation
         if self.use_vllm:
@@ -1026,7 +1028,8 @@ def _generate(self, prompts: list):
         # Copy the prompts to avoid modifying the original list
         prompts = copy.deepcopy(prompts)

-        prompt_ids, completion_ids = self._generate_single_turn(prompts)
+        prompt_ids, images, multimodal_fields = self._tokenize_prompts(prompts)
+        prompt_ids, completion_ids = self._generate_single_turn(prompt_ids, images, multimodal_fields)

         # Decode completions. It's important to use `parse_response` when possible, because it handles tool calls.
         if is_conversational({"prompt": prompts[0]}):