
Commit 999dcc9

Author: root

Reduce max len
1 parent 3cc1aa8 commit 999dcc9

File tree

2 files changed: +2, -2 lines

examples/werewolf/train.sh

Lines changed: 1 addition & 1 deletion
```diff
@@ -24,7 +24,7 @@ python -m agentlightning.verl \
     actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=1 \
     actor_rollout_ref.rollout.multi_turn.format=hermes \
     actor_rollout_ref.model.path=${BASE_MODEL} \
-    data.max_prompt_length=16384 \
+    data.max_prompt_length=10240 \
     data.max_response_length=1024 \
     data.truncation='error' \
     trainer.val_before_train=True \
```
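A quick sanity check on the config change above: verl-style trainers generally expect the prompt and response budgets combined to fit within the rollout engine's maximum sequence length, so lowering `data.max_prompt_length` shrinks the total budget. The limit value below is an assumption for illustration, not taken from the repo.

```python
# Sanity-check sketch for the new length budget in train.sh.
# Assumption: the serving engine's max sequence length must cover
# max_prompt_length + max_response_length (common for verl rollouts).
MAX_PROMPT_LENGTH = 10240    # new value from this commit
MAX_RESPONSE_LENGTH = 1024   # unchanged in this commit
ENGINE_MAX_LEN = 16384       # hypothetical engine limit, for illustration

budget = MAX_PROMPT_LENGTH + MAX_RESPONSE_LENGTH
print(budget)  # 11264
assert budget <= ENGINE_MAX_LEN, "prompt + response exceeds engine context"
```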

examples/werewolf/werewolf_agent.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -360,7 +360,7 @@ def _process_triplets_with_rewards(self, wolf_win_flag: bool, NAME_TO_ROLE: dict
         # Logging check
         prompt_length = len(prompt_ids)
         print(f"Prompt length: {prompt_length} tokens")
-        # if prompt_length >= 16384:  # your max_prompt_length  TODO: handle over-long contexts before sending; strip the think sections from the context
+        # if prompt_length >= 10240:  # your max_prompt_length  TODO: handle over-long contexts before sending; strip the think sections from the context
         #     print(f"WARNING: Prompt truncated! Original length: {prompt_length}")
         prompt = self.tokenizer.decode(prompt_ids)
         response = self.tokenizer.decode(response_ids)
```
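The commented-out TODO above suggests shrinking over-long contexts by removing the reasoning ("think") sections. A minimal sketch of that idea, assuming the think sections are delimited literally by `<think>...</think>` tags (as in hermes/Qwen-style chat formats) and that `shrink_prompt` is a hypothetical helper, not part of the repo:

```python
import re

MAX_PROMPT_LENGTH = 10240  # keep in sync with data.max_prompt_length in train.sh

# Reasoning traces are usually the bulk of a long multi-turn context,
# so dropping <think>...</think> spans is a cheap way to shrink it.
THINK_RE = re.compile(r"<think>.*?</think>", flags=re.DOTALL)

def shrink_prompt(prompt: str, prompt_length: int,
                  max_len: int = MAX_PROMPT_LENGTH) -> str:
    """Return the prompt unchanged if it fits, else with think spans removed."""
    if prompt_length < max_len:
        return prompt
    print(f"WARNING: prompt of {prompt_length} tokens exceeds {max_len}; "
          f"stripping think sections")
    return THINK_RE.sub("", prompt)
```

The stripped text would still need re-tokenizing to confirm it now fits; if it does not, harder truncation (or `data.truncation` other than `'error'`) is the fallback.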
