When using FSDP with trainable tokens, retrieving the state_dict of the TrainableTokensWrapper raised an error. The reason is that the state_dict passed to get_peft_model_state_dict comes from the model after the FSDP wrapper has already been removed, so its keys no longer carry the FSDP-specific prefix. The PEFT code, however, did not strip that prefix from the keys it used to look entries up in said state_dict, so the lookup failed. The prefix is now removed, making the lookup succeed. The same logic applies to set_peft_model_state_dict.
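To make the mismatch concrete, here is a minimal sketch of the failing lookup and the fix, not the actual PEFT implementation; the example keys are illustrative assumptions, and the prefix string is the one FSDP inserts into parameter names.

```python
FSDP_PREFIX = "_fsdp_wrapped_module."  # prefix FSDP inserts into wrapped module/parameter names


def strip_fsdp_prefix(key: str) -> str:
    """Remove every occurrence of the FSDP wrapper prefix from a key."""
    return key.replace(FSDP_PREFIX, "")


def lookup(state_dict: dict, wrapped_key: str) -> str:
    """Look a value up in the unwrapped state_dict via a possibly prefixed key."""
    return state_dict[strip_fsdp_prefix(wrapped_key)]


# Hypothetical key as derived from the FSDP-wrapped model ...
wrapped_key = "model._fsdp_wrapped_module.embed_tokens.token_adapter.trainable_tokens_delta.default"
# ... versus the prefix-free key actually present in the unwrapped state_dict.
unwrapped_state_dict = {
    "model.embed_tokens.token_adapter.trainable_tokens_delta.default": "tensor(...)"
}

# Without strip_fsdp_prefix this would raise a KeyError; with it, the lookup succeeds.
assert lookup(unwrapped_state_dict, wrapped_key) == "tensor(...)"
```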
I was able to start training with FSDP and trainable tokens locally by adjusting the examples/sft script to include trainable tokens. Checkpoints could be created and resumed from successfully. The only additional change I needed was to set use_orig_params=True for FSDP (see the sketch below).
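For reference, a minimal sketch of that setting, assuming FSDP is configured through accelerate's plugin; the examples/sft script typically sets this via the accelerate config file instead, so the exact mechanism may differ.

```python
from accelerate import FullyShardedDataParallelPlugin

# use_orig_params=True was the one FSDP setting that had to be changed.
fsdp_plugin = FullyShardedDataParallelPlugin(use_orig_params=True)

# The plugin would then be passed to Accelerator(fsdp_plugin=fsdp_plugin); when
# using an accelerate config file instead, the equivalent entry is
# fsdp_use_orig_params: true under fsdp_config.
```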