Hi @tbenthompson, thanks for your work on this repo.
I’m trying to reproduce the experiments in Section 6.1 of the paper: Single Task, Single Model.
I’m having some trouble understanding the method pipeline. I’m following the demo you provided, but I’m running into the following error:
```
Traceback (most recent call last):
  File "/disk1/home/rmura/RepBertAttack/flrt/run_FLRT.py", line 96, in <module>
    run_flrt(args.model, args.device)
  File "/disk1/home/rmura/RepBertAttack/flrt/run_FLRT.py", line 86, in run_flrt
    result = attack.attack(cfg)
  File "/disk1/home/rmura/RepBertAttack/flrt/flrt/attack.py", line 157, in attack
    _attack(c)
  File "/disk1/home/rmura/miniconda3/envs/flrt/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/disk1/home/rmura/RepBertAttack/flrt/flrt/attack.py", line 207, in _attack
    victims_dict[mn] = objs[0].victim.setup(
  File "/disk1/home/rmura/RepBertAttack/flrt/flrt/victim.py", line 249, in setup
    ft_model = PeftModel.from_pretrained(
  File "/disk1/home/rmura/miniconda3/envs/flrt/lib/python3.10/site-packages/peft/peft_model.py", line 439, in from_pretrained
    PeftConfig._get_peft_type(
  File "/disk1/home/rmura/miniconda3/envs/flrt/lib/python3.10/site-packages/peft/config.py", line 266, in _get_peft_type
    raise ValueError(f"Can't find '{CONFIG_NAME}' at '{model_id}'")
ValueError: Can't find 'adapter_config.json' at 'output/flrt-finetune/Llama-2-7b-chat-hf-240506_223106'
```
It seems the “toxified” fine-tuned model isn’t available: there are no PEFT adapters in the repo, so `PeftModel.from_pretrained` can’t find `adapter_config.json` at the expected output path.
Are we supposed to recreate a toxified copy of the victim model? If so, could you share some guidance on how you did it?
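For context, here is a minimal sketch of what I assume that step looks like: fine-tune a LoRA adapter with PEFT and save it to the directory that `victim.py` loads from. The dataset, target modules, hyperparameters, and output path below are my guesses (the path is just taken from the traceback), not your actual setup:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Assumed base victim model.
base_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Wrap the base model with a LoRA adapter (target modules / hyperparameters assumed).
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_cfg)

# ... fine-tune `model` here on whatever "toxified" data was used in the paper ...

# save_pretrained() writes adapter_config.json plus the adapter weights
# (adapter_model.safetensors), which is exactly what PeftModel.from_pretrained()
# in victim.py is looking for at this path.
model.save_pretrained("output/flrt-finetune/Llama-2-7b-chat-hf-240506_223106")
```

Is this roughly the right idea, or is there a script/checkpoint I’m missing?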