Hi, thanks for your great work. I have some questions about the section "Continual-Finetuning of WeLore Compressed Models".
Q1: First, in the paper you perform rank reduction on LLaMA-7B and then continual-finetune with LoRA and with WeLore. Reading your code, rank reduction factorizes a weight matrix into two matrices `U, V`. If LoRA is then applied on top of the factorized layer, does that add two more adapter matrices, giving four matrices per `U, V` pair? And when finetuning with WeLore, does that mean rank reduction is performed again, or are the existing `U, V` factors trained directly? (See the sketch below for my reading.)
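
To make Q1 concrete, here is a minimal PyTorch sketch of how I currently understand the two setups. This is my own illustration, not your code: the matrix sizes, `rank`, `lora_rank`, and init scales are placeholder values I picked.

```python
import torch

torch.manual_seed(0)
W = torch.randn(512, 512)   # stand-in for a pretrained weight matrix
rank = 64                   # placeholder target rank after reduction

# Step 1: rank reduction via truncated SVD -> two factors U, V
U_svd, S, Vh = torch.linalg.svd(W, full_matrices=False)
U = U_svd[:, :rank] * S[:rank]   # (512, rank), singular values absorbed into U
V = Vh[:rank, :]                 # (rank, 512)
W_lowrank = U @ V                # rank-reduced replacement for W

# Step 2a: WeLore-style finetuning, as I read it: train the existing
# U and V directly -- no new matrices, no second rank reduction.
U.requires_grad_(True)
V.requires_grad_(True)

# Step 2b: LoRA-style finetuning: freeze W_lowrank = U @ V and add two
# *extra* adapter matrices A, B -- which is why I count four matrices.
lora_rank = 8                                  # placeholder
lora_A = (torch.randn(lora_rank, 512) * 0.01).requires_grad_(True)  # random init
lora_B = torch.zeros(512, lora_rank, requires_grad=True)            # zero init
W_effective = W_lowrank.detach() + lora_B @ lora_A
```

Is this reading correct, or does WeLore finetuning re-run the rank reduction at some point?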
Q2: Second, when you finetune with LoRA, the hyperparameters are not reported, e.g., the LoRA rank. Could you share them? (The sketch below lists the settings I mean.)
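
For clarity, these are the kinds of settings I am asking about, written as a PEFT `LoraConfig`. All values below are placeholders, not the paper's actual settings (those are what Q2 requests):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                   # LoRA rank -- unknown for the paper
    lora_alpha=16,                         # scaling factor -- unknown
    lora_dropout=0.05,                     # unknown
    target_modules=["q_proj", "v_proj"],   # which modules were adapted? unknown
    task_type="CAUSAL_LM",
)
```

Thanks in advance!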