LoRA fine-tuning using PEFT Library (#204)
sidhantls wants to merge 2 commits into huggingface:main
Adds a LoRA implementation for parameter-efficient fine-tuning of Parler-TTS.

Addresses #183, #158, and other requests.
Feature
This PR adds PEFT support with Low-Rank Adaptation (LoRA) for fine-tuning Parler-TTS on new datasets.

LoRA is applied to the linear projection layers of the Parler-TTS decoder Transformer. Fine-tuning with LoRA trains only 0.5% of the parameters of Parler Mini.
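For illustration, applying LoRA to a decoder's linear projections with the peft library looks roughly like the sketch below. This is a minimal sketch, assuming typical Transformer projection-module names (`q_proj`, `k_proj`, `v_proj`, `out_proj`), which may differ from the exact modules targeted in this PR:

```python
from parler_tts import ParlerTTSForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Load the base model (model ID shown for illustration).
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1")

# LoRA hyperparameters matching the defaults suggested in this PR.
lora_config = LoraConfig(
    r=8,                # rank of the low-rank update matrices
    lora_alpha=16,      # scaling factor applied to the LoRA update
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],  # assumed module names
)

# Wrap the model so only the LoRA parameters are trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the trainable fraction (~0.5% for Parler Mini)
```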
Benefits
An alternative implementation of PR #159 that enables training with LoRA, loading checkpoints, and saving the final LoRA model. Unlike #159, which used a custom implementation, this PR uses the peft library. It also supports loading saved checkpoints, which was not possible in #159.
How to use:
Fine-Tuning:
When running `accelerate launch ./training/run_parler_tts_training.py` for fine-tuning, pass `--use_lora true --lora_r 8 --lora_alpha 16 --lora_dropout 0.05`.

Loading Checkpoints:
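A minimal sketch of how a saved LoRA adapter is typically reloaded with the peft library; the checkpoint path is an illustrative placeholder, not a path produced by this PR:

```python
from parler_tts import ParlerTTSForConditionalGeneration
from peft import PeftModel

# Reload the base model, then attach the saved LoRA adapter weights.
base_model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1")
model = PeftModel.from_pretrained(base_model, "./output_dir/lora-checkpoint")  # illustrative path

# Optionally fold the adapter into the base weights for standalone inference.
model = model.merge_and_unload()
```

Merging the adapter removes the small inference overhead of keeping the LoRA layers separate, at the cost of no longer being able to detach them afterwards.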