
v3.0.0


@willmj released this 22 Jul 13:33

Image: quay.io/modh/fms-hf-tuning:v3.0.0

Summary of Changes

Activated LoRA Support

  • Support for Activated LoRA model tuning
  • Usage is very similar to standard LoRA, with the key difference that an invocation_string must be specified
  • Enabled by setting --peft_method to alora (see the sketch after this list)
  • Inference with aLoRA models requires ensuring that the invocation string is present in the input
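
A minimal sketch of an aLoRA tuning run, assuming the library's sft_trainer entry point; the placeholder paths and the exact --invocation_string flag name are assumptions, while --peft_method alora is the setting named in these notes:

```bash
# Sketch only: the entry point, paths, and the --invocation_string flag name are assumptions;
# --peft_method alora is the setting named in these release notes.
python -m tuning.sft_trainer \
  --model_name_or_path <base-model> \
  --training_data_path <train-data.jsonl> \
  --output_dir ./alora_checkpoints \
  --peft_method alora \
  --invocation_string "<your invocation string>"
```

At inference time, make sure the same invocation string appears in the model input, as noted above.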

Data Preprocessor Changes

  • Breaking changes to the data preprocessor interface: data configs now use conventional handler and parameter names from HF datasets
  • rename and retain are now standalone data handlers rather than data config parameters
  • Added flexible train/test dataset splitting via the split parameter in data configs
  • Merged the offline data preprocessing script into the main library; preprocessing-only runs now use --do_dataprocessing_only (see the sketch after this list)
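
A minimal sketch of a standalone preprocessing run using the merged-in path; the entry point, --data_config_path flag, and paths are assumptions, while --do_dataprocessing_only is the flag named in these notes:

```bash
# Sketch only: the entry point, --data_config_path, and paths are assumptions;
# --do_dataprocessing_only is the flag named in these release notes.
python -m tuning.sft_trainer \
  --data_config_path ./data_config.yaml \
  --output_dir ./preprocessed \
  --do_dataprocessing_only
```

The referenced data config is where the new split parameter and the rename / retain handlers would be declared.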

Dependency Updates

  • peft from <0.14 to <0.15.2
  • flash-attn from <3.0 to <2.8
  • accelerate from <1.1 to <1.7
  • transformers from <4.51 to <=4.54.4
  • torch from <2.5 to <2.7

Additional Changes

  • Updates to the tracker framework, including the addition of a ClearML tracker

What's Changed

New Contributors

Full Changelog: v2.8.2...v3.0.0