End-to-end fine-tuning of Hugging Face models using LoRA, QLoRA, quantization, and PEFT techniques, optimized for low-memory environments and efficient model deployment.
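As a rough illustration of how these techniques fit together, here is a minimal QLoRA-style sketch using `transformers`, `bitsandbytes`, and `peft`: the base model is loaded in 4-bit NF4 precision and only small LoRA adapters are trained. The model id, target modules, and hyperparameters below are placeholder assumptions, not the repository's actual configuration.

```python
# Minimal QLoRA sketch: 4-bit quantized base model + trainable LoRA adapters.
# Model id and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "facebook/opt-350m"  # assumption: swap in the checkpoint you are tuning

# 4-bit NF4 quantization keeps the frozen base weights small in memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)  # casts norms, prepares for k-bit training

# LoRA adapters: only these low-rank matrices receive gradients
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # architecture-dependent choice
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

With this setup, the quantized base weights stay frozen, so memory use is dominated by the 4-bit model plus a small set of adapter parameters, which is what makes single-GPU fine-tuning of larger checkpoints feasible.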
Parameter-efficient optimization of conditional diffusion models using multi-resolution attention, classifier-free guidance ablation, and DDIM sampling, achieving a 17% FID improvement with an 85% reduction in training time.
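For the two sampling-side techniques named here, the following pure-PyTorch sketch shows one deterministic DDIM step (eta = 0) combined with classifier-free guidance. The epsilon-prediction network, guidance scale, and noise schedule are illustrative assumptions, not taken from the repository.

```python
# Illustrative sketch: one deterministic DDIM step (eta = 0) with
# classifier-free guidance. eps_model, guidance_scale, and the
# alpha-bar schedule below are assumptions for demonstration only.
import torch

def ddim_step(eps_model, x_t, t, t_prev, alpha_bar, cond, guidance_scale=5.0):
    """Move the noisy sample x_t at timestep t to timestep t_prev (t_prev < t)."""
    # Classifier-free guidance: blend conditional and unconditional predictions
    eps_cond = eps_model(x_t, t, cond)
    eps_uncond = eps_model(x_t, t, None)
    eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    # Predict x_0 from the current sample, then re-noise it to timestep t_prev
    x0_pred = (x_t - torch.sqrt(1.0 - a_t) * eps) / torch.sqrt(a_t)
    return torch.sqrt(a_prev) * x0_pred + torch.sqrt(1.0 - a_prev) * eps

if __name__ == "__main__":
    # Dummy epsilon model and decreasing alpha-bar schedule, just to run the sketch
    alpha_bar = torch.linspace(0.999, 0.01, 1000)
    eps_model = lambda x, t, c: torch.zeros_like(x) if c is None else 0.1 * x
    x = torch.randn(1, 3, 32, 32)
    x = ddim_step(eps_model, x, t=999, t_prev=949, alpha_bar=alpha_bar, cond="class_7")
```

Setting eta = 0 makes the update deterministic, which is why DDIM can take large strides between timesteps (here 999 to 949) and sample in far fewer steps than ancestral DDPM sampling.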