SparseML v1.5.0
New Features:
- PyTorch 1.13 support (#1143)
- Enabled patch versions for torchvision 0.14.x (#1557)
- YOLOv8 sparsification pipelines
- Per layer distillation support for PyTorch Distillation modifier (#1272)
- Torchvision training pipelines
- Product usage analytics tracking; to disable, run the command `export NM_DISABLE_ANALYTICS=True` (#1487)
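To opt out of the usage analytics above, the `NM_DISABLE_ANALYTICS` environment variable can be exported before launching any SparseML command; a minimal shell sketch:

```shell
# Opt out of SparseML product usage analytics for this shell session
export NM_DISABLE_ANALYTICS=True

# Confirm the flag is set before launching SparseML commands
echo "NM_DISABLE_ANALYTICS=${NM_DISABLE_ANALYTICS}"
```

Adding the `export` line to a shell profile (e.g. `~/.bashrc`) makes the opt-out persistent across sessions.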
Changes:
- Transformers and YOLOv5 integrations migrated from auto install to install from PyPI packages. Going forward, `pip install sparseml[transformers]` and `pip install sparseml[yolov5]` will need to be used.
- Error message updated when utilizing wandb loggers and wandb is not installed in the environment, telling the user to `pip install wandb`. (#1374)
- Keras and TensorFlow tests have been removed; these are no longer actively supported pathways.
- `sklearn` now replaced with `scikit-learn` to stay current with dependency name changes. (#1294)
Resolved Issues:
- Recipes that utilize the legacy PyTorch `QuantizationModifier` with DDP no longer crash when restoring weights for sparse transfer. (#1490)
- Labels are now set correctly when utilizing a distillation teacher different from the student in token classification pipelines; previously, training runs would crash. (#1414)
- Q/DQ folding fixed on ONNX export for quantization nodes occurring before Softmax in transformer graphs; performance issues would result for some transformer models in DeepSparse. (#1343)
- Metrics calculations for torchvision training pipelines have been corrected; they previously led to discrepancies of ~1% in top1 and top5 accuracies. (#1341)
Known Issues:
- None