This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Commit edcce80

Authored by markurtz, jeanniefinks, bfineran, eldarkurtic, and natuan
Rebase release/0.1 off of main for 0.1.1 (#87)
* GA code, toctree links (#61): added tracking and help links for docs output
* Update README.md (#64): removed placeholder reference to the comingsoon repo in favor of the active repo
* add decorator for flaky tf sparsity tests (#65)
* enable modifier groups in SparseML recipes (#66): adds unit tests for YAML modifier list loading
* make build argument for nightly builds (#63)
* rst url syntax correction (#67): correcting double slash at the end of URLs with updates to index.rst pre-compilation
* match types explicitly in torch qat quant observer wrappers (#68)
* docs updates (#71): enhancing left nav for Help; after this merge, the docs need to be rebuilt so docs.neuralmagic.com can be refreshed (cc @markurtz)
* Rename KSModifier to PruningModifier (#76): ConstantKSModifier to ConstantPruningModifier, GradualKSModifier to GMPruningModifier
* Fix broken link for Optimization Recipes (#75)
* Serialize/deserialize MaskedLayer (#69): remove unused symbols; register pruning scheduler classes for serialization
* removed ScheduledOptimizer, moved logic to ScheduledModifierManager (#77)
* load recipes directly from SparseZoo (#72): load recipes into Managers from sparsezoo stubs; move recipe_type handling to SparseZoo only, supporting loading SparseZoo recipe objects
* Revert "removed ScheduledOptimizer, moved logic to ScheduledModifierManager (#77)" (#80): reverts commit 6073abb
* Update for 0.1.1 release (#82): update python version to 0.1.1; add version parts and _VERSION_MAJOR_MINOR to setup.py for more flexibility with dependencies between Neural Magic packages; add a deepsparse optional install pathway; includes a follow-up fix for a missed version bump
* rename examples directory to integrations (#78)
* rwightman/pytorch-image-models integration (#70)
* load checkpoint file based on sparsezoo recipe in pytorch_vision script (#83)
* ultralytics/yolov5 integration (#73)
* pytorch sparse quantized transfer learning notebook (#81)
* load qat onnx models for conversion from file path (#86)
* Sparsification update (#84): update sparsification descriptions, moving to the preferred verbiage; apply review suggestions from deepsparse; update overviews, taglines, and links in README.md, docs/source/index.rst, and docs/source/recipes.md
* blog style readme for torch sparse-quant TL notebook (#85)

Co-authored-by: Jeannie Finks (NM) <[email protected]>
Co-authored-by: Benjamin Fineran <[email protected]>
Co-authored-by: Eldar Kurtic <[email protected]>
Co-authored-by: Tuan Nguyen <[email protected]>
1 parent 7c24b40 commit edcce80

47 files changed: +3231 additions, -202 deletions

Makefile

Lines changed: 6 additions & 5 deletions
@@ -1,13 +1,14 @@
 .PHONY: build docs test
 
 BUILDDIR := $(PWD)
-CHECKDIRS := examples notebooks scripts src tests utils setup.py
-CHECKGLOBS := 'examples/**/*.py' 'scripts/**/*.py' 'src/**/*.py' 'tests/**/*.py' 'utils/**/*.py' setup.py
+CHECKDIRS := examples integrations notebooks scripts src tests utils setup.py
+CHECKGLOBS := 'examples/**/*.py' 'integrations/**/*.py' 'scripts/**/*.py' 'src/**/*.py' 'tests/**/*.py' 'utils/**/*.py' setup.py
 DOCDIR := docs
-MDCHECKGLOBS := 'docs/**/*.md' 'docs/**/*.rst' 'examples/**/*.md' 'notebooks/**/*.md' 'scripts/**/*.md'
+MDCHECKGLOBS := 'docs/**/*.md' 'docs/**/*.rst' 'examples/**/*.md' 'integrations/**/*.md' 'notebooks/**/*.md' 'scripts/**/*.md'
 MDCHECKFILES := CODE_OF_CONDUCT.md CONTRIBUTING.md DEVELOPING.md README.md
 
-TARGETS := "" # targets for running pytests: keras,onnx,pytorch,pytorch_models,pytorch_datasets,tensorflow_v1,tensorflow_v1_datasets
+BUILD_ARGS := # set nightly to build nightly release
+TARGETS := "" # targets for running pytests: keras,onnx,pytorch,pytorch_models,pytorch_datasets,tensorflow_v1,tensorflow_v1_models,tensorflow_v1_datasets
 PYTEST_ARGS := ""
 ifneq ($(findstring keras,$(TARGETS)),keras)
 	PYTEST_ARGS := $(PYTEST_ARGS) --ignore tests/sparseml/keras
@@ -63,7 +64,7 @@ docs:
 
 # creates wheel file
 build:
-	python3 setup.py sdist bdist_wheel
+	python3 setup.py sdist bdist_wheel $(BUILD_ARGS)
 
 # clean package
 clean:

README.md

Lines changed: 44 additions & 28 deletions
@@ -16,11 +16,11 @@ limitations under the License.
 
 # ![icon for SparseMl](https://raw.githubusercontent.com/neuralmagic/sparseml/main/docs/source/icon-sparseml.png) SparseML
 
-### Libraries for state-of-the-art deep neural network optimization algorithms, enabling simple pipelines integration with a few lines of code
+### Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
 
 <p>
     <a href="https://github.com/neuralmagic/sparseml/blob/main/LICENSE">
-        <img alt="GitHub" src="https://img.shields.io/github/license/neuralmagic/comingsoon.svg?color=purple&style=for-the-badge" height=25>
+        <img alt="GitHub" src="https://img.shields.io/github/license/neuralmagic/sparseml.svg?color=purple&style=for-the-badge" height=25>
     </a>
     <a href="https://docs.neuralmagic.com/sparseml/">
         <img alt="Documentation" src="https://img.shields.io/website/http/docs.neuralmagic.com/sparseml/index.html.svg?down_color=red&down_message=offline&up_message=online&style=for-the-badge" height=25>
@@ -44,25 +44,37 @@ limitations under the License.
 
 ## Overview
 
-SparseML is a toolkit that includes APIs, CLIs, scripts and libraries that apply state-of-the-art optimization algorithms such as [pruning](https://neuralmagic.com/blog/pruning-overview/) and [quantization](https://arxiv.org/abs/1609.07061) to any neural network. General, recipe-driven approaches built around these optimizations enable the simplification of creating faster and smaller models for the ML performance community at large.
+SparseML is a toolkit that includes APIs, CLIs, scripts and libraries that apply state-of-the-art sparsification algorithms such as pruning and quantization to any neural network.
+General, recipe-driven approaches built around these algorithms enable the simplification of creating faster and smaller models for the ML performance community at large.
 
-SparseML is integrated for easy model optimizations within the [PyTorch](https://pytorch.org/),
-[Keras](https://keras.io/), and [TensorFlow V1](http://tensorflow.org/) ecosystems currently.
+This repository contains integrations within the [PyTorch](https://pytorch.org/), [Keras](https://keras.io/), and [TensorFlow V1](http://tensorflow.org/) ecosystems, allowing for seamless model sparsification.
 
-### Related Products
+## Sparsification
 
-- [DeepSparse](https://github.com/neuralmagic/deepsparse): CPU inference engine that delivers unprecedented performance for sparse models
-- [SparseZoo](https://github.com/neuralmagic/sparsezoo): Neural network model repository for highly sparse models and optimization recipes
-- [Sparsify](https://github.com/neuralmagic/sparsify): Easy-to-use autoML interface to optimize deep neural networks for better inference performance and a smaller footprint
+Sparsification is the process of taking a trained deep learning model and removing redundant information from the overprecise and over-parameterized network resulting in a faster and smaller model.
+Techniques for sparsification are all encompassing including everything from inducing sparsity using [pruning](https://neuralmagic.com/blog/pruning-overview/) and [quantization](https://arxiv.org/abs/1609.07061) to enabling naturally occurring sparsity using [activation sparsity](http://proceedings.mlr.press/v119/kurtz20a.html) or [winograd/FFT](https://arxiv.org/abs/1509.09308).
+When implemented correctly, these techniques result in significantly more performant and smaller models with limited to no effect on the baseline metrics.
+For example, pruning plus quantization can give over [7x improvements in performance](https://neuralmagic.com/blog/benchmark-resnet50-with-deepsparse) while recovering to nearly the same baseline accuracy.
+
+The Deep Sparse product suite builds on top of sparsification enabling you to easily apply the techniques to your datasets and models using recipe-driven approaches.
+Recipes encode the directions for how to sparsify a model into a simple, easily editable format.
+- Download a sparsification recipe and sparsified model from the [SparseZoo](https://github.com/neuralmagic/sparsezoo).
+- Alternatively, create a recipe for your model using [Sparsify](https://github.com/neuralmagic/sparsify).
+- Apply your recipe with only a few lines of code using [SparseML](https://github.com/neuralmagic/sparseml).
+- Finally, for GPU-level performance on CPUs, deploy your sparse-quantized model with the [DeepSparse Engine](https://github.com/neuralmagic/deepsparse).
+
+
+**Full Deep Sparse product flow:**
+
+<img src="https://docs.neuralmagic.com/docs/source/sparsification/flow-overview.svg" width="960px">
 
 ## Quick Tour
 
-To enable flexibility, ease of use, and repeatability, optimizing a model is generally done using a recipe file.
-The files encode the instructions needed for modifying the model and/or training process as a list of modifiers.
+To enable flexibility, ease of use, and repeatability, sparsifying a model is generally done using a recipe.
+The recipes encode the instructions needed for modifying the model and/or training process as a list of modifiers.
 Example modifiers can be anything from setting the learning rate for the optimizer to gradual magnitude pruning.
 The files are written in [YAML](https://yaml.org/) and stored in YAML or [markdown](https://www.markdownguide.org/) files using [YAML front matter](https://assemble.io/docs/YAML-front-matter.html).
-The rest of the SparseML system is coded to parse the recipe files into a native format for the desired framework
-and apply the modifications to the model and training pipeline.
+The rest of the SparseML system is coded to parse the recipes into a native format for the desired framework and apply the modifications to the model and training pipeline.
 
 A sample recipe for pruning a model generally looks like the following:
 
@@ -91,18 +103,21 @@ modifiers:
       params: ['sections.0.0.conv1.weight', 'sections.0.0.conv2.weight', 'sections.0.0.conv3.weight']
 ```
 
-More information on the available recipes, formats, and arguments can be found [here](https://github.com/neuralmagic/sparseml/blob/main/docs/optimization-recipes.md). Additionally, all code implementations of the modifiers under the `optim` packages for the frameworks are documented with example YAML formats.
+More information on the available recipes, formats, and arguments can be found [here](https://github.com/neuralmagic/sparseml/blob/main/docs/source/recipes.md). Additionally, all code implementations of the modifiers under the `optim` packages for the frameworks are documented with example YAML formats.
 
 Pre-configured recipes and the resulting models can be explored and downloaded from the [SparseZoo](https://github.com/neuralmagic/sparsezoo). Also, [Sparsify](https://github.com/neuralmagic/sparsify) enables autoML style creation of optimization recipes for use with SparseML.
 
 For a more in-depth read, check out [SparseML documentation](https://docs.neuralmagic.com/sparseml/).
 
-### PyTorch Optimization
+### PyTorch Sparsification
 
-The PyTorch optimization libraries are located under the `sparseml.pytorch.optim` package.
-Inside are APIs designed to make model optimization as easy as possible by integrating seamlessly into PyTorch training pipelines.
+The PyTorch sparsification libraries are located under the `sparseml.pytorch.optim` package.
+Inside are APIs designed to make model sparsification as easy as possible by integrating seamlessly into PyTorch training pipelines.
 
-The integration is done using the `ScheduledOptimizer` class. It is intended to wrap your current optimizer and its step function. The step function then calls into the `ScheduledModifierManager` class which can be created from a recipe file. With this setup, the training process can then be modified as desired to optimize the model.
+The integration is done using the `ScheduledOptimizer` class.
+It is intended to wrap your current optimizer and its step function.
+The step function then calls into the `ScheduledModifierManager` class which can be created from a recipe file.
+With this setup, the training process can then be modified as desired to sparsify the model.
 
 To enable all of this, the integration code you'll need to write is only a handful of lines:
 
@@ -121,11 +136,11 @@ optimizer = ScheduledOptimizer(optimizer, model, manager, steps_per_epoch=num_tr
 
 ### Keras Optimization
 
-The Keras optimization libraries are located under the `sparseml.keras.optim` package.
-Inside are APIs designed to make model optimization as easy as possible by integrating seamlessly into Keras training pipelines.
+The Keras sparsification libraries are located under the `sparseml.keras.optim` package.
+Inside are APIs designed to make model sparsification as easy as possible by integrating seamlessly into Keras training pipelines.
 
 The integration is done using the `ScheduledModifierManager` class which can be created from a recipe file.
-This class handles modifying the Keras objects for the desired optimizations using the `modify` method.
+This class handles modifying the Keras objects for the desired algorithms using the `modify` method.
 The edited model, optimizer, and any callbacks necessary to modify the training process are returned.
 The model and optimizer can be used normally and the callbacks must be passed into the `fit` or `fit_generator` function.
 If using `train_on_batch`, the callbacks must be invoked after each call.
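The `train_on_batch` contract above can be sketched without any Keras dependency; the callback class and names below are illustrative stand-ins for the callbacks returned by the real `ScheduledModifierManager.modify`, not the actual API:

```python
class CountingCallback:
    """Stand-in for a modifier callback returned by manager.modify()."""
    def __init__(self):
        self.batches = 0

    def on_train_batch_end(self, batch, logs=None):
        self.batches += 1


callbacks = [CountingCallback()]

# With model.fit(..., callbacks=callbacks) Keras invokes these for you.
# With train_on_batch, as the text notes, you invoke them after each call:
for batch in range(4):
    # loss = model.train_on_batch(x, y)   # real training step would be here
    for cb in callbacks:
        cb.on_train_batch_end(batch)

print(callbacks[0].batches)  # 4
```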
@@ -155,13 +170,14 @@ model.fit(..., callbacks=callbacks)
 save_model = manager.finalize(model)
 ```
 
-### TensorFlow V1 Optimization
+### TensorFlow V1 Sparsification
 
-The TensorFlow optimization libraries for TensorFlow version 1.X are located under the `sparseml.tensorflow_v1.optim` package. Inside are APIs designed to make model optimization as easy as possible by integrating seamlessly into TensorFlow V1 training pipelines.
+The TensorFlow sparsification libraries for TensorFlow version 1.X are located under the `sparseml.tensorflow_v1.optim` package.
+Inside are APIs designed to make model sparsification as easy as possible by integrating seamlessly into TensorFlow V1 training pipelines.
 
 The integration is done using the `ScheduledModifierManager` class which can be created from a recipe file.
-This class handles modifying the TensorFlow graph for the desired optimizations.
-With this setup, the training process can then be modified as desired to optimize the model.
+This class handles modifying the TensorFlow graph for the desired algorithms.
+With this setup, the training process can then be modified as desired to sparsify the model.
 
 #### Estimator-Based pipelines
 
@@ -185,7 +201,7 @@ manager.modify_estimator(estimator, steps_per_epoch=num_train_batches)
 Session-based pipelines need a little bit more as compared to estimator-based pipelines; however,
 it is still designed to require only a few lines of code for integration.
 After graph creation, the manager's `create_ops` method must be called.
-This will modify the graph as needed for the optimizations and return modifying ops and extras.
+This will modify the graph as needed for the algorithms and return modifying ops and extras.
 After creating the session and training normally, call into `session.run` with the modifying ops after each step.
 Modifying extras contain objects such as tensorboard summaries of the modifiers to be used if desired.
 Finally, once completed, `complete_graph` must be called to remove the modifying ops for saving and export.
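The session-based lifecycle described above (`create_ops`, run the modifying ops each step, `complete_graph` before export) can be sketched without a TensorFlow dependency; the manager class and all names below are simplified stand-ins for the real `sparseml.tensorflow_v1.optim` API:

```python
class ManagerStub:
    """Stand-in for sparseml's TF V1 ScheduledModifierManager."""
    def __init__(self):
        self.op_runs = 0
        self.completed = False

    def create_ops(self):
        # Returns "modifying ops" plus extras (e.g. tensorboard summaries).
        return ["mod_op"], {"summaries": []}

    def run_ops(self, ops):
        # Stand-in for session.run(mod_ops) after each training step.
        self.op_runs += len(ops)

    def complete_graph(self):
        # Strips the modifying ops so the graph can be saved/exported.
        self.completed = True


manager = ManagerStub()
mod_ops, extras = manager.create_ops()   # call after graph creation
for step in range(5):                    # normal training loop
    manager.run_ops(mod_ops)             # "session.run" the ops each step
manager.complete_graph()                 # call before save/export
print(manager.op_runs, manager.completed)  # 5 True
```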
@@ -289,7 +305,7 @@ Install with pip using:
 pip install sparseml
 ```
 
-Then if you would like to explore any of the [scripts](https://github.com/neuralmagic/sparseml/blob/main/scripts/), [notebooks](https://github.com/neuralmagic/sparseml/blob/main/notebooks/), or [examples](https://github.com/neuralmagic/sparseml/blob/main/examples/)
+Then if you would like to explore any of the [scripts](https://github.com/neuralmagic/sparseml/blob/main/scripts/), [notebooks](https://github.com/neuralmagic/sparseml/blob/main/notebooks/), or [integrations](https://github.com/neuralmagic/sparseml/blob/main/integrations/)
 clone the repository and install any additional dependencies as required.
 
 #### Supported Framework Versions
@@ -343,7 +359,7 @@ Note, TensorFlow V1 is no longer being built for newer operating systems such as
 
 ## Contributing
 
-We appreciate contributions to the code, examples, and documentation as well as bug reports and feature requests! [Learn how here](https://github.com/neuralmagic/sparseml/blob/main/CONTRIBUTING.md).
+We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! [Learn how here](https://github.com/neuralmagic/sparseml/blob/main/CONTRIBUTING.md).
 
 ## Join the Community
 

docs/source/conf.py

Lines changed: 5 additions & 0 deletions
@@ -86,6 +86,11 @@
 html_theme = "sphinx_rtd_theme"
 html_logo = "icon-sparseml.png"
 
+html_theme_options = {
+    'analytics_id': 'UA-128364174-1',  # Provided by Google in your dashboard
+    'analytics_anonymize_ip': False,
+}
+
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
