This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Commit 068cd3b

Fix recipes improperly formatted for getting started sparsify
1 parent d7fb579 commit 068cd3b

3 files changed: +31 -32 lines changed

src/content/get-started/sparsify-a-model/custom-integrations.mdx

Lines changed: 10 additions & 10 deletions
````diff
@@ -8,18 +8,18 @@ index: 2000
 
 # Creating a Custom Integration for Sparsifying Models
 
-This page explains how to apply a recipe to a custom model. For more details on the concepts of pruning/quantization
+This page explains how to apply a recipe to a custom model. For more details on the concepts of pruning/quantization
 as well as how to create recipes, see the [prior page](/get-started/transfer-a-sparsified-model).
 
-In addition to supported integrations described on the prior page, SparseML is set to enable easy integration in custom training pipelines.
-This flexibility enables easy sparsification for any neural network architecture for custom models and use cases. Once SparseML is installed,
+In addition to supported integrations described on the prior page, SparseML is set to enable easy integration in custom training pipelines.
+This flexibility enables easy sparsification for any neural network architecture for custom models and use cases. Once SparseML is installed,
 the necessary code can be plugged into most PyTorch/Keras training pipelines with only a few lines of code.
 
 ## Integrate SparseML
 
 To enable sparsification of models with recipes, a few edits to the training pipeline code need to be made.
 Specifically, a `ScheduledModifierManager` instance is used to take over and inject the desired sparsification algorithms into the training process.
-To do this properly in PyTorch, the `ScheduledModifierManager` requires the instance of the `model` to modify, the `optimizer` used for training,
+To do this properly in PyTorch, the `ScheduledModifierManager` requires the instance of the `model` to modify, the `optimizer` used for training,
 and the number of `steps_per_epoch` to ensure algorithms are applied at the right time.
 
 For the integration, the following code illustrates all that is needed:
````
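The integration code the docs point to there sits outside this hunk. Purely for context, here is a minimal sketch of what a `ScheduledModifierManager` integration typically looks like in a custom PyTorch loop, assuming the standard SparseML PyTorch API; the toy model, optimizer, and data are placeholders, not taken from the docs or this commit.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from sparseml.pytorch.optim import ScheduledModifierManager

# Toy stand-ins for an existing training pipeline -- only the manager calls
# below are the integration points described in the prose above.
model = torch.nn.Linear(16, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
train_loader = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.randint(0, 4, (64,))),
    batch_size=8,
)

# Wrap the optimizer so the recipe's modifiers fire at the right steps.
manager = ScheduledModifierManager.from_yaml("recipe.yaml")
optimizer = manager.modify(model, optimizer, steps_per_epoch=len(train_loader))

for epoch in range(manager.max_epochs):
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs), labels)
        loss.backward()
        optimizer.step()

manager.finalize(model)  # remove the added hooks once training completes
```

This mirrors the requirements listed above: the manager needs the `model`, the `optimizer`, and `steps_per_epoch` so the algorithms are applied at the right time.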
````diff
@@ -102,35 +102,35 @@ The resulting recipe is included here for easy integration and testing.
 
 ```yaml
 modifiers:
-    !GlobalMagnitudePruningModifier
+    - !GlobalMagnitudePruningModifier
         init_sparsity: 0.05
         final_sparsity: 0.8
         start_epoch: 0.0
         end_epoch: 30.0
         update_frequency: 1.0
         params: __ALL_PRUNABLE__
 
-    !SetLearningRateModifier
+    - !SetLearningRateModifier
         start_epoch: 0.0
         learning_rate: 0.05
 
-    !LearningRateFunctionModifier
+    - !LearningRateFunctionModifier
         start_epoch: 30.0
         end_epoch: 50.0
         lr_func: cosine
         init_lr: 0.05
         final_lr: 0.001
 
-    !QuantizationModifier
+    - !QuantizationModifier
         start_epoch: 50.0
         submodules: ['model']
         freeze_bn_stats_epoch: 3.0
 
-    !SetLearningRateModifier
+    - !SetLearningRateModifier
         start_epoch: 50.0
         learning_rate: 10e-6
 
-    !EpochRangeModifier
+    - !EpochRangeModifier
         start_epoch: 0.0
         end_epoch: 55.0
 ```
````

src/content/get-started/sparsify-a-model/supported-integrations.mdx

Lines changed: 21 additions & 21 deletions
````diff
@@ -10,13 +10,13 @@ index: 1000
 
 This page walks through an example of creating a sparsification recipe to prune a dense model from scratch and applying a recipe to a supported integration.
 
-SparseML has pre-made integrations with many popular model repositories, such as with HuggingFace Transformers and Ultralytics YOLOv5.
-For these integrations, a sparsification recipe is all you need, and you can apply state-of-the-art sparsification algorithms, including
+SparseML has pre-made integrations with many popular model repositories, such as with HuggingFace Transformers and Ultralytics YOLOv5.
+For these integrations, a sparsification recipe is all you need, and you can apply state-of-the-art sparsification algorithms, including
 pruning, distillation, and quantization, with a single command line call.
 
 ## Pruning and Pruning Recipes
 
-Pruning is a systematic way of removing redundant weights and connections within a neural network. An applied pruning algorithm must determine which
+Pruning is a systematic way of removing redundant weights and connections within a neural network. An applied pruning algorithm must determine which
 weights are redundant and will not affect the accuracy.
 
 A standard algorithm for pruning is gradual magnitude pruning, or GMP for short.
````
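As a rough illustration of the magnitude criterion GMP is built on (a toy sketch, not SparseML's implementation): the smallest-magnitude weights are masked to zero, and GMP repeats this with a sparsity target that ramps from `init_sparsity` to `final_sparsity` over the pruning epochs.

```python
import torch

# Toy magnitude pruning of a single weight tensor at a fixed sparsity target.
weights = torch.randn(8, 8)
sparsity = 0.8  # fraction of weights to zero out

k = int(sparsity * weights.numel())
threshold = weights.abs().flatten().kthvalue(k).values
mask = (weights.abs() > threshold).float()
pruned = weights * mask

print(f"achieved sparsity: {(pruned == 0).float().mean().item():.2f}")
```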
````diff
@@ -41,29 +41,29 @@ The following are reasonably default values to start with:
 
 SparseML conveniently encodes these hyperparameters into a YAML-based **Recipe** file. The rest of the system parses the arguments in the YAML file to set the parameters of the algorithm.
 
-For example, the following `recipe.yaml` file for the default values listed above:
+For example, the following `recipe.yaml` file for the default values listed above:
 ```yaml
 modifiers:
-    !GlobalMagnitudePruningModifier
+    - !GlobalMagnitudePruningModifier
         init_sparsity: 0.05
         final_sparsity: 0.8
         start_epoch: 0.0
         end_epoch: 30.0
         update_frequency: 1.0
         params: __ALL_PRUNABLE__
 
-    !SetLearningRateModifier
+    - !SetLearningRateModifier
         start_epoch: 0.0
         learning_rate: 0.05
 
-    !LearningRateFunctionModifier
+    - !LearningRateFunctionModifier
         start_epoch: 30.0
         end_epoch: 50.0
         lr_func: cosine
         init_lr: 0.05
         final_lr: 0.001
 
-    !EpochRangeModifier
+    - !EpochRangeModifier
         start_epoch: 0.0
         end_epoch: 50.0
 ```
````
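The fix in this hunk is exactly those leading dashes: in YAML, entries under `modifiers:` only form a list when written as `- ` sequence items; without the dash the block does not parse as a list of modifiers. A simplified illustration with placeholder item names (no SparseML tags):

```python
import yaml

with_dashes = """
modifiers:
    - item_one
    - item_two
"""
without_dashes = """
modifiers:
    item_one
    item_two
"""

print(yaml.safe_load(with_dashes))     # {'modifiers': ['item_one', 'item_two']}
print(yaml.safe_load(without_dashes))  # {'modifiers': 'item_one item_two'} -- a single scalar, not a list
```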
````diff
@@ -78,7 +78,7 @@ In this recipe:
 
 ## Quantization and Quantization Recipes
 
-A quantization recipe systematically reduces the precision for weights and activations within a neural network, generally from `FP32` to `INT8`. Running a quantized
+A quantization recipe systematically reduces the precision for weights and activations within a neural network, generally from `FP32` to `INT8`. Running a quantized
 model increases speed and reduces memory consumption while sacrificing very little in terms of accuracy.
 
 Quantization aware training (QAT) is the standard algorithm. With QAT, fake quantization operators are injected into the graph before quantizable nodes for activations, and weights are wrapped with fake quantization operators.
````
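For a feel of what "fake quantization operators" means in practice, here is a tiny sketch using plain PyTorch's eager-mode QAT utilities rather than SparseML's `QuantizationModifier` (the toy model is a placeholder; in the docs' flow, the recipe automates this step at the modifier's `start_epoch`):

```python
import torch
import torch.nn as nn

# Placeholder model; in the docs' flow this would be the full network.
model = nn.Sequential(nn.Conv2d(3, 16, kernel_size=3), nn.ReLU())

# Attach a QAT config, then insert fake-quantization observers around
# weights and activations -- the wrapping described in the prose above.
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
torch.quantization.prepare_qat(model, inplace=True)

print(model)  # Conv2d is now a QAT variant carrying FakeQuantize modules
```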
````diff
@@ -94,19 +94,19 @@ The following are reasonably good values to start with:
 - The number of quantized training epochs is set to 5.
 - The batch normalization statistics are frozen at the start of the third epoch.
 
-For example, the following `recipe.yaml` file for the default values listed above:
+For example, the following `recipe.yaml` file for the default values listed above:
 ```yaml
 modifiers:
-    !QuantizationModifier
+    - !QuantizationModifier
         start_epoch: 0.0
         submodules: ['model']
         freeze_bn_stats_epoch: 3.0
 
-    !SetLearningRateModifier
+    - !SetLearningRateModifier
         start_epoch: 0.0
         learning_rate: 10e-6
 
-    !EpochRangeModifier
+    - !EpochRangeModifier
         start_epoch: 0.0
         end_epoch: 5.0
 ```
````
````diff
@@ -127,35 +127,35 @@ This prevents stability issues from lacking precision when pruning and utilizing
 Combining the two previous recipes creates the following new recipe.yaml file:
 ```yaml
 modifiers:
-    !GlobalMagnitudePruningModifier
+    - !GlobalMagnitudePruningModifier
         init_sparsity: 0.05
         final_sparsity: 0.8
         start_epoch: 0.0
         end_epoch: 30.0
         update_frequency: 1.0
         params: __ALL_PRUNABLE__
 
-    !SetLearningRateModifier
+    - !SetLearningRateModifier
         start_epoch: 0.0
         learning_rate: 0.05
 
-    !LearningRateFunctionModifier
+    - !LearningRateFunctionModifier
         start_epoch: 30.0
         end_epoch: 50.0
         lr_func: cosine
         init_lr: 0.05
         final_lr: 0.001
 
-    !QuantizationModifier
+    - !QuantizationModifier
         start_epoch: 50.0
         submodules: ['model']
         freeze_bn_stats_epoch: 3.0
 
-    !SetLearningRateModifier
+    - !SetLearningRateModifier
         start_epoch: 50.0
         learning_rate: 10e-6
 
-    !EpochRangeModifier
+    - !EpochRangeModifier
         start_epoch: 0.0
         end_epoch: 55.0
 ```
````
````diff
@@ -172,12 +172,12 @@ sparseml.yolov5.train --help
 To use the recipe given in the previous section, save it locally as a `recipe.yaml` file.
 Next, it can be passed in for the `--recipe` argument in the YOLOv5 train CLI.
 
-By running the following command, you will apply the GMP and QAT algorithms encoded in the recipe to the dense version of YOLOv5s
+By running the following command, you will apply the GMP and QAT algorithms encoded in the recipe to the dense version of YOLOv5s
 (which is pulled down from the SparseZoo). In this example, the fine-tuning is done onto the COCO dataset.
 
 ```bash
 sparseml.yolov5.train \
-  --weights zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none \
+  --weights zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none \
   --data coco.yaml \
   --hyp data/hyps/hyp.scratch.yaml \
   --recipe recipe.yaml
````

src/layouts/root.jsx

Lines changed: 0 additions & 1 deletion
```diff
@@ -46,7 +46,6 @@ const Root = ({ data, pageContext }) => {
   const isDocsPage = data && data.mdx;
   const metaTitle = data && data.mdx ? data.mdx.frontmatter.metaTitle : null;
   const metaDescription = data && data.mdx ? data.mdx.frontmatter.metaDescription : null;
-  console.log(pageContext);
 
   return (
     <RootDiv>
```

0 commit comments