This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Commit bb08a95

Broken Links Fix/Misc Typos/Omissions (#107)
Corrected broken links, grammar, and typos:

* Update deepsparse.mdx
* Update custom-integrations.mdx
* Update deploying.mdx
* Update diagnotistics-debugging.mdx
* Update sparsezoo.mdx
* Update sparseml.mdx
* Update deepsparse-ent.mdx
* Update sparsezoo.mdx: updated missing SparseZoo links that were broken, pointing to GitHub for now until the next-gen SparseZoo design is more realized; the meta description was not filled in correctly
* Update gatsby-node.js: the old supported-hardware URL is heavily referenced
1 parent 84de0f8 commit bb08a95

File tree: 8 files changed (+42, -40 lines changed)


gatsby-node.js

Lines changed: 1 addition & 0 deletions
@@ -44,6 +44,7 @@ exports.createPages = ({ graphql, actions }) => {
 });
 
 // create redirect pages
+createRedirect({ fromPath: '/deepsparse/source/hardware.html', toPath: '/user-guide/deepsparse-engine/hardware-support', redirectInBrowser: true, isPermanent: true });
 createRedirect({ fromPath: '/deepsparse', toPath: '/products/deepsparse', redirectInBrowser: true, isPermanent: true });
 createRedirect({ fromPath: '/sparseml', toPath: '/products/sparseml', redirectInBrowser: true, isPermanent: true });
 createRedirect({ fromPath: '/sparsezoo', toPath: '/products/sparsezoo', redirectInBrowser: true, isPermanent: true });

src/content/get-started/sparsify-a-model/custom-integrations.mdx

Lines changed: 5 additions & 5 deletions
@@ -9,7 +9,7 @@ index: 2000
 # Creating a Custom Integration for Sparsifying Models
 
 This page explains how to apply a recipe to a custom model. For more details on the concepts of pruning/quantization
-as well as how to create recipes, see the [prior page](/get-started/transfer-a-sparsified-model).
+as well as how to create recipes, see [Sparsifying a Model for SparseML Integrations](/get-started/sparsify-a-model/supported-integrations).
 
 In addition to supported integrations described on the prior page, SparseML is set to enable easy integration in custom training pipelines.
 This flexibility enables easy sparsification for any neural network architecture for custom models and use cases. Once SparseML is installed,
@@ -56,7 +56,7 @@ from sparseml.pytorch.datasets import ImagenetteDataset, ImagenetteSize
 from sparseml.pytorch.optim import ScheduledModifierManager
 
 # Model creation
-NUM_CLASSES = 10 # number of imagenette classes
+NUM_CLASSES = 10 # number of Imagenette classes
 model = resnet50(pretrained=True, num_classes=NUM_CLASSES)
 
 # Dataset creation
@@ -72,7 +72,7 @@ model.to(device)
 criterion = CrossEntropyLoss()
 optimizer = SGD(model.parameters(), lr=10e-6, momentum=0.9)
 
-# Recipe - in this case, we pull down a recipe from the SparseZoo for ResNet50
+# Recipe - in this case, we pull down a recipe from the SparseZoo for ResNet-50
 # This can be a be a path to a local file
 recipe_path = "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_quant-none?recipe_type=original"
 
@@ -106,8 +106,8 @@ manager.finalize(model)
 
 ## Create a Recipe
 
-The recipe used in the supported integrations page will also work for custom integrations.
-To dive into the details of this recipe and how to edit it, visit [the prior page](/get-started/supported-integrations).
+<!-- The recipe used in the [Supported Integrations page](get-started/sparsify-a-model/supported-integrationsget-started/sparsify-a-model/supported-integrationsget-started/sparsify-a-model/supported-integrations) will also work for custom integrations. -->
+To dive into the details of this recipe and how to edit it, visit [Supported Integrations](/get-started/sparsify-a-model/supported-integrations).
 The resulting recipe is included here for easy integration and testing.
 
 ```yaml

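For context on the custom-integration example edited above, here is a minimal sketch of how a SparseZoo recipe is typically applied in a custom PyTorch training loop with SparseML's `ScheduledModifierManager`. The random `TensorDataset` stands in for the Imagenette loader used in the original example, and the exact manager API can vary between SparseML releases:

```python
import torch
from torch.nn import CrossEntropyLoss
from torch.optim import SGD
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet50

from sparseml.pytorch.optim import ScheduledModifierManager

# Stand-in model and data so the sketch runs end to end;
# in practice, reuse the ResNet-50 and Imagenette loader from the example above.
model = resnet50(num_classes=10)
dataset = TensorDataset(torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,)))
train_loader = DataLoader(dataset, batch_size=4)

criterion = CrossEntropyLoss()
optimizer = SGD(model.parameters(), lr=10e-6, momentum=0.9)

# Recipe stub from the SparseZoo; a local recipe file path also works
recipe_path = (
    "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/"
    "pruned95_quant-none?recipe_type=original"
)

# Wrap the optimizer so the recipe's pruning/quantization modifiers fire on schedule
manager = ScheduledModifierManager.from_yaml(recipe_path)
optimizer = manager.modify(model, optimizer, steps_per_epoch=len(train_loader))

for epoch in range(manager.max_epochs):
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()

manager.finalize(model)  # strip sparsification hooks once training completes
```
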
src/content/products/deepsparse-ent.mdx

Lines changed: 3 additions & 3 deletions
@@ -43,7 +43,7 @@ index: 2000
 </div>
 </div>
 
-A CPU runtime that takes advantage of sparsity within neural networks to reduce compute. Read more about sparsification [here](https://docs.neuralmagic.com/main/source/getstarted.html#sparsification).
+A CPU runtime that takes advantage of sparsity within neural networks to reduce compute. Read [more about sparsification](user-guide/deepsparse-engine/hardware-support).
 
 Neural Magic's DeepSparse Engine is able to integrate into popular deep learning libraries (e.g., Hugging Face, Ultralytics) allowing you to leverage DeepSparse for loading and deploying sparse models with ONNX.
 ONNX gives the flexibility to serve your model in a framework-agnostic environment.
@@ -61,7 +61,7 @@ The DeepSparse Engine is available in two editions:
 
 ## 🧰 Hardware Support and System Requirements
 
-Review [CPU Hardware Support for Various Architectures](https://docs.neuralmagic.com/deepsparse/source/hardware.html) to understand system requirements.
+Review [Supported Hardware for the DeepSparse Engine](user-guide/deepsparse-engine/hardware-support) to understand system requirements.
 The DeepSparse Engine works natively on Linux; Mac and Windows require running Linux in a Docker or virtual machine; it will not run natively on those operating systems.
 
 The DeepSparse Engine is tested on Python 3.7-3.10, ONNX 1.5.0-1.12.0, ONNX opset version 11+, and manylinux compliant.
@@ -75,7 +75,7 @@ Install the Enterprise Edition as follows:
 pip install deepsparse-ent
 ```
 
-See the [DeepSparse Enterprise Installation Page](https://docs.neuralmagic.com/get-started/install/deepsparse-ent) for further installation options.
+See the [DeepSparse Enterprise Installation Page](/get-started/install/deepsparse-ent) for further installation options.
 
 ## Getting a License

src/content/products/deepsparse.mdx

Lines changed: 3 additions & 3 deletions
@@ -43,7 +43,7 @@ index: 1000
 </div>
 </div>
 
-A CPU runtime that takes advantage of sparsity within neural networks to reduce compute. Read more about sparsification [here](https://docs.neuralmagic.com/main/source/getstarted.html#sparsification).
+A CPU runtime that takes advantage of sparsity within neural networks to reduce compute. Read [more about sparsification](https://docs.neuralmagic.com/user-guide/sparsification).
 
 Neural Magic's DeepSparse Engine is able to integrate into popular deep learning libraries (e.g., Hugging Face, Ultralytics) allowing you to leverage DeepSparse for loading and deploying sparse models with ONNX.
 ONNX gives the flexibility to serve your model in a framework-agnostic environment.
@@ -61,7 +61,7 @@ The DeepSparse Engine is available in two editions:
 
 ## 🧰 Hardware Support and System Requirements
 
-Review [CPU Hardware Support for Various Architectures](https://docs.neuralmagic.com/deepsparse/source/hardware.html) to understand system requirements.
+Review [Supported Hardware for the DeepSparse Engine](https://docs.neuralmagic.com/user-guide/deepsparse-engine/hardware-support) to understand system requirements.
 The DeepSparse Engine works natively on Linux; Mac and Windows require running Linux in a Docker or virtual machine; it will not run natively on those operating systems.
 
 The DeepSparse Engine is tested on Python 3.7-3.10, ONNX 1.5.0-1.12.0, ONNX opset version 11+, and manylinux compliant.
@@ -77,7 +77,7 @@ pip install deepsparse
 
 See the [DeepSparse Community Installation Page](https://docs.neuralmagic.com/get-started/install/deepsparse) for further installation options.
 
-To trial or inquire about licensing for DeepSparse Enterprise Edition, see the [DeepSparse Enterprise documentation](https://docs.neuralmagic.com/products/deepsparse-enterprise).
+To trial or inquire about licensing for DeepSparse Enterprise Edition, see the [DeepSparse Enterprise documentation](https://docs.neuralmagic.com/products/deepsparse-ent).
 
 ## Features

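Since deepsparse.mdx centers on running sparse ONNX models on CPU, a minimal sketch of the engine's lower-level Python API is shown below for context. The `model.onnx` path and the 224x224 input shape are placeholder assumptions, and the `compile_model`/`run` signatures may differ across DeepSparse releases:

```python
import numpy as np
from deepsparse import compile_model

# Placeholder path to a sparse ONNX file; a SparseZoo stub can often be substituted.
onnx_path = "model.onnx"
batch_size = 1

engine = compile_model(onnx_path, batch_size=batch_size)

# DeepSparse engines take a list of numpy arrays, one per model input
# (the image-classification shape below is an assumption).
inputs = [np.random.rand(batch_size, 3, 224, 224).astype(np.float32)]
outputs = engine.run(inputs)
print(outputs[0].shape)
```
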
src/content/products/sparseml.mdx

Lines changed: 17 additions & 16 deletions
@@ -50,7 +50,7 @@ SparseML is a toolkit that includes APIs, CLIs, scripts and libraries that enabl
 SparseML provides two options to accomplish this goal:
 - **Sparse Transfer Learning**: Fine-tune state-of-the-art pre-sparsified models from the SparseZoo onto your dataset while preserving sparsity.
 
-- **Sparsifying from Scratch**: Apply state-of-the-art [sparsification](https://docs.neuralmagic.com/main/source/getstarted.html#sparsification) algorithms such as pruning and quantization to any neural network.
+- **Sparsifying from Scratch**: Apply state-of-the-art [sparsification](/user-guide/sparsification) algorithms such as pruning and quantization to any neural network.
 
 These options are useful for different situations:
 - **Sparse Transfer Learning** is the easiest path to creating a sparse model trained on your data. Pull down a sparse model from SparseZoo and point our training scripts at your data without any hyperparameter search. This is the recommended pathway for supported use cases like Image Classification, Object Detection, and several NLP tasks.
@@ -59,58 +59,59 @@ These options are useful for different situations:
 
 Each of these avenues use YAML-based **recipes** that simplify integration with popular deep learning libraries and framrworks.
 
-<img alt="SparseML Flow" src="https://docs.neuralmagic.com/docs/source/infographics/sparseml.png" width="100%" style={{maxWidth: "100%"}} />
-
+<img src="https://docs.neuralmagic.com/docs/source/infographics/sparseml.png" alt="SparseML Flow" />
+
+
 ## Highlights
 
 ### Integrations
 
 <p>
 <a href="https://github.com/neuralmagic/sparseml/tree/main/integrations/pytorch">
-<img src="https://docs.neuralmagic.com/docs/source/highlights/sparseml/pytorch-torchvision.png" width="136px" />
+<img src="https://docs.neuralmagic.com/docs/source/highlights/sparseml/pytorch-torchvision.png" alt="Integration - PyTorch: MobileNetV1, ResNet-50" width="136px" />
 </a>
 <a href="https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov3">
-<img src="https://docs.neuralmagic.com/docs/source/highlights/sparseml/ultralytics-yolov3.png" width="136px" />
+<img src="https://docs.neuralmagic.com/docs/source/highlights/sparseml/ultralytics-yolov3.png" alt="Integration - Ultralytics: YOLOv3" width="136px" />
 </a>
 <a href="https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov5">
-<img src="https://docs.neuralmagic.com/docs/source/highlights/sparseml/ultralytics-yolov5.png" width="136px" />
+<img src="https://docs.neuralmagic.com/docs/source/highlights/sparseml/ultralytics-yolov5.png" alt="Integration - Ultralytics: YOLOv5" width="136px" />
 </a>
 <a href="https://github.com/neuralmagic/sparseml/tree/main/integrations/huggingface-transformers">
-<img src="https://docs.neuralmagic.com/docs/source/highlights/sparseml/huggingface-transformers.png" width="136px" />
+<img src="https://docs.neuralmagic.com/docs/source/highlights/sparseml/huggingface-transformers.png" alt="Integration - Hugging Face: BERT" width="136px" />
 </a>
 <a href="https://github.com/neuralmagic/sparseml/tree/main/integrations/rwightman-timm">
-<img src="https://docs.neuralmagic.com/docs/source/highlights/sparseml/rwightman-timm.png" width="136px" />
+<img src="https://docs.neuralmagic.com/docs/source/highlights/sparseml/rwightman-timm.png" alt="Integration - rwightman: ResNet-50" width="136px" />
 </a>
 </p>
 
 ### Creating Sparse Models
 
 <p>
 <a href="https://github.com/neuralmagic/sparseml/tree/main/integrations/pytorch/notebooks/classification.ipynb">
-<img src="https://docs.neuralmagic.com/docs/source/tutorials/classification_resnet-50.png" width="136px" />
+<img src="https://docs.neuralmagic.com/docs/source/tutorials/classification_resnet-50.png" alt="Creating Sparse ResNet-50" width="136px" />
 </a>
 <a href="https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov3/tutorials/sparsifying_yolov3_using_recipes.md">
-<img src="https://docs.neuralmagic.com/docs/source/tutorials/detection_yolov3.png" width="136px" />
+<img src="https://docs.neuralmagic.com/docs/source/tutorials/detection_yolov3.png" alt="Creating Sparse YOLOv3" width="136px" />
 </a>
 <a href="https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov5/tutorials/sparsifying_yolov5_using_recipes.md">
-<img src="https://docs.neuralmagic.com/docs/source/tutorials/detection_yolov5.png" width="136px" />
+<img src="https://docs.neuralmagic.com/docs/source/tutorials/detection_yolov5.png" alt="Creating Sparse YOLOv5" width="136px" />
 </a>
 <a href="https://github.com/neuralmagic/sparseml/tree/main/integrations/huggingface-transformers/tutorials/sparsifying_bert_using_recipes.md">
-<img src="https://docs.neuralmagic.com/docs/source/tutorials/nlp_bert.png" width="136px" />
+<img src="https://docs.neuralmagic.com/docs/source/tutorials/nlp_bert.png" alt="Creating Sparse BERT" width="136px" />
 </a>
 </p>
 
 ### Transfer Learning from Sparse Models
 
 <p>
 <a href="https://github.com/neuralmagic/sparseml/tree/main/integrations/pytorch/notebooks/sparse_quantized_transfer_learning.ipynb">
-<img src="https://docs.neuralmagic.com/docs/source/tutorials/classification_resnet-50.png" width="136px" />
+<img src="https://docs.neuralmagic.com/docs/source/tutorials/classification_resnet-50.png" alt="Transfer Learn - ResNet-50" width="136px" />
 </a>
 <a href="https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov3/tutorials/yolov3_sparse_transfer_learning.md">
-<img src="https://docs.neuralmagic.com/docs/source/tutorials/detection_yolov3.png" width="136px" />
+<img src="https://docs.neuralmagic.com/docs/source/tutorials/detection_yolov3.png" alt="Transfer Learn - YOLOv3" width="136px" />
 </a>
 <a href="https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/tutorials/yolov5_sparse_transfer_learning.md">
-<img src="https://docs.neuralmagic.com/docs/source/tutorials/detection_yolov5.png" width="136px" />
+<img src="https://docs.neuralmagic.com/docs/source/tutorials/detection_yolov5.png" alt="Transfer Learn - YOLOv5" width="136px" />
 </a>
 </p>
 
@@ -188,7 +189,7 @@ $ sparseml.yolov5.train \
 --recipe zoo:cv/detection/yolov5-l/pytorch/ultralytics/coco/pruned_quant-aggressive_95
 ```
 
-For more details on the above as well as examples for more supported use cases see [here](/use-cases/natural-language-processing/question-answering).
+[See more details](/use-cases/natural-language-processing/question-answering) on the above as well as examples for more supported use cases.
 
 #### For Custom Use Cases / Supported Use Cases: Python Integration

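As a companion to the Python-integration pointer above, here is a hedged sketch of how a recipe-trained PyTorch module is commonly exported to ONNX with SparseML so it can be handed to the DeepSparse Engine. The torchvision ResNet-50 stand-in, the output directory, and the exact `ModuleExporter` behavior are assumptions:

```python
import torch
from torchvision.models import resnet50

from sparseml.pytorch.utils import ModuleExporter

# Stand-in for a model trained/sparsified with a recipe as shown above
model = resnet50(num_classes=10)

# Export to ONNX so the sparse model can be served by the DeepSparse Engine
exporter = ModuleExporter(model, output_dir="exported_model")
exporter.export_onnx(sample_batch=torch.randn(1, 3, 224, 224))
# writes an ONNX file into exported_model/ (typically named model.onnx)
```
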
src/content/products/sparsezoo.mdx

Lines changed: 4 additions & 4 deletions
@@ -1,7 +1,7 @@
 ---
 title: "SparseZoo"
 metaTitle: "SparseZoo"
-metaDescription: "SparseZoo"
+metaDescription: "Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes"
 githubURL: "Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes"
 index: 4000
 ---
@@ -47,7 +47,7 @@ index: 4000
 
 [SparseZoo is a constantly-growing repository](https://sparsezoo.neuralmagic.com) of sparsified (pruned and pruned-quantized) models with matching sparsification recipes for neural networks.
 It simplifies and accelerates your time-to-value in building performant deep learning models with a collection of inference-optimized models and recipes to prototype from.
-Read more about sparsification [here.](https://docs.neuralmagic.com/main/source/getstarted.html#sparsification)
+Read [more about sparsification.](/user-guide/sparsification)
 
 Available via API and hosted in the cloud, the SparseZoo contains both baseline and models sparsified to different degrees of inference performance vs. baseline loss recovery.
 Recipe-driven approaches built around sparsification algorithms allow you to use the models as given, transfer-learn from the models onto private datasets, or transfer the recipes to your architectures.
@@ -58,8 +58,8 @@ The [GitHub repository](https://github.com/neuralmagic/sparsezoo) contains the P
 
 ## Highlights
 
-- [Model Stub Architecture Overview](https://docs.neuralmagic.com/sparsezoo/source/models.html)
-- [Available Model Recipes](https://docs.neuralmagic.com/sparsezoo/source/recipes.html)
+- [Model Stub Architecture Overview](https://github.com/neuralmagic/sparsezoo/blob/main/docs/source/models.md)
+- [Available Model Recipes](https://github.com/neuralmagic/sparsezoo/blob/main/docs/source/recipes.md)
 - [sparsezoo.neuralmagic.com](https://sparsezoo.neuralmagic.com)
 
 ## Installation

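The sparsezoo.mdx hunks above reference the SparseZoo Python API and model stubs; a minimal sketch of fetching a model by stub follows. The `Model` class and `.path` attribute reflect the sparsezoo 1.x API as assumed here and may differ in other releases; the stub is the ResNet-50 one used elsewhere in this commit:

```python
from sparsezoo import Model

# SparseZoo stub for a 95% pruned, quantized ResNet-50
# (the same stub used in the custom-integration example in this commit)
stub = "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_quant-none"

model = Model(stub)
# Accessing .path is assumed to download the artifacts on first use and
# return the local directory holding the ONNX file, recipes, and metadata.
print(model.path)
```
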
src/content/use-cases/image-classification/deploying.mdx

Lines changed: 6 additions & 6 deletions
@@ -20,7 +20,7 @@ This section requires the [DeepSparse Server Install](/get-started/install/deeps
 ## Getting Started
 
 Before you start using the DeepSparse Engine, confirm your machine is
-compatible with our [hardware requirements](https://docs.neuralmagic.com/deepsparse/source/hardware.html).
+compatible with our [hardware requirements](/user-guide/deepsparse-engine/hardware-support).
 
 ### Model Format
 
@@ -71,7 +71,7 @@ The examples below use option 2. However, you can pass the local path to the ONN
 DeepSparse provides both a Python `Pipeline` API and an out-of-the-box model
 server that can be used for end-to-end inference in either Python
 workflows or as an HTTP endpoint. Both options provide similar specifications
-for configurations and support a variety of Image Classification models.
+for configurations and support a variety of image classification models.
 
 ### Python API
 
@@ -94,10 +94,10 @@ serve ONNX models and pipelines in HTTP. Configuring the server uses the same pa
 enabling simple deployment. Once launched, a `/docs` endpoint is created with full
 endpoint descriptions and support for making sample requests.
 
-An example deployment using a 95% pruned ResNet50 is given below.
+An example deployment using a 95% pruned ResNet-50 is given below.
 
 For full documentation on deploying sparse image classification models with the
-DeepSparse Server, see the [documentation for DeepSparse Server](/user-cases/deploying-deepsparse/deepsparse-server).
+DeepSparse Server, see the [documentation for DeepSparse Server](/use-cases/deploying-deepsparse/deepsparse-server).
 
 ## Deployment Examples
 
@@ -107,8 +107,8 @@ but a local path to an ONNX file can also be passed as the `model_path`.
 
 ### Python API
 
-Create a `Pipeline` to run inference with the following code. The `Pipeline` handles the pre-processing (e.g. subtracting by ImageNet
-means, dividing by ImageNet standard deviation) and post-processing so you can pass an raw image and recieve an class without any extra code.
+Create a `Pipeline` to run inference with the following code. The `Pipeline` handles the pre-processing (e.g., subtracting by ImageNet
+means, dividing by ImageNet standard deviation) and post-processing so you can pass an raw image and receive an class without any extra code.
 
 ```python
 from deepsparse import Pipeline

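Picking up where the `Pipeline` snippet above is cut off by the diff, a minimal sketch of an image-classification pipeline follows. The SparseZoo stub is the pruned-quantized ResNet-50 used elsewhere in this commit, `my_image.jpg` is a placeholder, and the task name and call signature are assumptions that may vary by DeepSparse version:

```python
from deepsparse import Pipeline

# Pull a 95% pruned, quantized ResNet-50 from the SparseZoo; a local ONNX path also works
pipeline = Pipeline.create(
    task="image_classification",
    model_path="zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_quant-none",
)

# Pre-processing (ImageNet mean/std) and post-processing are handled by the pipeline
predictions = pipeline(images=["my_image.jpg"])  # placeholder image path
print(predictions)
```
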
src/content/user-guide/deepsparse-engine/diagnotistics-debugging.mdx

Lines changed: 3 additions & 3 deletions
@@ -8,7 +8,7 @@ index: 4000
 
 # Logging Guidance for Diagnostics and Debugging
 
-This page explains the Diagnostics and Debugging features available in DeepSparse Engine.
+This page explains the diagnostics and debugging features available in DeepSparse Engine.
 
 Unlike traditional software, debugging utilities available to the machine learning community are scarce. Complicated with deployment pipeline design issues, model weights, model architecture, and unoptimized models, debugging performance issues can be very dynamic in your data science ecosystem. Reviewing a log file can be your first line of defense in pinpointing performance issues with optimizing your inference.
 
@@ -69,7 +69,7 @@ By default, logs will print out to the stderr of your process. If you would like
 
 ## Parsing an Example Log
 
-If you want to see an example log with `NM_LOGGING_LEVEL=diagnose`, a [truncated sample output](example-log.md) is provided at the end of this guide. It will show a super_resolution network, where Neural Magic only supports running 70% of it.
+If you want to see an example log with `NM_LOGGING_LEVEL=diagnose`, a truncated sample output is provided at the end of this guide. It will show a super_resolution network, where Neural Magic only supports running 70% of it.
 
 _Different portions of the log are explained below._
 
@@ -186,7 +186,7 @@ Locating `== NM Execution Provider supports` shows how many subgraphs we compil
 
 ### Full Example Log, Verbose Level = diagnose
 
-The following is an example log with `NM_LOGGING_LEVEL=diagnose` running a super_resolution network, where we only support running 70% of it. Different portions of the log are explained in [Parsing an Example Log.](diagnostics-debugging.md#parsing-an-example-log)
+The following is an example log with `NM_LOGGING_LEVEL=diagnose` running a super_resolution network, where we only support running 70% of it. Different portions of the log are explained in [Parsing an Example Log.](/user-guide/deepsparse-engine/diagnotistics-debugging#parsing-an-example-log)
 
 ```text
 onnx_filename : test-models/cv-resolution/super_resolution/none-bsd300-onnx-repo/model.onnx

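Because the diagnostics hunks above revolve around `NM_LOGGING_LEVEL`, a small sketch of enabling diagnose-level logging from Python is included for context. Setting the variable before DeepSparse is imported is an assumption about when the engine reads it, and `model.onnx` is a placeholder:

```python
import os

# Must be in the environment before the engine initializes (assumed);
# it is often set in the shell instead:
#   NM_LOGGING_LEVEL=diagnose python run_inference.py
os.environ["NM_LOGGING_LEVEL"] = "diagnose"

import numpy as np
from deepsparse import compile_model

engine = compile_model("model.onnx", batch_size=1)  # placeholder ONNX path
outputs = engine.run([np.random.rand(1, 3, 224, 224).astype(np.float32)])
```
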