This repository was archived by the owner on Jun 3, 2025. It is now read-only.
Corrected broken links, grammar, and typos:

* Update deepsparse.mdx
* Update custom-integrations.mdx
* Update deploying.mdx
* Update diagnotistics-debugging.mdx
* Update sparsezoo.mdx
* Update sparseml.mdx
* Update deepsparse-ent.mdx
* Update sparsezoo.mdx — updated missing SparseZoo links that were broken; pointing to GitHub for now until the next-gen SparseZoo design is more realized; the meta description was not filled in correctly
* Update gatsby-node.js — the old supported-hardware URL was heavily referenced
**src/content/products/deepsparse-ent.mdx** (+3, -3)
```diff
@@ -43,7 +43,7 @@ index: 2000
 </div>
 </div>
 
-A CPU runtime that takes advantage of sparsity within neural networks to reduce compute. Read more about sparsification[here](https://docs.neuralmagic.com/main/source/getstarted.html#sparsification).
+A CPU runtime that takes advantage of sparsity within neural networks to reduce compute. Read [more about sparsification](user-guide/deepsparse-engine/hardware-support).
 
 Neural Magic's DeepSparse Engine is able to integrate into popular deep learning libraries (e.g., Hugging Face, Ultralytics) allowing you to leverage DeepSparse for loading and deploying sparse models with ONNX.
 ONNX gives the flexibility to serve your model in a framework-agnostic environment.
```
```diff
@@ -61,7 +61,7 @@ The DeepSparse Engine is available in two editions:
 
 ## 🧰 Hardware Support and System Requirements
 
-Review [CPU Hardware Support for Various Architectures](https://docs.neuralmagic.com/deepsparse/source/hardware.html) to understand system requirements.
+Review [Supported Hardware for the DeepSparse Engine](user-guide/deepsparse-engine/hardware-support) to understand system requirements.
 The DeepSparse Engine works natively on Linux; Mac and Windows require running Linux in a Docker or virtual machine; it will not run natively on those operating systems.
 
 The DeepSparse Engine is tested on Python 3.7-3.10, ONNX 1.5.0-1.12.0, ONNX opset version 11+, and manylinux compliant.
```
````diff
@@ -75,7 +75,7 @@ Install the Enterprise Edition as follows:
 pip install deepsparse-ent
 ```
 
-See the [DeepSparse Enterprise Installation Page](https://docs.neuralmagic.com/get-started/install/deepsparse-ent) for further installation options.
+See the [DeepSparse Enterprise Installation Page](/get-started/install/deepsparse-ent) for further installation options.
````
**src/content/products/deepsparse.mdx** (+3, -3)
```diff
@@ -43,7 +43,7 @@ index: 1000
 </div>
 </div>
 
-A CPU runtime that takes advantage of sparsity within neural networks to reduce compute. Read more about sparsification[here](https://docs.neuralmagic.com/main/source/getstarted.html#sparsification).
+A CPU runtime that takes advantage of sparsity within neural networks to reduce compute. Read [more about sparsification](https://docs.neuralmagic.com/user-guide/sparsification).
 
 Neural Magic's DeepSparse Engine is able to integrate into popular deep learning libraries (e.g., Hugging Face, Ultralytics) allowing you to leverage DeepSparse for loading and deploying sparse models with ONNX.
 ONNX gives the flexibility to serve your model in a framework-agnostic environment.
```
```diff
@@ -61,7 +61,7 @@ The DeepSparse Engine is available in two editions:
 
 ## 🧰 Hardware Support and System Requirements
 
-Review [CPU Hardware Support for Various Architectures](https://docs.neuralmagic.com/deepsparse/source/hardware.html) to understand system requirements.
+Review [Supported Hardware for the DeepSparse Engine](https://docs.neuralmagic.com/user-guide/deepsparse-engine/hardware-support) to understand system requirements.
 The DeepSparse Engine works natively on Linux; Mac and Windows require running Linux in a Docker or virtual machine; it will not run natively on those operating systems.
 
 The DeepSparse Engine is tested on Python 3.7-3.10, ONNX 1.5.0-1.12.0, ONNX opset version 11+, and manylinux compliant.
```
```diff
@@ -77,7 +77,7 @@ pip install deepsparse
 
 See the [DeepSparse Community Installation Page](https://docs.neuralmagic.com/get-started/install/deepsparse) for further installation options.
 
-To trial or inquire about licensing for DeepSparse Enterprise Edition, see the [DeepSparse Enterprise documentation](https://docs.neuralmagic.com/products/deepsparse-enterprise).
+To trial or inquire about licensing for DeepSparse Enterprise Edition, see the [DeepSparse Enterprise documentation](https://docs.neuralmagic.com/products/deepsparse-ent).
```
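The context lines in this diff repeat the claim that a CPU runtime can "take advantage of sparsity within neural networks to reduce compute." As a toy illustration of why that is (pure Python, not DeepSparse's actual kernels), a dot product that skips zero weights does work proportional to the number of nonzeros:

```python
# Toy illustration (not DeepSparse's actual kernels): skipping zero weights
# in a dot product cuts the multiply count in proportion to sparsity.
def dense_dot(weights, inputs):
    ops = 0
    total = 0.0
    for w, x in zip(weights, inputs):
        total += w * x
        ops += 1
    return total, ops

def sparse_dot(weights, inputs):
    ops = 0
    total = 0.0
    for w, x in zip(weights, inputs):
        if w != 0.0:  # zero weights contribute nothing; skip them
            total += w * x
            ops += 1
    return total, ops

weights = [0.0, 0.5, 0.0, 0.0, 2.0, 0.0, 0.0, 1.0]  # 62.5% sparse
inputs = [1.0] * 8

dense_result, dense_ops = dense_dot(weights, inputs)
sparse_result, sparse_ops = sparse_dot(weights, inputs)
print(dense_ops, sparse_ops)  # 8 multiplies vs. 3; same result either way
```

Real sparse inference engines get their speedup from vectorized sparse kernels and cache-friendly memory layouts rather than a per-element branch, but the accounting is the same: zeros cost nothing.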
**src/content/products/sparseml.mdx** (+17, -16)
```diff
@@ -50,7 +50,7 @@ SparseML is a toolkit that includes APIs, CLIs, scripts and libraries that enabl
 SparseML provides two options to accomplish this goal:
 -**Sparse Transfer Learning**: Fine-tune state-of-the-art pre-sparsified models from the SparseZoo onto your dataset while preserving sparsity.
 
--**Sparsifying from Scratch**: Apply state-of-the-art [sparsification](https://docs.neuralmagic.com/main/source/getstarted.html#sparsification) algorithms such as pruning and quantization to any neural network.
+-**Sparsifying from Scratch**: Apply state-of-the-art [sparsification](/user-guide/sparsification) algorithms such as pruning and quantization to any neural network.
 
 These options are useful for different situations:
 -**Sparse Transfer Learning** is the easiest path to creating a sparse model trained on your data. Pull down a sparse model from SparseZoo and point our training scripts at your data without any hyperparameter search. This is the recommended pathway for supported use cases like Image Classification, Object Detection, and several NLP tasks.
@@ -59,58 +59,59 @@ These options are useful for different situations:
 
 Each of these avenues use YAML-based **recipes** that simplify integration with popular deep learning libraries and framrworks.
```
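The SparseML context above names pruning as one of the sparsification algorithms applied "from scratch." A minimal sketch of unstructured magnitude pruning (illustrative only; SparseML drives pruning through its YAML recipe system, not a bare function like this):

```python
# Minimal sketch of unstructured magnitude pruning: zero out the weights
# with the smallest absolute values until a target sparsity is reached.
# Illustrative only; SparseML applies this via recipes, not this function.
def magnitude_prune(weights, target_sparsity):
    n_prune = int(len(weights) * target_sparsity)
    # indices of the n_prune smallest-magnitude weights
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0
    return pruned

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.1]
pruned = magnitude_prune(weights, target_sparsity=0.5)
sparsity = pruned.count(0.0) / len(pruned)
print(pruned, sparsity)  # half the weights zeroed, large magnitudes kept
```

In practice this is done gradually over many training steps with fine-tuning in between, which is exactly the kind of schedule the recipes encode.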
**src/content/products/sparsezoo.mdx** (+4, -4)
```diff
@@ -1,7 +1,7 @@
 ---
 title: "SparseZoo"
 metaTitle: "SparseZoo"
-metaDescription: "SparseZoo"
+metaDescription: "Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes"
 githubURL: "Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes"
 index: 4000
 ---
@@ -47,7 +47,7 @@ index: 4000
 
 [SparseZoo is a constantly-growing repository](https://sparsezoo.neuralmagic.com) of sparsified (pruned and pruned-quantized) models with matching sparsification recipes for neural networks.
 It simplifies and accelerates your time-to-value in building performant deep learning models with a collection of inference-optimized models and recipes to prototype from.
-Read more about sparsification[here.](https://docs.neuralmagic.com/main/source/getstarted.html#sparsification)
+Read [more about sparsification.](/user-guide/sparsification)
 
 Available via API and hosted in the cloud, the SparseZoo contains both baseline and models sparsified to different degrees of inference performance vs. baseline loss recovery.
 Recipe-driven approaches built around sparsification algorithms allow you to use the models as given, transfer-learn from the models onto private datasets, or transfer the recipes to your architectures.
@@ -58,8 +58,8 @@ The [GitHub repository](https://github.com/neuralmagic/sparsezoo) contains the P
```
**src/content/user-guide/deepsparse-engine/diagnotistics-debugging.mdx** (+3, -3)
```diff
@@ -8,7 +8,7 @@ index: 4000
 
 # Logging Guidance for Diagnostics and Debugging
 
-This page explains the Diagnostics and Debugging features available in DeepSparse Engine.
+This page explains the diagnostics and debugging features available in DeepSparse Engine.
 
 Unlike traditional software, debugging utilities available to the machine learning community are scarce. Complicated with deployment pipeline design issues, model weights, model architecture, and unoptimized models, debugging performance issues can be very dynamic in your data science ecosystem. Reviewing a log file can be your first line of defense in pinpointing performance issues with optimizing your inference.
 
@@ -69,7 +69,7 @@ By default, logs will print out to the stderr of your process. If you would like
 
 ## Parsing an Example Log
 
-If you want to see an example log with `NM_LOGGING_LEVEL=diagnose`, a [truncated sample output](example-log.md) is provided at the end of this guide. It will show a super_resolution network, where Neural Magic only supports running 70% of it.
+If you want to see an example log with `NM_LOGGING_LEVEL=diagnose`, a truncated sample output is provided at the end of this guide. It will show a super_resolution network, where Neural Magic only supports running 70% of it.
 
 _Different portions of the log are explained below._
 
@@ -186,7 +186,7 @@ Locating `== NM Execution Provider supports` shows how many subgraphs we compil
 
 ### Full Example Log, Verbose Level = diagnose
 
-The following is an example log with `NM_LOGGING_LEVEL=diagnose` running a super_resolution network, where we only support running 70% of it. Different portions of the log are explained in [Parsing an Example Log.](diagnostics-debugging.md#parsing-an-example-log)
+The following is an example log with `NM_LOGGING_LEVEL=diagnose` running a super_resolution network, where we only support running 70% of it. Different portions of the log are explained in [Parsing an Example Log.](/user-guide/deepsparse-engine/diagnotistics-debugging#parsing-an-example-log)
```
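The guide being edited here says that locating `== NM Execution Provider supports` in a diagnose-level log shows how many subgraphs DeepSparse compiled. A quick sketch of skimming a captured log for that marker; only the marker string comes from the documentation, while the surrounding sample lines are invented placeholders standing in for real log output:

```python
# Sketch: skim a captured diagnose-level log for the subgraph-support
# marker. The MARKER string is quoted from the guide; the sample_log
# lines around it are hypothetical placeholders, not real DeepSparse output.
MARKER = "== NM Execution Provider supports"

sample_log = """\
(placeholder diagnostic line)
== NM Execution Provider supports subgraph 1
(placeholder diagnostic line)
== NM Execution Provider supports subgraph 2
"""

supported_subgraphs = [line for line in sample_log.splitlines() if MARKER in line]
print(len(supported_subgraphs))  # count of compiled-subgraph markers found
```

In real use, `sample_log` would be the contents of the file you redirected stderr to while running with `NM_LOGGING_LEVEL=diagnose`.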