
Commit 813ef4a

Beth editorial of home & installation (#128)
Provided editorial comments for the Home and Installation sections.
1 parent 0b811d9 · commit 813ef4a

File tree: 8 files changed (+80 / -80 lines)

src/content/get-started/install.mdx

Lines changed: 7 additions & 5 deletions
````diff
@@ -7,19 +7,21 @@ index: 0
 
 # Installation
 
-The Deep Sparse Platform is made up of core libraries that are available as Python APIs and CLIs.
+The Neural Magic Platform is made up of core libraries that are available as Python APIs and CLIs.
 All Python APIs and CLIs are installed through pip utilizing [PyPI](https://pypi.org/user/neuralmagic/).
-It is recommended to install in a [virtual environment](https://docs.python.org/3/library/venv.html) to encapsulate your local environment.
+We recommend you install in a [virtual environment](https://docs.python.org/3/library/venv.html) to encapsulate your local environment.
 
-## Quick Start
+## Installing the Neural Magic Platform
 
-To begin using the Deep Sparse Platform, run the following commands which install standard setups for deployment with the [DeepSparse Engine](../../products/deepsparse) and model training/optimization with [SparseML](../../products/sparseml):
+To begin using the Neural Magic Platform, run the following command, which installs standard setups for deployment with [DeepSparse](../../products/deepsparse) and model training/optimization with [SparseML](../../products/sparseml):
 
 ```bash
 pip install deepsparse[server] sparseml[torch,torchvision]
 ```
 
-## Package Installations
+Now, you are ready to install one of the Neural Magic products.
+
+## Installing Products
 
 <LinkCards>
 <LinkCard href="./deepsparse" heading="DeepSparse">
````
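A quick way to confirm the combined setup from this page installed cleanly is to import both packages. This is a minimal sketch, assuming the `pip install` command above completed and that both packages expose a `__version__` attribute (true for recent releases of each):

```python
# Minimal post-install sanity check; assumes the combined
# `pip install deepsparse[server] sparseml[torch,torchvision]` succeeded.
import deepsparse
import sparseml

print("deepsparse:", deepsparse.__version__)
print("sparseml:", sparseml.__version__)
```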

src/content/get-started/install/deepsparse-ent.mdx

Lines changed: 4 additions & 7 deletions
````diff
@@ -13,19 +13,15 @@ The engine accepts models in the open-source [ONNX format](https://onnx.ai/), wh
 Currently, DeepSparse is tested on Python 3.7-3.10, ONNX 1.5.0-1.10.1, ONNX opset version 11+ and is [manylinux compliant](https://peps.python.org/pep-0513/).
 It is limited to Linux systems running on x86 CPU architectures.
 
-The DeepSparse Engine is available in two editions:
-1. [**The Community Edition**](/products/deepsparse) is open-source and free for evaluation, research, and non-production use with our [Engine Community License](https://neuralmagic.com/legal/engine-license-agreement/).
-2. [**The Enterprise Edition**](/products/deepsparse-ent) requires a Trial License or [can be fully licensed](https://neuralmagic.com/legal/master-software-license-and-service-agreement/) for production, commercial applications.
-
-## General Install
+## Installing DeepSparse Enterprise
 
 Use the following command to install with pip:
 
 ```bash
 pip install deepsparse-ent
 ```
 
-## Server Install
+## Installing the Server
 
 The [DeepSparse Server](/use-cases/deploying-deepsparse/deepsparse-server) allows you to serve models and pipelines through an HTTP interface using the deepsparse.server CLI.
 To install, use the following extra option:
@@ -34,7 +30,7 @@ To install, use the following extra option:
 pip install deepsparse-ent[server]
 ```
 
-## YOLO Install
+## Installing YOLO
 
 The [Ultralytics YOLOv5](/use-cases/object-detection/deploying) models require extra dependencies for deployment.
 To use YOLO models, install with the following extra option:
@@ -43,3 +39,4 @@ To use YOLO models, install with the following extra option:
 pip install deepsparse-ent[yolo] # just yolo requirements
 pip install deepsparse-ent[yolo,server] # both yolo + server requirements
 ```
+
````
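To exercise a fresh `deepsparse-ent` install, the sketch below compiles a local ONNX file and runs random inputs through it. It assumes the Enterprise package exposes the same `deepsparse` Python namespace and `compile_model`/`generate_random_inputs` helpers as the Community Edition, and that a `model.onnx` file is present in the working directory:

```python
# Hedged sketch: compile a local ONNX model and run engine inference on
# random inputs. Assumes deepsparse-ent provides the `deepsparse` namespace
# and that ./model.onnx exists.
from deepsparse import compile_model
from deepsparse.utils import generate_random_inputs

onnx_filepath = "model.onnx"
batch_size = 1

inputs = generate_random_inputs(onnx_filepath, batch_size)
engine = compile_model(onnx_filepath, batch_size=batch_size)
outputs = engine.run(inputs)
print([out.shape for out in outputs])
```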

src/content/get-started/install/deepsparse.mdx

Lines changed: 6 additions & 9 deletions
````diff
@@ -10,22 +10,18 @@ index: 1000
 The [DeepSparse Engine](/products/deepsparse) enables GPU-class performance on CPUs, leveraging sparsity within models to reduce FLOPs and the unique cache hierarchy on CPUs to reduce memory movement.
 The engine accepts models in the open-source [ONNX format](https://onnx.ai/), which are easily created from PyTorch and TensorFlow models.
 
-Currently, DeepSparse is tested on Python 3.7-3.10, ONNX 1.5.0-1.10.1, ONNX opset version 11+ and is [manylinux compliant](https://peps.python.org/pep-0513/).
-It is limited to Linux systems running on x86 CPU architectures.
+Currently, DeepSparse is tested on Python 3.7-3.10, ONNX 1.5.0-1.10.1, and ONNX opset version 11+. It is [manylinux compliant](https://peps.python.org/pep-0513/).
+DeepSparse is limited to Linux systems running on x86 CPU architectures.
 
-The DeepSparse Engine is available in two editions:
-1. [**The Community Edition**](/products/deepsparse) is open-source and free for evaluation, research, and non-production use with our [Engine Community License](https://neuralmagic.com/legal/engine-license-agreement/).
-2. [**The Enterprise Edition**](/products/deepsparse-ent) requires a Trial License or [can be fully licensed](https://neuralmagic.com/legal/master-software-license-and-service-agreement/) for production, commercial applications.
-
-## General Installation
+## Installing DeepSparse Community
 
 Use the following command to install the Community Edition with pip:
 
 ```bash
 pip install deepsparse
 ```
 
-## Server Install
+## Installing the Server
 
 The [DeepSparse Server](/use-cases/deploying-deepsparse/deepsparse-server) allows you to serve models and pipelines through an HTTP interface using the deepsparse.server CLI.
 To install, use the following extra option:
@@ -34,7 +30,7 @@ To install, use the following extra option:
 pip install deepsparse[server]
 ```
 
-## YOLO Install
+## Installing YOLO
 
 The [Ultralytics YOLOv5](/use-cases/object-detection/deploying) models require extra dependencies for deployment.
 To use YOLO models, install with the following extra option:
@@ -43,3 +39,4 @@ To use YOLO models, install with the following extra option:
 pip install deepsparse[yolo] # just yolo requirements
 pip install deepsparse[yolo,server] # both yolo + server requirements
 ```
+
````
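Since this page walks through the `[server]` and `[yolo]` extras, a small check that the optional CLIs landed on the PATH can save debugging time. A minimal sketch, assuming `pip install deepsparse[yolo,server]` was used:

```python
# Check that the console scripts from the install are on the PATH.
# deepsparse.benchmark ships with the base package; deepsparse.server
# comes with the [server] extra.
import shutil

print("deepsparse.benchmark:", shutil.which("deepsparse.benchmark"))
print("deepsparse.server:", shutil.which("deepsparse.server"))
```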

src/content/get-started/try-a-model.mdx

Lines changed: 4 additions & 4 deletions
````diff
@@ -10,9 +10,9 @@ index: 2000
 DeepSparse Engine supports fast inference on CPUs for sparse and dense models. For sparse models in particular, it achieves GPU-level performance in many use cases.
 
 Around the engine, the DeepSparse package includes various utilities to simplify benchmarking performance and model deployment. For instance:
-1. Trained models are passed in the open ONNX file format, enabling easy exporting from common packages like PyTorch, Keras, and TensorFlow.
-2. Benchmaking latency and performance is available via a single CLI call, with various arguments to test scenarios.
-3. `Pipelines` utilities wrap the model execution with input pre-processing and output post-processing, simplifying deployment and adding functionality like multi-stream, bucketing and dynamic shape.
+- Trained models are passed in the open ONNX file format, enabling easy exporting from common packages like PyTorch, Keras, and TensorFlow.
+- Benchmarking latency and performance is available via a single CLI call, with various arguments to test scenarios.
+- Pipelines utilities wrap the model execution with input pre-processing and output post-processing, simplifying deployment and adding functionality like multi-stream, bucketing, and dynamic shape.
 
 ## Use Case Examples
 
@@ -35,4 +35,4 @@ The examples below walk through use cases leveraging DeepSparse for testing and
 ## Other Use Cases
 
 More documentation, models, use cases, and examples are continually being added.
-If you don't see one you're interested in, search the [DeepSparse Github repo](https://github.com/neuralmagic/deepsparse), the [SparseML Github repo](https://github.com/neuralmagic/sparseml), the [SparseZoo website](https://sparsezoo.neuralmagic.com/), or ask in the [Neural Magic Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
+If you don't see one you're interested in, search the [DeepSparse Github repo](https://github.com/neuralmagic/deepsparse), [SparseML Github repo](https://github.com/neuralmagic/sparseml), or [SparseZoo website](https://sparsezoo.neuralmagic.com/). Or, ask in the [Neural Magic Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
````
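The first bullet in the revised list (ONNX as the interchange format) is easy to demonstrate. Below is a hedged sketch that exports a torchvision model with plain `torch.onnx.export` rather than SparseML's wrappers; torch and torchvision are pulled in by the SparseML install used on these pages, and the model choice is illustrative:

```python
# Illustrative export of a torchvision model to the ONNX file format that
# DeepSparse consumes. Uses plain torch.onnx.export, not SparseML's wrappers.
import torch
from torchvision.models import mobilenet_v2

model = mobilenet_v2(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # NCHW batch of one image
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)
```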

src/content/get-started/try-a-model/custom-use-case.mdx

Lines changed: 8 additions & 8 deletions
````diff
@@ -9,19 +9,19 @@ index: 3000
 
 This page explains how to run a model on the DeepSparse Engine for a custom task inside a Python API called `Pipelines.`
 
-`Pipelines` wrap key utilities around the DeepSparse Engine for easy testing and deployment.
+`Pipelines` wraps key utilities around the DeepSparse Engine for easy testing and deployment.
 
 The DeepSparse Engine supports many operators within ONNX, enabling performance for most models and use cases outside of the ones available on the SparseZoo.
-The `CustomTaskPipeline` enables you to wrap your model with custom pre and post-processing functions for simple deployment and benchmarking.
+The `CustomTaskPipeline` enables you to wrap your model with custom pre-processing and post-processing functions for simple deployment and benchmarking.
 In this way, the simplicity of `Pipelines` is combined with the performance of DeepSparse for arbitrary use cases.
 
-## Install Requirements
+## Installation Requirements
 
-This example requires [DeepSparse General Install](/get-started/install/deepsparse) and [SparseML Torchvision Install](/get-started/install/sparseml).
+This example requires [DeepSparse General Installation](/get-started/install/deepsparse) and [SparseML Torchvision Installation](/get-started/install/sparseml).
 
 ## Model Setup
 
-For custom model deployment, first export your model to the ONNX model format (create a `model.onnx` file).
+For custom model deployment, export your model to the ONNX model format (create a `model.onnx` file).
 SparseML has available wrappers for ONNX export classes and APIs for a more straightforward export process.
 A sample export utilizing this API for a MobileNetV2 TorchVision model is given below.
 
@@ -41,15 +41,15 @@ Examples for both are given below.
 
 ## Inference Pipelines
 
-The `model.onnx` file can be passed into a DeepSparse `CustomTaskPipeline` utilizing the `model_path` argument alongside optional pre and post-processing functions.
+The `model.onnx` file can be passed into a DeepSparse `CustomTaskPipeline` utilizing the `model_path` argument alongside optional pre-processing and post-processing functions.
 
 A sample image is downloaded that will be run through the example to test the `Pipeline`.
 
 ```bash
 wget -O basilica.jpg https://raw.githubusercontent.com/neuralmagic/deepsparse/main/src/deepsparse/yolo/sample_images/basilica.jpg
 ```
 
-Next, the pre and post-processing functions are defined, and the pipeline enabling the classification of the image file is instantiated:
+Next, the pre-processing and post-processing functions are defined, and the pipeline enabling the classification of the image file is instantiated:
 
 ```python
 from deepsparse.pipelines.custom_pipeline import CustomTaskPipeline
@@ -90,7 +90,7 @@ print(inference)
 
 ## Benchmarking
 
-The DeepSparse install includes a benchmark CLI for convenient and easy inference performance benchmarking: `deepsparse.benchmark`.
+The DeepSparse installation includes a benchmark CLI for convenient and easy inference performance benchmarking: `deepsparse.benchmark`.
 The CLI takes in both SparseZoo stubs or paths to a local `model.onnx` file.
 
 The code below provides an example for benchmarking the previously exported MobileNetV2 model.
````
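The diff cuts off before the pre-processing and post-processing functions themselves, so here is a hedged reconstruction of the wiring. The import path is taken from the hunk above, while the `process_inputs_fn`/`process_outputs_fn` keyword names follow DeepSparse custom-task examples from this era and should be treated as assumptions:

```python
# Hedged sketch of CustomTaskPipeline wiring; keyword names are assumptions.
import numpy as np
from deepsparse.pipelines.custom_pipeline import CustomTaskPipeline

def preprocess(inputs):
    # stand-in pre-processing: a correctly shaped random MobileNetV2 input
    return [np.random.rand(1, 3, 224, 224).astype(np.float32)]

def postprocess(outputs):
    # stand-in post-processing: index of the top logit
    return int(outputs[0].argmax())

pipeline = CustomTaskPipeline(
    model_path="model.onnx",  # the MobileNetV2 export from earlier on the page
    process_inputs_fn=preprocess,
    process_outputs_fn=postprocess,
)
```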

src/content/get-started/try-a-model/cv-object-detection.mdx

Lines changed: 18 additions & 18 deletions
````diff
@@ -9,33 +9,33 @@ index: 2000
 
 This page explains how to run a trained model on the DeepSparse Engine for Object Detection inside a Python API called `Pipelines.`
 
-`Pipelines` wrap key utilities around the DeepSparse Engine for easy testing and deployment.
+`Pipelines` wraps key utilities around the DeepSparse Engine for easy testing and deployment.
 
-The object detection `Pipeline`, for example, wraps a trained model with the proper preprocessing and postprocessing pipelines such as NMS.
-This enables the passing of raw images and receiving the bounding boxes from the DeepSparse Engine without any extra effort.
-With all of this built on top of the DeepSparse Engine, the simplicity of `Pipelines` is combined with GPU-class performance on CPUs for sparse models.
+The object detection `Pipeline`, for example, wraps a trained model with the proper pre-processing and post-processing pipelines such as NMS.
+This enables the passing of raw images and receiving the bounding boxes from DeepSparse without any extra effort.
+With all of this built on top of DeepSparse, the simplicity of `Pipelines` is combined with GPU-class performance on CPUs for sparse models.
 
-## Install Requirements
+## Installation Requirements
 
-This example requires [DeepSparse YOLO Install](/get-started/install/deepsparse).
+This example requires [DeepSparse YOLO Installation](/get-started/install/deepsparse).
 
 ## Model Setup
 
 The object detection `Pipeline` uses Ultralytics YOLOv5 standards and configurations for model setup.
-The possible files/variables that can be passed in are the following:
-- `model.onnx` - The exported YOLOv5 model in the ONNX format.
-- `model.yaml` - The Ultralytics model config file containing configuration information about the model and its post-processing.
+The possible files/variables that can be passed in are:
+- `model.onnx` - Exported YOLOv5 model in the ONNX format.
+- `model.yaml` - Ultralytics model configuration file containing configuration information about the model and its post-processing.
 - `class_names` - A list, dictionary, or file containing the index to class name mappings for the trained model.
 
 `model.onnx` is the only required file.
-The pipeline will default to a standard setup for the COCO dataset if the model config file or class names are not provided.
+The pipeline will default to a standard setup for the COCO dataset if the model configuration file or class names are not provided.
 
 There are two options for passing these files to DeepSparse:
 
 <details>
-<summary><b>1) Using The SparseZoo</b></summary>
+<summary><b>1) Using the SparseZoo</b></summary>
 
-This pathway is relevant if you want to use a pre-sparsified state-of-the-art model off the shelpf.
+This pathway is relevant if you want to use a pre-sparsified state-of-the-art model off the shelf.
 
 SparseZoo is a repository of pre-trained and pre-sparsified models. DeepSparse supports SparseZoo stubs as inputs for automatic download and inclusion into easy testing and deployment.
 These models include dense and sparsified versions of YOLOv5 trained on the COCO dataset for performant and general detection, among others.
@@ -54,12 +54,12 @@ These SparseZoo stubs can be passed as arguments to the `Pipeline` constructor i
 </details>
 
 <details>
-<summary><b>2) Using a Custom Local Model</b></summary>
+<summary><b>2) Using a custom local model</b></summary>
 
 This pathway is relevant if you want to use a model fine-tuned on your data with SparseML or a custom model.
 
 There are three steps to using a local model with `Pipelines`:
-1. Create the `model.onnx` file (if you trained with SparseML, use the [ONNX export script](https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov5#exporting-the-sparse-model-to-onnx))
+1. Create the `model.onnx` file (if you trained with SparseML, use the [ONNX export script](https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov5#exporting-the-sparse-model-to-onnx)).
 2. Collect the `model.yaml` file and `class_names` listed above.
 3. Pass the local paths of the files in place of the SparseZoo stubs.
 
@@ -69,9 +69,9 @@ The examples below use the SparseZoo stubs. Pass the path to the local model in
 
 ## Inference Pipelines
 
-With the object detection model setup, it can then be passed into a DeepSparse `Pipeline` utilizing the `model_path` argument.
+With the object detection model set up, the model can be passed into a DeepSparse `Pipeline` utilizing the `model_path` argument.
 The SparseZoo stub for the sparse-quantized YOLOv5l model given at the beginning is used in the sample code below.
-It will automatically download the necessary files for the model from the SparseZoo and then compile them on your local machine in the DeepSparse engine.
+It will automatically download the necessary files for the model from the SparseZoo and then compile them on your local machine in DeepSparse.
 Once compiled, the model `Pipeline` is ready for inference with images.
 
 First, a sample image is downloaded that will be run through the example to test the pipeline.
@@ -80,7 +80,7 @@ First, a sample image is downloaded that will be run through the example to test
 wget -O basilica.jpg https://raw.githubusercontent.com/neuralmagic/deepsparse/main/src/deepsparse/yolo/sample_images/basilica.jpg
 ```
 
-Next, instantiate the `Pipeline` and pass the image in using the images argument:
+Next, instantiate the `Pipeline` and pass in the image using the images argument:
 
 ```python
 from deepsparse import Pipeline
@@ -99,7 +99,7 @@ print(inference)
 
 ## Benchmarking
 
-The DeepSparse install includes a CLI for convenient performance benchmarking.
+The DeepSparse installation includes a CLI for convenient performance benchmarking.
 You can pass a SparseZoo stub or a local `model.onnx` file.
 
 ### Dense YOLOv5l
````
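For readers who want to run the steps this diff describes end to end, a hedged sketch follows. The SparseZoo stub is a published sparse-quantized YOLOv5l model (assumed to be the one the page refers to and to still resolve), and the sketch presumes the `basilica.jpg` download and the `deepsparse[yolo]` install shown above:

```python
# Hedged end-to-end sketch of the inference flow described in this diff.
# Assumes `pip install deepsparse[yolo]` and the basilica.jpg download above;
# the stub below is assumed to be the sparse-quantized YOLOv5l referenced.
from deepsparse import Pipeline

model_stub = "zoo:cv/detection/yolov5-l/pytorch/ultralytics/coco/pruned_quant-aggressive_96"
yolo_pipeline = Pipeline.create(task="yolo", model_path=model_stub)

inference = yolo_pipeline(images=["basilica.jpg"])
print(inference)  # bounding boxes, labels, and confidence scores
```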
