**docs/admin/installation.md** (+22 −5)
@@ -18,14 +18,15 @@ This guide covers installing NeMo Curator with support for **all modalities** an
 ### System Requirements

-For comprehensive system requirements and production deployment specifications, see [Production Deployment Requirements](deployment/requirements.md).
+For comprehensive system requirements and production deployment specifications, refer to [Production Deployment Requirements](deployment/requirements.md).

 **Quick Start Requirements:**

 - **OS**: Ubuntu 24.04/22.04/20.04 (recommended)
 - **Python**: 3.10, 3.11, or 3.12
 - **Memory**: 16GB+ RAM for basic text processing
 - **GPU** (optional): NVIDIA GPU with 16GB+ VRAM for acceleration
+- **CUDA 12** (required for the `audio_cuda12`, `video_cuda12`, `image_cuda12`, and `text_cuda12` extras)

 ### Development vs Production
@@ -41,9 +42,13 @@ For comprehensive system requirements and production deployment specifications,
 Choose one of the following installation methods based on your needs:

+:::{tip}
+**Docker is the recommended installation method** for video and audio workflows. The NeMo Curator container includes FFmpeg (with NVENC support) pre-configured, avoiding manual dependency setup. Refer to the [Container Installation](#container-installation) tab below.
+:::
+
 ::::{tab-set}

-:::{tab-item} PyPI Installation (Recommended)
+:::{tab-item} PyPI Installation

 Install NeMo Curator from the Python Package Index using `uv` for proper dependency resolution.
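The PyPI install pairs each modality with a `_cuda12` extra. As a rough illustration, the sketch below simply restates the extras this guide names and derives the corresponding `uv` command; `install_command` is a hypothetical helper, not part of NeMo Curator:

```python
# Sketch: build the `uv pip install` command for a chosen modality.
# The extra names below are the ones this guide mentions; run the
# printed command in your own shell.

EXTRAS = {
    "text": "text_cuda12",
    "audio": "audio_cuda12",
    "video": "video_cuda12",
    "image": "image_cuda12",
    "all": "all",
}

def install_command(modality: str) -> str:
    """Return the uv command that installs NeMo Curator with the matching extra."""
    extra = EXTRAS[modality]
    return f'uv pip install "nemo-curator[{extra}]"'

print(install_command("text"))
# → uv pip install "nemo-curator[text_cuda12]"
```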
 :::{tab-item} Container Installation (Recommended for Video/Audio)

-NeMo Curator is available as a standalone container on NGC: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo-curator. The container includes NeMo Curator with all dependencies pre-installed. You can run it with:
+NeMo Curator is available as a standalone container on NGC: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo-curator. The container includes NeMo Curator with all dependencies pre-installed, including FFmpeg with NVENC support. You can run it with:

 ```bash
 docker run --gpus all -it --rm nvcr.io/nvidia/nemo-curator:{{ container_version }}
 ```

+````{important}
+After entering the container, activate the virtual environment before running any NeMo Curator commands:
+
+```bash
+source /opt/venv/env.sh
+```
+
+The container uses a virtual environment at `/opt/venv`. If you see `No module named nemo_curator`, the environment has not been activated.
+````

 Alternatively, you can build the NeMo Curator container locally using the provided Dockerfile:

 ```bash
@@ -115,7 +128,7 @@ docker run --gpus all -it --rm nemo-curator:latest
 **Benefits:**

-- Pre-configured environment with all dependencies
+- Pre-configured environment with all dependencies (FFmpeg, CUDA libraries)
 - Consistent runtime across different systems
 - Ideal for production deployments
@@ -157,6 +170,10 @@ If encoders are missing, reinstall `FFmpeg` with the required options or use the
 :::
 ::::

+```{note}
+**FFmpeg build requires CUDA toolkit (nvcc):** If you encounter `ERROR: failed checking for nvcc` during FFmpeg installation, ensure that the CUDA toolkit is installed and `nvcc` is available on your `PATH`. You can verify with `nvcc --version`. If using the NeMo Curator container, FFmpeg is pre-installed with NVENC support.
+```
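The `nvcc` and FFmpeg checks described in the note above can be scripted. A minimal preflight sketch, assuming nothing about the install (the helper names are illustrative; it only inspects `PATH` and the output of `ffmpeg -encoders`):

```python
# Sketch: preflight checks for the FFmpeg/NVENC build requirements.
# Each check degrades gracefully when the tool is absent.
import shutil
import subprocess

def tool_on_path(name: str) -> bool:
    """Return True when `name` resolves to an executable on PATH."""
    return shutil.which(name) is not None

def nvenc_available() -> bool:
    """Best-effort check: does `ffmpeg -encoders` mention an NVENC encoder?"""
    if not tool_on_path("ffmpeg"):
        return False
    result = subprocess.run(
        ["ffmpeg", "-hide_banner", "-encoders"],
        capture_output=True, text=True, check=False,
    )
    return "nvenc" in result.stdout

for tool in ("nvcc", "ffmpeg"):
    status = "found" if tool_on_path(tool) else "missing (see note above)"
    print(f"{tool}: {status}")
print("NVENC encoders:", "yes" if nvenc_available() else "no")
```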
**docs/curate-text/process-data/deduplication/fuzzy.md** (+4 −0)
@@ -34,6 +34,10 @@ Ideal for detecting documents with minor differences such as formatting changes,
 - Ray cluster with GPU support (required for distributed processing)
 - Stable document identifiers for removal (either existing IDs or IDs generated by the workflow and removal stages)

+```{note}
+**Running in Docker**: When running fuzzy deduplication inside the NeMo Curator container, ensure the container is started with `--gpus all` so that Ray workers can access the GPU. Without GPU access, you may see `CUDARuntimeError` or `AttributeError: 'CUDARuntimeError' object has no attribute 'msg'`. Also activate the virtual environment with `source /opt/venv/env.sh` after entering the container.
+```

 ## Quick Start

 Get started with fuzzy deduplication using the following example, which identifies duplicates and then removes them:
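The Docker guidance in the note above can be captured in a tiny launcher sketch, so the `--gpus all` flag is never forgotten. The helper name and image tag are illustrative assumptions, not NeMo Curator APIs:

```python
# Sketch: assemble the `docker run` invocation the Docker note calls for.
# Substitute your own container version for the example tag.

def curator_run_command(image: str, gpus: str = "all") -> list:
    """Build a docker run argv that exposes GPUs to Ray workers."""
    return ["docker", "run", "--gpus", gpus, "-it", "--rm", image]

cmd = curator_run_command("nvcr.io/nvidia/nemo-curator:latest")
print(" ".join(cmd))
# Inside the container, remember to run: source /opt/venv/env.sh
```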
**docs/curate-text/process-data/deduplication/semdedup.md** (+4 −0)
@@ -42,6 +42,10 @@ Based on [SemDeDup: Data-efficient learning at web-scale through semantic dedupl
 - GPU acceleration (required for embedding generation and clustering)
 - Stable document identifiers for removal (either existing IDs or IDs managed by the workflow and removal stages)

+```{note}
+**Running in Docker**: When running semantic deduplication inside the NeMo Curator container, ensure the container is started with `--gpus all` so that CUDA GPUs are available. Without this flag, you will see `RuntimeError: No CUDA GPUs are available`. Also activate the virtual environment with `source /opt/venv/env.sh` after entering the container.
+```

 ## Quick Start

 Get started with semantic deduplication using the following example, which identifies duplicates and then removes them in one step:
**docs/curate-text/process-data/quality-assessment/distributed-classifier.md** (+25 −11)
@@ -20,7 +20,7 @@ The distributed data classification in NeMo Curator works by:
 1. **Parallel Processing**: Chunking datasets across multiple computing nodes and GPUs to accelerate classification
 2. **Pre-trained Models**: Using specialized models for different classification tasks
-3. **Batched Inference**: Optimizing throughput with intelligent batching via CrossFit integration
+3. **Batched Inference**: Optimizing throughput with intelligent batching
 4. **Consistent API**: Providing a unified interface through the `DistributedDataClassifier` base class

 The `DistributedDataClassifier` is designed to run on GPU clusters with minimal code changes regardless of which specific classifier you're using. All classifiers support filtering based on classification results and storing prediction scores as metadata.
@@ -29,6 +29,16 @@ The `DistributedDataClassifier` is designed to run on GPU clusters with minimal
 Distributed classification requires GPU acceleration and is not supported for CPU-only processing. As long as GPU resources are available and NeMo Curator is correctly installed, GPU acceleration is handled automatically.
 :::

+````{tip}
+**Running the tutorial notebooks**: The classification tutorial notebooks require the `text_cuda12` or `all` installation extra to include all relevant dependencies. If you encounter `ModuleNotFoundError`, reinstall with the appropriate extra:
+
+```bash
+uv pip install "nemo-curator[text_cuda12]"
+```
+
+When using classifiers that download from Hugging Face (such as Aegis and InstructionDataGuard), set your `HF_TOKEN` environment variable to avoid rate limiting:
+
+```bash
+export HF_TOKEN="your_token_here"
+```
+````

 ---
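Before launching classifiers that download gated models from Hugging Face, it can help to preflight the `HF_TOKEN` variable mentioned in the tip above. A small sketch; the helper is hypothetical, not part of NeMo Curator:

```python
# Sketch: check whether HF_TOKEN is available before running classifiers
# such as Aegis or InstructionDataGuard that download from Hugging Face.
import os

def hf_token_status(env=None) -> str:
    """Describe whether HF_TOKEN is available for gated model downloads."""
    env = os.environ if env is None else env
    if env.get("HF_TOKEN"):
        return "HF_TOKEN is set; authenticated downloads should work."
    return "HF_TOKEN is not set; export it to avoid Hugging Face rate limiting."

print(hf_token_status())
```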
## Usage
@@ -39,16 +49,16 @@ NVIDIA NeMo Curator provides a base class `DistributedDataClassifier` that can b
-| DomainClassifier | Categorize English text by domain | [nvidia/domain-classifier](https://huggingface.co/nvidia/domain-classifier) | `filter_by`, `text_field` | None |
-| MultilingualDomainClassifier | Categorize text in 52 languages by domain | [nvidia/multilingual-domain-classifier](https://huggingface.co/nvidia/multilingual-domain-classifier) | `filter_by`, `text_field` | None |
-| ContentTypeClassifier | Categorize by speech type | [nvidia/content-type-classifier-deberta](https://huggingface.co/nvidia/content-type-classifier-deberta) | `filter_by`, `text_field` | None |
-| PromptTaskComplexityClassifier | Classify prompt tasks and complexity | [nvidia/prompt-task-and-complexity-classifier](https://huggingface.co/nvidia/prompt-task-and-complexity-classifier) | `text_field` | None |
+| DomainClassifier | Assigns one of 26 domain labels (such as "Sports," "Science," "News") to English text | [nvidia/domain-classifier](https://huggingface.co/nvidia/domain-classifier) | `filter_by`, `text_field` | None |
+| MultilingualDomainClassifier | Assigns domain labels to text in 52 languages; same labels as DomainClassifier | [nvidia/multilingual-domain-classifier](https://huggingface.co/nvidia/multilingual-domain-classifier) | `filter_by`, `text_field` | None |
+| QualityClassifier | Rates document quality as "Low," "Medium," or "High" using a DeBERTa model | [nvidia/quality-classifier-deberta](https://huggingface.co/nvidia/quality-classifier-deberta) | `filter_by`, `text_field` | None |
+| AegisClassifier | Detects unsafe content across 13 risk categories (violence, hate speech, and others) using LlamaGuard | [nvidia/Aegis-AI-Content-Safety-LlamaGuard-Defensive-1.0](https://huggingface.co/nvidia/Aegis-AI-Content-Safety-LlamaGuard-Defensive-1.0) | `aegis_variant`, `filter_by` | HuggingFace token |
+| FineWebEduClassifier | Scores educational value from 0 to 5 (0=spam, 5=scholarly) for training data selection | [HuggingFaceFW/fineweb-edu-classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier) | `label_field`, `int_field` | None |
+| FineWebMixtralEduClassifier | Scores educational value from 0 to 5 using Mixtral 8x22B annotation data | [nvidia/nemocurator-fineweb-mixtral-edu-classifier](https://huggingface.co/nvidia/nemocurator-fineweb-mixtral-edu-classifier) | `label_field`, `int_field`, `model_inference_batch_size=1024` | None |
+| FineWebNemotronEduClassifier | Scores educational value from 0 to 5 using Nemotron-4-340B annotation data | [nvidia/nemocurator-fineweb-nemotron-4-edu-classifier](https://huggingface.co/nvidia/nemocurator-fineweb-nemotron-4-edu-classifier) | `label_field`, `int_field`, `model_inference_batch_size=1024` | None |
+| ContentTypeClassifier | Categorizes text into 11 speech types (such as "Blogs," "News," "Academic") | [nvidia/content-type-classifier-deberta](https://huggingface.co/nvidia/content-type-classifier-deberta) | `filter_by`, `text_field` | None |
+| PromptTaskComplexityClassifier | Labels prompts by task type (such as QA and summarization) and complexity dimensions | [nvidia/prompt-task-and-complexity-classifier](https://huggingface.co/nvidia/prompt-task-and-complexity-classifier) | `text_field` | None |

 ### Domain Classifier
@@ -365,6 +375,10 @@ pipeline.add_stage(writer)
 results = pipeline.run()  # Uses XennaExecutor by default
 ```

+## Custom Model Integration
+
+You can integrate your own classification models by extending `DistributedDataClassifier`. Refer to the [Text Classifiers README](https://github.com/NVIDIA-NeMo/Curator/tree/main/nemo_curator/stages/text/classifiers#text-classifiers) for implementation details and examples.
+
 ## Performance Optimization

 NVIDIA NeMo Curator's distributed classifiers are optimized for high-throughput processing through several key features:
**docs/curate-video/index.md** (+1 −1)
@@ -33,7 +33,7 @@ Understand how components work together so you can plan, scale, and troubleshoot
 ```

 ```{note}
-Video pipelines use the `XennaExecutor` backend by default, which provides optimized support for GPU-accelerated video processing including hardware decoders (`nvdecs`) and encoders (`nvencs`). You do not need to import or configure the executor unless you want to use an alternative backend. For more information about customizing backends, refer to [Add a Custom Stage](video-tutorials-pipeline-cust-add-stage).
+Video pipelines use the `XennaExecutor` backend by default, which provides optimized support for GPU-accelerated video processing including hardware decoders and encoders. You do not need to import or configure the executor unless you want to use an alternative backend. For more information about customizing backends, refer to [Pipeline Execution Backends](reference-execution-backends).
**docs/curate-video/load-data/index.md** (+14 −23)
@@ -18,11 +18,20 @@ Load video data for curation using NeMo Curator.
 NeMo Curator loads videos with a composite stage that discovers files and extracts metadata:

-1. `VideoReader` decomposes into a partitioning stage plus a reader stage.
-2. Local paths use `FilePartitioningStage` to list files; remote URLs (for example, `s3://`, `gcs://`, `http(s)://`) use `ClientPartitioningStage` backed by `fsspec`.
-3. For remote datasets, you can optionally supply an explicit file list using `ClientPartitioningStage.input_list_json_path`.
-4. `VideoReaderStage` downloads bytes (local or via `FSPath`) and calls `video.populate_metadata()` to extract resolution, fps, duration, encoding format, and other fields.
-5. Set `video_limit` to cap discovery; use `None` for unlimited. Set `verbose=True` to log detailed per-video information.
+`VideoReader` is a composite stage that breaks down into:
+
+1. A partitioning stage that lists files:
+   - Local paths use `FilePartitioningStage` to list files.
+   - Remote URLs (for example, `s3://`, `gcs://`) use `ClientPartitioningStage` backed by `fsspec`.
+   - Optional `input_list_json_path` allows explicit file lists under a root prefix.
+
+2. A reader stage (`VideoReaderStage`) that:
+   - Downloads the bytes (local or via `FSPath`) for each listed file.
+   - Calls `video.populate_metadata()` to extract resolution, fps, duration, encoding format, and other fields.
+
+You can set:
+
+- `video_limit` to limit the number of files processed; use `None` for unlimited.
+- `verbose=True` to log detailed per-video information.

 ---
@@ -32,24 +41,6 @@ NeMo Curator loads videos with a composite stage that discovers files and extrac
 Use `VideoReader` to load videos from local paths or remote URLs.

-### Local Paths
-
-- Examples: `/data/videos/`, `/mnt/datasets/av/`
-- Uses `FilePartitioningStage` to recursively discover files.
-- Filters by extensions: `.mp4`, `.mov`, `.avi`, `.mkv`, `.webm`.
-- Set `video_limit` to cap discovery during testing (`None` means unlimited).
-
-### Remote Paths
-
-- Examples: `s3://bucket/path/`, `gcs://bucket/path/`, `https://host/path/`, and other fsspec-supported protocols such as `s3a://` and `abfs://`.
-- Uses `ClientPartitioningStage` backed by `fsspec` to list files.
-- Optional `input_list_json_path` allows explicit file lists under a root prefix.
-- Wraps entries as `FSPath` for efficient byte access during reading.
-
-```{tip}
-Use an object storage prefix (for example, `s3://my-bucket/videos/`) to stream from cloud storage. Configure credentials in your environment or client configuration.