### content/learning-paths/laptops-and-desktops/dgx_spark_rag/1_rag.md (8 additions, 7 deletions)
@@ -8,24 +8,25 @@ layout: learningpathall
## Before you start
- Before starting this Learning Path, you should complete [Unlock quantized LLM performance on Arm-based NVIDIA DGX Spark](/learning-paths/laptops-and-desktops/dgx_spark_llamacpp/) to learn about the CPU and GPU builds of llama.cpp. This background is recommended for building the RAG solution on llama.cpp.
+ Complete the [Unlock quantized LLM performance on Arm-based NVIDIA DGX Spark](/learning-paths/laptops-and-desktops/dgx_spark_llamacpp/) Learning Path first to understand how to build and run llama.cpp on both the CPU and GPU. This foundational knowledge is essential before you begin building the RAG solution described here.
- The NVIDIA DGX Spark is also referred to as the Grace-Blackwell platform or GB10, the name of the NVIDIA Grace-Blackwell Superchip.
+ {{% notice Note %}}
+ The NVIDIA DGX Spark is also called the Grace–Blackwell platform or GB10, which refers to the NVIDIA Grace–Blackwell Superchip.
+ {{% /notice %}}
## What is RAG?
- Retrieval-Augmented Generation (RAG) combines information retrieval with language-model generation.
- Instead of relying solely on pre-trained weights, a RAG system retrieves relevant text from a document corpus and passes it to a language model to create factual, context-aware responses.
+ Retrieval-Augmented Generation (RAG) combines information retrieval with language-model generation. Instead of relying solely on pre-trained weights, a RAG system retrieves relevant text from a document corpus and passes it to a language model to create factual, context-aware responses.
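The retrieve-then-generate flow described in that paragraph can be sketched in a few lines. This is a minimal illustration only: the keyword-overlap scoring, the sample corpus, and the prompt template below are assumptions for demonstration, not the llama.cpp-based pipeline the Learning Path builds.

```python
# Minimal retrieve-then-generate sketch (illustrative, not the
# Learning Path's actual pipeline; a real system would use vector
# embeddings and send the prompt to a model server).
def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query, passages):
    """Pass the retrieved text to the model as grounding context."""
    context = "\n".join(passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical three-document corpus for the demonstration.
corpus = [
    "GB10 pairs a Grace CPU with a Blackwell GPU.",
    "llama.cpp provides a REST server for inference.",
    "Unified memory lets CPU and GPU share one address space.",
]
query = "What does GB10 pair?"
prompt = build_prompt(query, retrieve(query, corpus))
```

The prompt produced this way grounds the model's answer in retrieved text rather than in its pre-trained weights alone, which is the core RAG idea.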
### content/learning-paths/laptops-and-desktops/dgx_spark_rag/4_rag_memory_observation.md (6 additions, 2 deletions)
@@ -6,7 +6,10 @@ layout: "learningpathall"
## Observe unified memory performance
- In this section, you will observe how the Grace CPU and Blackwell GPU share data through unified memory during RAG execution.
+ In this section, you will learn how to monitor unified memory performance and GPU utilization on Grace–Blackwell systems during Retrieval-Augmented Generation (RAG) AI workloads. By observing real-time system memory and GPU activity, you will verify zero-copy data sharing and efficient hybrid AI inference enabled by the Grace–Blackwell unified memory architecture.
+
+
+ You will start from an idle system state, then progressively launch the RAG model server and run a query, while monitoring both system memory and GPU activity from separate terminals. This hands-on experiment demonstrates how unified memory enables both the Grace CPU and Blackwell GPU to access the same memory space without data movement, optimizing AI inference performance.
You will start from an idle system state, then progressively launch the model server and run a query, while monitoring both system memory and GPU activity from separate terminals.
@@ -21,7 +24,8 @@ Open two terminals on your GB10 system and use them as listed in the table below
You should also have your original terminals open that you used to run the `llama-server` and the RAG queries in the previous section. You will run these again and use the two new terminals for observation.
- ### Prepare for the experiments
+
+ ### Prepare for Unified Memory Observation Experiments
Ensure the RAG pipeline is stopped before starting the observation.
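One way to follow the memory observation this file describes is to sample system memory while the server starts and serves a query. The sketch below is an assumption-laden illustration, not the tooling the Learning Path prescribes: it parses Linux `/proc/meminfo` (standard kernel fields) and computes used memory the way `free` does.

```python
# Illustrative helper for watching system memory during the RAG
# experiment: parse /proc/meminfo text and report used memory in kB.
def parse_meminfo(text):
    """Map each /proc/meminfo field name to its size in kB."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            info[key.strip()] = int(parts[0])
    return info

def used_kb(info):
    """Used memory as `free` reports it: MemTotal minus MemAvailable."""
    return info["MemTotal"] - info["MemAvailable"]

# On a live GB10 you would read the real file in a loop, e.g.:
#   info = parse_meminfo(open("/proc/meminfo").read())
# The sample below stands in for that output (values are invented).
sample = "MemTotal: 125000000 kB\nMemAvailable: 120000000 kB\nCached: 4096 kB"
info = parse_meminfo(sample)
```

Sampling this value before launching `llama-server`, after model load, and during a query makes the unified-memory footprint of each phase visible from a second terminal.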
### content/learning-paths/laptops-and-desktops/dgx_spark_rag/_index.md (1 addition, 6 deletions)
@@ -1,16 +1,11 @@
---
title: Build a RAG pipeline on NVIDIA DGX Spark
-
- draft: true
- cascade:
-   draft: true
-
minutes_to_complete: 60
who_is_this_for: This is an advanced topic for developers who want to understand and implement a Retrieval-Augmented Generation (RAG) pipeline on the NVIDIA DGX Spark platform. It is ideal for those interested in exploring how Arm-based Grace CPUs manage local document retrieval and orchestration, while Blackwell GPUs accelerate large language model inference through the open-source llama.cpp REST server.
learning_objectives:
- - Understand how a RAG system combines document retrieval and language model generation.
+ - Describe how a RAG system combines document retrieval and language model generation.
- Deploy a hybrid CPU–GPU RAG pipeline on the GB10 platform using open-source tools.
- Use the llama.cpp REST Server for GPU-accelerated inference with CPU-managed retrieval.
- Build a reproducible RAG application that demonstrates efficient hybrid computing.