content/learning-paths/laptops-and-desktops/dgx_spark_rag/1_rag.md (+7 −8)

@@ -8,25 +8,24 @@ layout: learningpathall
## Before you start
-Complete the [Unlock quantized LLM performance on Arm-based NVIDIA DGX Spark](/learning-paths/laptops-and-desktops/dgx_spark_llamacpp/) Learning Path first to understand how to build and run llama.cpp on both the CPU and GPU. This foundational knowledge is essential before you begin building the RAG solution described here.
+Before starting this Learning Path, you should complete [Unlock quantized LLM performance on Arm-based NVIDIA DGX Spark](/learning-paths/laptops-and-desktops/dgx_spark_llamacpp/) to learn about the CPU and GPU builds of llama.cpp. This background is recommended for building the RAG solution on llama.cpp.
-{{% notice Note %}}
-The NVIDIA DGX Spark is also called the Grace–Blackwell platform or GB10, which refers to the NVIDIA Grace–Blackwell Superchip.
-{{% /notice %}}
+The NVIDIA DGX Spark is also referred to as the Grace-Blackwell platform or GB10, the name of the NVIDIA Grace-Blackwell Superchip.
## What is RAG?
-Retrieval-Augmented Generation (RAG) combines information retrieval with language-model generation. Instead of relying solely on pre-trained weights, a RAG system retrieves relevant text from a document corpus and passes it to a language model to create factual, context-aware responses.
+Retrieval-Augmented Generation (RAG) combines information retrieval with language-model generation.
+
+Instead of relying solely on pre-trained weights, a RAG system retrieves relevant text from a document corpus and passes it to a language model to create factual, context-aware responses.
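The retrieve-then-generate flow described in this hunk can be sketched with a toy keyword-overlap retriever. Everything here is illustrative and not part of the Learning Path: the corpus, the helper names, and the bag-of-words "embedding" (a real pipeline would use a sentence-embedding model and a GPU-backed generator).

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real RAG system would use a
    # sentence-embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Retrieved text is prepended as context, so the model answers
    # from the documents rather than from its weights alone.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical two-document corpus for demonstration.
corpus = [
    "The Grace CPU handles document retrieval and orchestration.",
    "llama.cpp exposes a REST server for GPU-accelerated inference.",
]
print(build_prompt("Which component handles retrieval?", corpus))
```

The resulting prompt string is what a RAG pipeline would send to the language model for generation.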
content/learning-paths/laptops-and-desktops/dgx_spark_rag/_index.md (+5 −5)

@@ -1,14 +1,14 @@
---
-title: Build a RAG pipeline on NVIDIA DGX Spark
+title: Build a RAG pipeline on Arm-based NVIDIA DGX Spark
minutes_to_complete: 60
who_is_this_for: This is an advanced topic for developers who want to understand and implement a Retrieval-Augmented Generation (RAG) pipeline on the NVIDIA DGX Spark platform. It is ideal for those interested in exploring how Arm-based Grace CPUs manage local document retrieval and orchestration, while Blackwell GPUs accelerate large language model inference through the open-source llama.cpp REST server.
learning_objectives:
-- Describe how a RAG system combines document retrieval and language model generation.
-- Deploy a hybrid CPU–GPU RAG pipeline on the GB10 platform using open-source tools.
-- Use the llama.cpp REST Server for GPU-accelerated inference with CPU-managed retrieval.
-- Build a reproducible RAG application that demonstrates efficient hybrid computing.
+- Describe how a RAG system combines document retrieval and language model generation
+- Deploy a hybrid CPU-GPU RAG pipeline on the GB10 platform using open-source tools
+- Use the llama.cpp REST Server for GPU-accelerated inference with CPU-managed retrieval
+- Build a reproducible RAG application that demonstrates efficient hybrid computing
prerequisites:
- An NVIDIA DGX Spark system with at least 15 GB of available disk space.
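The learning objectives above mention driving GPU inference through the llama.cpp REST server. As a rough sketch of what that looks like from the CPU side, the server's `/completion` endpoint accepts a JSON body with `prompt` and `n_predict`; the `localhost:8080` address below is the tool's default and an assumption here, and a `llama-server` instance must already be running for `complete()` to succeed.

```python
import json
import urllib.request

def make_completion_request(prompt: str, n_predict: int = 128) -> urllib.request.Request:
    # llama.cpp's server exposes a /completion endpoint that takes a
    # JSON body with the prompt and generation parameters.
    payload = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()
    return urllib.request.Request(
        "http://localhost:8080/completion",  # assumed default llama-server address
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def complete(prompt: str) -> str:
    # Blocks until the GPU-backed server returns the generated text.
    with urllib.request.urlopen(make_completion_request(prompt)) as resp:
        return json.loads(resp.read())["content"]
```

In the hybrid pipeline, the Grace CPU would build the retrieval-augmented prompt and call `complete()`, while the Blackwell GPU performs the actual token generation behind the REST interface.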