Changed file: content/learning-paths/servers-and-cloud-computing/milvus-rag/_index.md (6 additions, 10 deletions)
@@ -1,21 +1,17 @@
 ---
-title: Build a Retrieval-Augmented Generation (RAG) application using Zilliz Cloud on Arm servers
-
-draft: true
-cascade:
-    draft: true
+title: Build a RAG application using Zilliz Cloud on Arm servers

 minutes_to_complete: 20

-who_is_this_for: This is an introductory topic for software developers who want to create a RAG application on Arm servers.
+who_is_this_for: This is an introductory topic for software developers who want to create a Retrieval-Augmented Generation (RAG) application on Arm servers.

 learning_objectives:
-    - Create a simple RAG application using Zilliz Cloud
-    - Launch a LLM service on Arm servers
+    - Create a simple RAG application using Zilliz Cloud.
+    - Launch an LLM service on Arm servers.

 prerequisites:
-    - Basic understanding of a RAG pipeline.
-    - An AWS Graviton3 c7g.2xlarge instance, or any [Armbased instance](/learning-paths/servers-and-cloud-computing/csp) from a cloud service provider or an on-premise Arm server.
+    - A basic understanding of a RAG pipeline.
+    - An AWS Graviton3 C7g.2xlarge instance, or any [Arm-based instance](/learning-paths/servers-and-cloud-computing/csp) from a cloud service provider or an on-premise Arm server.
     - A [Zilliz account](https://zilliz.com/cloud), which you can sign up for with a free trial.
Changed file: content/learning-paths/servers-and-cloud-computing/milvus-rag/launch_llm_service.md (15 additions, 15 deletions)
@@ -1,23 +1,23 @@
 ---
-title: Launch LLM Server
+title: Launch the LLM Server
 weight: 4

 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---

-In this section, you will build and run the `llama.cpp` server program using an OpenAI-compatible API on your running AWS Arm-based server instance.
+### Llama 3.1 Model and Llama.cpp

-### Llama 3.1 model & llama.cpp
+In this section, you will build and run the `llama.cpp` server program using an OpenAI-compatible API on your AWS Arm-based server instance.

 The [Llama-3.1-8B model](https://huggingface.co/cognitivecomputations/dolphin-2.9.4-llama3.1-8b-gguf) from Meta belongs to the Llama 3.1 model family and is free to use for research and commercial purposes. Before you use the model, visit the Llama [website](https://llama.meta.com/llama-downloads/) and fill in the form to request access.

-[llama.cpp](https://github.com/ggerganov/llama.cpp) is an opensource C/C++ project that enables efficient LLM inference on a variety of hardware - both locally, and in the cloud. You can conveniently host a Llama 3.1 model using `llama.cpp`.
+[Llama.cpp](https://github.com/ggerganov/llama.cpp) is an open-source C/C++ project that enables efficient LLM inference on a variety of hardware - both locally, and in the cloud. You can conveniently host a Llama 3.1 model using `llama.cpp`.

-### Download and build llama.cpp
+### Download and build Llama.cpp

-Run the following commands to install make, cmake, gcc, g++, and other essential tools required for building llama.cpp from source:
+Run the following commands to install make, cmake, gcc, g++, and other essential tools required for building Llama.cpp from source:
[...]
-By default, `llama.cpp` builds for CPU only on Linux and Windows. You don't need to provide any extra switches to build it for the Arm CPU that you run it on.
+By default, `llama.cpp` builds for CPU only on Linux and Windows. You do not need to provide any extra switches to build it for the Arm CPU that you run it on.

 Run `make` to build it:

@@ -64,23 +64,23 @@ You can now download the model using the huggingface cli:
[...]
-The GGUF model format, introduced by the llama.cpp team, uses compression and quantization to reduce weight precision to 4-bit integers, significantly decreasing computational and memory demands and making Arm CPUs effective for LLM inference.
+The GGUF model format, introduced by the Llama.cpp team, uses compression and quantization to reduce weight precision to 4-bit integers, significantly decreasing computational and memory demands and making Arm CPUs effective for LLM inference.
[...]
-This will output a new file, `dolphin-2.9.4-llama3.1-8b-Q4_0_8_8.gguf`, which contains reconfigured weights that allow `llama-cli` to use SVE 256 and MATMUL_INT8 support.
+This outputs a new file, `dolphin-2.9.4-llama3.1-8b-Q4_0_8_8.gguf`, which contains reconfigured weights that allow `llama-cli` to use SVE 256 and MATMUL_INT8 support.

 This requantization is optimal specifically for Graviton3. For Graviton2, the optimal requantization should be performed in the `Q4_0_4_4` format, and for Graviton4, the `Q4_0_4_8` format is the most suitable for requantization.

 ### Start the LLM Server
-You can utilize the `llama.cpp` server program and send requests via an OpenAI-compatible API. This allows you to develop applications that interact with the LLM multiple times without having to repeatedly start and stop it. Additionally, you can access the server from another machine where the LLM is hosted over the network.
+You can utilize the `llama.cpp` server program and send requests through an OpenAI-compatible API. This allows you to develop applications that interact with the LLM multiple times without having to repeatedly start and stop it. Additionally, you can access the server from another machine where the LLM is hosted over the network.

 Start the server from the command line, and it listens on port 8080:

@@ -91,10 +91,10 @@ Start the server from the command line, and it listens on port 8080:
 The output from this command should look like:

 ```output
-'main: server is listening on 127.0.0.1:8080 - starting the main loop
+main: server is listening on 127.0.0.1:8080 - starting the main loop
 ```

-You can also adjust the parameters of the launched LLM to adapt it to your server hardware to obtain ideal performance. For more parameter information, see the `llama-server --help` command.
+You can also adjust the parameters of the launched LLM to adapt it to your server hardware to achieve an ideal performance. For more parameter information, see the `llama-server --help` command.

 You have started the LLM service on your AWS Graviton instance with an Arm-based CPU. In the next section, you will directly interact with the service using the OpenAI SDK.
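Before moving on to the next file, it can help to see what a minimal client for this OpenAI-compatible endpoint looks like. The sketch below is not part of the Learning Path's own script: it assumes the `openai` Python package is installed, that `llama-server` is already listening on `127.0.0.1:8080` as shown in the output above, and that the placeholder model name and API key are ignored by the local server.

```python
from openai import OpenAI

# Point the OpenAI SDK at the local llama.cpp server instead of api.openai.com.
# llama-server does not validate the API key, so any placeholder string works.
client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="no-key-required")

# Send one chat completion request to confirm the server is responding.
response = client.chat.completions.create(
    model="local-llama",  # placeholder; llama-server answers with whichever GGUF model it loaded
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)

print(response.choices[0].message.content)
```

If the request hangs or is refused, check that `llama-server` is still running and that you are connecting from the same machine, or adjust the host and port to match your setup.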
Changed file: content/learning-paths/servers-and-cloud-computing/milvus-rag/offline_data_loading.md (21 additions, 20 deletions)
@@ -5,30 +5,31 @@ weight: 3
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
+## Create a dedicated cluster

-In this section, you will learn how to setup a cluster on Zilliz Cloud. You will then learn how to load your private knowledge database into the cluster.
+In this section, you will set up a cluster on Zilliz Cloud.

-### Create a dedicated cluster
+Begin by [registering](https://docs.zilliz.com/docs/register-with-zilliz-cloud) for a free account on Zilliz Cloud.

-You will need to [register](https://docs.zilliz.com/docs/register-with-zilliz-cloud) for a free account on Zilliz Cloud.
+After you register, [create a cluster](https://docs.zilliz.com/docs/create-cluster).

-After you register, [create a cluster](https://docs.zilliz.com/docs/create-cluster) on Zilliz Cloud. In this Learning Path, you will create a dedicated cluster deployed in AWS using Arm-based machines to store and retreive the vector data as shown:
+Now create a **Dedicated** cluster deployed in AWS using Arm-based machines to store and retrieve the vector data as shown:

 

-When you select the `Create Cluster` Button, you should see the cluster running in your Default Project.
+When you select the **Create Cluster** button, you should see the cluster running in your **Default Project**.

 

 {{% notice Note %}}
-You can use self-hosted Milvus as an alternative to Zilliz Cloud. This option is more complicated to set up. We can also deploy [Milvus Standalone](https://milvus.io/docs/install_standalone-docker-compose.md) and [Kubernetes](https://milvus.io/docs/install_cluster-milvusoperator.md) on Arm-based machines. For more information about Milvus installation, please refer to the [installation documentation](https://milvus.io/docs/install-overview.md).
+You can use self-hosted Milvus as an alternative to Zilliz Cloud. This option is more complicated to set up. You can also deploy [Milvus Standalone](https://milvus.io/docs/install_standalone-docker-compose.md) and [Kubernetes](https://milvus.io/docs/install_cluster-milvusoperator.md) on Arm-based machines. For more information about installing Milvus, see the [Milvus installation documentation](https://milvus.io/docs/install-overview.md).
 {{% /notice %}}

-###Create the Collection
+## Create the Collection

-With the dedicated cluster running in Zilliz Cloud, you are now ready to create a collection in your cluster.
+With the Dedicated cluster running in Zilliz Cloud, you are now ready to create a collection in your cluster.

-Within your activated python `venv`, start by creating a file named `zilliz-llm-rag.py` and copy the contents below into it:
+Within your activated Python virtual environment `venv`, start by creating a file named `zilliz-llm-rag.py`, and copy the contents below into it:

 ```python
 from pymilvus import MilvusClient
@@ -38,7 +39,7 @@ milvus_client = MilvusClient(
 )

 ```
-Replace <your_zilliz_public_endpoint> and <yourzilliz_api_key> with the `URI` and `Token` for your running cluster. Refer to [Public Endpoint and Api key](https://docs.zilliz.com/docs/on-zilliz-cloud-console#free-cluster-details) in Zilliz Cloud for more details.
+Replace *<your_zilliz_public_endpoint>* and *<your_zilliz_api_key>* with the `URI` and `Token` for your running cluster. Refer to [Public Endpoint and Api key](https://docs.zilliz.com/docs/on-zilliz-cloud-console#free-cluster-details) in Zilliz Cloud for further information.

 Now, append the following code to `zilliz-llm-rag.py` and save the contents:
[...]
-This code checks if a collection already exists and drops it if it does. You then, create a new collection with the specified parameters.
+This code checks if a collection already exists and drops it if it does. If this happens, you can create a new collection with the specified parameters.

-If you don't specify any field information, Milvus will automatically create a default `id` field for primary key, and a `vector` field to store the vector data. A reserved JSON field is used to store non-schema-defined fields and their values.
-You will use inner product distance as the default metric type. For more information about distance types, you can refer to [Similarity Metrics page](https://milvus.io/docs/metric.md?tab=floating)
+If you do not specify any field information, Milvus automatically creates a default `id` field for the primary key, and a `vector` field to store the vector data. A reserved JSON field is used to store non-schema-defined fields and their values.
+You can use inner product distance as the default metric type. For more information about distance types, you can refer to the [Similarity Metrics page](https://milvus.io/docs/metric.md?tab=floating).

 You can now prepare the data to use in this collection.

-###Prepare the data
+## Prepare the data

-In this example, you will use the FAQ pages from the [Milvus Documentation 2.4.x](https://github.com/milvus-io/milvus-docs/releases/download/v2.4.6-preview/milvus_docs_2.4.x_en.zip) as the private knowledge that is loaded in your RAG dataset/collection.
+In this example, you will use the FAQ pages from the [Milvus Documentation 2.4.x](https://github.com/milvus-io/milvus-docs/releases/download/v2.4.6-preview/milvus_docs_2.4.x_en.zip) as the private knowledge that is loaded in your RAG dataset.

 Download the zip file and extract documents to the folder `milvus_docs`.
[...]
-You will load all the markdown files from the folder `milvus_docs/en/faq` into your data collection. For each document, use "# " to separate the content in the file, which can roughly separate the content of each main part of the markdown file.
+Now load all the markdown files from the folder `milvus_docs/en/faq` into your data collection. For each document, use "# " to separate the content in the file. This divides the content of each main part of the markdown file.

 Open `zilliz-llm-rag.py` and append the following code to it:

@@ -91,9 +92,9 @@ for file_path in glob("milvus_docs/en/faq/*.md", recursive=True):
 ```

 ### Insert data
-You will now prepare a simple but efficient embedding model [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) that can convert the loaded text into embedding vectors.
+Now you can prepare a simple but efficient embedding model [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) that can convert the loaded text into embedding vectors.

-You will iterate through the text lines, create embeddings, and then insert the data into Milvus.
+You can iterate through the text lines, create embeddings, and then insert the data into Milvus.

 Append and save the code shown below into `zilliz-llm-rag.py`:

@@ -115,10 +116,10 @@
[...]
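The collection-creation, document-loading, and insertion code referenced above is collapsed in this diff view. As a rough, hypothetical sketch of the approach the prose describes (drop any existing collection, create one that uses the inner product (IP) metric, split the FAQ markdown files on "# " headings, embed the chunks with `all-MiniLM-L6-v2`, and insert them), something along these lines would work. The endpoint, token, and collection name are placeholders, not values taken from the Learning Path.

```python
from glob import glob

from pymilvus import MilvusClient
from sentence_transformers import SentenceTransformer

collection_name = "my_rag_collection"  # placeholder name

milvus_client = MilvusClient(
    uri="<your_zilliz_public_endpoint>",  # placeholder
    token="<your_zilliz_api_key>",        # placeholder
)

# Split each FAQ markdown file on "# " headings, as described above.
text_lines = []
for file_path in glob("milvus_docs/en/faq/*.md", recursive=True):
    with open(file_path) as f:
        text_lines += f.read().split("# ")

# Embed the chunks with a small sentence-transformers model.
embedding_model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = embedding_model.encode(text_lines)

# Recreate the collection; inner product (IP) is used as the similarity metric.
if milvus_client.has_collection(collection_name):
    milvus_client.drop_collection(collection_name)
milvus_client.create_collection(
    collection_name=collection_name,
    dimension=embedding_model.get_sentence_embedding_dimension(),  # 384 for this model
    metric_type="IP",
    consistency_level="Strong",
)

# Insert one row per chunk; Milvus stores extra fields in its reserved JSON field.
data = [
    {"id": i, "vector": embeddings[i].tolist(), "text": text_lines[i]}
    for i in range(len(text_lines))
]
milvus_client.insert(collection_name=collection_name, data=data)
```

Because the collection is created in quick-setup mode, Milvus manages the schema, so each inserted dictionary only needs the `id` and `vector` fields plus any extras such as `text`.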
Changed file: content/learning-paths/servers-and-cloud-computing/milvus-rag/online_rag.md (10 additions, 13 deletions)
@@ -5,14 +5,11 @@ weight: 5
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
+## Prepare the Embedding Model

-In this section, you will build the online RAG part of your application.
+In your Python script, generate a test embedding and print its dimension and the first few elements.

-### Prepare the embedding model
-
-In your python script, generate a test embedding and print its dimension and first few elements.
-
-For the LLM, you will use the OpenAI SDK to request the Llama service launched before. You don't need to use any API key because it is running locally on your machine.
+For the LLM, you will use the OpenAI SDK to request the Llama service that you launched previously. You do not need to use an API key because it is running locally on your machine.

 Append the code below to `zilliz-llm-rag.py`:

@@ -31,7 +28,7 @@ Run the script. The output should look like:

 ### Retrieve data for a query

-You will specify a frequent question about Milvus and then search for the question in the collection and retrieve the semantic top-3 matches.
+Now specify a common question about Milvus, and search for the question in the collection, in order to retrieve the top 3 semantic matches.

 Append the code shown below to `zilliz-llm-rag.py`:
[...]
-Run the script again and the output with the top 3 matches will look like:
+Run the script again, and the output with the top 3 matches should look like:

 ```output
 [
@@ -68,18 +65,18 @@ Run the script again and the output with the top 3 matches will look like:
         0.5974207520484924
     ],
     [
-        "What is the maximum dataset size Milvus can handle?\n\n \nTheoretically, the maximum dataset size Milvus can handle is determined by the hardware it is run on, specifically system memory and storage:\n\n- Milvus loads all specified collections and partitions into memory before running queries. Therefore, memory size determines the maximum amount of data Milvus can query.\n- When new entities and and collection-related schema (currently only MinIO is supported for data persistence) are added to Milvus, system storage determines the maximum allowable size of inserted data.\n\n###",
+        "What is the maximum dataset size Milvus can handle?\n\n \nTheoretically, the maximum dataset size Milvus can handle is determined by the hardware it is run on, specifically system memory and storage:\n\n- Milvus loads all specified collections and partitions into memory before running queries. Therefore, memory size determines the maximum amount of data Milvus can query.\n- When new entities and collection-related schema (currently only MinIO is supported for data persistence) are added to Milvus, system storage determines the maximum allowable size of inserted data.\n\n###",
         0.5833579301834106
     ]
 ]
 ```
-### Use LLM to get a RAG response
+### Use the LLM to obtain a RAG response

 You are now ready to use the LLM and obtain a RAG response.

-For the LLM, you will use the OpenAI SDK to request the Llama service you launched in the previous section. You don't need to use any API key because it is running locally on your machine.
+For the LLM, you will use the OpenAI SDK to request the Llama service you launched in the previous section. You do not need to use an API key because it is running locally on your machine.

-You will then convert the retrieved documents into a string format. Define system and user prompts for the Language Model. This prompt is assembled with the retrieved documents from Milvus. Finally use the LLM to generate a response based on the prompts.
+You will then convert the retrieved documents into a string format. Define system and user prompts for the Language Model. This prompt is assembled with the retrieved documents from Milvus. Finally, use the LLM to generate a response based on the prompts.
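To round out the picture, here is a hedged sketch of the retrieval-plus-generation step this file describes: embed the question, retrieve the top 3 semantic matches from the collection, and pass the retrieved context to the locally hosted Llama model through the OpenAI SDK. It reuses the `milvus_client`, `embedding_model`, `collection_name`, and `text_lines` names from the previous sketch, and the question text and prompt wording are illustrative only.

```python
import json

from openai import OpenAI

question = "How is data stored in Milvus?"

# Embed the question and retrieve the 3 most similar chunks (inner product metric).
search_res = milvus_client.search(
    collection_name=collection_name,
    data=[embedding_model.encode(question).tolist()],
    limit=3,
    search_params={"metric_type": "IP", "params": {}},
    output_fields=["text"],
)
retrieved = [(hit["entity"]["text"], hit["distance"]) for hit in search_res[0]]
print(json.dumps(retrieved, indent=4))

# Assemble the retrieved chunks into a context block for the prompt.
context = "\n".join(text for text, _ in retrieved)
system_prompt = "You are an assistant that answers questions using the provided context."
user_prompt = f"<context>\n{context}\n</context>\n<question>\n{question}\n</question>"

# Ask the local llama.cpp server for the final RAG answer.
llm_client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="no-key-required")
answer = llm_client.chat.completions.create(
    model="local-llama",  # placeholder; ignored by llama-server
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)
print(answer.choices[0].message.content)
```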