RAG stands for "Retrieval Augmented Generation". It describes an AI framework that combines information retrieval with text generation to improve the quality and accuracy of AI-generated content.
The basic flow of a RAG system looks like this:
1. Retrieval: The system searches a knowledge base, usually using vector search, text search, or a combination of the two.
2. Augmentation: The retrieved information is provided to a generative AI model as additional context for the user's query.
3. Generation: The AI model uses both the retrieved knowledge and its internal understanding to generate a more useful response to the user.
The benefits of a RAG system center on improved factual accuracy of responses. RAG also lets a system use more up-to-date information, since adding knowledge to the knowledge base is far easier than retraining the model.
Most importantly, RAG lets you provide reference links to the user, showing the user where the system is getting its information.
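The three steps above can be sketched in Python. Everything here is a toy stand-in: a word-overlap "retriever" instead of a real vector search, a hypothetical `build_prompt` helper instead of a real model call, and made-up document text.

```python
# Toy sketch of the retrieve -> augment -> generate flow.
# `docs`, `retrieve`, and `build_prompt` are illustrative stand-ins,
# not part of any real library.

docs = {
    "kleidiai": "KleidiAI is a library of optimized micro-kernels for Arm CPUs.",
    "neon": "Neon is an advanced SIMD architecture extension for Arm processors.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Step 1 (retrieval): rank documents by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs.values(), key=lambda d: -len(q & set(d.lower().split())))
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Step 2 (augmentation): prepend retrieved context to the user's question."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using this context:\n{ctx}\n\nQuestion: {query}"

question = "What is KleidiAI?"
prompt = build_prompt(question, retrieve(question))
print(prompt)  # step 3 would send this prompt to the generative model
```

In a real system, `retrieve` would query a vector database and the assembled prompt would be sent to an LLM, but the overall shape of the flow is the same.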
A vector database is a specialized database designed to store and query vector representations of data. They are a crucial component of many AI applications. But what exactly are they, and how do they work?
Traditional databases store data in tables or objects with defined attributes. However, they struggle to recognize similarities between data points that aren't explicitly defined.
Vector databases, on the other hand, are designed to store large numbers of vectors (arrays of numbers) and provide algorithms to search through them efficiently. That makes it much easier to identify similarities by comparing vector locations in N-dimensional space, typically using distance metrics like cosine similarity or Euclidean distance.
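As a small illustration of those two metrics (using NumPy; the vectors here are toy values, not real embeddings):

```python
import numpy as np

# Toy 3-dimensional "embeddings" to demonstrate the metrics.
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])   # same direction as a, larger magnitude
c = np.array([-3.0, 0.0, 1.0])  # orthogonal to a

def cosine_similarity(u, v):
    """Cosine of the angle between u and v: 1 = same direction, 0 = unrelated."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def euclidean_distance(u, v):
    """Straight-line distance between the two points."""
    return float(np.linalg.norm(u - v))

print(cosine_similarity(a, b))   # ~1.0: same direction despite different magnitude
print(cosine_similarity(a, c))   # ~0.0: dissimilar
print(euclidean_distance(a, b))
```

Note that cosine similarity ignores magnitude (it treats `a` and `b` as identical), while Euclidean distance does not; which metric fits best depends on how the embeddings were trained.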
How can we convert complex ideas, like the semantic meaning of a series of words, into numeric vectors? We do so using a process called embedding.
### Embeddings
Embeddings are vectors generated by an AI model, which converts a collection of "tokens" (word fragments) into a point in N-dimensional space.
Then, for any given vector (like the embedding of a question asked by a user), we can query our vector database to find the embedded data that is most similar.
For our use case, we want to know which Arm learning path is most relevant to a question a user asks.
First, ahead of time, we have to convert the raw data (Arm learning path content) into more consumable "chunks", in our case, small `yaml` files. Then we run those chunks through our embedding model and store the resulting vectors in our FAISS vector database.
### FAISS
FAISS (Facebook AI Similarity Search) is a library developed by Facebook AI Research for efficiently searching for similar vectors in large datasets. FAISS is highly optimized for both memory usage and speed, making it one of the fastest similarity search libraries available.
One of the key reasons FAISS is so fast is its implementation of efficient Approximate Nearest Neighbor (ANN) search algorithms. ANN algorithms allow FAISS to quickly find vectors that are close to a given query vector without having to compare it to every single vector in the database. This significantly reduces the search time, especially in large datasets.
Additionally, FAISS performs all searches in-memory, which means that it can leverage the full speed of the system's RAM. This in-memory search capability ensures that the search operations are extremely fast, as they avoid the latency associated with disk I/O operations.
In our application, we take the input from the user and embed it using the same model we used for our database. We then use FAISS nearest-neighbor search to find the closest vectors in the database and look up the original chunk files for those vectors. Using the data from those `chunk.yaml` files, we can retrieve the Arm resource(s) most relevant to the user's question.
The retrieved resources are then used to augment the context for the LLM, which generates a final response that is both contextually relevant and contains accurate information.
### In-Memory Deployment
To ensure that our application scales efficiently, we will copy the FAISS database into every deployment instance. By deploying a static in-memory vector store in each instance, we eliminate the need for a centralized database, which can become a bottleneck as the number of requests increases.
When each instance has its own copy of the FAISS database, it can perform vector searches locally, leveraging the full speed of the system's RAM. This approach ensures that the search operations are extremely fast and reduces the latency associated with network calls to a centralized database.
Moreover, this method enhances the reliability and fault tolerance of our application. If one instance fails, others can continue to operate independently without being affected by the failure. This decentralized approach also simplifies the deployment process, as each instance is self-contained and does not rely on external resources for vector searches.
By copying the FAISS database into every deployment, we achieve a scalable, high-performance solution that can handle a large number of requests efficiently.
## Collecting Data into Chunks
Arm has provided a [companion GitHub repo](https://github.com/ArmDeveloperEcosystem/python-rag-extension/) for this Learning Path that serves as a Python-based Copilot RAG Extension example. In this repo, we have provided scripts to convert an Arm learning path into a series of `chunk.yaml` files for use in our RAG application.
Navigate to the `vectorstore` folder in the [python-rag-extension GitHub repo](https://github.com/ArmDeveloperEcosystem/python-rag-extension/) you just cloned.
```bash
cd python-rag-extension/vectorstore
```
It is recommended to use a virtual environment to manage dependencies.
Ensure you have `conda` set up in your development environment. If you aren't sure how, you can follow this [Installation Guide](https://docs.anaconda.com/miniconda/install/).
To create a new conda environment, use the following command:
```sh
conda create --name vectorstore python=3.11
```
Once setup is complete, activate the new environment:
Replace `<LEARNING_PATH_URL>` with the URL of the learning path you want to process. If no URL is provided, the script will default to a [known learning path URL](https://learn.arm.com/learning-paths/cross-platform/kleidiai-explainer).
The script will process the specified learning path and save the chunks as YAML files in a `./chunks/` directory.
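For illustration, a generated chunk file might look something like the following. The exact field names are determined by the repo's scripts; the ones shown here are hypothetical:

```yaml
# Hypothetical chunk.yaml structure -- field names are illustrative only.
title: "KleidiAI Explainer"
source_url: "https://learn.arm.com/learning-paths/cross-platform/kleidiai-explainer"
content: >
  A small, self-contained excerpt of the learning path text that will be
  embedded and stored in the vector database.
```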
## Combine Chunks into a FAISS Index
Once you have a `./chunks/` directory full of YAML files, the next step is to use FAISS to create the vector database.
### OpenAI Key and Endpoint
Ensure your local environment has your `AZURE_OPENAI_KEY` and `AZURE_OPENAI_ENDPOINT` set.
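Assuming a bash-compatible shell, the variables can be set like this. The values below are placeholders; substitute the key and endpoint from your own Azure resource:

```bash
# Replace the placeholder values with the key and endpoint shown on the
# "Keys and Endpoint" page of your Azure OpenAI resource.
export AZURE_OPENAI_KEY="<your-azure-openai-key>"
export AZURE_OPENAI_ENDPOINT="https://<your-resource-name>.openai.azure.com/"
```

Add these lines to your shell profile (for example `~/.bashrc`) if you want them set in every session.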
#### If needed, generate Azure OpenAI keys and deployment
1. **Create an OpenAI Resource**:
- Go to the [Azure Portal](https://portal.azure.com/).
- Click on "Create a resource".
- Search for "OpenAI" and select "Azure OpenAI Service".
- Click "Create".
1. **Configure the OpenAI Resource**:
- Fill in the required details such as Subscription, Resource Group, Region, and Name.
- Click "Review + create" and then "Create" to deploy the resource.
1. **Generate API Key and Endpoint**:
- Once the resource is created, navigate to the resource page.
- Under the "Resource Management->Keys and Endpoint" section, you will find the key and endpoint values.
- Copy these values and set them in your local environment.
Now we need to create a Copilot extension on GitHub to connect to our deployed application.
## Create a GitHub app
> For the most up to date instructions, follow the [official documentation for creating a GitHub App for Copilot Extension](https://docs.github.com/en/copilot/building-copilot-extensions/creating-a-copilot-extension/creating-a-github-app-for-your-copilot-extension#creating-a-github-app).
On any page of [GitHub](https://github.com/), click your profile picture and go to **Settings**. Scroll down to **Developer settings** and go to [create a GitHub App](https://github.com/settings/apps).
Scroll to the bottom and click "Create GitHub App".
## Get Client ID and Secret
After you create your app, open it. You will see your Client ID listed under General -> About.

Below that, under **Client secrets**, click "Generate a new client secret" and save the value. Make sure you copy it before it disappears; you will need it in the next step as part of the Flask application.
## Install Application
Click **Install App** in the sidebar, then install your app onto your account.