"<a href=\"https://colab.research.google.com/github/datastax/ragstack-ai/blob/main/quickstart.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
8
+
]
9
+
},
10
+
{
11
+
"cell_type": "markdown",
12
+
"metadata": {},
13
+
"source": [
14
+
"# Quickstart with RAGStack\n",
15
+
"\n",
16
+
"This notebook demonstrates how to set up a simple RAG pipeline with RAGStack. At the end of this notebook, you will have a fully functioning Question/Answer model that can answer questions using your supplied documents. \n",
17
+
"\n",
18
+
"A RAG pipeline requires, at minimum, a vector store, an embedding model, and an LLM. In this tutorial, you will use an Astra DB vector store, an OpenAI embedding model, an OpenAI LLM, and LangChain to orchestrate it all together."
19
+
]
20
+
},
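  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before diving in, here is a rough sketch of how those pieces fit together with LangChain. It is illustrative only: it assumes the `langchain-openai` and `langchain-astradb` packages, placeholder environment variables (`ASTRA_DB_API_ENDPOINT`, `ASTRA_DB_APPLICATION_TOKEN`, `OPENAI_API_KEY`), and a hypothetical collection name. The notebook builds the actual pipeline step by step."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch only; the notebook builds the actual pipeline step by step below.\n",
    "# Assumes the langchain-openai and langchain-astradb packages are installed and that\n",
    "# ASTRA_DB_API_ENDPOINT, ASTRA_DB_APPLICATION_TOKEN, and OPENAI_API_KEY are set.\n",
    "import os\n",
    "\n",
    "from langchain_astradb import AstraDBVectorStore\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_core.runnables import RunnablePassthrough\n",
    "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
    "\n",
    "# Vector store backed by Astra DB, using an OpenAI embedding model.\n",
    "vstore = AstraDBVectorStore(\n",
    "    embedding=OpenAIEmbeddings(),\n",
    "    collection_name=\"quickstart_preview\",  # hypothetical collection name\n",
    "    api_endpoint=os.environ[\"ASTRA_DB_API_ENDPOINT\"],\n",
    "    token=os.environ[\"ASTRA_DB_APPLICATION_TOKEN\"],\n",
    ")\n",
    "\n",
    "# Prompt that grounds the OpenAI LLM's answer in the retrieved context.\n",
    "prompt = ChatPromptTemplate.from_template(\n",
    "    \"Answer the question using only this context: {context} Question: {question}\"\n",
    ")\n",
    "\n",
    "# Retrieval + generation chain, orchestrated with LangChain.\n",
    "chain = (\n",
    "    {\"context\": vstore.as_retriever(), \"question\": RunnablePassthrough()}\n",
    "    | prompt\n",
    "    | ChatOpenAI()\n",
    "    | StrOutputParser()\n",
    ")"
   ]
  },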
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "\n",
    "You will need a vector-enabled Astra database and an OpenAI account.\n",
    "\n",
    "* Create an [Astra vector database](https://docs.datastax.com/en/astra-serverless/docs/getting-started/create-db-choices.html).\n",
    "* Create an [OpenAI account](https://openai.com/).\n",
    "* Within your database, create an [Astra DB Access Token](https://docs.datastax.com/en/astra-serverless/docs/manage/org/manage-tokens.html) with Database Administrator permissions.\n",
"chain.invoke(\"In the given context, what subject are philosophers most concerned with?\")"
267
+
]
268
+
},
269
+
{
270
+
"cell_type": "code",
271
+
"execution_count": null,
272
+
"metadata": {},
273
+
"outputs": [],
274
+
"source": [
275
+
"# Add your questions here!\n",
276
+
"# chain.invoke(\"<your question>\")"
277
+
]
278
+
},
279
+
{
280
+
"cell_type": "markdown",
281
+
"metadata": {},
282
+
"source": [
283
+
"You now have a fully functioning RAG pipeline! Note that there are several different ways to accomplish this, depending on your input data format, vector store, embedding, model, output type, and more. There are also more advanced RAG techniques that leverage new ingestion, retrieval, and generation patterns. \n",
284
+
"\n",
285
+
"RAG is a powerful solution used in tandem with the capabilities of LLMs. Check out our other examples for ideas on how you can build innovative solutions using RAGStack!"