---
title: VectorShift
---

[VectorShift](https://vectorshift.ai/) is an integrated framework of no-code, low-code, and out-of-the-box generative AI solutions
for building AI search engines, assistants, chatbots, and automations.

VectorShift's platform allows you to design, prototype, build, deploy,
and manage generative AI workflows and automations across two interfaces: no-code and code SDK.
This hands-on demonstration uses the no-code interface to walk you through creating a VectorShift pipeline project. The project
uses GPT-4o-mini to chat in real time with a PDF document that Unstructured processes and stores in a
[Pinecone](https://www.pinecone.io/) vector database.
This video provides a general introduction to VectorShift pipeline projects:

<iframe
width="560"
height="315"
src="https://www.youtube.com/embed/_ToXPwOW2bY"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>

## Prerequisites

<iframe
width="560"
height="315"
src="https://www.youtube.com/embed/Li0yhaeguYQ"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>

import PineconeShared from '/snippets/general-shared-text/pinecone.mdx';

<PineconeShared />

Also:

- [Sign up for an OpenAI account](https://platform.openai.com/signup), and [get your OpenAI API key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key).
- [Sign up for a VectorShift Starter account](https://app.vectorshift.ai/api/signup).
- [Sign up for an Unstructured Platform account through the For Developers page](/platform/quickstart).

## Create and run the demonstration project

<Steps>
  <Step title="Get source data into Pinecone">
    Although you can use any [supported file type](/platform/supported-file-types) or data in any
    [supported source type](/platform/sources/overview) for the input into Pinecone, this demonstration uses [the text of the United States Constitution in PDF format](https://constitutioncenter.org/media/files/constitution.pdf).

    1. Sign in to your Unstructured Platform account.
    2. [Create a source connector](/platform/sources/overview), if you do not already have one, to connect Unstructured to the source location where the PDF file is stored.
    3. [Create a Pinecone destination connector](/platform/destinations/pinecone), if you do not already have one, to connect Unstructured to your Pinecone serverless index.
    4. [Create a workflow](/platform/workflows#create-a-workflow) that references this source connector and destination connector.
    5. [Run the workflow](/platform/workflows#edit-delete-or-run-a-workflow).
  </Step>
  <Step title="Create the VectorShift project">
    1. Sign in to your VectorShift account dashboard.
    2. On the sidebar, click **Pipelines**.
    3. Click **New**.
    4. Click **Create Pipeline from Scratch**.

       

  </Step>
  <Step title="Add the Input node">
    In this step, you add a node to the pipeline. This node takes user-supplied chat messages and sends them as input to Pinecone, and as input to a text-based LLM, for contextual searching.

    In the top pipeline node chooser bar, on the **General** tab, click **Input**.

    

  </Step>
  <Step title="Add the Pinecone node">
    In this step, you add a node that connects to the Pinecone serverless index.

    1. In the top pipeline node chooser bar, on the **Integrations** tab, click **Pinecone**.
    2. In the **Pinecone** node, for **Embedding Model**, select **openai/text-embedding-3-large**.
    3. Click **Connected Account**.
    4. In the **Select Pinecone Account** dialog, click **Connect New**.
    5. Enter the **API Key** and **Region** for your Pinecone serverless index, and then click **Save**.
    6. For **Index**, select the name of your Pinecone serverless index.
    7. Connect the **input_1** output from the **Input** node to the **query** input in the **Pinecone** node.

       To make the connection, click and hold inside the circle next to **input_1** in the **Input** node,
       drag the pointer into the circle next to **query** in the **Pinecone** node, and then release.
       A line appears between the two circles.

       
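       Conceptually, the **Pinecone** node embeds the incoming query and returns the stored document chunks most similar to it. The following sketch mimics that behavior offline with a toy bag-of-letters embedding and an in-memory cosine-similarity search; in the real pipeline, embeddings come from **openai/text-embedding-3-large** and the search runs against your Pinecone serverless index, so `toy_embed` and the sample chunks here are purely illustrative assumptions:

       ```python
       import math

       def toy_embed(text: str) -> list[float]:
           # Stand-in for a real embedding model: a crude letter-frequency vector.
           vec = [0.0] * 26
           for ch in text.lower():
               if ch.isalpha():
                   vec[ord(ch) - ord("a")] += 1.0
           return vec

       def cosine(a: list[float], b: list[float]) -> float:
           dot = sum(x * y for x, y in zip(a, b))
           na = math.sqrt(sum(x * x for x in a))
           nb = math.sqrt(sum(x * x for x in b))
           return dot / (na * nb) if na and nb else 0.0

       # Sample chunks standing in for what Unstructured wrote to the index.
       chunks = [
           "No person shall be deprived of life, liberty, or property without due process of law.",
           "Congress shall make no law respecting an establishment of religion.",
       ]
       index = [(chunk, toy_embed(chunk)) for chunk in chunks]

       def query(question: str, top_k: int = 1) -> list[str]:
           # Embed the question, then rank stored chunks by similarity.
           q = toy_embed(question)
           ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
           return [chunk for chunk, _ in ranked[:top_k]]

       context = query("What does due process guarantee?")
       ```

       The retrieved `context` is what flows out of the node's **output** and into the LLM prompt in the next step.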

  </Step>
  <Step title="Add the OpenAI LLM node">
    In this step, you add a node that builds a prompt and then sends it to a text-based LLM.

    1. In the top pipeline node chooser bar, on the **LLMs** tab, click **OpenAI**.
    2. In the **OpenAI LLM** node, for **System**, enter the following text:

       ```
       Answer the Question based on Context. Use Memory when relevant.
       ```

    3. For **Prompt**, enter the following text:

       ```
       Question: {{Question}}
       Context: {{Context}}
       Memory: {{Memory}}
       ```

    4. For **Model**, select **gpt-4o-mini**.
    5. Check the box titled **Use Personal API Key**.
    6. For **API Key**, enter your OpenAI API key.
    7. Connect the **input_1** output from the **Input** node to the **Question** input in the **OpenAI LLM** node.
    8. Connect the **output** output from the **Pinecone** node to the **Context** input in the **OpenAI LLM** node.

       
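       At run time, each `{{...}}` placeholder in the **Prompt** field is filled in from the node input of the same name. A minimal Python sketch of that interpolation, with illustrative placeholder values (this is a conceptual model, not VectorShift's actual templating code):

       ```python
       PROMPT_TEMPLATE = """Question: {{Question}}
       Context: {{Context}}
       Memory: {{Memory}}"""

       def render(template: str, **values: str) -> str:
           # Replace each {{Name}} placeholder with its connected input's value.
           for name, value in values.items():
               template = template.replace("{{" + name + "}}", value)
           return template

       prompt = render(
           PROMPT_TEMPLATE,
           Question="What rights does the Fifth Amendment guarantee?",
           Context="No person shall be deprived of life, liberty, or property...",
           Memory="(no prior messages)",
       )
       ```

       The rendered `prompt`, together with the **System** text, is what the node sends to gpt-4o-mini.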

  </Step>
  <Step title="Add the Chat Memory node">
    In this step, you add a node that stores chat history for the session.

    1. In the top pipeline node chooser bar, on the **Chat** tab, click **Chat Memory**.
    2. Connect the output from the **Chat Memory** node to the **Memory** input in the **OpenAI LLM** node.

       
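       The **Chat Memory** node makes earlier turns of the conversation available to the LLM, so follow-up questions can refer back to prior answers. A simple sketch of that idea, keeping a rolling window of recent turns (the window size and formatting here are assumptions for illustration, not VectorShift's actual implementation):

       ```python
       class ChatMemory:
           """Keep the last `max_turns` user/assistant exchanges as LLM context."""

           def __init__(self, max_turns: int = 5):
               self.max_turns = max_turns
               self.turns: list[tuple[str, str]] = []

           def add(self, user_msg: str, assistant_msg: str) -> None:
               # Append the newest exchange and drop the oldest beyond the window.
               self.turns.append((user_msg, assistant_msg))
               self.turns = self.turns[-self.max_turns:]

           def render(self) -> str:
               # Format the retained turns for the {{Memory}} prompt slot.
               return "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.turns)

       memory = ChatMemory(max_turns=2)
       memory.add("What is the Fifth Amendment?", "It guarantees due process...")
       memory.add("Which article covers Congress?", "Article I.")
       memory.add("Who ratifies treaties?", "The Senate.")
       # Only the two most recent turns remain in the window.
       ```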

  </Step>
  <Step title="Add the Output node">
    In this step, you add a node that displays the chat output.

    1. In the top pipeline node chooser bar, on the **General** tab, click **Output**.
    2. Connect the **response** output from the **OpenAI LLM** node to the input in the **Output** node.

       

  </Step>
  <Step title="Run the project">
    1. In the upper corner of the pipeline designer, click the play (**Run Pipeline**) button.

       

    2. In the chat pane, on the **Chatbot** tab, enter a question into the **Message Assistant** box, for example, `What rights does the Fifth Amendment guarantee?` Then press the send button.

       

    3. Wait until the answer appears.
    4. Ask as many additional questions as you want.
  </Step>
</Steps>

## Learn more

See the [VectorShift documentation](https://docs.vectorshift.ai/).