---
categories:
- docs
- develop
- stack
- oss
- rs
- rc
- oss
- kubernetes
- clients
description: Index and query embeddings with Redis vector sets
linkTitle: Vector set embeddings
title: Vector set embeddings
weight: 40
bannerText: Vector set is a new data type that is currently in preview and may be subject to change.
bannerChildren: true
---

A Redis [vector set]({{< relref "/develop/data-types/vector-sets" >}}) lets
you store a set of unique keys, each with its own associated vector.
You can then retrieve keys from the set according to the similarity between
their stored vectors and a query vector that you specify.

You can use vector sets to store any type of numeric vector, but they are
particularly optimized to work with text embedding vectors (see
[Redis for AI]({{< relref "/develop/ai" >}}) to learn more about text
embeddings). The example below shows how to generate vector embeddings and then
store and retrieve them using a vector set with `Lettuce`.

## Initialize

If you are using [Maven](https://maven.apache.org/), add the following
dependencies to your `pom.xml` file
(note that you need `Lettuce` v6.8.0 or later to use vector sets):

```xml
<dependency>
    <groupId>io.lettuce</groupId>
    <artifactId>lettuce-core</artifactId>
    <version>6.8.0.RELEASE</version>
</dependency>

<dependency>
    <groupId>ai.djl.huggingface</groupId>
    <artifactId>tokenizers</artifactId>
    <version>0.33.0</version>
</dependency>

<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-model-zoo</artifactId>
    <version>0.33.0</version>
</dependency>

<dependency>
    <groupId>ai.djl</groupId>
    <artifactId>api</artifactId>
    <version>0.33.0</version>
</dependency>
```

If you are using [Gradle](https://gradle.org/), add the following
dependencies to your `build.gradle` file:

```groovy
implementation 'io.lettuce:lettuce-core:6.8.0.RELEASE'
implementation 'ai.djl.huggingface:tokenizers:0.33.0'
implementation 'ai.djl.pytorch:pytorch-model-zoo:0.33.0'
implementation 'ai.djl:api:0.33.0'
```

In a new Java file, import the required classes:

{{< clients-example set="home_vecsets" step="import" lang_filter="Java-Async,Java-Reactive" >}}
{{< /clients-example >}}
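
As a rough guide, a minimal set of imports might look like the sketch below
(the exact list depends on whether you use the synchronous, asynchronous, or
reactive Lettuce API):

```java
// Deep Java Library (DJL) classes used to load the embedding model
// and generate embeddings from text.
import ai.djl.inference.Predictor;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ZooModel;
import ai.djl.huggingface.translator.TextEmbeddingTranslatorFactory;

// Lettuce classes used to connect to Redis and issue commands.
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
```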

The imports include the classes required to generate embeddings from text.
This example uses an instance of the `Predictor` class with the
[`all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
model for the embeddings. This model generates vectors with 384 dimensions, regardless
of the length of the input text, but note that the input is truncated to 256
tokens (see
[Word piece tokenization](https://huggingface.co/learn/nlp-course/en/chapter6/6)
in the [Hugging Face](https://huggingface.co/) docs to learn more about how tokens
relate to the original text).

{{< clients-example set="home_vecsets" step="model" lang_filter="Java-Async,Java-Reactive" >}}
{{< /clients-example >}}
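
For illustration, a typical way to create such a predictor with DJL looks
something like the following sketch. It assumes you load the model from DJL's
Hugging Face model zoo with the PyTorch engine; the model URL and error
handling are illustrative rather than copied from the example above:

```java
// Describe the model: text in, float[] embedding out, loaded from DJL's
// Hugging Face model zoo and run on the PyTorch engine.
Criteria<String, float[]> criteria = Criteria.builder()
        .setTypes(String.class, float[].class)
        .optModelUrls("djl://ai.djl.huggingface.pytorch/sentence-transformers/all-MiniLM-L6-v2")
        .optEngine("PyTorch")
        .optTranslatorFactory(new TextEmbeddingTranslatorFactory())
        .build();

// loadModel() downloads the model on first use and throws checked
// exceptions (ModelException, IOException), so handle or declare them.
ZooModel<String, float[]> model = criteria.loadModel();
Predictor<String, float[]> predictor = model.newPredictor();
```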

## Create the data

The example data is contained in a `List<Person>` object with some brief
descriptions of famous people.

{{< clients-example set="home_vecsets" step="data" lang_filter="Java-Async,Java-Reactive" >}}
{{< /clients-example >}}
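
The exact shape of the `Person` class isn't important; for this walkthrough you
can picture it as something like the hypothetical record below, with field
names chosen to match the attributes used later in the example:

```java
// Hypothetical data holder: the name becomes the vector set element,
// born/died are stored as attributes, and the description is the text
// that gets turned into an embedding.
record Person(String name, int born, int died, String description) { }

List<Person> people = List.of(
        new Person("Marie Curie", 1867, 1934,
                "Physicist and chemist, pioneer of research on radioactivity."),
        new Person("Freddie Mercury", 1946, 1991,
                "Singer and songwriter, lead vocalist of the rock band Queen.")
        // ...the full example includes several more people.
);
```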

## Add the data to a vector set

The next step is to connect to Redis and add the data to a new vector set.

The `predictor.predict()` method that generates the embeddings returns a `float[]` array.
The `vadd()` method that adds the embeddings to the vector set accepts a `Double[]` array,
so it is useful to define a helper method to perform the conversion:

{{< clients-example set="home_vecsets" step="helper_method" lang_filter="Java-Async,Java-Reactive" >}}
{{< /clients-example >}}
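
A straightforward version of such a helper might look like this (the method
name is just an illustrative choice):

```java
// Convert the float[] produced by predictor.predict() into the Double[]
// that vadd() expects.
private static Double[] floatsToDoubles(float[] values) {
    Double[] result = new Double[values.length];

    for (int i = 0; i < values.length; i++) {
        result[i] = (double) values[i];
    }

    return result;
}
```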

The code below connects to Redis, iterates through all the items in the `people` list,
generates an embedding for each person's description, and
adds the appropriate elements to a vector set called `famousPeople`.
Note that the `predict()` call is in a `try`/`catch` block because it can throw
exceptions if it can't download the embedding model (you should add code to handle
these exceptions in production code).

The call to `vadd()` also adds the `born` and `died` values from the
original `people` list as attribute data. You can access these attributes during a query
or by using the [`vgetattr()`]({{< relref "/commands/vgetattr" >}}) method.

{{< clients-example set="home_vecsets" step="add_data" lang_filter="Java-Async,Java-Reactive" >}}
{{< /clients-example >}}
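
In outline, this step amounts to the loop below. This is only a sketch using
the synchronous API (the example above uses the asynchronous and reactive
APIs), it reuses the hypothetical `Person`, `predictor`, and `floatsToDoubles()`
names from the earlier sketches, and it omits the attribute data; see the
Lettuce javadoc for the exact `vadd()` overloads:

```java
RedisClient redisClient = RedisClient.create("redis://localhost:6379");

try (StatefulRedisConnection<String, String> connection = redisClient.connect()) {
    RedisCommands<String, String> commands = connection.sync();

    for (Person person : people) {
        try {
            // predict() can throw if the model can't be downloaded or run.
            float[] embedding = predictor.predict(person.description());

            // Add the element and its vector to the "famousPeople" set.
            // The full example also attaches the born/died attributes here.
            commands.vadd("famousPeople", person.name(),
                    floatsToDoubles(embedding));
        } catch (Exception e) {
            // Handle this properly in production code.
            e.printStackTrace();
        }
    }
}

redisClient.shutdown();
```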

## Query the vector set

You can now query the data in the set. The basic approach is to generate
another embedding vector for the query text, using the same `predict()`
method that you used when adding the elements to the set. Then, pass
the query vector to [`vsim()`]({{< relref "/commands/vsim" >}}) to return elements
of the set, ranked in order of similarity to the query.

Start with a simple query for "actors":

{{< clients-example set="home_vecsets" step="basic_query" lang_filter="Java-Async,Java-Reactive" >}}
{{< /clients-example >}}
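
As a sketch, again assuming the synchronous API and the helper names from the
earlier sketches, the query step might look something like this:

```java
try {
    // Generate an embedding for the query text and rank the elements of
    // the vector set by similarity to it.
    float[] queryEmbedding = predictor.predict("actors");

    List<String> ranked = commands.vsim("famousPeople",
            floatsToDoubles(queryEmbedding));
    ranked.forEach(System.out::println);
} catch (Exception e) {
    e.printStackTrace();
}
```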

This returns the following list of elements (formatted slightly for clarity):

```
['Masako Natsume', 'Chaim Topol', 'Linus Pauling',
'Marie Fredriksson', 'Maryam Mirzakhani', 'Marie Curie',
'Freddie Mercury', 'Paul Erdos']
```

The first two people in the list are the two actors, as expected, but none of the
people from Linus Pauling onward was especially well-known for acting (and there certainly
isn't any information about that in the short description text).
As it stands, the search attempts to rank all the elements in the set, based
on the information contained in the embedding model.
You can use the `count` parameter of `vsim()` to limit the list of elements
to just the most relevant few items:

{{< clients-example set="home_vecsets" step="limited_query" lang_filter="Java-Async,Java-Reactive" >}}
{{< /clients-example >}}
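
In Lettuce, options like the count are typically passed through an arguments
object. The sketch below assumes a `VSimArgs` class with a `count()` builder
setting, which is a guess at the API surface rather than a confirmed
signature; check the Lettuce javadoc for the exact form:

```java
try {
    float[] queryEmbedding = predictor.predict("actors");

    // Ask for only the three most similar elements.
    // VSimArgs and its count() setting are assumptions, not confirmed API.
    List<String> top3 = commands.vsim("famousPeople",
            VSimArgs.Builder.count(3L),
            floatsToDoubles(queryEmbedding));
    top3.forEach(System.out::println);
} catch (Exception e) {
    e.printStackTrace();
}
```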

The reason for using text embeddings rather than simple text search
is that the embeddings represent semantic information. This allows a query
to find elements with a similar meaning even if the text is
different. For example, the word "entertainer" doesn't appear in any of the
descriptions, but if you use it as a query, the actors and musicians are ranked
highest in the results list:

{{< clients-example set="home_vecsets" step="entertainer_query" lang_filter="Java-Async,Java-Reactive" >}}
{{< /clients-example >}}

Similarly, if you use "science" as a query, you get the following results:

```
['Marie Curie', 'Linus Pauling', 'Maryam Mirzakhani',
'Paul Erdos', 'Marie Fredriksson', 'Freddie Mercury', 'Masako Natsume',
'Chaim Topol']
```

The scientists are ranked highest, followed by the
mathematicians. This ranking seems reasonable given the connection between mathematics and science.

You can also use
[filter expressions]({{< relref "/develop/data-types/vector-sets/filtered-search" >}})
with `vsim()` to restrict the search further. For example,
repeat the "science" query, but this time limit the results to people
who died before the year 2000:

{{< clients-example set="home_vecsets" step="filtered_query" lang_filter="Java-Async,Java-Reactive" >}}
{{< /clients-example >}}
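
The filter itself is a short expression over the attributes stored with each
element, such as `.died < 2000` for this query. The sketch below assumes that,
like the count, the filter is supplied through a `VSimArgs`-style options
object with a `filter()` setting; treat the method name as an assumption and
check the Lettuce javadoc:

```java
try {
    float[] queryEmbedding = predictor.predict("science");

    // Only consider people who died before 2000. The ".died" attribute
    // name matches the attribute data stored by vadd() in the full example.
    // VSimArgs and its filter() setting are assumptions, not confirmed API.
    List<String> results = commands.vsim("famousPeople",
            VSimArgs.Builder.filter(".died < 2000"),
            floatsToDoubles(queryEmbedding));
    results.forEach(System.out::println);
} catch (Exception e) {
    e.printStackTrace();
}
```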

Note that the boolean filter expression is applied to items in the list
before the vector distance calculation is performed. Items that don't
pass the filter test are removed from the results completely, rather
than just reduced in rank. This can help to improve the performance of the
search because there is no need to calculate the vector distance for
elements that have already been filtered out.

## More information

See the [vector sets]({{< relref "/develop/data-types/vector-sets" >}})
docs for more information and code examples. See the
[Redis for AI]({{< relref "/develop/ai" >}}) section for more details
about text embeddings and other AI techniques you can use with Redis.

You may also be interested in
[vector search]({{< relref "/develop/clients/lettuce/vecsearch" >}}).
This is a feature of the
[Redis query engine]({{< relref "/develop/ai/search-and-query" >}})
that lets you retrieve
[JSON]({{< relref "/develop/data-types/json" >}}) and
[hash]({{< relref "/develop/data-types/hashes" >}}) documents based on
vector data stored in their fields.