diff --git a/content/develop/clients/lettuce/vecsearch.md b/content/develop/clients/lettuce/vecsearch.md
new file mode 100644
index 0000000000..173697a106
--- /dev/null
+++ b/content/develop/clients/lettuce/vecsearch.md
@@ -0,0 +1,272 @@
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+- oss
+- kubernetes
+- clients
+description: Learn how to index and query vector embeddings with Redis
+linkTitle: Index and query vectors
+title: Index and query vectors
+weight: 3
+---
+
+[Redis Query Engine]({{< relref "/develop/ai/search-and-query" >}})
+lets you index vector fields in [hash]({{< relref "/develop/data-types/hashes" >}})
+or [JSON]({{< relref "/develop/data-types/json" >}}) objects (see the
+[Vectors]({{< relref "/develop/ai/search-and-query/vectors" >}})
+reference page for more information).
+Among other things, vector fields can store *text embeddings*, which are AI-generated vector
+representations of the semantic information in pieces of text. The
+[vector distance]({{< relref "/develop/ai/search-and-query/vectors#distance-metrics" >}})
+between two embeddings indicates how similar they are semantically. By comparing the
+similarity of an embedding generated from some query text with embeddings stored in hash
+or JSON fields, Redis can retrieve documents that closely match the query in terms
+of their meaning.
+
+The example below uses the [HuggingFace](https://huggingface.co/) model
+[`all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
+to generate the vector embeddings to store and index with Redis Query Engine.
+The code is first demonstrated for hash documents with a
+separate section to explain the
+[differences with JSON documents](#differences-with-json-documents).
+
+## Initialize
+
+If you are using [Maven](https://maven.apache.org/), add the following
+dependencies to your `pom.xml` file:
+
+```xml
+<dependency>
+    <groupId>io.lettuce</groupId>
+    <artifactId>lettuce-core</artifactId>
+    <version>6.7.1.RELEASE</version>
+</dependency>
+
+<dependency>
+    <groupId>ai.djl.huggingface</groupId>
+    <artifactId>tokenizers</artifactId>
+    <version>0.33.0</version>
+</dependency>
+
+<dependency>
+    <groupId>ai.djl.pytorch</groupId>
+    <artifactId>pytorch-model-zoo</artifactId>
+    <version>0.33.0</version>
+</dependency>
+
+<dependency>
+    <groupId>ai.djl</groupId>
+    <artifactId>api</artifactId>
+    <version>0.33.0</version>
+</dependency>
+```
+
+If you are using [Gradle](https://gradle.org/), add the following
+dependencies to your `build.gradle` file:
+
+```bash
+compileOnly 'io.lettuce:lettuce-core:6.7.1.RELEASE'
+compileOnly 'ai.djl.huggingface:tokenizers:0.33.0'
+compileOnly 'ai.djl.pytorch:pytorch-model-zoo:0.33.0'
+compileOnly 'ai.djl:api:0.33.0'
+```
+
+## Import dependencies
+
+Import the following classes in your source file:
+
+{{< clients-example set="home_query_vec" step="import" lang_filter="Java-Async,Java-Reactive" >}}
+{{< /clients-example >}}
+
+## Define a helper method
+
+When you store vectors in a hash object, or pass them as query parameters,
+you must encode the `float` components of the vector
+array as a `byte` string. The helper method `floatArrayToByteBuffer()`
+shown below does this for you:
+
+{{< clients-example set="home_query_vec" step="helper_method" lang_filter="Java-Async,Java-Reactive" >}}
+{{< /clients-example >}}
+
+## Create an embedding model instance
+
+The example below uses the
+[`all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
+model to generate the embeddings. This model generates vectors with 384 dimensions, regardless of the length of the input text, but note that the input is truncated to 256
+tokens (see
+[Word piece tokenization](https://huggingface.co/learn/nlp-course/en/chapter6/6)
+at the [Hugging Face](https://huggingface.co/) docs to learn more about the way tokens
+are related to the original text).
+
+The [`Predictor`](https://javadoc.io/doc/ai.djl/api/latest/ai/djl/inference/Predictor.html)
+class runs the model to generate the embeddings. The code below
+creates an instance of `Predictor` that uses the `all-MiniLM-L6-v2` model:
+
+{{< clients-example set="home_query_vec" step="model" lang_filter="Java-Async,Java-Reactive" >}}
+{{< /clients-example >}}
+
+## Create the index
+
+As noted in [Define a helper method](#define-a-helper-method) above, you must
+pass the embeddings to the hash and query commands as a binary string.
+
+Lettuce has an option to specify a `ByteBufferCodec` for the connection to Redis.
+This lets you construct binary strings for Redis keys and values conveniently using
+the standard
+[`ByteBuffer`](https://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html)
+class (see [Codecs](https://redis.github.io/lettuce/integration-extension/#codecs)
+in the Lettuce documentation for more information). However, you will probably find
+it more convenient to use the default `StringCodec` for commands that don't require binary strings. It is therefore helpful to have two connections available, one using `ByteBufferCodec` and one using `StringCodec`.
+
+The code below shows how to declare one connection with the
+`ByteBufferCodec` and another without in the try-with-resources
+block. You also need two separate instances of `RedisAsyncCommands` to
+use the two connections:
+
+{{< clients-example set="home_query_vec" step="connect" lang_filter="Java-Async,Java-Reactive" >}}
+{{< /clients-example >}}
+
+Next, create the index.
+The schema in the example below includes three fields:
+
+- The text content to index
+- A [tag]({{< relref "/develop/ai/search-and-query/advanced-concepts/tags" >}})
+ field to represent the "genre" of the text
+- The embedding vector generated from the original text content
+
+The `embedding` field specifies
+[HNSW]({{< relref "/develop/ai/search-and-query/vectors#hnsw-index" >}})
+indexing, the
+[L2]({{< relref "/develop/ai/search-and-query/vectors#distance-metrics" >}})
+vector distance metric, `Float32` values to represent the vector's components,
+and 384 dimensions, as required by the `all-MiniLM-L6-v2` embedding model.
+
+The `CreateArgs` object specifies hash objects for storage and a
+prefix `doc:` that identifies the hash objects to index.
+
+{{< clients-example set="home_query_vec" step="create_index" lang_filter="Java-Async,Java-Reactive" >}}
+{{< /clients-example >}}
+
+## Add data
+
+You can now supply the data objects, which will be indexed automatically
+when you add them with [`hset()`]({{< relref "/commands/hset" >}}), as long as
+you use the `doc:` prefix specified in the index definition.
+
+Use the `predict()` method of the `Predictor` object
+as shown below to create the embedding that represents the `content` field
+and use the `floatArrayToByteBuffer()` helper method to convert it to a binary string.
+Use the binary string representation when you are
+indexing hash objects, but use an array of `float` for
+JSON objects (see [Differences with JSON documents](#differences-with-json-documents)
+below).
+
+You must use instances of `Map<ByteBuffer, ByteBuffer>` to supply the data to `hset()`
+when using the `ByteBufferCodec` connection, which adds a little complexity. Note
+that the `predict()` call is in a `try`/`catch` block because it throws a checked
+exception if it can't generate the embedding (you should add code to handle this
+exception in production code).
+
+{{< clients-example set="home_query_vec" step="add_data" lang_filter="Java-Async,Java-Reactive" >}}
+{{< /clients-example >}}
+
+## Run a query
+
+After you have created the index and added the data, you are ready to run a query.
+To do this, you must create another embedding vector from your chosen query
+text. Redis calculates the vector distance between the query vector and each
+embedding vector in the index as it runs the query. You can request that the results
+be sorted in ascending order of distance, so that the closest matches appear first.
+
+The code below creates the query embedding using the `predict()` method, as with
+the indexing, and passes it as a parameter when the query executes (see
+[Vector search]({{< relref "/develop/ai/search-and-query/query/vector-search" >}})
+for more information about using query parameters with embeddings).
+The query is a
+[K nearest neighbors (KNN)]({{< relref "/develop/ai/search-and-query/vectors#knn-vector-search" >}})
+search that sorts the results in order of vector distance from the query vector.
+
+{{< clients-example set="home_query_vec" step="query" lang_filter="Java-Async,Java-Reactive" >}}
+{{< /clients-example >}}
+
+Assuming you have added the code from the steps above to your source file,
+it is now ready to run, but note that it may take a while to complete the
+first time you run it (because the `all-MiniLM-L6-v2` model data must be
+downloaded before the code can generate the embeddings). When you run the code,
+it outputs the following result text:
+
+```
+Results:
+ID: doc:1, Content: That is a very happy person, Distance: 0.114169836044
+ID: doc:2, Content: That is a happy dog, Distance: 0.610845506191
+ID: doc:3, Content: Today is a sunny day, Distance: 1.48624765873
+```
+
+Note that the results are ordered according to the value of the `vector_distance`
+field, with the lowest distance indicating the greatest similarity to the query.
+As you would expect, the result for `doc:1` with the content text
+*"That is a very happy person"*
+is the result that is most similar in meaning to the query text
+*"That is a happy person"*.
+
+## Differences with JSON documents
+
+Indexing JSON documents is similar to hash indexing, but there are some
+important differences. JSON allows much richer data modeling with nested fields, so
+you must supply a [path]({{< relref "/develop/data-types/json/path" >}}) in the schema
+to identify each field you want to index. However, you can declare a short alias for each
+of these paths (using the `as()` option) to avoid typing it in full for
+every query. Also, you must specify `CreateArgs.TargetType.JSON` when you create the index.
+
+The code below shows these differences, but the index is otherwise very similar to
+the one created previously for hashes:
+
+{{< clients-example set="home_query_vec" step="json_schema" lang_filter="Java-Async,Java-Reactive" >}}
+{{< /clients-example >}}
+
+An important difference with JSON indexing is that the vectors are
+specified using arrays of `float` instead of binary strings. This means
+you don't need to use the `ByteBufferCodec` connection, and you can use
+[`Arrays.toString()`](https://docs.oracle.com/javase/8/docs/api/java/util/Arrays.html#toString-float:A-) to convert the `float` array to a suitable JSON string.
+
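+For example, this minimal sketch (using a made-up three-component vector rather
+than a real 384-dimensional embedding) shows the string that `Arrays.toString()`
+produces:
+
+```java
+float[] vec = {0.1f, -0.2f, 0.3f};
+String jsonVector = Arrays.toString(vec);
+// jsonVector is now "[0.1, -0.2, 0.3]", which is also valid JSON array syntax.
+```
+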
+Use [`jsonSet()`]({{< relref "/commands/json.set" >}}) to add the data
+instead of [`hset()`]({{< relref "/commands/hset" >}}). Use instances
+of `JsonObject` to supply the data instead of the `Map` instances
+required for hash objects.
+
+{{< clients-example set="home_query_vec" step="json_data" lang_filter="Java-Async,Java-Reactive" >}}
+{{< /clients-example >}}
+
+The query is almost identical to the one for the hash documents. This
+demonstrates how the right choice of aliases for the JSON paths can
+save you having to write complex queries. An important thing to notice
+is that the vector parameter for the query is still specified as a
+binary string, even though the data for the `embedding` field of the JSON
+was specified as an array.
+
+{{< clients-example set="home_query_vec" step="json_query" lang_filter="Java-Async,Java-Reactive" >}}
+{{< /clients-example >}}
+
+The distance values are not identical to those from the hash query because the
+string representations of the vectors used here are stored with different
+precision. However, the relative order of the results is the same:
+
+```
+Results:
+ID: jdoc:1, Content: That is a very happy person, Distance: 0.628328084946
+ID: jdoc:2, Content: That is a happy dog, Distance: 0.895147025585
+ID: jdoc:3, Content: Today is a sunny day, Distance: 1.49569523335
+```
+
+## Learn more
+
+See
+[Vector search]({{< relref "/develop/ai/search-and-query/query/vector-search" >}})
+for more information about the indexing options, distance metrics, and query format
+for vectors.
diff --git a/local_examples/client-specific/lettuce-async/HomeQueryVecExample.java b/local_examples/client-specific/lettuce-async/HomeQueryVecExample.java
new file mode 100644
index 0000000000..f7a445ed71
--- /dev/null
+++ b/local_examples/client-specific/lettuce-async/HomeQueryVecExample.java
@@ -0,0 +1,356 @@
+// EXAMPLE: home_query_vec
+package io.redis.examples.async;
+
+// STEP_START import
+// Lettuce client and query engine classes.
+import io.lettuce.core.*;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.async.RedisAsyncCommands;
+import io.lettuce.core.codec.ByteBufferCodec;
+import io.lettuce.core.search.arguments.*;
+import io.lettuce.core.search.SearchReply;
+import io.lettuce.core.json.JsonParser;
+import io.lettuce.core.json.JsonObject;
+import io.lettuce.core.json.JsonPath;
+
+// Standard library classes for data manipulation and
+// asynchronous programming.
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.nio.charset.StandardCharsets;
+import java.util.*;
+import java.util.concurrent.CompletableFuture;
+
+// DJL classes for model loading and inference.
+import ai.djl.huggingface.translator.TextEmbeddingTranslatorFactory;
+import ai.djl.inference.Predictor;
+import ai.djl.repository.zoo.Criteria;
+import ai.djl.training.util.ProgressBar;
+// STEP_END
+// REMOVE_START
+import org.junit.jupiter.api.Test;
+import static org.assertj.core.api.Assertions.assertThat;
+// REMOVE_END
+
+public class HomeQueryVecExample {
+
+ // STEP_START helper_method
+ private ByteBuffer floatArrayToByteBuffer(float[] vector) {
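+ // Pack each float component as four little-endian bytes, giving the raw
+ // FLOAT32 blob format that the vector index expects.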
+ ByteBuffer buffer = ByteBuffer.allocate(vector.length * 4).order(ByteOrder.LITTLE_ENDIAN);
+ for (float value : vector) {
+ buffer.putFloat(value);
+ }
+ return (ByteBuffer) buffer.flip();
+ }
+ // STEP_END
+
+ // REMOVE_START
+ @Test
+ // REMOVE_END
+ public void run() {
+ // STEP_START model
+ Predictor<String, float[]> predictor = null;
+
+ try {
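+ // Configure DJL to fetch the all-MiniLM-L6-v2 model from its model zoo.
+ // The model files are downloaded the first time this runs.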
+ Criteria<String, float[]> criteria = Criteria.builder().setTypes(String.class, float[].class)
+ .optModelUrls("djl://ai.djl.huggingface.pytorch/sentence-transformers/all-MiniLM-L6-v2")
+ .optEngine("PyTorch").optTranslatorFactory(new TextEmbeddingTranslatorFactory())
+ .optProgress(new ProgressBar()).build();
+
+ predictor = criteria.loadModel().newPredictor();
+ } catch (Exception e) {
+ // ...
+ }
+ // STEP_END
+
+ // STEP_START connect
+ RedisClient redisClient = RedisClient.create("redis://localhost:6379");
+
+ try (StatefulRedisConnection<String, String> connection = redisClient.connect();
+ StatefulRedisConnection<ByteBuffer, ByteBuffer> binConnection = redisClient.connect(new ByteBufferCodec())) {
+ RedisAsyncCommands<String, String> asyncCommands = connection.async();
+ RedisAsyncCommands<ByteBuffer, ByteBuffer> binAsyncCommands = binConnection.async();
+ // ...
+ // STEP_END
+ // REMOVE_START
+ asyncCommands.del("doc:1", "doc:2", "doc:3", "jdoc:1", "jdoc:2", "jdoc:3").toCompletableFuture().join();
+
+ asyncCommands.ftDropindex("vector_idx").exceptionally(ex -> null) // Ignore errors if the index doesn't exist.
+ .toCompletableFuture().join();
+
+ asyncCommands.ftDropindex("vector_json_idx").exceptionally(ex -> null) // Ignore errors if the index doesn't exist.
+ .toCompletableFuture().join();
+ // REMOVE_END
+
+ // STEP_START create_index
+ List<FieldArgs<String>> schema = Arrays.asList(TextFieldArgs.<String> builder().name("content").build(),
+ TagFieldArgs.<String> builder().name("genre").build(),
+ VectorFieldArgs.<String> builder().name("embedding").hnsw().type(VectorFieldArgs.VectorType.FLOAT32)
+ .dimensions(384).distanceMetric(VectorFieldArgs.DistanceMetric.L2).build());
+
+ CreateArgs<String, String> createArgs = CreateArgs.<String, String> builder().on(CreateArgs.TargetType.HASH)
+ .withPrefix("doc:").build();
+
+ CompletableFuture<Void> createIndex = asyncCommands.ftCreate("vector_idx", createArgs, schema)
+ // REMOVE_START
+ .thenApply(result -> {
+ System.out.println(result); // >>> OK
+
+ assertThat(result).isEqualTo("OK");
+ return result;
+ })
+ // REMOVE_END
+ .thenAccept(System.out::println).toCompletableFuture();
+ // STEP_END
+ createIndex.join();
+
+ // STEP_START add_data
+ String sentence1 = "That is a very happy person";
+
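+ // With the ByteBufferCodec connection, field names and values must both be
+ // supplied as ByteBuffer instances.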
+ Map<ByteBuffer, ByteBuffer> doc1 = new HashMap<>();
+ doc1.put(ByteBuffer.wrap("content".getBytes()), ByteBuffer.wrap(sentence1.getBytes()));
+ doc1.put(ByteBuffer.wrap("genre".getBytes()), ByteBuffer.wrap("persons".getBytes()));
+
+ try {
+ doc1.put(ByteBuffer.wrap("embedding".getBytes()), floatArrayToByteBuffer(predictor.predict(sentence1)));
+ } catch (Exception e) {
+ // ...
+ }
+
+ CompletableFuture<Long> addDoc1 = binAsyncCommands.hset(ByteBuffer.wrap("doc:1".getBytes()), doc1)
+ .thenApply(result -> {
+ // REMOVE_START
+ assertThat(result).isEqualTo(3L);
+ // REMOVE_END
+ System.out.println(result); // >>> 3
+ return result;
+ }).toCompletableFuture();
+
+ String sentence2 = "That is a happy dog";
+
+ Map<ByteBuffer, ByteBuffer> doc2 = new HashMap<>();
+ doc2.put(ByteBuffer.wrap("content".getBytes()), ByteBuffer.wrap(sentence2.getBytes()));
+ doc2.put(ByteBuffer.wrap("genre".getBytes()), ByteBuffer.wrap("pets".getBytes()));
+
+ try {
+ doc2.put(ByteBuffer.wrap("embedding".getBytes()), floatArrayToByteBuffer(predictor.predict(sentence2)));
+ } catch (Exception e) {
+ // ...
+ }
+
+ CompletableFuture<Long> addDoc2 = binAsyncCommands.hset(ByteBuffer.wrap("doc:2".getBytes()), doc2)
+ .thenApply(result -> {
+ // REMOVE_START
+ assertThat(result).isEqualTo(3L);
+ // REMOVE_END
+ System.out.println(result); // >>> 3
+ return result;
+ }).toCompletableFuture();
+
+ String sentence3 = "Today is a sunny day";
+
+ Map<ByteBuffer, ByteBuffer> doc3 = new HashMap<>();
+ doc3.put(ByteBuffer.wrap("content".getBytes()), ByteBuffer.wrap(sentence3.getBytes()));
+ doc3.put(ByteBuffer.wrap("genre".getBytes()), ByteBuffer.wrap("weather".getBytes()));
+
+ try {
+ doc3.put(ByteBuffer.wrap("embedding".getBytes()), floatArrayToByteBuffer(predictor.predict(sentence3)));
+ } catch (Exception e) {
+ // ...
+ }
+
+ CompletableFuture<Long> addDoc3 = binAsyncCommands.hset(ByteBuffer.wrap("doc:3".getBytes()), doc3)
+ .thenApply(result -> {
+ // REMOVE_START
+ assertThat(result).isEqualTo(3L);
+ // REMOVE_END
+ System.out.println(result); // >>> 3
+ return result;
+ }).toCompletableFuture();
+ // STEP_END
+ CompletableFuture.allOf(addDoc1, addDoc2, addDoc3).join();
+
+ // STEP_START query
+ String query = "That is a happy person";
+ float[] queryEmbedding = null;
+
+ try {
+ queryEmbedding = predictor.predict(query);
+ } catch (Exception e) {
+ // ...
+ }
+
+ SearchArgs<ByteBuffer, ByteBuffer> searchArgs = SearchArgs.<ByteBuffer, ByteBuffer> builder()
+ .param(ByteBuffer.wrap("vec".getBytes()), floatArrayToByteBuffer(queryEmbedding))
+ .returnField(ByteBuffer.wrap("content".getBytes()))
+ .returnField(ByteBuffer.wrap("vector_distance".getBytes()))
+ .sortBy(SortByArgs.<ByteBuffer> builder().attribute(ByteBuffer.wrap("vector_distance".getBytes())).build())
+ .build();
+
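+ // The query "*=>[KNN 3 @embedding $vec AS vector_distance]" matches all documents,
+ // finds the three whose embedding is closest to the $vec parameter, and reports
+ // each distance under the alias vector_distance.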
+ CompletableFuture<SearchReply<ByteBuffer, ByteBuffer>> hashQuery = binAsyncCommands
+ .ftSearch(ByteBuffer.wrap("vector_idx".getBytes()),
+ ByteBuffer.wrap("*=>[KNN 3 @embedding $vec AS vector_distance]".getBytes()), searchArgs)
+ .thenApply(result -> {
+ List<SearchReply.SearchResult<ByteBuffer, ByteBuffer>> results = result.getResults();
+
+ results.forEach(r -> {
+ String id = StandardCharsets.UTF_8.decode(r.getId()).toString();
+ String content = StandardCharsets.UTF_8
+ .decode(r.getFields().get(ByteBuffer.wrap("content".getBytes()))).toString();
+ String distance = StandardCharsets.UTF_8
+ .decode(r.getFields().get(ByteBuffer.wrap("vector_distance".getBytes()))).toString();
+
+ System.out.println("ID: " + id + ", Content: " + content + ", Distance: " + distance);
+ });
+ // >>> ID: doc:1, Content: That is a very happy person, Distance: 0.114169836044
+ // >>> ID: doc:2, Content: That is a happy dog, Distance: 0.610845506191
+ // >>> ID: doc:3, Content: Today is a sunny day, Distance: 1.48624765873
+
+ // REMOVE_START
+ assertThat(result.getCount()).isEqualTo(3);
+ // REMOVE_END
+ return result;
+ }).toCompletableFuture();
+ // STEP_END
+ hashQuery.join();
+
+ // STEP_START json_schema
+ List<FieldArgs<String>> jsonSchema = Arrays.asList(
+ TextFieldArgs.<String> builder().name("$.content").as("content").build(),
+ TagFieldArgs.<String> builder().name("$.genre").as("genre").build(),
+ VectorFieldArgs.<String> builder().name("$.embedding").as("embedding").hnsw()
+ .type(VectorFieldArgs.VectorType.FLOAT32).dimensions(384)
+ .distanceMetric(VectorFieldArgs.DistanceMetric.L2).build());
+
+ CreateArgs<String, String> jsonCreateArgs = CreateArgs.<String, String> builder().on(CreateArgs.TargetType.JSON)
+ .withPrefix("jdoc:").build();
+
+ CompletableFuture<Void> jsonCreateIndex = asyncCommands.ftCreate("vector_json_idx", jsonCreateArgs, jsonSchema)
+ // REMOVE_START
+ .thenApply(result -> {
+ System.out.println(result); // >>> OK
+
+ assertThat(result).isEqualTo("OK");
+ return result;
+ })
+ // REMOVE_END
+ .thenAccept(System.out::println).toCompletableFuture();
+ // STEP_END
+ jsonCreateIndex.join();
+
+ // STEP_START json_data
+ JsonParser parser = asyncCommands.getJsonParser();
+
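+ // Note that createJsonValue() expects JSON text, so string values include
+ // their surrounding quote characters.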
+ String jSentence1 = "\"That is a very happy person\"";
+
+ JsonObject jDoc1 = parser.createJsonObject();
+ jDoc1.put("content", parser.createJsonValue(jSentence1));
+ jDoc1.put("genre", parser.createJsonValue("\"persons\""));
+
+ try {
+ jDoc1.put("embedding", parser.createJsonValue(Arrays.toString(predictor.predict(jSentence1))));
+ } catch (Exception e) {
+ // ...
+ }
+
+ CompletableFuture<String> jsonAddDoc1 = asyncCommands.jsonSet("jdoc:1", JsonPath.ROOT_PATH, jDoc1)
+ .thenApply(result -> {
+ // REMOVE_START
+ assertThat(result).isEqualTo("OK");
+ // REMOVE_END
+ System.out.println(result); // >>> OK
+ return result;
+ }).toCompletableFuture();
+
+ String jSentence2 = "\"That is a happy dog\"";
+
+ JsonObject jDoc2 = parser.createJsonObject();
+ jDoc2.put("content", parser.createJsonValue(jSentence2));
+ jDoc2.put("genre", parser.createJsonValue("\"pets\""));
+
+ try {
+ jDoc2.put("embedding", parser.createJsonValue(Arrays.toString(predictor.predict(jSentence2))));
+ } catch (Exception e) {
+ // ...
+ }
+
+ CompletableFuture<String> jsonAddDoc2 = asyncCommands.jsonSet("jdoc:2", JsonPath.ROOT_PATH, jDoc2)
+ .thenApply(result -> {
+ // REMOVE_START
+ assertThat(result).isEqualTo("OK");
+ // REMOVE_END
+ System.out.println(result); // >>> OK
+ return result;
+ }).toCompletableFuture();
+
+ String jSentence3 = "\"Today is a sunny day\"";
+
+ JsonObject jDoc3 = parser.createJsonObject();
+ jDoc3.put("content", parser.createJsonValue(jSentence3));
+ jDoc3.put("genre", parser.createJsonValue("\"weather\""));
+
+ try {
+ jDoc3.put("embedding", parser.createJsonValue(Arrays.toString(predictor.predict(jSentence3))));
+ } catch (Exception e) {
+ // ...
+ }
+
+ CompletableFuture<String> jsonAddDoc3 = asyncCommands.jsonSet("jdoc:3", JsonPath.ROOT_PATH, jDoc3)
+ .thenApply(result -> {
+ // REMOVE_START
+ assertThat(result).isEqualTo("OK");
+ // REMOVE_END
+ System.out.println(result); // >>> OK
+ return result;
+ }).toCompletableFuture();
+ // STEP_END
+ CompletableFuture.allOf(jsonAddDoc1, jsonAddDoc2, jsonAddDoc3).join();
+
+ // STEP_START json_query
+ String jQuery = "That is a happy person";
+ float[] jsonQueryEmbedding = null;
+
+ try {
+ jsonQueryEmbedding = predictor.predict(jQuery);
+ } catch (Exception e) {
+ // ...
+ }
+
+ SearchArgs<ByteBuffer, ByteBuffer> jsonSearchArgs = SearchArgs.<ByteBuffer, ByteBuffer> builder()
+ .param(ByteBuffer.wrap("vec".getBytes()), floatArrayToByteBuffer(jsonQueryEmbedding))
+ .returnField(ByteBuffer.wrap("content".getBytes()))
+ .returnField(ByteBuffer.wrap("vector_distance".getBytes()))
+ .sortBy(SortByArgs.<ByteBuffer> builder().attribute(ByteBuffer.wrap("vector_distance".getBytes())).build())
+ .build();
+
+ CompletableFuture<SearchReply<ByteBuffer, ByteBuffer>> jsonQuery = binAsyncCommands
+ .ftSearch(ByteBuffer.wrap("vector_json_idx".getBytes()),
+ ByteBuffer.wrap("*=>[KNN 3 @embedding $vec AS vector_distance]".getBytes()), jsonSearchArgs)
+ .thenApply(result -> {
+ List<SearchReply.SearchResult<ByteBuffer, ByteBuffer>> results = result.getResults();
+
+ results.forEach(r -> {
+ String id = StandardCharsets.UTF_8.decode(r.getId()).toString();
+ String content = StandardCharsets.UTF_8
+ .decode(r.getFields().get(ByteBuffer.wrap("content".getBytes()))).toString();
+ String distance = StandardCharsets.UTF_8
+ .decode(r.getFields().get(ByteBuffer.wrap("vector_distance".getBytes()))).toString();
+
+ System.out.println("ID: " + id + ", Content: " + content + ", Distance: " + distance);
+ });
+ // >>> ID: jdoc:1, Content: "That is a very happy person", Distance: 0.628328084946
+ // >>> ID: jdoc:2, Content: "That is a happy dog", Distance: 0.895147025585
+ // >>> ID: jdoc:3, Content: "Today is a sunny day", Distance: 1.49569523335
+
+ // REMOVE_START
+ assertThat(result.getCount()).isEqualTo(3);
+ // REMOVE_END
+ return result;
+ }).toCompletableFuture();
+ // STEP_END
+ jsonQuery.join();
+ } finally {
+ redisClient.shutdown();
+ }
+ }
+
+}
diff --git a/local_examples/client-specific/lettuce-reactive/HomeQueryVecExample.java b/local_examples/client-specific/lettuce-reactive/HomeQueryVecExample.java
new file mode 100644
index 0000000000..1fa3ad52dc
--- /dev/null
+++ b/local_examples/client-specific/lettuce-reactive/HomeQueryVecExample.java
@@ -0,0 +1,326 @@
+// EXAMPLE: home_query_vec
+package io.redis.examples.reactive;
+
+// STEP_START import
+// Lettuce client and query engine classes.
+import io.lettuce.core.*;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.reactive.RedisReactiveCommands;
+import io.lettuce.core.codec.ByteBufferCodec;
+import io.lettuce.core.search.arguments.*;
+import io.lettuce.core.search.SearchReply;
+import io.lettuce.core.json.JsonParser;
+import io.lettuce.core.json.JsonObject;
+import io.lettuce.core.json.JsonPath;
+
+// Standard library classes for data manipulation and
+// reactive programming.
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.nio.charset.StandardCharsets;
+import java.util.*;
+import reactor.core.publisher.Mono;
+
+// DJL classes for model loading and inference.
+import ai.djl.huggingface.translator.TextEmbeddingTranslatorFactory;
+import ai.djl.inference.Predictor;
+import ai.djl.repository.zoo.Criteria;
+import ai.djl.training.util.ProgressBar;
+// STEP_END
+// REMOVE_START
+import org.junit.jupiter.api.Test;
+import static org.assertj.core.api.Assertions.assertThat;
+// REMOVE_END
+
+public class HomeQueryVecExample {
+
+ // STEP_START helper_method
+ private ByteBuffer floatArrayToByteBuffer(float[] vector) {
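+ // Pack each float component as four little-endian bytes, giving the raw
+ // FLOAT32 blob format that the vector index expects.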
+ ByteBuffer buffer = ByteBuffer.allocate(vector.length * 4).order(ByteOrder.LITTLE_ENDIAN);
+ for (float value : vector) {
+ buffer.putFloat(value);
+ }
+ return (ByteBuffer) buffer.flip();
+ }
+ // STEP_END
+
+ // REMOVE_START
+ @Test
+ // REMOVE_END
+ public void run() {
+ // STEP_START model
+ Predictor<String, float[]> predictor = null;
+
+ try {
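+ // Configure DJL to fetch the all-MiniLM-L6-v2 model from its model zoo.
+ // The model files are downloaded the first time this runs.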
+ Criteria<String, float[]> criteria = Criteria.builder().setTypes(String.class, float[].class)
+ .optModelUrls("djl://ai.djl.huggingface.pytorch/sentence-transformers/all-MiniLM-L6-v2")
+ .optEngine("PyTorch").optTranslatorFactory(new TextEmbeddingTranslatorFactory())
+ .optProgress(new ProgressBar()).build();
+
+ predictor = criteria.loadModel().newPredictor();
+ } catch (Exception e) {
+ // ...
+ }
+ // STEP_END
+
+ // STEP_START connect
+ RedisClient redisClient = RedisClient.create("redis://localhost:6379");
+
+ try (StatefulRedisConnection<String, String> connection = redisClient.connect();
+ StatefulRedisConnection<ByteBuffer, ByteBuffer> binConnection = redisClient.connect(new ByteBufferCodec())) {
+ RedisReactiveCommands<String, String> reactiveCommands = connection.reactive();
+ RedisReactiveCommands<ByteBuffer, ByteBuffer> binReactiveCommands = binConnection.reactive();
+ // ...
+ // STEP_END
+ // REMOVE_START
+ reactiveCommands.del("doc:1", "doc:2", "doc:3", "jdoc:1", "jdoc:2", "jdoc:3").block();
+ reactiveCommands.ftDropindex("vector_idx").onErrorReturn("Index `vector_idx` does not exist").block();
+ reactiveCommands.ftDropindex("vector_json_idx").onErrorReturn("Index `vector_json_idx` does not exist").block();
+ // REMOVE_END
+
+ // STEP_START create_index
+ List<FieldArgs<String>> schema = Arrays.asList(TextFieldArgs.<String> builder().name("content").build(),
+ TagFieldArgs.<String> builder().name("genre").build(),
+ VectorFieldArgs.<String> builder().name("embedding").hnsw().type(VectorFieldArgs.VectorType.FLOAT32)
+ .dimensions(384).distanceMetric(VectorFieldArgs.DistanceMetric.L2).build());
+
+ CreateArgs<String, String> createArgs = CreateArgs.<String, String> builder().on(CreateArgs.TargetType.HASH)
+ .withPrefix("doc:").build();
+
+ Mono<Void> createIndex = reactiveCommands.ftCreate("vector_idx", createArgs, schema).doOnNext(result -> {
+ System.out.println(result); // >>> OK
+ // REMOVE_START
+ assertThat(result).isEqualTo("OK");
+ // REMOVE_END
+ }).then();
+ // STEP_END
+ createIndex.block();
+
+ // STEP_START add_data
+ String sentence1 = "That is a very happy person";
+
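+ // With the ByteBufferCodec connection, field names and values must both be
+ // supplied as ByteBuffer instances.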
+ Map<ByteBuffer, ByteBuffer> doc1 = new HashMap<>();
+ doc1.put(ByteBuffer.wrap("content".getBytes()), ByteBuffer.wrap(sentence1.getBytes()));
+ doc1.put(ByteBuffer.wrap("genre".getBytes()), ByteBuffer.wrap("persons".getBytes()));
+
+ try {
+ doc1.put(ByteBuffer.wrap("embedding".getBytes()), floatArrayToByteBuffer(predictor.predict(sentence1)));
+ } catch (Exception e) {
+ // ...
+ }
+
+ Mono<Long> addDoc1 = binReactiveCommands.hset(ByteBuffer.wrap("doc:1".getBytes()), doc1).doOnNext(result -> {
+ System.out.println(result); // >>> 3
+ // REMOVE_START
+ assertThat(result).isEqualTo(3L);
+ // REMOVE_END
+ });
+
+ String sentence2 = "That is a happy dog";
+
+ Map<ByteBuffer, ByteBuffer> doc2 = new HashMap<>();
+ doc2.put(ByteBuffer.wrap("content".getBytes()), ByteBuffer.wrap(sentence2.getBytes()));
+ doc2.put(ByteBuffer.wrap("genre".getBytes()), ByteBuffer.wrap("pets".getBytes()));
+
+ try {
+ doc2.put(ByteBuffer.wrap("embedding".getBytes()), floatArrayToByteBuffer(predictor.predict(sentence2)));
+ } catch (Exception e) {
+ // ...
+ }
+
+ Mono<Long> addDoc2 = binReactiveCommands.hset(ByteBuffer.wrap("doc:2".getBytes()), doc2).doOnNext(result -> {
+ System.out.println(result); // >>> 3
+ // REMOVE_START
+ assertThat(result).isEqualTo(3L);
+ // REMOVE_END
+ });
+
+ String sentence3 = "Today is a sunny day";
+
+ Map<ByteBuffer, ByteBuffer> doc3 = new HashMap<>();
+ doc3.put(ByteBuffer.wrap("content".getBytes()), ByteBuffer.wrap(sentence3.getBytes()));
+ doc3.put(ByteBuffer.wrap("genre".getBytes()), ByteBuffer.wrap("weather".getBytes()));
+
+ try {
+ doc3.put(ByteBuffer.wrap("embedding".getBytes()), floatArrayToByteBuffer(predictor.predict(sentence3)));
+ } catch (Exception e) {
+ // ...
+ }
+
+ Mono<Long> addDoc3 = binReactiveCommands.hset(ByteBuffer.wrap("doc:3".getBytes()), doc3).doOnNext(result -> {
+ System.out.println(result); // >>> 3
+ // REMOVE_START
+ assertThat(result).isEqualTo(3L);
+ // REMOVE_END
+ });
+ // STEP_END
+ Mono.when(addDoc1, addDoc2, addDoc3).block();
+
+ // STEP_START query
+ String query = "That is a happy person";
+ float[] queryEmbedding = null;
+
+ try {
+ queryEmbedding = predictor.predict(query);
+ } catch (Exception e) {
+ // ...
+ }
+
+ SearchArgs<ByteBuffer, ByteBuffer> searchArgs = SearchArgs.<ByteBuffer, ByteBuffer> builder()
+ .param(ByteBuffer.wrap("vec".getBytes()), floatArrayToByteBuffer(queryEmbedding))
+ .returnField(ByteBuffer.wrap("content".getBytes()))
+ .returnField(ByteBuffer.wrap("vector_distance".getBytes()))
+ .sortBy(SortByArgs.<ByteBuffer> builder().attribute(ByteBuffer.wrap("vector_distance".getBytes())).build())
+ .build();
+
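+ // The query "*=>[KNN 3 @embedding $vec AS vector_distance]" matches all documents,
+ // finds the three whose embedding is closest to the $vec parameter, and reports
+ // each distance under the alias vector_distance.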
+ Mono<SearchReply<ByteBuffer, ByteBuffer>> hashQuery = binReactiveCommands
+ .ftSearch(ByteBuffer.wrap("vector_idx".getBytes()),
+ ByteBuffer.wrap("*=>[KNN 3 @embedding $vec AS vector_distance]".getBytes()), searchArgs)
+ .doOnNext(result -> {
+ List<SearchReply.SearchResult<ByteBuffer, ByteBuffer>> results = result.getResults();
+
+ results.forEach(r -> {
+ String id = StandardCharsets.UTF_8.decode(r.getId()).toString();
+ String content = StandardCharsets.UTF_8
+ .decode(r.getFields().get(ByteBuffer.wrap("content".getBytes()))).toString();
+ String distance = StandardCharsets.UTF_8
+ .decode(r.getFields().get(ByteBuffer.wrap("vector_distance".getBytes()))).toString();
+
+ System.out.println("ID: " + id + ", Content: " + content + ", Distance: " + distance);
+ });
+ // >>> ID: doc:1, Content: That is a very happy person, Distance: 0.114169836044
+ // >>> ID: doc:2, Content: That is a happy dog, Distance: 0.610845506191
+ // >>> ID: doc:3, Content: Today is a sunny day, Distance: 1.48624765873
+ // REMOVE_START
+ assertThat(result.getCount()).isEqualTo(3);
+ // REMOVE_END
+ });
+ // STEP_END
+ hashQuery.block();
+
+ // STEP_START json_schema
+ List<FieldArgs<String>> jsonSchema = Arrays.asList(
+ TextFieldArgs.<String> builder().name("$.content").as("content").build(),
+ TagFieldArgs.<String> builder().name("$.genre").as("genre").build(),
+ VectorFieldArgs.<String> builder().name("$.embedding").as("embedding").hnsw()
+ .type(VectorFieldArgs.VectorType.FLOAT32).dimensions(384)
+ .distanceMetric(VectorFieldArgs.DistanceMetric.L2).build());
+
+ CreateArgs<String, String> jsonCreateArgs = CreateArgs.<String, String> builder().on(CreateArgs.TargetType.JSON)
+ .withPrefix("jdoc:").build();
+
+ Mono<Void> jsonCreateIndex = reactiveCommands.ftCreate("vector_json_idx", jsonCreateArgs, jsonSchema)
+ .doOnNext(result -> {
+ System.out.println(result); // >>> OK
+ // REMOVE_START
+ assertThat(result).isEqualTo("OK");
+ // REMOVE_END
+ }).then();
+ // STEP_END
+ jsonCreateIndex.block();
+
+ // STEP_START json_data
+ JsonParser parser = reactiveCommands.getJsonParser();
+
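+ // Note that createJsonValue() expects JSON text, so string values include
+ // their surrounding quote characters.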
+ String jSentence1 = "\"That is a very happy person\"";
+ JsonObject jDoc1 = parser.createJsonObject();
+ jDoc1.put("content", parser.createJsonValue(jSentence1));
+ jDoc1.put("genre", parser.createJsonValue("\"persons\""));
+
+ try {
+ jDoc1.put("embedding", parser.createJsonValue(Arrays.toString(predictor.predict(jSentence1))));
+ } catch (Exception e) {
+ // ...
+ }
+
+ Mono<Void> jsonAddDoc1 = reactiveCommands.jsonSet("jdoc:1", JsonPath.ROOT_PATH, jDoc1).doOnNext(result -> {
+ System.out.println(result); // >>> OK
+ // REMOVE_START
+ assertThat(result).isEqualTo("OK");
+ // REMOVE_END
+ }).then();
+
+ String jSentence2 = "\"That is a happy dog\"";
+ JsonObject jDoc2 = parser.createJsonObject();
+ jDoc2.put("content", parser.createJsonValue(jSentence2));
+ jDoc2.put("genre", parser.createJsonValue("\"pets\""));
+
+ try {
+ jDoc2.put("embedding", parser.createJsonValue(Arrays.toString(predictor.predict(jSentence2))));
+ } catch (Exception e) {
+ // ...
+ }
+
+ Mono<Void> jsonAddDoc2 = reactiveCommands.jsonSet("jdoc:2", JsonPath.ROOT_PATH, jDoc2).doOnNext(result -> {
+ System.out.println(result); // >>> OK
+ // REMOVE_START
+ assertThat(result).isEqualTo("OK");
+ // REMOVE_END
+ }).then();
+
+ String jSentence3 = "\"Today is a sunny day\"";
+ JsonObject jDoc3 = parser.createJsonObject();
+ jDoc3.put("content", parser.createJsonValue(jSentence3));
+ jDoc3.put("genre", parser.createJsonValue("\"weather\""));
+
+ try {
+ jDoc3.put("embedding", parser.createJsonValue(Arrays.toString(predictor.predict(jSentence3))));
+ } catch (Exception e) {
+ // ...
+ }
+
+ Mono<Void> jsonAddDoc3 = reactiveCommands.jsonSet("jdoc:3", JsonPath.ROOT_PATH, jDoc3).doOnNext(result -> {
+ System.out.println(result); // >>> OK
+ // REMOVE_START
+ assertThat(result).isEqualTo("OK");
+ // REMOVE_END
+ }).then();
+ // STEP_END
+ Mono.when(jsonAddDoc1, jsonAddDoc2, jsonAddDoc3).block();
+
+ // STEP_START json_query
+ String jQuery = "That is a happy person";
+ float[] jsonQueryEmbedding = null;
+
+ try {
+ jsonQueryEmbedding = predictor.predict(jQuery);
+ } catch (Exception e) {
+ // ...
+ }
+
+ SearchArgs<ByteBuffer, ByteBuffer> jsonSearchArgs = SearchArgs.<ByteBuffer, ByteBuffer> builder()
+ .param(ByteBuffer.wrap("vec".getBytes()), floatArrayToByteBuffer(jsonQueryEmbedding))
+ .returnField(ByteBuffer.wrap("content".getBytes()))
+ .returnField(ByteBuffer.wrap("vector_distance".getBytes()))
+ .sortBy(SortByArgs.<ByteBuffer> builder().attribute(ByteBuffer.wrap("vector_distance".getBytes())).build())
+ .build();
+
+ Mono<SearchReply<ByteBuffer, ByteBuffer>> jsonQuery = binReactiveCommands
+ .ftSearch(ByteBuffer.wrap("vector_json_idx".getBytes()),
+ ByteBuffer.wrap("*=>[KNN 3 @embedding $vec AS vector_distance]".getBytes()), jsonSearchArgs)
+ .doOnNext(result -> {
+ List<SearchReply.SearchResult<ByteBuffer, ByteBuffer>> results = result.getResults();
+
+ results.forEach(r -> {
+ String id = StandardCharsets.UTF_8.decode(r.getId()).toString();
+ String content = StandardCharsets.UTF_8
+ .decode(r.getFields().get(ByteBuffer.wrap("content".getBytes()))).toString();
+ String distance = StandardCharsets.UTF_8
+ .decode(r.getFields().get(ByteBuffer.wrap("vector_distance".getBytes()))).toString();
+
+ System.out.println("ID: " + id + ", Content: " + content + ", Distance: " + distance);
+ });
+ // >>> ID: jdoc:1, Content: "That is a very happy person", Distance: 0.628328084946
+ // >>> ID: jdoc:2, Content: "That is a happy dog", Distance: 0.895147025585
+ // >>> ID: jdoc:3, Content: "Today is a sunny day", Distance: 1.49569523335
+ // REMOVE_START
+ assertThat(result.getCount()).isEqualTo(3);
+ // REMOVE_END
+ });
+ // STEP_END
+ jsonQuery.block();
+ } finally {
+ redisClient.shutdown();
+ }
+ }
+
+}