Commit 01f4f36 (1 parent: ca84339)

DA-767: Modified: Rename CouchbaseVectorStore to CouchbaseSearchVectorStore

17 files changed (+67, -67 lines)
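A rename like this breaks any downstream code that still imports the old name. One common way to soften such a rename is to keep the old name as a deprecated alias for a release or two. The sketch below is a hypothetical illustration of that pattern only; it is not code from langchain_couchbase, and the constructor parameters are taken loosely from the diffs below.

```python
import warnings


class CouchbaseSearchVectorStore:
    """Stand-in for the renamed class (illustrative only, not the real API)."""

    def __init__(self, cluster=None, bucket_name=None, scope_name=None,
                 collection_name=None, embedding=None, index_name=None):
        self.bucket_name = bucket_name
        self.scope_name = scope_name
        self.index_name = index_name


class CouchbaseVectorStore(CouchbaseSearchVectorStore):
    """Hypothetical back-compat shim: old name warns, then delegates."""

    def __init__(self, *args, **kwargs):
        warnings.warn(
            "CouchbaseVectorStore is deprecated; use CouchbaseSearchVectorStore",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)
```

With a shim like this, existing tutorial code would keep working while emitting a `DeprecationWarning` that points users at the new name.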

tutorial/markdown/generated/vector-search-cookbook/CouchbaseStorage_Demo.md

Lines changed: 2 additions & 2 deletions
@@ -140,7 +140,7 @@ from couchbase.cluster import Cluster
 from couchbase.options import ClusterOptions
 from couchbase.auth import PasswordAuthenticator
 from couchbase.diagnostics import PingState, ServiceType
-from langchain_couchbase.vectorstores import CouchbaseVectorStore
+from langchain_couchbase.vectorstores import CouchbaseSearchVectorStore
 from langchain_openai import OpenAIEmbeddings, ChatOpenAI
 import time
 import json
@@ -380,7 +380,7 @@ class CouchbaseStorage(RAGStorage):
         self.index_name = os.getenv('INDEX_NAME', 'vector_search_crew')

         # Initialize vector store
-        self.vector_store = CouchbaseVectorStore(
+        self.vector_store = CouchbaseSearchVectorStore(
             cluster=self.cluster,
             bucket_name=self.bucket_name,
             scope_name=self.scope_name,
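The same two-line substitution recurs across all 17 files, so it lends itself to a mechanical rewrite. A minimal sketch of such a rename pass might look like the following; the helper name is ours, and whole-word matching ensures the new name is never re-matched on a second pass.

```python
import re

OLD_NAME = "CouchbaseVectorStore"
NEW_NAME = "CouchbaseSearchVectorStore"

# \b restricts matches to the whole identifier, so longer identifiers
# that merely contain the old name as a fragment are left alone.
_PATTERN = re.compile(rf"\b{OLD_NAME}\b")


def rename_class(text: str) -> str:
    """Rewrite every whole-word use of the old class name."""
    return _PATTERN.sub(NEW_NAME, text)
```

Applied file by file, this reproduces the import and constructor changes shown in the hunks above, and running it twice is a no-op.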

tutorial/markdown/generated/vector-search-cookbook/RAG_with_Couchbase_and_AzureOpenAI.md

Lines changed: 3 additions & 3 deletions
@@ -93,7 +93,7 @@ from langchain_core.output_parsers import StrOutputParser
 from langchain_core.prompts import ChatPromptTemplate
 from langchain_core.runnables import RunnablePassthrough
 from langchain_couchbase.cache import CouchbaseCache
-from langchain_couchbase.vectorstores import CouchbaseVectorStore
+from langchain_couchbase.vectorstores import CouchbaseSearchVectorStore
 from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings
 from tqdm import tqdm
 ```
@@ -404,7 +404,7 @@ The vector store is set up to manage the embeddings created in the previous step

 ```python
 try:
-    vector_store = CouchbaseVectorStore(
+    vector_store = CouchbaseSearchVectorStore(
         cluster=cluster,
         bucket_name=CB_BUCKET_NAME,
         scope_name=SCOPE_NAME,
@@ -495,7 +495,7 @@ except Exception as e:
 # Perform Semantic Search
 Semantic search in Couchbase involves converting queries and documents into vector representations using an embeddings model. These vectors capture the semantic meaning of the text and are stored directly in Couchbase. When a query is made, Couchbase performs a similarity search by comparing the query vector against the stored document vectors. The similarity metric used for this comparison is configurable, allowing flexibility in how the relevance of documents is determined. Common metrics include cosine similarity, Euclidean distance, or dot product, but other metrics can be implemented based on specific use cases. Different embedding models like BERT, Word2Vec, or GloVe can also be used depending on the application's needs, with the vectors generated by these models stored and searched within Couchbase itself.

-In the provided code, the search process begins by recording the start time, followed by executing the similarity_search_with_score method of the CouchbaseVectorStore. This method searches Couchbase for the most relevant documents based on the vector similarity to the query. The search results include the document content and a similarity score that reflects how closely each document aligns with the query in the defined semantic space. The time taken to perform this search is then calculated and logged, and the results are displayed, showing the most relevant documents along with their similarity scores. This approach leverages Couchbase as both a storage and retrieval engine for vector data, enabling efficient and scalable semantic searches. The integration of vector storage and search capabilities within Couchbase allows for sophisticated semantic search operations without relying on external services for vector storage or comparison.
+In the provided code, the search process begins by recording the start time, followed by executing the similarity_search_with_score method of the CouchbaseSearchVectorStore. This method searches Couchbase for the most relevant documents based on the vector similarity to the query. The search results include the document content and a similarity score that reflects how closely each document aligns with the query in the defined semantic space. The time taken to perform this search is then calculated and logged, and the results are displayed, showing the most relevant documents along with their similarity scores. This approach leverages Couchbase as both a storage and retrieval engine for vector data, enabling efficient and scalable semantic searches. The integration of vector storage and search capabilities within Couchbase allows for sophisticated semantic search operations without relying on external services for vector storage or comparison.


 ```python
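The similarity comparison those tutorial passages describe can be illustrated in plain Python. This is a toy sketch of cosine-similarity ranking over in-memory vectors, not the actual Couchbase search implementation; the function name merely echoes the LangChain method discussed above.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def similarity_search_with_score(query_vec, docs, k=2):
    """Rank (text, vector) pairs by similarity to the query; return top k."""
    scored = [(text, cosine_similarity(query_vec, vec)) for text, vec in docs]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]
```

Swapping `cosine_similarity` for a Euclidean-distance or dot-product scorer mirrors the configurable-metric point made in the text.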

tutorial/markdown/generated/vector-search-cookbook/RAG_with_Couchbase_and_Bedrock.md

Lines changed: 3 additions & 3 deletions
@@ -97,7 +97,7 @@ from langchain_core.output_parsers import StrOutputParser
 from langchain_core.prompts.chat import ChatPromptTemplate
 from langchain_core.runnables import RunnablePassthrough
 from langchain_couchbase.cache import CouchbaseCache
-from langchain_couchbase.vectorstores import CouchbaseVectorStore
+from langchain_couchbase.vectorstores import CouchbaseSearchVectorStore
 from tqdm import tqdm
 ```

@@ -386,7 +386,7 @@ A vector store is where we'll keep our embeddings. Unlike the FTS index, which i

 ```python
 try:
-    vector_store = CouchbaseVectorStore(
+    vector_store = CouchbaseSearchVectorStore(
         cluster=cluster,
         bucket_name=CB_BUCKET_NAME,
         scope_name=SCOPE_NAME,
@@ -554,7 +554,7 @@ except Exception as e:
 # Perform Semantic Search
 Semantic search in Couchbase involves converting queries and documents into vector representations using an embeddings model. These vectors capture the semantic meaning of the text and are stored directly in Couchbase. When a query is made, Couchbase performs a similarity search by comparing the query vector against the stored document vectors. The similarity metric used for this comparison is configurable, allowing flexibility in how the relevance of documents is determined. Common metrics include cosine similarity, Euclidean distance, or dot product, but other metrics can be implemented based on specific use cases. Different embedding models like BERT, Word2Vec, or GloVe can also be used depending on the application's needs, with the vectors generated by these models stored and searched within Couchbase itself.

-In the provided code, the search process begins by recording the start time, followed by executing the similarity_search_with_score method of the CouchbaseVectorStore. This method searches Couchbase for the most relevant documents based on the vector similarity to the query. The search results include the document content and a similarity score that reflects how closely each document aligns with the query in the defined semantic space. The time taken to perform this search is then calculated and logged, and the results are displayed, showing the most relevant documents along with their similarity scores. This approach leverages Couchbase as both a storage and retrieval engine for vector data, enabling efficient and scalable semantic searches. The integration of vector storage and search capabilities within Couchbase allows for sophisticated semantic search operations without relying on external services for vector storage or comparison.
+In the provided code, the search process begins by recording the start time, followed by executing the similarity_search_with_score method of the CouchbaseSearchVectorStore. This method searches Couchbase for the most relevant documents based on the vector similarity to the query. The search results include the document content and a similarity score that reflects how closely each document aligns with the query in the defined semantic space. The time taken to perform this search is then calculated and logged, and the results are displayed, showing the most relevant documents along with their similarity scores. This approach leverages Couchbase as both a storage and retrieval engine for vector data, enabling efficient and scalable semantic searches. The integration of vector storage and search capabilities within Couchbase allows for sophisticated semantic search operations without relying on external services for vector storage or comparison.


 ```python

tutorial/markdown/generated/vector-search-cookbook/RAG_with_Couchbase_and_Claude(by_Anthropic).md

Lines changed: 3 additions & 3 deletions
@@ -98,7 +98,7 @@ from langchain_core.prompts.chat import (ChatPromptTemplate,
                                          SystemMessagePromptTemplate)
 from langchain_core.runnables import RunnablePassthrough
 from langchain_couchbase.cache import CouchbaseCache
-from langchain_couchbase.vectorstores import CouchbaseVectorStore
+from langchain_couchbase.vectorstores import CouchbaseSearchVectorStore
 from langchain_openai import OpenAIEmbeddings
 ```

@@ -392,7 +392,7 @@ A vector store is where we'll keep our embeddings. Unlike the FTS index, which i

 ```python
 try:
-    vector_store = CouchbaseVectorStore(
+    vector_store = CouchbaseSearchVectorStore(
         cluster=cluster,
         bucket_name=CB_BUCKET_NAME,
         scope_name=SCOPE_NAME,
@@ -540,7 +540,7 @@ except Exception as e:
 # Perform Semantic Search
 Semantic search in Couchbase involves converting queries and documents into vector representations using an embeddings model. These vectors capture the semantic meaning of the text and are stored directly in Couchbase. When a query is made, Couchbase performs a similarity search by comparing the query vector against the stored document vectors. The similarity metric used for this comparison is configurable, allowing flexibility in how the relevance of documents is determined.

-In the provided code, the search process begins by recording the start time, followed by executing the similarity_search_with_score method of the CouchbaseVectorStore. This method searches Couchbase for the most relevant documents based on the vector similarity to the query. The search results include the document content and a similarity score that reflects how closely each document aligns with the query in the defined semantic space. The time taken to perform this search is then calculated and logged, and the results are displayed, showing the most relevant documents along with their similarity scores. This approach leverages Couchbase as both a storage and retrieval engine for vector data, enabling efficient and scalable semantic searches. The integration of vector storage and search capabilities within Couchbase allows for sophisticated semantic search operations without relying on external services for vector storage or comparison.
+In the provided code, the search process begins by recording the start time, followed by executing the similarity_search_with_score method of the CouchbaseSearchVectorStore. This method searches Couchbase for the most relevant documents based on the vector similarity to the query. The search results include the document content and a similarity score that reflects how closely each document aligns with the query in the defined semantic space. The time taken to perform this search is then calculated and logged, and the results are displayed, showing the most relevant documents along with their similarity scores. This approach leverages Couchbase as both a storage and retrieval engine for vector data, enabling efficient and scalable semantic searches. The integration of vector storage and search capabilities within Couchbase allows for sophisticated semantic search operations without relying on external services for vector storage or comparison.


 ```python

tutorial/markdown/generated/vector-search-cookbook/RAG_with_Couchbase_and_Cohere.md

Lines changed: 3 additions & 3 deletions
@@ -96,7 +96,7 @@ from langchain_core.output_parsers import StrOutputParser
 from langchain_core.prompts import ChatPromptTemplate
 from langchain_core.runnables import RunnablePassthrough
 from langchain_couchbase.cache import CouchbaseCache
-from langchain_couchbase.vectorstores import CouchbaseVectorStore
+from langchain_couchbase.vectorstores import CouchbaseSearchVectorStore
 ```

 # Setup Logging
@@ -386,7 +386,7 @@ The vector store is set up to manage the embeddings created in the previous step

 ```python
 try:
-    vector_store = CouchbaseVectorStore(
+    vector_store = CouchbaseSearchVectorStore(
         cluster=cluster,
         bucket_name=CB_BUCKET_NAME,
         scope_name=SCOPE_NAME,
@@ -531,7 +531,7 @@ except Exception as e:
 # Perform Semantic Search
 Semantic search in Couchbase involves converting queries and documents into vector representations using an embeddings model. These vectors capture the semantic meaning of the text and are stored directly in Couchbase. When a query is made, Couchbase performs a similarity search by comparing the query vector against the stored document vectors. The similarity metric used for this comparison is configurable, allowing flexibility in how the relevance of documents is determined.

-In the provided code, the search process begins by recording the start time, followed by executing the similarity_search_with_score method of the CouchbaseVectorStore. This method searches Couchbase for the most relevant documents based on the vector similarity to the query. The search results include the document content and a similarity score that reflects how closely each document aligns with the query in the defined semantic space. The time taken to perform this search is then calculated and logged, and the results are displayed, showing the most relevant documents along with their similarity scores. This approach leverages Couchbase as both a storage and retrieval engine for vector data, enabling efficient and scalable semantic searches. The integration of vector storage and search capabilities within Couchbase allows for sophisticated semantic search operations without relying on external services for vector storage or comparison.
+In the provided code, the search process begins by recording the start time, followed by executing the similarity_search_with_score method of the CouchbaseSearchVectorStore. This method searches Couchbase for the most relevant documents based on the vector similarity to the query. The search results include the document content and a similarity score that reflects how closely each document aligns with the query in the defined semantic space. The time taken to perform this search is then calculated and logged, and the results are displayed, showing the most relevant documents along with their similarity scores. This approach leverages Couchbase as both a storage and retrieval engine for vector data, enabling efficient and scalable semantic searches. The integration of vector storage and search capabilities within Couchbase allows for sophisticated semantic search operations without relying on external services for vector storage or comparison.


 ```python

tutorial/markdown/generated/vector-search-cookbook/RAG_with_Couchbase_and_CrewAI.md

Lines changed: 2 additions & 2 deletions
@@ -98,7 +98,7 @@ from couchbase.options import ClusterOptions
 from datasets import load_dataset
 from dotenv import load_dotenv
 from crewai.tools import tool
-from langchain_couchbase.vectorstores import CouchbaseVectorStore
+from langchain_couchbase.vectorstores import CouchbaseSearchVectorStore
 from langchain_openai import ChatOpenAI, OpenAIEmbeddings

 from crewai import Agent, Crew, Process, Task
@@ -440,7 +440,7 @@ A vector store is where we'll keep our embeddings. Unlike the FTS index, which i

 ```python
 # Setup vector store
-vector_store = CouchbaseVectorStore(
+vector_store = CouchbaseSearchVectorStore(
     cluster=cluster,
     bucket_name=CB_BUCKET_NAME,
     scope_name=SCOPE_NAME,

tutorial/markdown/generated/vector-search-cookbook/RAG_with_Couchbase_and_Jina_AI.md

Lines changed: 3 additions & 3 deletions
@@ -97,7 +97,7 @@ from langchain_core.prompts import ChatPromptTemplate
 from langchain_core.prompts.chat import ChatPromptTemplate
 from langchain_core.runnables import RunnablePassthrough
 from langchain_couchbase.cache import CouchbaseCache
-from langchain_couchbase.vectorstores import CouchbaseVectorStore
+from langchain_couchbase.vectorstores import CouchbaseSearchVectorStore
 ```

 # Setup Logging
@@ -394,7 +394,7 @@ A vector store is where we'll keep our embeddings. Unlike the FTS index, which i

 ```python
 try:
-    vector_store = CouchbaseVectorStore(
+    vector_store = CouchbaseSearchVectorStore(
         cluster=cluster,
         bucket_name=CB_BUCKET_NAME,
         scope_name=SCOPE_NAME,
@@ -559,7 +559,7 @@ except Exception as e:
 ## Perform Semantic Search
 Semantic search in Couchbase involves converting queries and documents into vector representations using an embeddings model. These vectors capture the semantic meaning of the text and are stored directly in Couchbase. When a query is made, Couchbase performs a similarity search by comparing the query vector against the stored document vectors. The similarity metric used for this comparison is configurable, allowing flexibility in how the relevance of documents is determined.

-In the provided code, the search process begins by recording the start time, followed by executing the similarity_search_with_score method of the CouchbaseVectorStore. This method searches Couchbase for the most relevant documents based on the vector similarity to the query. The search results include the document content and a similarity score that reflects how closely each document aligns with the query in the defined semantic space. The time taken to perform this search is then calculated and logged, and the results are displayed, showing the most relevant documents along with their similarity scores. This approach leverages Couchbase as both a storage and retrieval engine for vector data, enabling efficient and scalable semantic searches. The integration of vector storage and search capabilities within Couchbase allows for sophisticated semantic search operations without relying on external services for vector storage or comparison.
+In the provided code, the search process begins by recording the start time, followed by executing the similarity_search_with_score method of the CouchbaseSearchVectorStore. This method searches Couchbase for the most relevant documents based on the vector similarity to the query. The search results include the document content and a similarity score that reflects how closely each document aligns with the query in the defined semantic space. The time taken to perform this search is then calculated and logged, and the results are displayed, showing the most relevant documents along with their similarity scores. This approach leverages Couchbase as both a storage and retrieval engine for vector data, enabling efficient and scalable semantic searches. The integration of vector storage and search capabilities within Couchbase allows for sophisticated semantic search operations without relying on external services for vector storage or comparison.

 ### Note on Retry Mechanism
 The search implementation includes a retry mechanism to handle rate limiting and API errors gracefully. If a rate limit error (HTTP 429) is encountered, the system will automatically retry the request up to 3 times with exponential backoff, waiting 2 seconds initially and doubling the wait time between each retry. This helps manage API usage limits while maintaining service reliability. For other types of errors, such as payment requirements or general failures, appropriate error messages and troubleshooting steps are provided to help diagnose and resolve the issue.
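The retry behaviour the Jina AI tutorial describes (up to 3 retries, 2-second initial wait, doubling each time) can be sketched generically. The exception class and the injectable `sleep` parameter below are illustrative assumptions, not the tutorial's actual code.

```python
import time


class RateLimitError(Exception):
    """Stand-in for an HTTP 429 rate-limit error (hypothetical)."""


def with_retries(fn, max_retries=3, initial_wait=2.0, sleep=time.sleep):
    """Call fn, retrying on RateLimitError with exponential backoff."""
    wait = initial_wait
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries:
                raise  # retries exhausted; surface the error to the caller
            sleep(wait)   # wait 2s, then 4s, then 8s, ...
            wait *= 2.0
```

Passing a no-op `sleep` makes the backoff schedule easy to unit-test; in production the default `time.sleep` applies the real delays.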
