docs: Updated README.md #550
Conversation
📝 Walkthrough

The README.md file was updated to revise the example usage of the Qdrant client with FastEmbed.

Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes
Actionable comments posted: 1
🧹 Nitpick comments (4)
README.md (4)
254-254: Clarify in-memory initialization.
Depending on the client version, the canonical way is often using the keyword argument for location, e.g., QdrantClient(location=":memory:"). Consider updating the comment to show the keyword form to avoid ambiguity with URL-based init.

```diff
-# client = QdrantClient(":memory:") # For experimentation
+# client = QdrantClient(location=":memory:") # For experimentation
```
255-261: Minor consistency nit: consider naming and payload duplication trade-off.
Using payload to store the original text is good for result rendering. For readability, consider aligning naming so docs mirrors payload:

```diff
-model_name = "sentence-transformers/all-MiniLM-L6-v2"
-payload = [
-    {"document": "Qdrant has Langchain integrations", "source": "Langchain-docs", },
-    {"document": "Qdrant also has Llama Index integrations", "source": "LlamaIndex-docs"},
-]
-docs = [models.Document(text=data["document"], model=model_name) for data in payload]
+model_name = "sentence-transformers/all-MiniLM-L6-v2"
+payload = [
+    {"text": "Qdrant has Langchain integrations", "source": "Langchain-docs"},
+    {"text": "Qdrant also has Llama Index integrations", "source": "LlamaIndex-docs"},
+]
+docs = [models.Document(text=it["text"], model=model_name) for it in payload]
```

If you prefer to keep the key as "document", consider a brief comment explaining it is intentionally duplicated into payload for display.
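The key-alignment idea behind this suggestion can be sketched without qdrant-client installed. In the toy snippet below a plain dict stands in for models.Document (the stand-in and its fields are illustrative, not the real API); the point is that one comprehension reads the text from the same records that will later be stored as payload, so the two stay in sync:

```python
# Sketch of the suggested alignment; a plain dict stands in for
# models.Document so the snippet runs without qdrant-client installed.
model_name = "sentence-transformers/all-MiniLM-L6-v2"

payload = [
    {"text": "Qdrant has Langchain integrations", "source": "Langchain-docs"},
    {"text": "Qdrant also has Llama Index integrations", "source": "LlamaIndex-docs"},
]

# Build the documents from the same records that become payload: the "text"
# key is read once per record, so docs and payload cannot drift apart.
docs = [{"text": it["text"], "model": model_name} for it in payload]

for doc, meta in zip(docs, payload):
    assert doc["text"] == meta["text"]  # docs mirror payload by construction
```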
263-267: Make the example copy-paste safe and simplify vector params.
- Consider recreate_collection to avoid errors on repeated runs.
- Some client versions provide a helper to derive vector params; if available, it makes the snippet simpler.
```diff
-client.create_collection(
-    "demo_collection",
-    vectors_config=models.VectorParams(
-        size=client.get_embedding_size(model_name), distance=models.Distance.COSINE)
-)
+client.recreate_collection(
+    collection_name="demo_collection",
+    vectors_config=models.VectorParams(
+        size=client.get_embedding_size(model_name),
+        distance=models.Distance.COSINE,
+    ),
+)
```

Optionally (if supported in your client version):

```python
# vectors_config = client.get_fastembed_vector_params(model_name)
# client.recreate_collection("demo_collection", vectors_config=vectors_config)
```
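Why recreate makes the snippet re-runnable can be shown with a toy in-memory store (plain Python, not the qdrant-client API): create fails if the collection already exists, while recreate drops any existing collection first, so running the example twice is safe.

```python
# Toy illustration (not the qdrant-client API) of create vs recreate
# semantics for copy-paste examples.
collections: dict[str, dict] = {}

def create_collection(name: str, config: dict) -> None:
    # Fails on the second run: the collection already exists.
    if name in collections:
        raise ValueError(f"collection {name!r} already exists")
    collections[name] = config

def recreate_collection(name: str, config: dict) -> None:
    # Drop any existing collection first, then create it fresh.
    collections.pop(name, None)
    collections[name] = config

config = {"size": 384, "distance": "cosine"}
recreate_collection("demo_collection", config)
recreate_collection("demo_collection", config)  # safe to run again
```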
276-279: Consider showing limit to make output concise and predictable.
Adding limit illustrates ranked retrieval and keeps the output small in examples.

```diff
 search_result = client.query_points(
     collection_name="demo_collection",
-    query=models.Document(text="This is a query document", model=model_name)
+    query=models.Document(text="This is a query document", model=model_name),
+    limit=3,
 ).points
```
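Conceptually, a limit parameter in ranked retrieval means: score every candidate against the query, sort by similarity, and return only the top-k. The plain-Python sketch below (not the qdrant-client API; vectors and names are made up for illustration) shows that behavior with cosine similarity:

```python
import math

# Conceptual sketch of top-k retrieval with a `limit` parameter.
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = [1.0, 0.0]
candidates = {
    "doc-a": [0.9, 0.1],
    "doc-b": [0.0, 1.0],   # orthogonal to the query: worst match
    "doc-c": [0.7, 0.7],
    "doc-d": [1.0, 0.05],  # nearly parallel to the query: best match
}

limit = 3
ranked = sorted(candidates, key=lambda k: cosine(query, candidates[k]), reverse=True)
top = ranked[:limit]  # only the `limit` best matches are returned
```

With limit=3, the orthogonal "doc-b" is dropped and the output stays small and predictable, which is exactly the property the comment above argues for in README examples.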
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
README.md (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (15)
- GitHub Check: Python 3.13.x on macos-latest test
- GitHub Check: Python 3.13.x on windows-latest test
- GitHub Check: Python 3.13.x on ubuntu-latest test
- GitHub Check: Python 3.12.x on macos-latest test
- GitHub Check: Python 3.11.x on windows-latest test
- GitHub Check: Python 3.10.x on macos-latest test
- GitHub Check: Python 3.11.x on macos-latest test
- GitHub Check: Python 3.11.x on ubuntu-latest test
- GitHub Check: Python 3.12.x on windows-latest test
- GitHub Check: Python 3.12.x on ubuntu-latest test
- GitHub Check: Python 3.10.x on windows-latest test
- GitHub Check: Python 3.9.x on windows-latest test
- GitHub Check: Python 3.10.x on ubuntu-latest test
- GitHub Check: Python 3.9.x on ubuntu-latest test
- GitHub Check: Python 3.9.x on macos-latest test
🔇 Additional comments (1)
README.md (1)
249-249: Import usage looks correct and aligns with the newer model-aware client API.
Description
Updated the README.md as per https://github.com/qdrant/qdrant-client?tab=readme-ov-file#local-inference-with-fastembed.