Commit e3dc389

Update Blog “part-8-agentic-ai-and-qdrant-building-semantic-memory-with-mcp-protocol”

1 parent 55a671f commit e3dc389

content/blog/part-8-agentic-ai-and-qdrant-building-semantic-memory-with-mcp-protocol.md

Lines changed: 14 additions & 13 deletions
@@ -1,5 +1,5 @@
 ---
-title: "Part 8: Agentic AI and Qdrant: Building Semantic Memory with MCP Protocol"
+title: "Part 8: Agentic AI and Qdrant: Building semantic Memory with MCP protocol"
 date: 2025-07-21T10:50:25.839Z
 author: Dinesh R Singh
 authorimage: /img/dinesh-192-192.jpg
@@ -35,7 +35,7 @@ This is solved by wrapping Qdrant inside an MCP server, giving agents a semantic

 ### Architecture Overview

-```
+```cwl
 [LLM Agent]
 |
 |-- [MCP Client]
@@ -60,13 +60,13 @@ Imagine an AI assistant answering support queries.

 ### Step 1: Launch Qdrant MCP Server

-```
+```cwl
 export COLLECTION_NAME="support-tickets"
 export QDRANT_LOCAL_PATH="./qdrant_local_db"
 export EMBEDDING_MODEL="sentence-transformers/all-MiniLM-L6-v2"
 ```

-```
+```cwl
 uvx mcp-server-qdrant --transport sse
 ```

@@ -78,7 +78,7 @@ uvx mcp-server-qdrant --transport sse

 ### Step 2: Connect the MCP Client

-```
+```python
 from mcp import ClientSession, StdioServerParameters
 from mcp.client.stdio import stdio_client
 async def main():
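The hunk above tags the client-connection snippet as Python. Outside the diff, the shape of that initialize-then-list-tools handshake can be sketched with a stand-in session object; `FakeSession` and its methods are hypothetical illustrations, not part of the real `mcp` package:

```python
import asyncio

class FakeSession:
    """Hypothetical stand-in for mcp.ClientSession, used only to
    illustrate the initialize -> list_tools handshake shape."""

    async def initialize(self):
        # The real session negotiates capabilities with the MCP server;
        # here we just record the tool names the Qdrant server exposes.
        self._tools = ["qdrant-store", "qdrant-find"]

    async def list_tools(self):
        return self._tools

async def main():
    session = FakeSession()
    await session.initialize()
    return await session.list_tools()

tools = asyncio.run(main())
print(tools)  # → ['qdrant-store', 'qdrant-find']
```

The real flow differs only in that the session is created inside `stdio_client(...)` / `ClientSession(...)` context managers, as the diff shows.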
@@ -93,21 +93,21 @@ server_params = StdioServerParameters(
 )
 ```

-```
+```python
 async with stdio_client(server_params) as (read, write):
     async with ClientSession(read, write) as session:
         await session.initialize()
         tools = await session.list_tools()
         print(tools)
 ```

-```
+```cwl
 Expected Output: Lists tools like qdrant-store, qdrant-find
 ```

 ### Step 3: Ingest a New Memory

-```
+```python
 ticket_info = "Order #1234 was delayed due to heavy rainfall in transit zone."
 result = await session.call_tool("qdrant-store", arguments={
 "information": ticket_info,
@@ -119,7 +119,7 @@ This stores an embedded version of the text in Qdrant.

 ### Step 4: Perform a Semantic Search

-```
+```python
 query = "Why was order 1234 delayed?"
 search_response = await session.call_tool("qdrant-find", arguments={
 "query": "order 1234 delay"
@@ -128,7 +128,7 @@ search_response = await session.call_tool("qdrant-find", arguments={

 ## Example Output:

-```
+```cwl
 [
   {
 "content": "Order #1234 was delayed due to heavy rainfall in transit zone.",
@@ -139,7 +139,7 @@ search_response = await session.call_tool("qdrant-find", arguments={

 ### Step 5: Use with LLM

-```
+```python
 import openai
 context = "\n".join(\[r["content"] for r in search_response])
 prompt = f"""
@@ -156,8 +156,9 @@ messages=[{"role": "user", "content": prompt}]
 print(response\["choices"]\[0]\["message"]\["content"])
 ```

-```
-Final Answer:
+### Final Answer:
+
+```cwl
 "Order #1234 was delayed due to heavy rainfall in the transit zone."
 ```

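Step 5 in the diff feeds retrieved memories to an LLM. The prompt-assembly part can be shown standalone; `search_response` is hard-coded here to stand in for the `qdrant-find` result, and the `openai` call is omitted so the sketch has no external dependencies:

```python
# Hard-coded stand-in for the qdrant-find search results.
search_response = [
    {"content": "Order #1234 was delayed due to heavy rainfall in transit zone."},
]

# Join retrieved memories into a context block, then build the prompt
# that would be sent as the user message to the chat model.
context = "\n".join(r["content"] for r in search_response)
prompt = f"""Use the context below to answer the customer's question.

Context:
{context}

Question: Why was order 1234 delayed?
Answer:"""

print(prompt)
```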