## Investigation: GraphRAG Hybrid Support + NODES 2025 Announcements

### 🎉 NODES 2025 GraphRAG Announcements

Neo4j made significant GraphRAG announcements for NODES 2025:

**1. Neo4j Aura Agent - Early Access Program (October 2, 2025)**

Neo4j launched Aura Agent as a no-code/low-code GraphRAG platform:
Quote from blog:
**2. GraphRAG Pattern Catalog - https://graphrag.com/**

Neo4j published a comprehensive pattern catalog:
**3. neo4j-graphrag Python Package v1.10.0 (September 4, 2025)**

Official package with long-term support:
### ✅ Answers to Your Questions

**1. Database Flexibility**

**YES** - Works seamlessly across all Neo4j deployment types:

```python
from neo4j import GraphDatabase

# Local Docker (Development)
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Neo4j Aura (Production)
driver = GraphDatabase.driver("neo4j+s://6b870b04.databases.neo4j.io", auth=(user, password))

# Neo4j Enterprise (On-Premises)
driver = GraphDatabase.driver("neo4j://enterprise-server:7687", auth=(user, password))

# Same GraphRAG code works with all three
retriever = VectorCypherRetriever(driver=driver, ...)
```

Confirmed: The package uses the standard Neo4j Python driver, so it's 100% environment-agnostic.

**2. LLM Flexibility**

**YES** - Supports multiple LLM backends via a plugin architecture.

Built-in Support:
Custom LLM Implementation (for BitNet):

```python
import requests

from neo4j_graphrag.llm import LLMInterface, LLMResponse

class BitNetLLM(LLMInterface):
    def __init__(self, endpoint: str = "http://localhost:8001"):
        super().__init__(model_name="bitnet-b1.58-2b4t")
        self.endpoint = endpoint

    def invoke(self, input: str, **kwargs) -> LLMResponse:
        response = requests.post(
            f"{self.endpoint}/generate",
            json={"prompt": input, "max_tokens": 512},
        )
        return LLMResponse(content=response.json()["text"])

    async def ainvoke(self, input: str, **kwargs) -> LLMResponse:
        # LLMInterface also expects an async variant; delegate to invoke here
        return self.invoke(input, **kwargs)

# Use it
llm = BitNetLLM()
rag = GraphRAG(retriever=retriever, llm=llm)
```

Environment-Aware Factory Pattern:

```python
import os

def get_llm():
    if os.getenv("DEPLOYMENT_ENV") == "production":
        return AzureOpenAILLM(
            model_name="gpt-4o-mini",
            azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
        )
    else:
        return BitNetLLM(endpoint="http://localhost:8001")
```

**3. Embedding Model Portability**

**YES** - Full flexibility with embeddings:

Local SentenceTransformers (Development):

```python
from neo4j_graphrag.embeddings import SentenceTransformerEmbeddings

embedder = SentenceTransformerEmbeddings(model="all-MiniLM-L6-v2")
retriever = VectorCypherRetriever(driver=driver, embedder=embedder, ...)
```

Azure OpenAI Embeddings (Production):

```python
import os

from neo4j_graphrag.embeddings import AzureOpenAIEmbeddings

embedder = AzureOpenAIEmbeddings(
    model="text-embedding-3-small",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_KEY"),
)
retriever = VectorCypherRetriever(driver=driver, embedder=embedder, ...)
```

Mixed Environments: ✅ Yes, possible!
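One practical caveat when mixing local and cloud embedders against the same database: a vector index is created with a fixed dimensionality, and every embedder that writes to or queries it must match. A minimal self-contained check (the dimension figures are the models' published defaults; `EMBEDDER_DIMS` and `compatible_with_index` are hypothetical helpers for illustration, not part of neo4j-graphrag):

```python
# Published output dimensions of the two embedding models discussed above:
# all-MiniLM-L6-v2 emits 384-dim vectors, text-embedding-3-small emits
# 1536-dim vectors by default.
EMBEDDER_DIMS = {
    "all-MiniLM-L6-v2": 384,
    "text-embedding-3-small": 1536,
}

def compatible_with_index(model: str, index_dimensions: int) -> bool:
    """A vector index only accepts vectors matching its configured dimension."""
    return EMBEDDER_DIMS.get(model) == index_dimensions

# An index built for local 384-dim embeddings cannot be queried with the
# 1536-dim cloud model (or vice versa) without re-embedding:
print(compatible_with_index("all-MiniLM-L6-v2", 384))        # True
print(compatible_with_index("text-embedding-3-small", 384))  # False
```

So "mixed environments" here means mixing *where* embeddings are computed, not mixing models of different output sizes against one index.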
**Important:** The Neo4j vector index stores precomputed vectors; it doesn't care where the embeddings came from originally!

**4. Entity Extraction Cost Control**

**YES** - Full control over where entity extraction happens:

Local Entity Extraction + Cloud Sync:

```python
import asyncio

# Step 1: Extract entities locally (on-premises)
local_pipeline = SimpleKGPipeline(
    llm=OllamaLLM(model_name="llama3"),  # Free, local
    driver=local_driver,
    entities=["Technology", "Concept", "Author"],
)

# Process PDFs locally (run_async is a coroutine, so it must be awaited)
for pdf in sensitive_documents:
    asyncio.run(local_pipeline.run_async(file_path=pdf))

# Step 2: Sync graph structure to cloud (Neo4j Aura)
# Export graph, push only structure (no sensitive text)
with local_driver.session() as session:
    entities = session.run("MATCH (e:Entity) RETURN e")
    # Push to Aura (only entity names, relationships)
```

Hybrid Approach:

```python
def extract_entities(document, is_sensitive=False):
    if is_sensitive:
        # Local processing with Ollama/BitNet
        llm = OllamaLLM(model_name="llama3")
        driver = local_driver
    else:
        # Cloud processing with Azure OpenAI
        llm = AzureOpenAILLM(model_name="gpt-4o-mini")
        driver = aura_driver

    pipeline = SimpleKGPipeline(llm=llm, driver=driver, ...)
    asyncio.run(pipeline.run_async(text=document.content))
```

Cost Control:
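To make the cost trade-off concrete, here is a back-of-the-envelope estimator. The per-token prices and the output ratio below are illustrative assumptions, not current Azure OpenAI pricing; local Ollama/BitNet extraction incurs no per-token API cost at all:

```python
def extraction_cost_usd(
    num_docs: int,
    tokens_per_doc: int,
    usd_per_1k_input: float,
    usd_per_1k_output: float,
    output_ratio: float = 0.25,  # assumed: extracted entity JSON is ~25% of input size
) -> float:
    """Rough API cost of running entity extraction over a corpus in the cloud."""
    input_tokens = num_docs * tokens_per_doc
    output_tokens = input_tokens * output_ratio
    return (input_tokens / 1000) * usd_per_1k_input + (output_tokens / 1000) * usd_per_1k_output

# 1,000 documents of ~2,000 tokens at hypothetical $0.00015 / $0.0006 per 1K tokens:
print(f"${extraction_cost_usd(1_000, 2_000, 0.00015, 0.0006):.2f}")  # $0.60
```

Routing the sensitive or bulk documents through the local path drives this number to zero, which is the whole point of the hybrid approach above.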
**5. Network & Connectivity (Air-Gapped Support)**

**YES** - Fully offline capable.

Required Components (all can run offline):
Zero External Dependencies Configuration:

```python
# 1. Local Neo4j
driver = GraphDatabase.driver("bolt://localhost:7687")

# 2. Local embeddings (models cached in ~/.cache/torch)
embedder = SentenceTransformerEmbeddings(model="all-MiniLM-L6-v2")

# 3. Local LLM
llm = OllamaLLM(model_name="llama3", base_url="http://localhost:11434")

# 4. GraphRAG pipeline (100% local)
retriever = VectorCypherRetriever(driver=driver, embedder=embedder, ...)
rag = GraphRAG(retriever=retriever, llm=llm)

# No internet required after initial setup!
```

Air-Gapped Setup Process:
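Before wiring up the fully local pipeline, it can be useful to verify that every offline dependency is actually reachable. A small preflight sketch (the host/port pairs mirror the defaults used above; the `preflight` helper is hypothetical, not part of neo4j-graphrag):

```python
import socket

# Default local endpoints from the configuration above.
REQUIRED_SERVICES = {
    "neo4j": ("localhost", 7687),
    "ollama": ("localhost", 11434),
}

def preflight(services, timeout=1.0):
    """Return {service: reachable} by attempting a plain TCP connection to each endpoint."""
    status = {}
    for name, (host, port) in services.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                status[name] = True
        except OSError:
            status[name] = False
    return status

# Everything in this stack is a plain TCP service, so a connect() is enough
# to confirm the air-gapped box is self-sufficient before starting ingestion.
print(preflight(REQUIRED_SERVICES))
```

Running this at startup gives a fast, dependency-free failure instead of a retriever timeout deep inside the pipeline.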
### 🔗 Additional Resources

Official Documentation:

Books & Guides:

NODES 2025:

### 🎯 Summary: Perfect Fit for Your Use Case

Based on your requirements:
Recommendation: Neo4j GraphRAG is ideal for hybrid deployments. It's designed exactly for your use case!

### 📦 Quick Start for Our Project

```bash
# Install
pip install "neo4j-graphrag[openai,sentence-transformers]==1.10.0"

# Test with your existing Aura instance
python -c "
from neo4j import GraphDatabase
from neo4j_graphrag.retrievers import VectorCypherRetriever
from neo4j_graphrag.embeddings import SentenceTransformerEmbeddings

driver = GraphDatabase.driver(
    'neo4j+s://6b870b04.databases.neo4j.io',
    auth=('neo4j', 'your-password')
)
embedder = SentenceTransformerEmbeddings(model='all-MiniLM-L6-v2')
retriever = VectorCypherRetriever(
    driver=driver,
    index_name='text_embeddings',
    retrieval_query='RETURN node.text AS text, score',  # required; assumes nodes carry a text property
    embedder=embedder
)
results = retriever.search(query_text='What is Neo4j?', top_k=3)
print(f'Found {len(results.items)} results')
"
```
## Neo4j GraphRAG for Hybrid Environments (Azure Cloud + Local/On-Prem)

### 🎯 Question

Is the Neo4j GraphRAG Python package suitable for hybrid deployments that combine:
### 📋 Context

Our project (neo4j-agentframework) currently implements a flexible hybrid architecture:

**Current Setup**

**Key Requirements**

### 🤔 Specific Questions
**1. Database Flexibility**
Can GraphRAG work seamlessly with:
Use Case: Developer tests locally with Docker, deploys same code to Azure Aura production.
**2. LLM Flexibility**
Does GraphRAG support multiple LLM backends for hybrid scenarios:
Current Challenge: We use Azure AI Foundry in cloud, BitNet locally. Can GraphRAG adapt?
**3. Embedding Model Portability**
We currently use SentenceTransformers (all-MiniLM-L6-v2) locally to avoid API costs:
Question: Can GraphRAG `VectorCypherRetriever` work with:

**4. Entity Extraction Cost Control**
Our concern about cloud costs:
Question: Can we run entity extraction locally and sync the resulting knowledge graph to cloud Neo4j?
**5. Network & Connectivity**
For air-gapped or restricted environments:
Critical for: Government, healthcare, financial services with data residency requirements.
### 💡 Proposed Hybrid Architecture

### 📊 Why This Matters

**Cost Savings**

**Compliance & Sovereignty**

**Developer Experience**

### 🔗 References

### 🤝 Community Input Welcome
Has anyone successfully deployed GraphRAG in hybrid environments? Looking for:
Tags: hybrid-cloud, on-premises, azure, data-sovereignty, cost-optimization