Goal: run GraphDB, the FastAPI backend, and the Next.js frontend; load the unified demo KG and generate embeddings. Keep it simple.
- Docker & Docker Compose
- Python 3.12, `uv` (optional, recommended)
- (Optional) Ollama installed locally for embeddings (`nomic-embed-text`)
```bash
git clone https://github.com/<your-org>/GCPU_grape.git
cd GCPU_grape
```
Backend env:

```bash
cp apps/backend/.env.example apps/backend/.env
```

Edit `apps/backend/.env` and set your keys:

```bash
GCP_PROJECT_ID="your-id"
VERTEX_AI_LOCATION=us-central1
```

Ensure `CORS_ORIGINS` includes your frontend origin. If you need our cloud values for a staged demo, contact us.
Frontend env (only if the backend isn’t on localhost):

```bash
echo "NEXT_PUBLIC_API_URL=http://<backend-host>:8000" > apps/web/.env
```
You can set up the entire project with a single command:

```bash
make run
```

This will start the stack, load the demo KG into `unified`, pull `nomic-embed-text`, generate embeddings, and print the URLs.

If anything goes wrong, or if you prefer a manual setup, follow the steps below.
```bash
docker-compose -f docker-compose.graphdb.yml up -d
```

GraphDB UI: http://localhost:7200
```bash
cd apps/backend
./install.sh
# the script prints the next steps
```

From the repo root:

```bash
bash scripts/refresh_unified_demo.sh
```

This clears and reloads the demo TTLs into the `unified` repository.
Quick sanity check:

```bash
curl -G -H 'Accept: application/sparql-results+json' \
  --data-urlencode 'query=SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5' \
  http://localhost:7200/repositories/unified
```
Pull the model once if it is not already present:

```bash
ollama pull nomic-embed-text
```

Generate embeddings for the unified KG:

```bash
python scripts/generate_grape_embeddings.py unified
```

(The script checks GraphDB connectivity and Ollama availability.)
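If the embeddings step fails, you can exercise Ollama directly. This sketch only builds and prints the request body; the `/api/embeddings` endpoint and payload shape come from Ollama's public REST API, and the script's own internals may differ:

```shell
# request body for a manual embedding call against Ollama
payload='{"model": "nomic-embed-text", "prompt": "a test sentence"}'
echo "$payload"
# with Ollama running locally, send it like so:
#   curl -s http://localhost:11434/api/embeddings -d "$payload"
```

A JSON response containing an `embedding` array means the model is pulled and serving; an error here points at Ollama rather than GraphDB.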
```bash
cd apps/backend
source .venv/bin/activate
python main.py
```

API: http://localhost:8000 • Docs: http://localhost:8000/docs
If not containerised:

```bash
cd apps/web
# ensure NEXT_PUBLIC_API_URL is correct in apps/web/.env
npm install
npm run dev
```

Frontend: http://localhost:3000
- GraphDB up on http://localhost:7200
- `bash scripts/refresh_unified_demo.sh` completed
- `ollama pull nomic-embed-text` done
- `python scripts/generate_grape_embeddings.py unified` successful
- Backend running on http://localhost:8000
- Frontend running on http://localhost:3000 (or your domain)
- GraphDB says “Missing parameter: query” → use `curl -G --data-urlencode 'query=…'` as shown.
- Frontend can’t reach the backend → set `NEXT_PUBLIC_API_URL` to the reachable backend URL.
- Embeddings script fails → check that the GraphDB `unified` repository has data and that the Ollama model was pulled.
- Logs:

```bash
docker logs -f grape-api
docker logs -f grape-graphdb
docker logs -f grape-web
docker logs -f grape-ollama
```
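On the “Missing parameter: query” error: `-G --data-urlencode` makes curl percent-encode the query and append it to the URL as `?query=…`. You can reproduce the encoding offline with Python's stdlib (a sketch, not a GraphDB call):

```shell
# percent-encode a SPARQL query the way curl sends it with -G --data-urlencode
python3 -c 'from urllib.parse import quote; print("query=" + quote("SELECT * WHERE { ?s ?p ?o } LIMIT 1"))'
```

If the string you see in GraphDB's request log isn't encoded like this, curl was likely invoked with a plain `-d` POST body instead of `-G`.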