High-Performance Hotel Search API built with Java 21, Spring Boot 3, and a Multi-Level Caching strategy (L1 Caffeine + L2 Redis), orchestrated on Kubernetes.
This project demonstrates a microservices architecture designed for extremely low latency and high resilience.
Instead of hitting the database for every request, we use a "Cache-Aside" pattern with two layers:
```mermaid
graph TD
    Client["Client / Browser"] -->|HTTP GET /search| LB["Load Balancer / Service"]
    LB --> App["Atlas Search App"]
    subgraph "Application Pod (JVM)"
        App -->|"1. Check L1"| Caffeine["L1 Cache (Caffeine/RAM)"]
    end
    subgraph "Cluster Infrastructure"
        App -->|"2. Check L2 (if L1 miss)"| Redis["L2 Cache (Redis)"]
        App -->|"3. Query DB (if L2 miss)"| DB[("PostgreSQL")]
    end
    Caffeine -.->|"Hit (Microseconds)"| App
    Redis -.->|"Hit (Milliseconds)"| App
    DB -.->|"Miss (Slow)"| App
```
- L1 (Caffeine): In-memory, per-pod. Microsecond access. Holds "hot" data.
- L2 (Redis): Distributed. Survives app restarts. Shared across pods.
- Database (Postgres): Source of truth. Queried only when both cache layers miss.
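The read path is easiest to see in code. The sketch below is a minimal illustration of the two-level cache-aside lookup; `Hotel`, `HotelRepository`, `HotelSearchService`, and every other name in it are hypothetical, not the project's actual classes.

```java
// Minimal cache-aside sketch: L1 (Caffeine) -> L2 (Redis) -> Postgres.
// All names here are illustrative assumptions, not the real Atlas Search code.
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.data.redis.core.RedisTemplate;

import java.time.Duration;
import java.util.List;

record Hotel(String name, String city, int stars) {}

interface HotelRepository {
    List<Hotel> findByCityAndStars(String city, int minStars);
}

class HotelSearchService {

    // L1: bounded, short-lived, in-process cache for "hot" keys
    private final Cache<String, List<Hotel>> l1 = Caffeine.newBuilder()
            .maximumSize(10_000)
            .expireAfterWrite(Duration.ofMinutes(1))
            .build();

    private final RedisTemplate<String, List<Hotel>> redis; // L2, shared across pods
    private final HotelRepository repository;               // Postgres, source of truth

    HotelSearchService(RedisTemplate<String, List<Hotel>> redis, HotelRepository repository) {
        this.redis = redis;
        this.repository = repository;
    }

    List<Hotel> search(String city, int minStars) {
        String key = "hotels:%s:%d".formatted(city, minStars);

        // 1. L1 hit: served from local RAM in microseconds
        List<Hotel> cached = l1.getIfPresent(key);
        if (cached != null) return cached;

        // 2. L2 hit: one Redis round-trip, then backfill L1
        cached = redis.opsForValue().get(key);
        if (cached != null) {
            l1.put(key, cached);
            return cached;
        }

        // 3. Full miss: query the database, then populate both layers
        List<Hotel> fresh = repository.findByCityAndStars(city, minStars);
        redis.opsForValue().set(key, fresh, Duration.ofMinutes(10));
        l1.put(key, fresh);
        return fresh;
    }
}
```

Note the backfill on an L2 hit: the next request for the same key on this pod is then served without any network call.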
- Core: Java 21, Spring Boot 3.4
- Data: PostgreSQL, QueryDSL (Type-safe SQL)
- Caching: Caffeine (Local), Redis (Distributed)
- Infrastructure: Kubernetes (Kind), Helm, Docker
- Observability: Prometheus, Grafana, Micrometer, OpenTelemetry
- Docs: OpenAPI (Swagger UI)
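A taste of what "type-safe SQL" means in practice: QueryDSL queries are written against metamodel classes generated at compile time, so a misspelled column or a type mismatch fails the build rather than the request. The snippet below is a hypothetical example (it assumes a generated `QHotel` and reuses the illustrative `Hotel` record from the sketch above):

```java
// Hypothetical QueryDSL query; QHotel is generated from the Hotel entity at build time.
import com.querydsl.jpa.impl.JPAQueryFactory;

import java.util.List;

class HotelQueries {

    private final JPAQueryFactory queryFactory;

    HotelQueries(JPAQueryFactory queryFactory) {
        this.queryFactory = queryFactory;
    }

    List<Hotel> findByCityAndStars(String city, int minStars) {
        QHotel hotel = QHotel.hotel; // generated metamodel, checked by javac
        return queryFactory
                .selectFrom(hotel)
                .where(hotel.city.eq(city)
                        .and(hotel.stars.goe(minStars))) // goe = ">=" on a numeric path
                .fetch();
    }
}
```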
We use a Makefile to automate the entire lifecycle of the project, ensuring a consistent environment and preventing "works on my machine" issues.
- Make (usually pre-installed on macOS/Linux)
- Docker Desktop
- Kind (Kubernetes in Docker)
- Helm (package manager for K8s)
- kubectl
To create the cluster, build the app, and deploy the entire stack (Database, Cache, Monitoring, and Application) in one go:
```bash
make up
```
Wait until all pods are `Running` (check with `kubectl get pods -A`).
If you prefer to run steps individually to understand the process:
Creates a multi-node cluster (1 Control Plane + 2 Workers) and ensures port mapping.

```bash
make create-cluster
```

Deploys Postgres and Redis with Persistent Volumes.

```bash
make install-platform
```

Installs Prometheus and Grafana using Helm. Note: This must run before the app to register ServiceMonitors.

```bash
make install-monitoring
```

Uses a Dockerized build process: the Java code is compiled inside a container (Java 21) and the final image is built there too, ensuring compatibility regardless of your local Java version.

```bash
make build-app
```

Sideloads the image into Kind and applies the Kubernetes manifests.

```bash
make install-app
```

Once the stack is up, you can access the application.
Access the API documentation and test endpoints directly: 👉 http://localhost:8080/atlas-search/swagger-ui/index.html
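For orientation, the endpoint exercised below might be wired up roughly like this; the controller is a hypothetical sketch (reusing the illustrative `HotelSearchService` from earlier), not the project's actual source:

```java
// Hypothetical controller sketch; the path and parameters mirror the curl
// example that follows, but the class itself is illustrative only.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

@RestController
class HotelSearchController {

    private final HotelSearchService service;

    HotelSearchController(HotelSearchService service) {
        this.service = service;
    }

    // GET /hotels/search?city=...&minStars=...
    @GetMapping("/hotels/search")
    List<Hotel> search(@RequestParam String city,
                       @RequestParam(defaultValue = "0") int minStars) {
        return service.search(city, minStars);
    }
}
```

An example request: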
```bash
curl -X GET "http://localhost:8080/atlas-search/hotels/search?city=Lisbon&minStars=4" -H "accept: application/json"
```

To view the dashboards, we need to access Grafana inside the cluster.
1. Get Admin Password:

```bash
kubectl get secret --namespace monitoring atlas-monitoring-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```

2. Open Tunnel:

```bash
kubectl port-forward svc/atlas-monitoring-grafana 3000:80 -n monitoring
```

3. Login:
- URL: http://localhost:3000
- User: `admin`
- Pass: (the output from step 1)

4. View Dashboards:
Navigate to Dashboards > Spring Boot 3.x Statistics (or import dashboard ID 19004) to see real-time heap, CPU, and Prometheus metrics.
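The JVM panels (heap, CPU) come from Spring Boot's built-in Micrometer bindings scraped by Prometheus. Application-specific metrics can be published the same way; the sketch below is a hypothetical example of counting L1 cache hits and misses, with an invented metric name:

```java
// Hypothetical cache hit/miss counters via Micrometer; the metric name
// "atlas.cache.requests" and this class are illustrative, not from the project.
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;

class CacheMetrics {

    private final Counter l1Hits;
    private final Counter l1Misses;

    CacheMetrics(MeterRegistry registry) {
        this.l1Hits = registry.counter("atlas.cache.requests", "layer", "l1", "result", "hit");
        this.l1Misses = registry.counter("atlas.cache.requests", "layer", "l1", "result", "miss");
    }

    void recordL1(boolean hit) {
        (hit ? l1Hits : l1Misses).increment();
    }
}
```

Once exported through the Prometheus actuator endpoint, such counters can be graphed in Grafana alongside the built-in dashboards.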
To destroy the cluster and free up resources:
```bash
make down
```