A back-end simulation of a Netflix API endpoint (get movies by a list of movie_ids)
- Database indexing
- Adding a Redis cache atop the PostgreSQL database
- Clustering on the server, with round-robin load balancing to distribute requests among server instances
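The Redis layer above follows a cache-aside read path: check the cache first, fall back to Postgres on a miss, then write the result back with a TTL. The sketch below illustrates that pattern for the get-movies endpoint; `cacheGet`, `cacheSet`, and `fetchFromDb` are hypothetical injected stand-ins for the real `redis` and `pg` client calls, not the project's actual code.

```javascript
// Cache-aside read path for the "get movies by movie_ids" endpoint.
// cacheGet, cacheSet, and fetchFromDb are injected stand-ins for the
// real Redis and Postgres clients (hypothetical names), which keeps
// the pattern runnable without live services.
async function getMovies(movieIds, { cacheGet, cacheSet, fetchFromDb }) {
  // One cache key per distinct id set, insensitive to request order.
  const key = `movies:${movieIds.slice().sort((a, b) => a - b).join(',')}`;

  const cached = await cacheGet(key);
  if (cached != null) return JSON.parse(cached); // cache hit: skip Postgres

  const movies = await fetchFromDb(movieIds); // miss: e.g. SELECT ... WHERE id = ANY($1)
  await cacheSet(key, JSON.stringify(movies), 60); // write back with a TTL in seconds
  return movies;
}
```

With the real clients, `cacheGet`/`cacheSet` would map onto Redis GET and SET with an EX expiry, and `fetchFromDb` onto a `pg` pool query.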
- Node.js
- Express
- PostgreSQL
- Redis cache
- Amazon Web Services
- Artillery.io (for load testing)
- New Relic (for performance monitoring)
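The clustering optimization uses Node's built-in cluster module, which forks one worker per CPU core and (on non-Windows platforms) hands incoming connections to workers round-robin by default. The round-robin rule itself can be sketched independently of the cluster module as a simple picker; the names below are illustrative, not the project's code:

```javascript
// Round-robin selection: each call returns the next instance in turn,
// wrapping around, so requests are spread evenly across instances.
function makeRoundRobin(instances) {
  if (instances.length === 0) throw new Error('need at least one instance');
  let next = 0;
  return function pick() {
    const instance = instances[next];
    next = (next + 1) % instances.length;
    return instance;
  };
}
```

In the project itself this distribution is handled by the cluster module (one `cluster.fork()` per entry in `os.cpus()`); the picker above just makes the round-robin rule explicit.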
======

- 900 RPS, 7900 ms latency
- 910 RPS, 7900 ms latency
- 1600 RPS, 2000 ms latency
- 1400 RPS, 300 ms latency
======

- 590 RPS, 13700 ms latency
- 590 RPS, 13700 ms latency
- 900 RPS, 6000 ms latency

After indexing, adding the Redis cache, and server clustering (since my clustering configuration replicates additional server instances based on the number of CPU cores, and the AWS EC2 instance has only one core, no additional instances were replicated):

- 900 RPS, 6500 ms latency
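Assuming the first and last rows of the first results block are the pre- and post-optimization figures (an assumption; the original row labels did not survive), the combined optimizations amount to roughly a 1.6x throughput gain and a ~26x latency reduction. A quick calculation from those figures:

```javascript
// Improvement factors computed from the load-test figures above
// (900 RPS / 7900 ms before vs 1400 RPS / 300 ms after; which rows
// are "before" and "after" is an assumption, not stated in the source).
const before = { rps: 900, latencyMs: 7900 };
const after = { rps: 1400, latencyMs: 300 };

const rpsGain = after.rps / before.rps;                 // ~1.56x throughput
const latencyDrop = before.latencyMs / after.latencyMs; // ~26.3x lower latency
```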