A comprehensive learning project demonstrating distributed systems concepts using BullMQ, Redis Streams, and Kafka with Node.js microservices architecture.
Built with TypeScript - All services are written in TypeScript for type safety and better developer experience.
This project teaches:
- How distributed queues work
- Differences between Redis Queue, Redis Streams, and Kafka
- Building microservices with message passing
- Consumer groups and load distribution
- Job retries, dead letter queues, and backpressure
- Horizontal scaling of worker instances
- Real-world backend task processing patterns
The system consists of 6 microservices and 1 frontend:
- api-service - HTTP REST API for booking requests
- queue-service - BullMQ queue management
- stream-service - Redis Streams operations
- kafka-service - Kafka producer/consumer management
- worker-service - Job processors for all queue types
- monitoring-service - Dashboard and metrics API
- frontend - React + TypeScript web interface
See ARCHITECTURE.md for detailed architecture documentation.
- Docker and Docker Compose
- Node.js 18+ (for local development)
- TypeScript 5.3+ (for local development)
- Basic understanding of microservices
cd distributed-system/booking-queue
docker-compose up -d redis zookeeper kafka
Wait for services to be healthy (about 30 seconds).
docker-compose up --build
This will start:
- Redis (port 6379)
- Zookeeper (port 2181)
- Kafka (ports 9092, 9093)
- API Service (port 3000)
- Queue Service (port 3001)
- Stream Service (port 3002)
- Kafka Service (port 3003)
- Worker Service (2 instances)
- Monitoring Service (port 3004)
- Frontend (port 5173)
Open your browser and navigate to:
http://localhost:5173
The frontend provides:
- Dashboard - Real-time monitoring of all queues
- Create Booking - Create bookings via web form
- Check Status - View booking status across all queues
Use the web interface at http://localhost:5173 to create bookings and monitor the system, or call the API directly:
curl -X POST http://localhost:3000/api/bookings \
-H "Content-Type: application/json" \
-d '{
"userId": "user-123",
"hotelId": "hotel-456",
"checkIn": "2024-01-15",
"checkOut": "2024-01-20",
"guests": 2,
"roomType": "deluxe"
}'
Response:
{
"bookingId": "abc-123-def",
"queues": {
"bullmq": { "jobId": "...", "status": "queued" },
"redisStreams": { "messageId": "...", "status": "queued" },
"kafka": { "messageId": "...", "status": "queued" }
}
}
Check booking status:
curl http://localhost:3000/api/bookings/{bookingId}/status
Create a delayed booking:
curl -X POST http://localhost:3000/api/bookings/delayed \
-H "Content-Type: application/json" \
-d '{
"userId": "user-123",
"hotelId": "hotel-456",
"checkIn": "2024-01-15",
"checkOut": "2024-01-20",
"delaySeconds": 10
}'
curl http://localhost:3004/dashboard
Or open in browser: http://localhost:3004/dashboard
Monitoring Service (port 3004):
- GET /health - Health check
- GET /dashboard - Complete dashboard with all queue stats
- GET /queue/bullmq - BullMQ queue details
- GET /queue/streams - Redis Streams details
- GET /services/health - All service health status
Queue Service (port 3001):
- GET /health - Health check
- POST /jobs - Add job to queue
- POST /jobs/delayed - Add delayed job
- GET /jobs/:jobId - Get job status
- GET /stats - Queue statistics
- GET /jobs/state/:state - Get jobs by state (waiting/active/completed/failed/delayed)
Stream Service (port 3002):
- GET /health - Health check
- POST /messages - Add message to stream
- GET /messages/:messageId - Get message status
- GET /stats - Stream statistics
- GET /messages - Read recent messages
Kafka Service (port 3003):
- GET /health - Health check
- POST /messages - Send message to Kafka topic
- GET /messages/:messageId - Get message metadata
- GET /stats - Topic statistics
How it works:
- Jobs stored in Redis with structured data
- Workers poll Redis for available jobs
- Automatic retry with exponential backoff
- Built-in progress tracking
- Dead letter queue for failed jobs
Example:
// Producer
const job = await queue.add('booking', data, {
attempts: 3,
backoff: { type: 'exponential', delay: 2000 }
});
// Worker
const worker = new Worker('booking-queue', async (job) => {
await job.updateProgress(50);
// Process job...
return result;
});
Key Features:
- Automatic retries
- Progress tracking
- Delayed jobs
- Job priorities
- Rate limiting
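As a toy illustration of the priority feature (in BullMQ a lower priority number is processed sooner), this hypothetical helper sorts pending jobs the way a priority-aware worker would pick them: priority first, insertion order within a priority.

```typescript
interface PendingJob {
  id: string;
  priority: number;   // lower number = picked sooner, as in BullMQ
  enqueuedAt: number; // insertion order, used as a FIFO tiebreaker
}

// Hypothetical helper (not a BullMQ API): returns job ids in the
// order a worker would process them.
function pickOrder(jobs: PendingJob[]): string[] {
  return [...jobs]
    .sort((a, b) => a.priority - b.priority || a.enqueuedAt - b.enqueuedAt)
    .map((j) => j.id);
}

console.log(
  pickOrder([
    { id: "email", priority: 5, enqueuedAt: 1 },
    { id: "payment", priority: 1, enqueuedAt: 2 },
    { id: "report", priority: 5, enqueuedAt: 3 },
  ])
); // payment first, then email and report in FIFO order
```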
How it works:
- Messages appended to stream using XADD
- Consumer groups read messages with XREADGROUP
- Messages must be ACK'd after processing
- Un-ACK'd messages go to Pending Entry List (PEL)
- Each consumer group maintains its own offset
Example:
// Producer
await redis.xadd('booking-stream', '*', 'data', JSON.stringify(data));
// Consumer
const messages = await redis.xreadgroup(
'GROUP', 'workers', 'consumer-1',
'COUNT', 10,
'BLOCK', 5000,
'STREAMS', 'booking-stream', '>'
);
// After processing
await redis.xack('booking-stream', 'workers', messageId);
Key Features:
- Consumer groups for load distribution
- Message ordering
- Pending Entry List for retries
- Multiple consumer groups per stream
How it works:
- Topics divided into partitions
- Producers write to partitions
- Consumers in groups read from partitions
- Each consumer group tracks its own offset for each partition
- Messages persisted to disk, replicated
Example:
// Producer
await producer.send({
topic: 'booking-events',
messages: [{ key: id, value: JSON.stringify(data) }]
});
// Consumer
await consumer.run({
eachMessage: async ({ topic, partition, message }) => {
// Process message
// Offset automatically committed
}
});
Key Features:
- High throughput
- Durability (disk persistence)
- Partition-level ordering
- Message replay
- Consumer groups
All three queue systems support horizontal scaling:
# Scale workers in docker-compose.yml
worker-service:
  deploy:
    replicas: 5 # Run 5 worker instances
Multiple workers automatically share the same queue. Jobs are distributed round-robin.
# Each worker is a consumer in the same group
# Messages are distributed across consumers
Multiple consumers in the same group automatically share messages.
# Multiple consumers in same consumer group
# Partitions distributed across consumers
If you have 3 partitions and 3 consumers, each consumer gets 1 partition.
- BullMQ: Round-robin job distribution
- Redis Streams: Messages distributed across consumers in group
- Kafka: Partitions distributed across consumers (1 partition = 1 consumer max)
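The Kafka rule can be sketched with a hypothetical round-robin assignment helper. Kafka's real assignors (range, cooperative-sticky, etc.) are more sophisticated, but the invariant is the same: within a consumer group, each partition is owned by at most one consumer.

```typescript
// Hypothetical sketch (not the Kafka protocol): round-robin partition
// assignment -- each partition goes to exactly one consumer in the group.
function assignPartitions(
  partitionCount: number,
  consumers: string[]
): Map<string, number[]> {
  const assignment = new Map<string, number[]>(consumers.map((c) => [c, []]));
  for (let p = 0; p < partitionCount; p++) {
    const owner = consumers[p % consumers.length];
    assignment.get(owner)!.push(p);
  }
  return assignment;
}

// 3 partitions, 3 consumers: one partition each
console.log(assignPartitions(3, ["c1", "c2", "c3"]));
// 3 partitions, 4 consumers: the extra consumer stays idle
console.log(assignPartitions(3, ["c1", "c2", "c3", "c4"]));
```

This also shows why adding more consumers than partitions does not increase Kafka throughput, while BullMQ and Redis Streams keep distributing work to every extra worker.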
BullMQ:
- Built-in exponential backoff
- Configurable attempts and delays
- Automatic retry on failure
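With `attempts: 3` and an exponential backoff base delay of 2000 ms, each retry waits `delay * 2^(retry - 1)`. A quick sketch of that schedule (a plain helper for illustration, not a BullMQ API):

```typescript
// Exponential backoff schedule: baseDelayMs * 2^(retry - 1) for retry = 1..n.
// For baseDelayMs = 2000 this yields the 2s, 4s, 8s progression.
function backoffSchedule(baseDelayMs: number, retries: number): number[] {
  return Array.from({ length: retries }, (_, i) => baseDelayMs * 2 ** i);
}

console.log(backoffSchedule(2000, 3)); // [ 2000, 4000, 8000 ]
```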
{
attempts: 3,
backoff: {
type: 'exponential',
delay: 2000 // 2s, 4s, 8s
}
}
Redis Streams:
- Manual retry via Pending Entry List (PEL)
- Messages not ACK'd go to PEL
- Workers can claim and retry from PEL
// Check PEL
const pending = await redis.xpending('stream', 'group');
// Claim and retry
await redis.xclaim('stream', 'group', 'consumer', 60000, messageId);
Kafka:
- Consumer offset management
- Failed messages can be sent to DLQ topic
- Manual retry by replaying from offset
BullMQ:
- Automatic DLQ support
- Failed jobs after max retries moved to DLQ
Redis Streams:
- Separate stream for failed messages
- Messages moved to DLQ stream after max retries
Kafka:
- Separate topic for failed messages
- Producer sends failed messages to DLQ topic
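The pattern behind all three variants is the same: retry a handler up to a maximum attempt count, then route the message to a dead letter destination instead of retrying forever. A queue-agnostic sketch, where the `deadLetter` callback is a stand-in for "move to the DLQ stream/topic" (all names here are hypothetical, not APIs of any of the three systems):

```typescript
// Queue-agnostic DLQ sketch: retry up to maxAttempts, then dead-letter.
async function processWithDlq<T>(
  message: T,
  handler: (msg: T) => Promise<void>,
  deadLetter: (msg: T, lastError: unknown) => Promise<void>,
  maxAttempts = 3
): Promise<"processed" | "dead-lettered"> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await handler(message);
      return "processed"; // success: no retry, no DLQ
    } catch (err) {
      lastError = err; // a real worker would also apply backoff here
    }
  }
  await deadLetter(message, lastError); // exhausted retries
  return "dead-lettered";
}
```

In this project the `deadLetter` step corresponds to BullMQ's failed set, a separate Redis stream, or a dedicated Kafka DLQ topic.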
# Create job
curl -X POST http://localhost:3001/jobs \
-H "Content-Type: application/json" \
-d '{"type": "booking", "data": {"test": true}}'
# Check status
curl http://localhost:3001/jobs/{jobId}
# View stats
curl http://localhost:3001/stats
# Add message
curl -X POST http://localhost:3002/messages \
-H "Content-Type: application/json" \
-d '{"type": "booking", "data": {"test": true}}'
# Read messages
curl http://localhost:3002/messages
# View stats
curl http://localhost:3002/stats
# Send message
curl -X POST http://localhost:3003/messages \
-H "Content-Type: application/json" \
-d '{"type": "booking", "data": {"test": true}}'
# View stats
curl http://localhost:3003/stats
booking-queue/
├── api-service/          # HTTP API for booking requests
│   ├── src/
│   │   └── index.js
│   ├── package.json
│   └── Dockerfile
├── queue-service/        # BullMQ queue management
│   ├── src/
│   │   └── index.js
│   ├── package.json
│   └── Dockerfile
├── stream-service/       # Redis Streams operations
│   ├── src/
│   │   └── index.js
│   ├── package.json
│   └── Dockerfile
├── kafka-service/        # Kafka producer/consumer
│   ├── src/
│   │   └── index.js
│   ├── package.json
│   └── Dockerfile
├── worker-service/       # Job processors
│   ├── src/
│   │   └── index.js
│   ├── package.json
│   └── Dockerfile
├── monitoring-service/   # Dashboard and metrics
│   ├── src/
│   │   └── index.js
│   ├── package.json
│   └── Dockerfile
├── docker-compose.yml    # All services configuration
├── ARCHITECTURE.md       # Detailed architecture docs
└── README.md             # This file
- Task queues with job processing
- Need built-in retries and backoff
- Progress tracking required
- Delayed/scheduled jobs
- Rate limiting needed
- Event streaming
- Simple message passing
- Need consumer groups
- Lightweight solution
- Already using Redis
- High throughput requirements
- Need message durability
- Event sourcing
- Log aggregation
- Multiple consumer groups
- Need message replay
# Check logs
docker-compose logs [service-name]
# Restart specific service
docker-compose restart [service-name]
# Test Redis connection
docker exec -it booking-redis redis-cli ping
# Wait for Kafka to be healthy
docker-compose ps kafka
# Check Kafka logs
docker-compose logs kafka
- Check worker logs: docker-compose logs worker-service
- Verify Redis/Kafka connections
- Check queue/message counts in monitoring dashboard
# Terminal 1: Start Redis
redis-server
# Terminal 2: Start Kafka (requires separate setup)
# Terminal 3: Start services (TypeScript)
cd api-service && npm install && npm run dev
cd queue-service && npm install && npm run dev
# ... etc
# Or build and run
cd api-service && npm install && npm run build && npm start
Each service uses environment variables (see .env files or docker-compose.yml):
- REDIS_HOST - Redis hostname
- REDIS_PORT - Redis port
- KAFKA_BROKER - Kafka broker address
- WORKER_ID - Unique worker identifier
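A sketch of how a service might read these variables on startup. The variable names come from the list above; the default values are assumptions for local development, not something the project mandates:

```typescript
interface ServiceConfig {
  redisHost: string;
  redisPort: number;
  kafkaBroker: string;
  workerId: string;
}

// Reads the documented variables from an env map, falling back to
// assumed local-development defaults when a variable is unset.
function loadConfig(env: Record<string, string | undefined>): ServiceConfig {
  return {
    redisHost: env.REDIS_HOST ?? "localhost",
    redisPort: Number(env.REDIS_PORT ?? 6379),
    kafkaBroker: env.KAFKA_BROKER ?? "localhost:9092",
    workerId: env.WORKER_ID ?? `worker-${process.pid}`,
  };
}

console.log(loadConfig(process.env));
```

Taking the env map as a parameter (rather than reading `process.env` inside the function) keeps the loader easy to unit-test.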
- Message Passing: Services communicate via queues, not direct calls
- Consumer Groups: Multiple workers share load
- Retry Strategies: Exponential backoff, PEL, offset management
- Dead Letter Queues: Handle permanently failed jobs
- Horizontal Scaling: Add more workers to increase throughput
- Progress Tracking: Monitor job execution progress
- Ordering: Partition-level ordering in Kafka, stream ordering in Redis Streams
This is a learning project. Feel free to:
- Add more job types
- Implement additional features
- Improve error handling
- Add more monitoring capabilities
This project is for educational purposes.
Happy Learning!