A Distributed, Fault-Tolerant Movie Theater Seat Reservation System
50.041 - Distributed Systems
- Go Programming Language: Core system implementation
- Raft Consensus Algorithm: Leader election and log replication
- RPC (Remote Procedure Call): Client-server communication
- Go Libraries:
  - net/rpc: RPC implementation (a minimal usage sketch follows this list)
  - golang.org/x/exp: Extended Go packages
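The client-server layer is built on Go's standard net/rpc package. The sketch below shows only the general net/rpc pattern; the BookingService type, its method, and the port are illustrative stand-ins, not the project's actual API.

    package main

    import (
        "fmt"
        "log"
        "net"
        "net/rpc"
    )

    // BookingService is a hypothetical RPC receiver; the real project defines
    // its own request/response types in the shared package.
    type BookingService struct{}

    type ReserveArgs struct {
        ClientID string
        SeatID   string
    }

    type ReserveReply struct {
        Status  string
        Message string
    }

    // Reserve follows the net/rpc method shape: exported method, an args value
    // plus a pointer reply, returning error.
    func (b *BookingService) Reserve(args *ReserveArgs, reply *ReserveReply) error {
        reply.Status = "SUCCESS"
        reply.Message = fmt.Sprintf("Seat %s reserved for %s", args.SeatID, args.ClientID)
        return nil
    }

    func main() {
        // Server side: register the receiver and serve connections.
        if err := rpc.Register(new(BookingService)); err != nil {
            log.Fatal(err)
        }
        ln, err := net.Listen("tcp", "localhost:8000")
        if err != nil {
            log.Fatal(err)
        }
        go rpc.Accept(ln)

        // Client side: dial the server and make a synchronous call.
        client, err := rpc.Dial("tcp", "localhost:8000")
        if err != nil {
            log.Fatal(err)
        }
        var reply ReserveReply
        err = client.Call("BookingService.Reserve",
            &ReserveArgs{ClientID: "client1", SeatID: "1A"}, &reply)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(reply.Status, reply.Message)
    }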
The Chubby Movie Booking System is a distributed, fault-tolerant seat reservation system that implements the Raft consensus algorithm. The system ensures data consistency across multiple server nodes and maintains availability even when server failures occur. It features automatic leader election, log replication, and client session management for reliable movie seat bookings.
- Distributed Architecture: Multi-server cluster with automatic failover capabilities
- Raft Consensus Algorithm: Leader election and distributed log replication for consistency
- Fault Tolerance: Automatic detection and recovery from server failures
- Session Management: Client session tracking with heartbeat mechanisms
- Seat Reservation: Real-time seat booking and cancellation with conflict resolution
- Load Balancing: Automatic client redirection to current cluster leader
- Concurrent Access: Thread-safe operations supporting multiple simultaneous clients
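The Seat Reservation and Concurrent Access features above come down to guarding shared seat state so that two clients can never hold the same seat. The sketch below is a minimal, illustrative version of that idea (the SeatStore type and its methods are assumptions, not the project's code); in the real system such updates are applied only after the corresponding command is committed through the Raft log.

    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    // SeatStore is an illustrative in-memory seat table. A mutex makes the
    // check-then-reserve step atomic, so two clients racing for the same
    // seat cannot both succeed.
    type SeatStore struct {
        mu    sync.Mutex
        owner map[string]string // seatID -> clientID, "" means free
    }

    func NewSeatStore(seats []string) *SeatStore {
        s := &SeatStore{owner: make(map[string]string)}
        for _, id := range seats {
            s.owner[id] = ""
        }
        return s
    }

    // Reserve claims a seat atomically.
    func (s *SeatStore) Reserve(seatID, clientID string) error {
        s.mu.Lock()
        defer s.mu.Unlock()
        holder, ok := s.owner[seatID]
        if !ok {
            return errors.New("unknown seat " + seatID)
        }
        if holder != "" {
            return fmt.Errorf("seat %s already reserved by %s", seatID, holder)
        }
        s.owner[seatID] = clientID
        return nil
    }

    // Cancel releases a seat, but only for the client that holds it.
    func (s *SeatStore) Cancel(seatID, clientID string) error {
        s.mu.Lock()
        defer s.mu.Unlock()
        if s.owner[seatID] != clientID {
            return fmt.Errorf("seat %s is not held by %s", seatID, clientID)
        }
        s.owner[seatID] = ""
        return nil
    }

    func main() {
        store := NewSeatStore([]string{"1A", "1B"})
        fmt.Println(store.Reserve("1A", "client1")) // <nil>
        fmt.Println(store.Reserve("1A", "client2")) // already-reserved error
    }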
- Distributed Systems Design: Implemented Raft consensus algorithm for maintaining consistency across distributed nodes
- Concurrent Programming: Used Go's goroutines and channels for handling multiple client sessions and server communications
- Network Programming: Built RPC-based client-server architecture with automatic failover and reconnection
- Fault Tolerance: Designed leader election mechanisms and heartbeat systems for detecting and recovering from node failures
- Data Persistence: Implemented log replication and file-based state management for durability
- System Testing: Created mechanisms for simulating failures and testing system resilience
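As a concrete illustration of the concurrency and heartbeat points above, the following stripped-down follower loop (illustrative only, not the project's code) resets a randomized election timer on every heartbeat and converts to candidate when the leader goes silent, which is the core of Raft's failure detection:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // runFollower is an illustrative election-timeout loop. In a full Raft
    // node the heartbeats channel would be signalled whenever a valid
    // AppendEntries (heartbeat) arrives from the current leader.
    func runFollower(heartbeats <-chan struct{}, becomeCandidate func()) {
        for {
            // Randomized timeouts reduce the chance of split votes,
            // as recommended by the Raft paper.
            timeout := time.Duration(150+rand.Intn(150)) * time.Millisecond
            select {
            case <-heartbeats:
                // Leader is alive; loop and restart the timer.
            case <-time.After(timeout):
                becomeCandidate()
                return
            }
        }
    }

    func main() {
        heartbeats := make(chan struct{})

        go runFollower(heartbeats, func() {
            fmt.Println("election timeout: converting to candidate")
        })

        // Simulate a leader that sends a few heartbeats and then fails.
        for i := 0; i < 3; i++ {
            time.Sleep(100 * time.Millisecond)
            heartbeats <- struct{}{}
        }
        time.Sleep(500 * time.Millisecond) // silence -> the timeout fires
    }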
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Client 1   │     │  Client 2   │     │  Client N   │
└──────┬──────┘     └──────┬──────┘     └──────┬──────┘
       │                   │                   │
       └───────────────────┼───────────────────┘
                           │
               ┌───────────▼───────────┐
               │     Load Balancer     │
               │  (Leader Discovery)   │
               └───────────┬───────────┘
                           │
         ┌─────────────────┼─────────────────┐
         │                 │                 │
   ┌─────▼─────┐     ┌─────▼─────┐     ┌─────▼─────┐
   │ Server 1  │◄───►│ Server 2  │◄───►│ Server 3  │
   │(Follower) │     │ (Leader)  │     │(Follower) │
   └─────┬─────┘     └─────┬─────┘     └─────┬─────┘
         │                 │                 │
         └─────────────────┼─────────────────┘
                           │
                     ┌─────▼─────┐
                     │Shared Seat│
                     │   State   │
                     └───────────┘
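Because only the Raft leader may commit writes, a client that reaches a follower has to be redirected, which is what the Load Balancer (Leader Discovery) box above represents. A rough sketch of that retry loop is shown below; the Reply fields and server addresses are assumptions for illustration, not the project's real protocol.

    package main

    import (
        "errors"
        "fmt"
    )

    // Reply is a hypothetical response shape: a non-leader node answers with
    // the address of the node it currently believes is the leader.
    type Reply struct {
        Status     string // e.g. "SUCCESS", "NOT_LEADER"
        LeaderAddr string
    }

    // sendFunc stands in for a single RPC call to one server.
    type sendFunc func(addr, request string) (Reply, error)

    // submit walks the cluster, following redirects until the leader answers
    // or the attempts are exhausted.
    func submit(servers []string, request string, send sendFunc) (Reply, error) {
        addr := servers[0]
        for attempt := 0; attempt < len(servers)+2; attempt++ {
            reply, err := send(addr, request)
            if err != nil {
                // Node unreachable (possibly crashed); try the next one.
                addr = servers[(attempt+1)%len(servers)]
                continue
            }
            if reply.Status == "NOT_LEADER" && reply.LeaderAddr != "" {
                addr = reply.LeaderAddr // redirect to the current leader
                continue
            }
            return reply, nil
        }
        return Reply{}, errors.New("no leader found")
    }

    func main() {
        servers := []string{"node1:8000", "node2:8001", "node3:8002"}
        // Fake transport: node2 is the leader, the others redirect to it.
        send := func(addr, request string) (Reply, error) {
            if addr == "node2:8001" {
                return Reply{Status: "SUCCESS"}, nil
            }
            return Reply{Status: "NOT_LEADER", LeaderAddr: "node2:8001"}, nil
        }
        reply, err := submit(servers, "1A RESERVE", send)
        fmt.Println(reply.Status, err)
    }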
- Raft Implementation: Leader election, log replication, and consensus
- Session Management: Client session tracking and heartbeat monitoring
- Seat Management: Thread-safe seat reservation and cancellation
- RPC Handler: Processing client requests and responses
- RPC Client: Communication with server cluster
- Session Management: Automatic reconnection and heartbeat sending
- Command Interface: User-friendly booking interface
- Data Structures: Request and Response message formats
- Protocol Definitions: Client-server communication protocol
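The shared Request and Response formats referenced above might look roughly like the following; the field and constant names are an illustrative guess, so consult the actual shared package for the real definitions.

    // Package shared would hold the types exchanged over RPC by both client
    // and server. These fields are an illustrative guess at the protocol,
    // not the project's exact definitions.
    package shared

    type RequestType string

    const (
        List    RequestType = "LIST"
        Reserve RequestType = "RESERVE"
        Cancel  RequestType = "CANCEL"
    )

    // Request is what a client sends to the cluster.
    type Request struct {
        ClientID string
        SeatID   string // empty for LIST
        Type     RequestType
    }

    // Response is what the contacted server returns.
    type Response struct {
        Status     string // e.g. "SUCCESS", "FAILURE", "REDIRECT"
        Message    string // human-readable detail shown to the user
        LeaderAddr string // set when the contacted node is not the leader
    }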
- Concurrent Client Testing: Multiple clients can simultaneously book seats
- Failure Recovery Testing: System maintains consistency during server failures
- Load Testing: Available in the Scalability_Testing branch
- Edge Case Handling: Duplicate bookings, network partitions, and split-brain scenarios
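One way to exercise the concurrent-client case above is a race test: many goroutines fight over one seat and exactly one reservation must succeed. The sketch below is self-contained and illustrative; it drives a tiny in-memory seat table rather than the project's RPC interface or actual test harness.

    package booking

    import (
        "fmt"
        "sync"
        "testing"
    )

    // seatTable is a tiny stand-in for the server's seat state so the test
    // stays self-contained; real tests would drive the RPC interface.
    type seatTable struct {
        mu    sync.Mutex
        owner map[string]string
    }

    func (s *seatTable) reserve(seat, client string) bool {
        s.mu.Lock()
        defer s.mu.Unlock()
        if s.owner[seat] != "" {
            return false
        }
        s.owner[seat] = client
        return true
    }

    // TestSingleWinner races 50 clients for one seat and requires that
    // exactly one reservation succeeds.
    func TestSingleWinner(t *testing.T) {
        store := &seatTable{owner: map[string]string{"1A": ""}}

        const clients = 50
        var wg sync.WaitGroup
        wins := make(chan string, clients)

        for i := 0; i < clients; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                if store.reserve("1A", fmt.Sprintf("client%d", id)) {
                    wins <- fmt.Sprintf("client%d", id)
                }
            }(i)
        }
        wg.Wait()
        close(wins)

        if got := len(wins); got != 1 {
            t.Fatalf("expected exactly 1 successful reservation, got %d", got)
        }
    }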
Developing this distributed movie booking system provided deep insights into distributed systems challenges. Implementing the Raft consensus algorithm taught the importance of careful state management and timing in distributed environments. Handling network partitions and ensuring data consistency across multiple nodes required robust error handling and recovery mechanisms. The project highlighted the complexity of building fault-tolerant systems while maintaining performance and user experience.
- Go 1.23.2 or higher
- Unix-based system (macOS/Linux recommended)
- Clone the repository:

      git clone [repo-url]
      cd Movie_Booking_Backend_Chubby-Raft

- Install dependencies:

      go mod tidy

- Start the server cluster:

      cd server
      go run server.go

  This will start a 5-node Raft cluster with automatic leader election.

- Connect clients (in separate terminals):

      cd client
      go run client.go --clientID=client1
      go run client.go --clientID=client2
Available Commands:
- LIST - View all available seats
- [SeatID] RESERVE - Book a seat (e.g., 1A RESERVE)
- [SeatID] CANCEL - Cancel a reservation (e.g., 1A CANCEL)
Example Session:
Enter your requests in the format 'SeatID RequestType' (e.g., '1A RESERVE')
> LIST
Available seats: 1A, 1B, 1C, 2A, 2B, 2C, 3A, 3B, 4A, 4B, 4C
> 1A RESERVE
[Client client1] Server response (SUCCESS): Seat 1A reserved successfully
> 1A CANCEL
[Client client1] Server response (SUCCESS): Seat 1A cancelled successfully
To simulate leader failure and test the system's resilience:
- Uncomment the leader failure simulation in server.go (lines 905-911):

      time.Sleep(20 * time.Second) // Start the leader election process

      // Simulate leader failure
      fmt.Printf("\n****************************Simulating Leader Failure****************************\n")
      leaderServer := getLeader(servers)
      leaderServer.isAlive = false

- Run the system and observe automatic leader election and client redirection
- Monitor logs to see the Raft consensus algorithm in action
MIT License
