Chubby Movie Booking System

A Distributed, Fault-Tolerant Movie Theater Seat Reservation System

Course

50.041 - Distributed Systems

Technologies

  • Go Programming Language: Core system implementation
  • Raft Consensus Algorithm: Leader election and log replication
  • RPC (Remote Procedure Call): Client-server communication
  • Go Libraries:
    • net/rpc: RPC implementation
    • golang.org/x/exp: Extended Go packages

Overview

The Chubby Movie Booking System is a distributed, fault-tolerant seat reservation system that implements the Raft consensus algorithm. The system ensures data consistency across multiple server nodes and maintains availability even when server failures occur. It features automatic leader election, log replication, and client session management for reliable movie seat bookings.
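
As a rough illustration of the election half of Raft, each node cycles between follower, candidate, and leader roles and waits a randomized amount of time before starting an election, so that followers rarely time out simultaneously. The names and timing values below are illustrative, not taken from the repository:

    package raft

    import (
        "math/rand"
        "time"
    )

    // Role models the three states a Raft node moves between.
    type Role int

    const (
        Follower Role = iota
        Candidate
        Leader
    )

    // electionTimeout returns a randomized timeout so that followers rarely
    // time out at the same moment, which keeps elections from repeatedly
    // splitting the vote.
    func electionTimeout() time.Duration {
        return time.Duration(150+rand.Intn(150)) * time.Millisecond
    }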

Features

  • Distributed Architecture: Multi-server cluster with automatic failover capabilities
  • Raft Consensus Algorithm: Leader election and distributed log replication for consistency
  • Fault Tolerance: Automatic detection and recovery from server failures
  • Session Management: Client session tracking with heartbeat mechanisms
  • Seat Reservation: Real-time seat booking and cancellation with conflict resolution
  • Load Balancing: Automatic client redirection to the current cluster leader
  • Concurrent Access: Thread-safe operations supporting multiple simultaneous clients

Skills Demonstrated

  • Distributed Systems Design: Implemented Raft consensus algorithm for maintaining consistency across distributed nodes
  • Concurrent Programming: Used Go's goroutines and channels for handling multiple client sessions and server communications
  • Network Programming: Built RPC-based client-server architecture with automatic failover and reconnection
  • Fault Tolerance: Designed leader election mechanisms and heartbeat systems for detecting and recovering from node failures
  • Data Persistence: Implemented log replication and file-based state management for durability
  • System Testing: Created mechanisms for simulating failures and testing system resilience

System Architecture

┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│   Client 1  │    │   Client 2  │    │   Client N  │
└──────┬──────┘    └──────┬──────┘    └──────┬──────┘
       │                  │                  │
       └──────────────────┼──────────────────┘
                          │
              ┌───────────▼───────────┐
              │    Load Balancer      │
              │   (Leader Discovery)  │
              └───────────┬───────────┘
                          │
        ┌─────────────────┼─────────────────┐
        │                 │                 │
   ┌────▼─────┐      ┌────▼─────┐      ┌────▼─────┐
   │ Server 1 │◄────►│ Server 2 │◄────►│ Server 3 │
   │(Follower)│      │ (Leader) │      │(Follower)│
   └──────────┘      └──────────┘      └──────────┘
        │                 │                 │
        └─────────────────┼─────────────────┘
                          │
                    ┌─────▼─────┐
                    │Shared Seat│
                    │   State   │
                    └───────────┘
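
The "Leader Discovery" step in the diagram is commonly implemented by having the client probe the cluster until one node claims leadership, then keeping that connection. The sketch below assumes a hypothetical Server.WhoIsLeader RPC and an address list; it is not the repository's actual interface:

    package client

    import (
        "errors"
        "net/rpc"
    )

    // findLeader probes each known server address until one identifies itself
    // as the current Raft leader. The RPC method name "Server.WhoIsLeader" and
    // the reply shape are illustrative assumptions, not the repository's API.
    func findLeader(addrs []string) (*rpc.Client, error) {
        for _, addr := range addrs {
            conn, err := rpc.Dial("tcp", addr)
            if err != nil {
                continue // this node may be down; try the next one
            }
            var reply struct {
                IsLeader   bool
                LeaderAddr string
            }
            if err := conn.Call("Server.WhoIsLeader", 0, &reply); err == nil && reply.IsLeader {
                return conn, nil // talk to this node from now on
            }
            conn.Close()
        }
        return nil, errors.New("no reachable server reported itself as leader")
    }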

System Components

Server (server/server.go)

  • Raft Implementation: Leader election, log replication, and consensus
  • Session Management: Client session tracking and heartbeat monitoring
  • Seat Management: Thread-safe seat reservation and cancellation (see the sketch after this list)
  • RPC Handler: Processing client requests and responses
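
As referenced above, the seat state ultimately reduces to a mutex-guarded map keyed by seat ID. The types below are a simplified stand-in, not the repository's own; in the real server such updates are applied only after the corresponding Raft log entry commits:

    package server

    import (
        "fmt"
        "sync"
    )

    // SeatStore is an illustrative, mutex-protected seat map. In the actual
    // server this state lives inside the Raft node, and a booking is only
    // applied after the matching log entry has been committed.
    type SeatStore struct {
        mu    sync.Mutex
        seats map[string]string // seat ID -> client ID holding it ("" means free)
    }

    // Reserve books a seat for a client if it is still free.
    func (s *SeatStore) Reserve(seatID, clientID string) error {
        s.mu.Lock()
        defer s.mu.Unlock()
        holder, ok := s.seats[seatID]
        if !ok {
            return fmt.Errorf("seat %s does not exist", seatID)
        }
        if holder != "" {
            return fmt.Errorf("seat %s is already reserved", seatID)
        }
        s.seats[seatID] = clientID
        return nil
    }

    // Cancel releases a seat previously reserved by the same client.
    func (s *SeatStore) Cancel(seatID, clientID string) error {
        s.mu.Lock()
        defer s.mu.Unlock()
        if s.seats[seatID] != clientID {
            return fmt.Errorf("seat %s is not held by %s", seatID, clientID)
        }
        s.seats[seatID] = ""
        return nil
    }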

Client (client/client.go)

  • RPC Client: Communication with server cluster
  • Session Management: Automatic reconnection and heartbeat sending (a heartbeat sketch follows below)
  • Command Interface: Interactive command-line prompt for listing, reserving, and cancelling seats
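
A heartbeat loop of this kind is usually a goroutine driven by a ticker. The RPC name Server.KeepAlive below is an assumption for illustration, not the client's actual call:

    package client

    import (
        "log"
        "net/rpc"
        "time"
    )

    // startHeartbeat pings the server at a fixed interval so the leader can
    // keep this client's session alive. "Server.KeepAlive" is an assumed RPC
    // name; the real client uses its own session protocol.
    func startHeartbeat(conn *rpc.Client, clientID string, interval time.Duration, stop <-chan struct{}) {
        go func() {
            ticker := time.NewTicker(interval)
            defer ticker.Stop()
            for {
                select {
                case <-ticker.C:
                    var ack bool
                    if err := conn.Call("Server.KeepAlive", clientID, &ack); err != nil {
                        // Losing the heartbeat usually means the leader changed or
                        // died; the caller should rediscover the leader and retry.
                        log.Printf("heartbeat failed: %v", err)
                        return
                    }
                case <-stop:
                    return
                }
            }
        }()
    }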

Common (common/common.go)

  • Data Structures: Request and Response message formats (illustrative shapes sketched below)
  • Protocol Definitions: Client-server communication protocol
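
For orientation, the shared message types might look roughly like the following; the field names are guesses for this sketch, and the authoritative definitions are in common/common.go:

    package common

    // Illustrative request/response shapes for the booking protocol.
    type Request struct {
        ClientID    string // who is asking
        SeatID      string // e.g. "1A"
        RequestType string // "RESERVE", "CANCEL", or "LIST"
    }

    type Response struct {
        Status  string // e.g. "SUCCESS" or "FAILURE"
        Message string // human-readable detail echoed back to the client
    }

Sharing exported types like these is what lets net/rpc expose handlers of its required shape, func (s *Server) Method(req Request, resp *Response) error, on the server side (the method name here is likewise an assumption).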

Testing and Scalability

  • Concurrent Client Testing: Multiple clients can book seats simultaneously (a minimal sketch of such a test follows this list)
  • Failure Recovery Testing: System maintains consistency during server failures
  • Load Testing: Available in the Scalability_Testing branch
  • Edge Case Handling: Duplicate bookings, network partitions, and split-brain scenarios
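
A minimal version of the concurrent-client check races several goroutines for the same seat and asserts that exactly one wins. The reserve helper below is a local stand-in for the real client RPC path, not repository code:

    package booking

    import (
        "fmt"
        "sync"
        "sync/atomic"
        "testing"
    )

    // An in-memory stand-in for the cluster: one mutex-protected seat map.
    // In the real test, reservations would go through client RPCs to the
    // Raft leader; reserve() here is only a local placeholder.
    var (
        mu    sync.Mutex
        seats = map[string]string{"1A": ""}
    )

    func reserve(seatID, clientID string) error {
        mu.Lock()
        defer mu.Unlock()
        if seats[seatID] != "" {
            return fmt.Errorf("seat %s already taken", seatID)
        }
        seats[seatID] = clientID
        return nil
    }

    // TestConcurrentReservations races ten clients for the same seat and
    // expects exactly one of them to succeed.
    func TestConcurrentReservations(t *testing.T) {
        const clients = 10
        var wg sync.WaitGroup
        var successes int32
        for i := 0; i < clients; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                if reserve("1A", fmt.Sprintf("client%d", id)) == nil {
                    atomic.AddInt32(&successes, 1)
                }
            }(i)
        }
        wg.Wait()
        if successes != 1 {
            t.Fatalf("want exactly 1 successful reservation of seat 1A, got %d", successes)
        }
    }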

Lessons Learned

Developing this distributed movie booking system provided deep insights into distributed systems challenges. Implementing the Raft consensus algorithm taught the importance of careful state management and timing in distributed environments. Handling network partitions and ensuring data consistency across multiple nodes required robust error handling and recovery mechanisms. The project highlighted the complexity of building fault-tolerant systems while maintaining performance and user experience.

Installation and Usage

Prerequisites

  • Go 1.23.2 or higher
  • Unix-based system (macOS/Linux recommended)

Setup

  1. Clone the repository:

    git clone [repo-url]
    cd Movie_Booking_Backend_Chubby-Raft
  2. Install dependencies:

    go mod tidy

Running the System

  1. Start the server cluster:

    cd server
    go run server.go

    This will start a 5-node Raft cluster with automatic leader election.

  2. Connect clients (in separate terminals):

    cd client
    go run client.go --clientID=client1
    go run client.go --clientID=client2

Using the Booking System

Available Commands:

  • LIST - View all available seats
  • [SeatID] RESERVE - Book a seat (e.g., 1A RESERVE)
  • [SeatID] CANCEL - Cancel a reservation (e.g., 1A CANCEL)

Example Session:

Enter your requests in the format 'SeatID RequestType' (e.g., '1A RESERVE')
> LIST
Available seats: 1A, 1B, 1C, 2A, 2B, 2C, 3A, 3B, 4A, 4B, 4C

> 1A RESERVE
[Client client1] Server response (SUCCESS): Seat 1A reserved successfully

> 1A CANCEL
[Client client1] Server response (SUCCESS): Seat 1A cancelled successfully
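
A client typically tokenizes each console line into a seat ID and request type before building a request for the server. The helper below is illustrative and not taken from client/client.go:

    package client

    import (
        "fmt"
        "strings"
    )

    // parseCommand splits a console line such as "1A RESERVE" (or the bare
    // "LIST" command) into a seat ID and request type.
    func parseCommand(line string) (seatID, requestType string, err error) {
        fields := strings.Fields(strings.TrimSpace(line))
        switch {
        case len(fields) == 1 && strings.EqualFold(fields[0], "LIST"):
            return "", "LIST", nil
        case len(fields) == 2:
            return strings.ToUpper(fields[0]), strings.ToUpper(fields[1]), nil
        default:
            return "", "", fmt.Errorf("expected 'SeatID RequestType', got %q", line)
        }
    }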

Testing Fault Tolerance

To simulate leader failure and test the system's resilience:

  1. Uncomment the leader failure simulation in server.go (lines 905-911):

    time.Sleep(20 * time.Second)
    // Start the leader election process
    // Simulate leader failure
    fmt.Printf("\n****************************Simulating Leader Failure****************************\n")
    leaderServer := getLeader(servers)
    leaderServer.isAlive = false
  2. Run the system and observe automatic leader election and client redirection

  3. Monitor logs to see the Raft consensus algorithm in action

License

MIT License
