EduardoLemos567/MultiplayerCardGame
Multiplayer Implementation

1. Overview

The multiplayer system is built on a classic client-server architecture with an authoritative server. It does not use Godot's high-level multiplayer API (MultiplayerAPI, rpc). Instead, it employs a custom, low-level networking solution using TCPServer and StreamPeerTCP for full control over the communication protocol.

The server holds the master game state (ServerState) and is the sole authority on game rules and progression. Clients maintain a local copy of the state (ClientState) and send action requests to the server (e.g., CL_PLAY_CARD). The server validates these actions, updates its state, and then broadcasts the changes to all clients.

2. Core Concepts & Design Choices

Several key design choices define this implementation:

a. Low-Level TCP Networking

  • Technology: The system uses Godot's TCPServer, StreamPeerTCP, and PacketPeerStream to manage connections and data transfer.
  • Rationale: This approach was chosen over Godot's built-in high-level multiplayer for maximum control. It allows for a completely custom-defined packet structure and serialization process, enabling optimizations like data compression (e.g., sending arrays instead of dictionaries) and fine-grained control over what data is sent to whom. The trade-off is increased complexity in implementation and maintenance.
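To make the framing concrete, here is a minimal sketch (in Python, for illustration only; the project itself uses Godot's PacketPeerStream) of length-prefixed packet framing over a raw TCP byte stream, assuming a 4-byte little-endian length prefix:

```python
import struct

def encode_frame(payload: bytes) -> bytes:
    """Prefix the payload with a 4-byte little-endian length,
    the role PacketPeerStream plays on top of StreamPeerTCP."""
    return struct.pack("<I", len(payload)) + payload

def decode_frames(buffer: bytes) -> list:
    """Split a raw TCP byte stream back into discrete packets."""
    packets = []
    offset = 0
    while offset + 4 <= len(buffer):
        (length,) = struct.unpack_from("<I", buffer, offset)
        if offset + 4 + length > len(buffer):
            break  # incomplete packet: wait for more bytes to arrive
        packets.append(buffer[offset + 4 : offset + 4 + length])
        offset += 4 + length
    return packets

stream = encode_frame(b"CL_PLAY_CARD") + encode_frame(b"CL_SAY_MSG")
print(decode_frames(stream))  # -> [b'CL_PLAY_CARD', b'CL_SAY_MSG']
```

Framing like this is what turns TCP's continuous byte stream into the discrete, typed packets the rest of the protocol is built on.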

b. State Synchronization Model

The system uses a hybrid approach for keeping clients synchronized with the server:

  1. Full State Synchronization: Upon game start, or when a client requests it (CL_RENEW_STATE), the server sends the entire game state in a single SV_NEW_STATE packet. This ensures a client has a correct and complete view of the game.
  2. Event-Based Updates: For most in-game actions (a player connecting, sending a chat message, etc.), the server sends small, specific packets (e.g., SV_PLAYER_CONNECTED, SV_PLAYER_SAID_MSG). This is far more efficient than sending the entire state for every minor change.
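The hybrid model can be sketched as a client-side dispatch where a full-state packet replaces the local state wholesale, while event packets apply only a delta. This is an illustrative Python sketch; the packet constants and ClientState shape are stand-ins for the project's real definitions:

```python
# Hypothetical packet type ids (the real project defines these
# in Packet.Type / PacketSchema).
SV_NEW_STATE = 1
SV_PLAYER_CONNECTED = 2
SV_PLAYER_SAID_MSG = 3

class ClientState:
    def __init__(self):
        self.players = {}
        self.messages = []

def handle_packet(state, ptype, data):
    if ptype == SV_NEW_STATE:
        # Full synchronization: discard and rebuild the local state.
        new_state = ClientState()
        new_state.players = dict(data["players"])
        return new_state
    elif ptype == SV_PLAYER_CONNECTED:
        # Event-based update: apply only the delta.
        state.players[data["id"]] = data["name"]
    elif ptype == SV_PLAYER_SAID_MSG:
        state.messages.append((data["id"], data["msg"]))
    return state

state = ClientState()
state = handle_packet(state, SV_NEW_STATE, {"players": {1: "Alice"}})
state = handle_packet(state, SV_PLAYER_CONNECTED, {"id": 2, "name": "Bob"})
print(state.players)  # -> {1: 'Alice', 2: 'Bob'}
```

The full-state path doubles as a recovery mechanism: a client that suspects it has drifted can always request CL_RENEW_STATE and start from a clean copy.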

c. Schema-Driven Packet Protocol

This is the cornerstone of the networking layer. The system in resources/features/multiplayer/common/packets/ provides a robust, type-safe, and extensible way to define packets.

  • Packet: The main class representing a network packet. It handles the encoding and decoding of data.
  • PacketSchema: A static class that defines the structure (the fields and their order) for every packet type (Packet.Type). This acts as a single source of truth for the protocol definition.
  • Fields: A collection of classes (IdField, PlainField, PlayerEntityField, StateField, etc.) that handle the serialization and deserialization logic for specific data types. This design isolates the logic for handling an int vs. a complex PlayerEntity.
  • Contextual Serialization: The field system supports sending different data depending on the recipient. For example, CardEntityField will hide a card's identity from a player who shouldn't see it (e.g., a card in the opponent's hand), sending Constants.UNKNOWN_ID instead.
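The field-based schema idea, including contextual serialization, can be sketched as follows. This is an illustrative Python model, not the project's code: PlainField and CardField stand in for the real field classes, and UNKNOWN_ID mirrors Constants.UNKNOWN_ID:

```python
UNKNOWN_ID = -1  # stand-in for Constants.UNKNOWN_ID

class PlainField:
    """Passes a simple value (int, string) through unchanged."""
    def encode(self, value, recipient): return value
    def decode(self, value): return value

class CardField:
    """Hides a card's identity from recipients who may not see it."""
    def encode(self, card, recipient):
        card_id, owner = card
        return card_id if owner == recipient else UNKNOWN_ID
    def decode(self, value): return value

# One schema per packet type: an ordered list of named fields,
# serialized to a plain array (cheaper on the wire than a dictionary).
SCHEMAS = {
    "SV_CARD_DRAWN": [("player_id", PlainField()), ("card", CardField())],
}

def encode_packet(ptype, values, recipient):
    return [f.encode(values[name], recipient) for name, f in SCHEMAS[ptype]]

def decode_packet(ptype, array):
    return {name: f.decode(v) for (name, f), v in zip(SCHEMAS[ptype], array)}

values = {"player_id": 1, "card": (42, 1)}  # card 42, owned by player 1
print(encode_packet("SV_CARD_DRAWN", values, recipient=1))  # -> [1, 42]
print(encode_packet("SV_CARD_DRAWN", values, recipient=2))  # -> [1, -1]
```

Because the schema is the single source of truth, encoder and decoder can never disagree about field order, and per-recipient filtering lives in exactly one place.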

d. Separation of Concerns

The code is cleanly structured, separating different responsibilities into distinct classes for both the client and server sides.

  • *Node (ServerNode, ClientNode): Manages the raw network connection lifecycle (listening, connecting, accepting peers).
  • *Messenger (ServerMessenger, ClientMessenger): Acts as a bridge between the raw network layer and the game logic. It processes incoming packets and emits typed signals (e.g., player_said_msg).
  • *Logic (ServerLogic, ClientLogic): Contains the core game logic. It subscribes to signals from the Messenger and orchestrates game flow and state changes.
  • *State (ServerState, ClientState): Pure data classes that hold the current state of the game (players, cards, piles, etc.).
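The Messenger layer is essentially an observer pattern: it decodes packets and emits typed signals so that game logic never touches raw network data. A minimal Python sketch of that bridge (Godot's built-in signals play this role in the actual project; the Signal class here is an illustrative stand-in):

```python
class Signal:
    """Tiny stand-in for Godot's signal mechanism."""
    def __init__(self):
        self._handlers = []
    def connect(self, fn):
        self._handlers.append(fn)
    def emit(self, *args):
        for fn in self._handlers:
            fn(*args)

class ClientMessenger:
    """Translates decoded packets into typed, named signals."""
    def __init__(self):
        self.player_said_msg = Signal()
    def process_packet(self, ptype, data):
        if ptype == "SV_PLAYER_SAID_MSG":
            self.player_said_msg.emit(data["id"], data["msg"])

log = []
messenger = ClientMessenger()
messenger.player_said_msg.connect(lambda pid, msg: log.append((pid, msg)))
messenger.process_packet("SV_PLAYER_SAID_MSG", {"id": 1, "msg": "hello"})
print(log)  # -> [(1, 'hello')]
```

The Logic classes subscribe to these signals, so swapping the transport (or testing logic without a network at all) requires no changes above the Messenger.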

3. Key Classes

  • MultiplayerManager (Singleton): The global entry point for all multiplayer functionality. It holds references to the active ServerNode and/or ClientNode.

  • ServerNode / ClientNode: These classes are the main drivers for the server and client, respectively. They are responsible for initialization, processing network events each frame, and shutting down connections.

  • ServerLogic: The "brains" of the server. It handles player identification, starts the game, validates client actions, and dictates the flow of the game.

  • ClientLogic: Reacts to messages from the server. When it receives a new state (_on_new_state), it updates the local ClientState. It also contains simple client-side logic, like sending a "hello" message when the game starts.

  • BaseState / ServerState / ClientState: These classes model the game's data. BaseState provides the common structure, including IdentificationManagers, dictionaries that map entity IDs to entity objects. ServerState includes logic to initialize the game board, while ClientState is a simpler reflection of the server's state.

  • BasePeer / ClientPeer / ServerPeer: These classes encapsulate a connection.

    • ClientPeer: Exists on the server and represents a connected client.
    • ServerPeer: Exists on the client and represents the connection to the server.

4. Connection & Game Start Flow

  1. A player hosts a game, creating a ServerNode which starts a TCPServer to listen for connections.
  2. Another player joins, creating a ClientNode which connects to the server's address.
  3. Upon successful connection, the ClientNode immediately sends a CL_IDENTIFICATION packet containing the player's unique ID and profile data.
  4. The ServerNode accepts the connection, creating a ClientPeer to manage it.
  5. The ServerLogic receives the CL_IDENTIFICATION packet. It identifies the ClientPeer, associates it with a PlayerEntity, and stores it. It then broadcasts an SV_PLAYER_CONNECTED message to the other connected clients.
  6. Once the required number of players have connected and identified themselves, ServerLogic triggers the game start.
  7. ServerState initializes the full game state (shuffling decks, placing initial cards, etc.).
  8. The server sends an SV_NEW_STATE packet containing this initial state to all clients.
  9. The server then sends an SV_GAME_STARTED packet to all clients.
  10. Clients receive SV_NEW_STATE, load it into their local ClientState, and perform local setup (like player.claim_entities).
  11. Upon receiving SV_GAME_STARTED, the UI can be enabled, and the game begins.
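Steps 4-6 of this flow (accept a peer, wait for identification, start once everyone has identified) can be sketched server-side as follows. This is an illustrative Python model with assumed names (REQUIRED_PLAYERS, on_peer_connected, on_identification); the real ServerLogic/ClientPeer APIs may differ:

```python
REQUIRED_PLAYERS = 2  # assumed threshold for this sketch

class ClientPeer:
    def __init__(self, conn_id):
        self.conn_id = conn_id
        self.player_id = None  # unknown until CL_IDENTIFICATION arrives

class ServerLogic:
    def __init__(self):
        self.peers = {}
        self.game_started = False

    def on_peer_connected(self, conn_id):
        # Step 4: wrap the accepted connection in a ClientPeer.
        self.peers[conn_id] = ClientPeer(conn_id)

    def on_identification(self, conn_id, player_id):
        # Step 5: bind the peer to a player identity.
        # (A real server would broadcast SV_PLAYER_CONNECTED here.)
        self.peers[conn_id].player_id = player_id
        identified = [p for p in self.peers.values()
                      if p.player_id is not None]
        if len(identified) == REQUIRED_PLAYERS and not self.game_started:
            # Step 6: all players identified -> trigger game start
            # (initialize state, send SV_NEW_STATE, then SV_GAME_STARTED).
            self.game_started = True

server = ServerLogic()
server.on_peer_connected(0)
server.on_identification(0, "alice")
server.on_peer_connected(1)
server.on_identification(1, "bob")
print(server.game_started)  # -> True
```

Keying the start condition on *identified* peers rather than raw connections matters: a socket can be open before the client has proven who it is.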

5. Architectural Constraint: Decoupled Server

The multiplayer architecture is guided by a primary constraint: the server logic must be fully decoupled from the client. This design choice provides significant flexibility for deployment and future development. The custom packet protocol was specifically built to support several server models:

  • Integrated Server: A player can host a game directly from their client. In this "Listen Server" model, the server logic runs within the same Godot instance as the hosting client. This is the current implementation for quick, player-hosted matches.
  • Dedicated Server: The server can be compiled as a standalone, headless Godot executable. This is ideal for authoritative, persistent game instances that run independently of any client.
  • Alternative Backend: Because the communication protocol serializes to a simple dictionary or array format, the server logic could be reimplemented on a different technology stack (e.g., a Node.js or Python web server). The client would require minimal changes, as it would still be sending and receiving the same packet structures, potentially serialized as JSON over a different transport layer like WebSockets.
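The transport-agnostic claim is easy to demonstrate: because a schema-encoded packet is just an array, the same payload can travel as a binary TCP frame or as JSON text over a WebSocket. The packet layout below ([type_id, ...field values]) is illustrative, not the project's exact format:

```python
import json

# A schema-encoded packet is a plain array: e.g. type id 7,
# then the field values in schema order (layout assumed for this sketch).
packet = [7, 1, 42]

as_json = json.dumps(packet)   # what a Node.js/Python backend might emit
restored = json.loads(as_json)

print(as_json)                 # -> [7, 1, 42]
print(restored == packet)      # -> True
```

The client's schema layer would decode the restored array exactly as it decodes one arriving over TCP, which is why a backend swap leaves the client almost untouched.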

About

Multiplayer implementation for a card game.
