lcmami/adaptive-client-mixing-fl

Adaptive Client Mixing for Federated Learning

A research-driven machine learning project derived from my master's thesis, focusing on adaptive client selection strategies in multi-server federated learning under mobility-induced unavailability.

⚠️ Note
This repository contains a simplified and partially adapted implementation.
The full thesis is currently under embargo for potential publication and is not publicly available.


🔍 Overview

In real-world federated learning systems, client participation is often unstable due to mobility, network conditions, or availability constraints.

Such dynamics introduce:

  • Non-stationary client populations
  • Shifting data distributions
  • Unstable training behavior
  • Degraded model performance

This project investigates how adaptive client mixing strategies can improve training robustness under these conditions.


🎯 Problem Setting

Traditional federated learning assumes relatively stable client participation.

However, in multi-server environments:

  • Clients may move across servers
  • Participation may be intermittent
  • Effective data distribution changes over time

This leads to:

❗ Fixed client selection strategies become suboptimal or unstable


💡 Core Idea

We model client composition as a controllable variable and optimize it dynamically.

Key Design

  1. Client Partitioning

    • Local Clients → stable participants
    • Visitor Clients → dynamic / unstable participants
  2. Mobility Modeling

    • Represented as mobility-induced unavailability
    • Modeled as an environmental prior
  3. Decision Variable

    • p = proportion of visitor clients
  4. Optimization Strategy

    • Bayesian Optimization (BO)
    • Treated as a noisy black-box optimization problem
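The search over p described above can be sketched as a tiny Bayesian Optimization loop. The following is a minimal, self-contained illustration, not the thesis implementation: the objective shape, kernel length-scale, and UCB acquisition are all illustrative assumptions.

```python
import numpy as np

def rbf(a, b, length_scale=0.15):
    # squared-exponential kernel on the 1-D mixing ratio p
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

def gp_posterior(X_obs, y_obs, X_query, noise=1e-2):
    # standard Gaussian process regression posterior (zero prior mean)
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_s = rbf(X_obs, X_query)
    alpha = np.linalg.solve(K, y_obs)
    mu = K_s.T @ alpha
    v = np.linalg.solve(K, K_s)
    var = 1.0 - np.sum(K_s * v, axis=0)  # prior variance on the diagonal is 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

rng = np.random.default_rng(0)

def short_run_signal(p):
    # hypothetical noisy black-box objective with an (unknown) optimum near p = 0.4
    return -(p - 0.4) ** 2 + 0.01 * rng.standard_normal()

grid = np.linspace(0.0, 1.0, 101)
X = list(rng.uniform(0.0, 1.0, 3))  # a few random warm-up probes
y = [short_run_signal(p) for p in X]

for _ in range(15):
    mu, sd = gp_posterior(np.array(X), np.array(y), grid)
    p_next = grid[int(np.argmax(mu + 2.0 * sd))]  # UCB acquisition
    X.append(p_next)
    y.append(short_run_signal(p_next))

p_star = X[int(np.argmax(y))]
print(f"estimated optimal visitor ratio p* = {p_star:.2f}")
```

Because each evaluation is noisy, the loop balances exploring untried ratios (high posterior uncertainty) against exploiting ratios that already scored well, which is exactly why the problem is treated as noisy black-box optimization.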

🧠 System Pipeline

Mobility Indicators → Unavailability Proxy → Client Pool (Local / Visitor) → Mixing Ratio p → Short-run Evaluation Signal → Bayesian Optimization → Optimal Ratio p* → Federated Training Strategy
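The front half of the pipeline can be sketched in a few lines. Everything here is a hypothetical stand-in: the proxy mapping, pool size, and dropout model are invented for illustration only.

```python
import random

def unavailability_proxy(mobility: float) -> float:
    # hypothetical mapping: higher mobility -> higher visitor dropout probability
    return min(1.0, 0.1 + 0.8 * mobility)

def build_round(pool_size: int, p: float, dropout: float, rng: random.Random) -> int:
    # split the pool by mixing ratio p; locals are stable, visitors may drop out
    n_visitor = round(p * pool_size)
    n_local = pool_size - n_visitor
    visitors_present = sum(rng.random() > dropout for _ in range(n_visitor))
    return n_local + visitors_present

rng = random.Random(42)
dropout = unavailability_proxy(mobility=0.6)
participants = build_round(pool_size=100, p=0.3, dropout=dropout, rng=rng)
print(f"dropout={dropout:.2f}, participants this round: {participants}")
```

The count of clients that actually show up per round is what feeds the short-run evaluation signal consumed by the optimizer.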


🧪 Experimental Insight (Thesis-derived)

Although this repository provides a simplified implementation, the underlying research observes:

  • The optimal mixing ratio depends on environmental conditions
  • Higher visitor ratios improve generalization in stable settings
  • Lower visitor ratios improve stability under high unavailability
  • Short-run signals can approximate long-run training performance

These findings are derived from controlled experiments in the original thesis.


📊 Results / Key Findings

The optimal mixing ratio varies under different client unavailability conditions.

  • Low unavailability (10%): a higher visitor ratio performs better
  • Medium unavailability (50%): a moderate visitor ratio works best
  • High unavailability (90%): a lower visitor ratio is more stable

(Figure: Mixing Ratio vs Unavailability)
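The qualitative trend above can be reproduced with a toy reward model. The coefficients are assumptions for illustration, not thesis results: visitor data adds a diversity benefit, while the instability risk of visitors grows with unavailability.

```python
import numpy as np

def toy_reward(p, unavailability):
    # hypothetical trade-off: linear diversity benefit from visitors,
    # quadratic instability risk scaled by the unavailability level
    benefit = 0.5 * p
    risk = unavailability * p ** 2
    return benefit - risk

grid = np.linspace(0.0, 1.0, 101)
for u in (0.1, 0.5, 0.9):
    p_opt = grid[int(np.argmax(toy_reward(grid, u)))]
    print(f"unavailability={u:.0%} -> best visitor ratio = {p_opt:.2f}")
```

Under this model the best visitor ratio shrinks monotonically as unavailability rises, matching the low/medium/high pattern reported above.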


BO-style Convergence (Simplified)

The demo also illustrates how the optimal mixing ratio is identified through an iterative search process.

(Figure: iterative convergence of the search under Low / Medium / High unavailability)


🛠 Implementation Overview

This repository focuses on the decision-making layer rather than full FL infrastructure.

Core Components

  • src/simulator.py
    Simulates environment-dependent behavior

  • src/objective.py
    Defines reward using short-run signals

  • src/optimizer.py
    Implements simplified Bayesian Optimization loop


▶️ Demo

Run the demo:

python demo/run_demo.py

This demonstrates:

  • Ratio adaptation under different environments
  • BO-based decision process

📁 Repository Structure

demo/ → runnable demo
src/ → core modules
docs/ → method explanation
figures/ → diagrams / results


⚙️ Reproducibility

  • Python 3.10+
  • Designed for conceptual reproducibility
  • Not intended to fully replicate large-scale FL

📌 Project Positioning

This project demonstrates:

  • Federated learning system modeling
  • Optimization under uncertainty
  • Simulation-based evaluation
  • Research-to-engineering translation

🚧 Limitations

  • No full FL training pipeline (e.g., Flower)
  • Uses simplified simulation
  • Focuses on decision layer

📚 Documentation

See:

docs/method.md


🧾 Research Note

This work is based on my master's thesis on:

Adaptive client mixing in multi-server federated learning using Bayesian Optimization

The full thesis is currently under embargo and will be made publicly available once the embargo is lifted.


👤 Author

Machine Learning Engineer (Entry-level)
Focus: Federated Learning / Optimization / AI Systems

Keywords: Federated Learning, Bayesian Optimization, Non-IID, Distributed Systems
