Production-ready federated learning platform with privacy-preserving distributed training, communication efficiency, and Byzantine robustness.
## Algorithms

- Hybrid gradient compression (20-50x ratio)
- Byzantine-robust aggregation (Multi-Krum, Trimmed Mean)
- Differential privacy (DP-SGD)
- Membership inference attack validation
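For intuition, one of the aggregators listed above, coordinate-wise trimmed mean, can be sketched as follows. This is an illustrative NumPy sketch, not the platform's actual `robust_aggregation` API; the function name and array shapes are assumptions:

```python
import numpy as np

def trimmed_mean(updates: np.ndarray, trim_ratio: float = 0.1) -> np.ndarray:
    """Coordinate-wise trimmed mean over stacked client updates.

    updates: (num_clients, num_params) matrix of client gradients.
    trim_ratio: fraction of extreme values dropped at each end per
    coordinate, which bounds the influence of Byzantine clients.
    """
    k = int(len(updates) * trim_ratio)
    sorted_updates = np.sort(updates, axis=0)       # sort each coordinate
    kept = sorted_updates[k:len(updates) - k]       # drop k lowest / k highest
    return kept.mean(axis=0)

# A single malicious client cannot drag the aggregate arbitrarily:
honest = np.ones((9, 4))
byzantine = np.full((1, 4), 1e6)
agg = trimmed_mean(np.vstack([honest, byzantine]), trim_ratio=0.1)
# agg is all ones: the 1e6 outlier is trimmed away in every coordinate
```

With 10 clients and `trim_ratio=0.1`, one value is discarded at each extreme per coordinate, so the poisoned update never enters the mean.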
## Infrastructure

- Kubernetes deployment with Helm
- CI/CD pipeline with GitHub Actions
- Prometheus + Grafana monitoring
- MLflow experiment tracking
```shell
./launch-platform.sh
```

Access:
- Dashboard: http://localhost:8050
- MLflow: http://localhost:5000
```
complete/fl/
├── fl/
│   ├── task.py                  # Training loop
│   ├── server_app.py            # Server aggregation
│   ├── client_app.py            # Client training
│   ├── compression.py           # Gradient compression
│   ├── robust_aggregation.py    # Byzantine robustness
│   └── privacy/                 # Privacy validation
├── config/
│   └── default.yaml             # Configuration
└── tests/
    └── test_*.py                # Test suite
```
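The hybrid scheme in `compression.py` is not reproduced here, but the core sparsification idea behind compression ratios in the 20-50x range can be sketched. The function names and the plain top-k choice are illustrative assumptions, not the repository's exact implementation:

```python
import numpy as np

def topk_compress(grad: np.ndarray, ratio: float = 0.02):
    """Keep only the largest-magnitude `ratio` fraction of entries."""
    k = max(1, int(grad.size * ratio))
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]    # indices of top-k values
    return idx, flat[idx], grad.shape               # sparse representation

def topk_decompress(idx, vals, shape):
    """Scatter the kept values back into a dense zero tensor."""
    flat = np.zeros(int(np.prod(shape)), dtype=vals.dtype)
    flat[idx] = vals
    return flat.reshape(shape)

g = np.random.default_rng(0).normal(size=(100, 10))
idx, vals, shape = topk_compress(g, ratio=0.02)     # 1000 -> 20 values (~50x)
g_hat = topk_decompress(idx, vals, shape)
```

Transmitting only `(idx, vals)` instead of the dense gradient gives roughly a 50x reduction at `ratio=0.02`; production schemes typically combine this with error feedback and quantization.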
Edit `complete/fl/config/default.yaml`:

```yaml
topology:
  num_clients: 10
  fraction: 0.5

train:
  lr: 0.01
  local_epochs: 1
  num_server_rounds: 10

data:
  dataset: "albertvillanova/medmnist-v2"
  subset: "pneumoniamnist"
  batch_size: 32

privacy:
  dp_sgd:
    enabled: true
    noise_multiplier: 0.8
    target_epsilon: 3.0
```

Local setup:
```shell
cd complete/fl
pip install -e ".[dev]"
pytest tests/ -v --cov=fl
flwr run . local-simulation --stream
```

Docker:
```shell
./launch-platform.sh
docker compose -f complete/compose-with-ui.yml down
```

Documentation:

- Architecture - System design
- API Reference - Module documentation
- Kubernetes Deployment - Production deployment
- Troubleshooting - Common issues
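For intuition on the `dp_sgd` settings in the configuration above, DP-SGD amounts to per-example gradient clipping followed by Gaussian noise scaled by `noise_multiplier`. This NumPy sketch is illustrative only; the platform presumably relies on a DP library with a proper privacy accountant rather than code like this:

```python
import numpy as np

def dp_aggregate(per_example_grads: np.ndarray,
                 clip_norm: float = 1.0,
                 noise_multiplier: float = 0.8,
                 seed: int = 0) -> np.ndarray:
    """Clip each example's gradient, sum, and add Gaussian noise.

    Noise std is noise_multiplier * clip_norm, the standard DP-SGD
    convention; a privacy accountant converts this (plus the sampling
    rate and number of steps) into an epsilon guarantee.
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale                  # per-example clipping
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    return clipped.sum(axis=0) + noise                   # noisy gradient sum

g = np.random.default_rng(1).normal(size=(32, 10))       # batch of 32 gradients
noisy = dp_aggregate(g, noise_multiplier=0.8)
```

Clipping bounds each example's contribution to the sum, which is what makes the added Gaussian noise yield a differential privacy guarantee.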
Run the test suite with coverage:

```shell
cd complete/fl
pytest tests/ -v --cov=fl --cov-report=html
```

License: MIT