
Collaborative Learning Under Resource Constraints

A federated learning system for resource-constrained edge devices with model pruning capabilities. This project demonstrates distributed machine learning on Raspberry Pi devices while maintaining model accuracy through advanced sparsification techniques.

Note: This is a project showcase. The full implementation remains private as it was developed as part of a university research collaboration.

Project Poster

Abstract

This project explores federated learning on resource-constrained edge devices, using Raspberry Pis as edge nodes. By combining federated learning with model pruning, we achieved a 200% speedup in inference and model-transfer times with less than 5% accuracy loss. The system splits training across multiple devices, utilizing the full training set while preserving privacy and reducing communication costs.

Key Results

  • Inference Speedup: 200% improvement with pruned models (a DeepSparse inference sketch follows this list)
  • Model Transfer Speedup: 200% improvement in upload/download times
  • Accuracy Preservation: Less than 5% accuracy loss on heavily pruned models (90% sparsity)
  • Model Size Reduction: 5x reduction in model size with 90% sparsity
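
The inference speedup above comes from running the pruned model through a sparsity-aware runtime. Purely as orientation, here is a minimal DeepSparse usage sketch; the model path, input shape, and exact engine API are assumptions and may differ from the project's actual inference code.

from deepsparse import compile_model
import numpy as np

# Hypothetical path to a pruned model exported to ONNX via the transfer module.
engine = compile_model("pruned_model.onnx", batch_size=1)

# Assumes a single-channel 28x28 Fashion-MNIST input; adjust to the model's real input.
sample = np.random.rand(1, 1, 28, 28).astype(np.float32)
outputs = engine.run([sample])
print(outputs[0].argmax(axis=1))  # predicted class index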

Architecture

Components

  • Embedded Devices: Raspberry Pi clients for distributed training
  • Server: Central aggregation server for federated learning coordination
  • GUI Application: User interface for model interaction and inference
  • Transfer Module: ONNX model conversion utilities (a conversion sketch follows this list)
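
To illustrate what an ONNX conversion step looks like, here is a minimal sketch. It assumes base_model.pt is a state dict for a 10-class ResNet-18 whose first convolution was adapted to 1-channel Fashion-MNIST input; the actual utilities in transfer/ may differ.

import torch
from torchvision.models import resnet18

# Assumption: base_model.pt holds a state dict for a 10-class ResNet-18
# with a first conv layer that accepts 1-channel (Fashion-MNIST) input.
model = resnet18(num_classes=10)
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.load_state_dict(torch.load("base_model.pt"))
model.eval()

# Trace with a dummy Fashion-MNIST-shaped input and export to ONNX.
dummy = torch.randn(1, 1, 28, 28)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"])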

Technologies Used

  • Framework: PyTorch with Flower federated learning framework
  • Model: ResNet-18 trained on Fashion-MNIST dataset
  • Pruning: PyTorch structured and unstructured pruning (see the sketch after this list)
  • Inference Engine: DeepSparse for optimized sparse model inference
  • Hardware: Raspberry Pi 4 edge devices
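
As a concrete sketch of the unstructured side of the pruning step, the snippet below uses PyTorch's built-in torch.nn.utils.prune to reach the 90% sparsity figure quoted above; the project's actual pruning schedule and any structured-pruning configuration are not shown here.

import torch
import torch.nn.utils.prune as prune
from torchvision.models import resnet18

model = resnet18(num_classes=10)

# L1 unstructured pruning: zero out the 90% smallest-magnitude weights
# in every conv and linear layer (matching the 90% sparsity figure above).
for module in model.modules():
    if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")  # bake the mask into the weights

# Verify the overall sparsity of the pruned layers.
pruned = [m for m in model.modules()
          if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
zeros = sum((m.weight == 0).sum().item() for m in pruned)
total = sum(m.weight.numel() for m in pruned)
print(f"sparsity: {zeros / total:.1%}")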

Quick Start

Prerequisites

  • Python 3.7+
  • PyTorch
  • Flower federated learning framework

Launch GUI Application

python gui/gui_controller.py

Start Federated Training

  1. Server: Run the aggregation server
  2. Clients: Launch client training on the Raspberry Pi devices (a minimal Flower sketch follows these steps)
  3. Monitor: Use GUI to track training progress and run inference
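
For orientation, a minimal Flower server/client pair in the pre-1.0 API style current when this project was built might look like the following. Here set_parameters, train_one_epoch, and test are hypothetical helpers, and model, trainloader, and testloader are assumed to be defined elsewhere; this is a sketch rather than the project's actual scripts.

# server.py — central aggregation server (FedAvg by default)
import flwr as fl

fl.server.start_server(server_address="0.0.0.0:8080",
                       config={"num_rounds": 3})

# client.py — run on each Raspberry Pi
import flwr as fl

class PiClient(fl.client.NumPyClient):
    def get_parameters(self):
        return [v.cpu().numpy() for v in model.state_dict().values()]

    def fit(self, parameters, config):
        set_parameters(model, parameters)    # hypothetical helper
        train_one_epoch(model, trainloader)  # hypothetical helper
        return self.get_parameters(), len(trainloader.dataset), {}

    def evaluate(self, parameters, config):
        set_parameters(model, parameters)
        loss, acc = test(model, testloader)  # hypothetical helper
        return float(loss), len(testloader.dataset), {"accuracy": float(acc)}

fl.client.start_numpy_client("SERVER_IP:8080", client=PiClient())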

Project Structure

├── embedded_devices/     # Raspberry Pi client implementation
├── gui/                 # Graphical user interface
├── example/             # Example federated learning setup
├── transfer/            # Model conversion utilities
└── base_model.pt        # Pre-trained baseline model

Research & Development

This project was developed as part of a UT ECE senior design capstone, supervised by Dr. Haris Vikalo. The work addresses critical challenges in distributed machine learning:

  • Resource Constraints: Limited compute and memory on edge devices
  • Communication Efficiency: High bandwidth costs for model synchronization
  • Privacy Preservation: Local training without data centralization
  • Model Optimization: Maintaining accuracy with compressed models

Team

Authors: Michelle Wen, Noah Rose, Melissa Yang, Jack Wang, Evan Bausbacher, Jordon Kashanchi
Faculty Sponsor: Dr. Haris Vikalo
Institution: University of Texas at Austin, Electrical and Computer Engineering
Date: November 2021

License

This project is licensed under the terms specified in the LICENSE file.
