
Fairness in Machine Learning: Identifying and Mitigation of Bias

This repository contains the output of our research work in the CAREERS Cyberteam Program. The project was part of a National Science Foundation (NSF) funded initiative through Penn State's Institute of Computational and Data Sciences (ICDS), hosted by Yale's CAREERS CyberTeam, and is entitled "Fairness in Machine Learning: Identifying and Mitigation of Bias". In this work we used three fairness libraries to identify and mitigate bias in machine learning models: Fairlearn by Microsoft, AIF360 by IBM, and the What-If Tool by Google. We identified and mitigated bias for two different types of data: structured data in a classification problem, and unstructured data in computer vision and natural language processing problems.
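A central quantity these libraries report when identifying bias is a group fairness metric such as the demographic parity difference: the gap in positive-prediction rates between sensitive groups. The sketch below computes it from scratch on hypothetical data, purely to illustrate the idea; in practice Fairlearn and AIF360 provide their own implementations of this and many other metrics.

```python
# Illustrative sketch of the demographic parity difference, a group
# fairness metric reported by libraries such as Fairlearn and AIF360.
# The predictions and group labels below are hypothetical examples.

def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in positive-prediction rate across sensitive groups."""
    rates = []
    for group in set(sensitive):
        preds = [p for p, s in zip(y_pred, sensitive) if s == group]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)

# Hypothetical model predictions for two demographic groups "a" and "b".
y_pred    = [1, 0, 1, 1, 0, 0, 1, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" receives positive predictions at rate 3/4, group "b" at 1/4,
# so the demographic parity difference is 0.5.
print(demographic_parity_difference(y_pred, sensitive))
```

A value of 0 would mean both groups receive positive predictions at the same rate; mitigation techniques (e.g. Fairlearn's reductions or AIF360's pre- and post-processing algorithms) aim to shrink gaps like this without sacrificing much accuracy.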

15 May 2024

Contributors
Karim Kallich
Mohamed Eltayeb
Ahmed Rashed
