Commit 161b949

Author: codebasics
Commit message: imbalanced dataset
1 parent 1e1d784 commit 161b949

File tree

1 file changed: +4 −4 lines changed


DeepLearningML/13_imbalanced/handling_imbalanced_data_exercise.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -1,14 +1,14 @@
 #### Exercise: Handling imbalanced data in machine learning
 
-1. Use [this notebook]() but handle imbalanced data using simple logistic regression from the sklearn library. The original notebook uses a neural network, but you need to use sklearn logistic regression (or any other classification model) and improve the f1-score of the minority class using:
+1. Use [this notebook](https://github.com/codebasics/py/blob/master/DeepLearningML/13_imbalanced/handling_imbalanced_data.ipynb) but handle imbalanced data using simple logistic regression from the sklearn library. The original notebook uses a neural network, but you need to use sklearn logistic regression (or any other classification model) and improve the f1-score of the minority class using:
     1. Undersampling
     1. Oversampling: duplicate copy
     1. Oversampling: SMOTE
     1. Ensemble
 
-[Solution]()
-
-1. Take this dataset for bank customer churn prediction: https://www.kaggle.com/barelydedicated/bank-customer-churn-modeling
+[Solution](https://github.com/codebasics/py/blob/master/DeepLearningML/13_imbalanced/handling_imbalanced_data_exercise_solution_telecom_churn.ipynb)
+
+2. Take this dataset for bank customer churn prediction: https://www.kaggle.com/barelydedicated/bank-customer-churn-modeling
     1. Build a deep learning model to predict the churn rate at the bank
     1. Once the model is built, print the classification report and analyze precision, recall and f1-score
     1. Improve the f1-score of the minority class using various techniques such as undersampling, oversampling, ensemble, etc.
```
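The first two techniques the exercise asks for, undersampling and oversampling by duplication, can be sketched as below. This is a minimal illustration on a synthetic imbalanced dataset (an assumption for self-containment; the exercise itself uses the linked churn data), using sklearn's `LogisticRegression` as the exercise suggests.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic 90/10 imbalanced binary dataset (stand-in for the churn data).
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

rng = np.random.default_rng(42)
maj = np.where(y_tr == 0)[0]   # majority-class row indices
mino = np.where(y_tr == 1)[0]  # minority-class row indices

# Undersampling: randomly drop majority rows until the classes are balanced.
keep = rng.choice(maj, size=len(mino), replace=False)
idx_u = np.concatenate([keep, mino])

# Oversampling (duplicate copy): resample minority rows with replacement
# until the minority class matches the majority count.
dup = rng.choice(mino, size=len(maj), replace=True)
idx_o = np.concatenate([maj, dup])

def minority_f1(idx):
    """Fit logistic regression on the given training rows; report minority f1."""
    model = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
    return f1_score(y_te, model.predict(X_te))

baseline = minority_f1(np.arange(len(y_tr)))
print(f"baseline={baseline:.3f} "
      f"under={minority_f1(idx_u):.3f} over={minority_f1(idx_o):.3f}")
```

Note that the resampling is applied to the training split only; the test split keeps its original imbalance so the f1 comparison stays honest.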
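For the SMOTE item, the exercise would typically use `SMOTE` from the imbalanced-learn package. To avoid that extra dependency, here is a hand-rolled sketch of the same idea: synthesize new minority samples by interpolating between a minority point and one of its nearest minority neighbours. The function name `smote_like` and all data here are illustrative assumptions, not the library API.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_like(X_min, n_new, k=5, seed=0):
    """SMOTE-style oversampling sketch: each synthetic point lies on the
    segment between a random minority point and one of its k nearest
    minority neighbours."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, nbrs = nn.kneighbors(X_min)  # column 0 is each point itself
    base = rng.integers(0, len(X_min), n_new)             # anchor points
    pick = nbrs[base, rng.integers(1, k + 1, n_new)]      # one neighbour each
    lam = rng.random((n_new, 1))                          # interpolation weight
    return X_min[base] + lam * (X_min[pick] - X_min[base])

# Toy minority class: 30 points in 4 dimensions; synthesize 70 more.
X_min = np.random.default_rng(1).normal(size=(30, 4))
X_new = smote_like(X_min, n_new=70)
print(X_new.shape)  # (70, 4)
```

Unlike duplication, the synthetic points are new (interpolated) rather than exact copies, which tends to reduce overfitting to repeated minority rows.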
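The "Ensemble" item can be read as an ensemble of undersampled models: train several classifiers, each on the full minority class plus a different random slice of the majority class, then majority-vote their predictions. A sketch under the same synthetic-data assumption:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

maj = np.where(y_tr == 0)[0]
mino = np.where(y_tr == 1)[0]
rng = np.random.default_rng(0)

# Each member sees all minority rows plus a fresh balanced majority sample,
# so together the members cover much more of the majority class than a
# single undersampled model would.
votes = []
for _ in range(5):
    sub = rng.choice(maj, size=len(mino), replace=False)
    idx = np.concatenate([sub, mino])
    clf = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
    votes.append(clf.predict(X_te))

# Majority vote across the 5 members.
ensemble_pred = (np.mean(votes, axis=0) >= 0.5).astype(int)
print(f"ensemble minority f1 = {f1_score(y_te, ensemble_pred):.3f}")
```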
