Fix ROC-AUC for classifiers #475
Conversation
… requirements path
Review Status: Needs Investigation

This PR claims to fix the ROC-AUC calculation by using probability scores (y_scores) instead of hard class predictions (y_preds).
Closing as this issue has already been fixed in the current codebase.

Current Implementation (lines 585-610 in Supervised.py)

The ROC-AUC calculation already uses probabilities, not predictions:

# Use predict_proba for ROC-AUC calculation instead of class labels
if hasattr(pipe, "predict_proba"):
    y_pred_proba = pipe.predict_proba(X_test)
    # For binary classification, use probabilities of positive class
    if y_pred_proba.shape[1] == 2:
        roc_auc = roc_auc_score(y_test, y_pred_proba[:, 1])
    else:
        # For multiclass, use one-vs-rest with probabilities
        roc_auc = roc_auc_score(y_test, y_pred_proba, multi_class='ovr', average='weighted')
elif hasattr(pipe, "decision_function"):
    # For models without predict_proba but with decision_function
    y_pred_score = pipe.decision_function(X_test)
    roc_auc = roc_auc_score(y_test, y_pred_score)
else:
    # Fallback to class labels if neither method is available
    roc_auc = roc_auc_score(y_test, y_pred)

This handles:

- Binary classification, scoring the positive-class probabilities from predict_proba
- Multiclass classification, scoring the full probability matrix one-vs-rest with weighted averaging
- Models that expose decision_function but not predict_proba (see the sketch below)
- A last-resort fallback to class labels when neither method is available
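For illustration only (this sketch is not code from the repository), the decision_function branch can be exercised with a classifier such as LinearSVC, which provides no predict_proba:

```python
# Illustrative sketch, not repository code: ROC-AUC for a classifier
# that exposes decision_function but not predict_proba.
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LinearSVC(max_iter=5000).fit(X_train, y_train)
assert not hasattr(clf, "predict_proba")      # no probability estimates available

scores = clf.decision_function(X_test)        # signed distances to the decision boundary
print("ROC-AUC from decision_function:", roc_auc_score(y_test, scores))
```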
Conclusion

The fix you proposed has already been implemented (or was implemented independently). The current code correctly uses probability scores rather than class labels. Thank you for identifying this issue - it's already resolved in the current version!
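As a quick, self-contained illustration of why probabilities matter here (again, not code from the package), the ROC-AUC computed from hard labels collapses the ROC curve to a single threshold and generally differs from the score computed on predicted probabilities:

```python
# Illustrative sketch, not repository code: hard labels vs. probabilities for ROC-AUC.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

auc_labels = roc_auc_score(y_test, clf.predict(X_test))              # single operating point
auc_proba = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])   # full ranking of samples
print("labels:", auc_labels, "probabilities:", auc_proba)
```

The label-based score only reflects one operating point of the classifier, which is exactly the problem this PR and the current implementation address.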
In the original package, the ROC-AUC score is calculated using y_preds instead of y_scores. This pull request resolves the issue for classifiers.
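For the multiclass case, a hedged sketch of what the corrected call looks like (the variable names are illustrative, not taken from this PR's diff) passes the full probability matrix with one-vs-rest averaging:

```python
# Illustrative sketch, not this PR's diff: multiclass ROC-AUC from y_scores.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_scores = clf.predict_proba(X_test)          # shape (n_samples, n_classes)

# Passing hard predictions (y_preds) here would raise a ValueError for multiclass
# targets; the probability matrix with one-vs-rest averaging is required instead.
print(roc_auc_score(y_test, y_scores, multi_class="ovr", average="weighted"))
```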