
[Bug]: Model Evaluation Metrics Display Incorrectly in Streamlit App #334

@Swapnilden

Description


Is there an existing issue for this?

  • I have searched the existing issues

What happened?

In the Streamlit app for job satisfaction prediction, the evaluation metrics (accuracy, classification report, confusion matrix, ROC curve) are computed from the single user-submitted input rather than from a held-out test set. When a user fills in the form and submits it, the app makes one prediction and then renders the evaluation metrics from that single sample. Because accuracy, the confusion matrix, and the ROC curve are only meaningful over a representative test set, the displayed numbers are misleading and do not reflect the model's actual performance.
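A minimal sketch of the intended fix, assuming a scikit-learn classifier. The names here (`model`, `X_test`, `y_test`, `user_row`) are illustrative stand-ins, not identifiers from the repository: metrics are computed once on the held-out test split, while the form submission only ever produces a single prediction.

```python
# Sketch: evaluate on a held-out test set, not on the single form input.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, roc_curve, auc)
from sklearn.model_selection import train_test_split

# Stand-in data; in the app this would be the project's dataset.
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluation metrics come from the full test split...
y_pred = model.predict(X_test)
acc = accuracy_score(y_test, y_pred)
report = classification_report(y_test, y_pred)
cm = confusion_matrix(y_test, y_pred)
fpr, tpr, _ = roc_curve(y_test, model.predict_proba(X_test)[:, 1])
roc_auc = auc(fpr, tpr)

# ...while the submitted form row only yields a single prediction,
# which should be shown to the user but never fed into the metrics.
user_row = X_test[:1]                  # placeholder for the form input
user_prediction = model.predict(user_row)

print(f"Test accuracy: {acc:.3f}")
```

In the Streamlit layer, the test-set metrics could then be rendered once (e.g. via `st.write` / `st.pyplot`, optionally cached) rather than being recomputed inside the submit handler from the single prediction.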

Add ScreenShots

N/A

What browsers are you seeing the problem on?

Firefox, Chrome, Safari, Microsoft Edge

Record

  • I agree to follow this project's Code of Conduct
  • I'm a GSSOC contributor
  • I want to work on this issue
  • I'm willing to provide further clarification or assistance if needed.

