Evaluation Metrics for Classification
Accuracy
The ratio of correctly predicted observations to the total observations. It is calculated as $\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$.
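A minimal sketch of computing accuracy by hand and with scikit-learn; the label arrays are illustrative, not from the source:

```python
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1]  # actual labels (illustrative)
y_pred = [1, 0, 0, 1, 0, 1]  # predicted labels (illustrative)

# correct predictions divided by total predictions
manual = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(manual, accuracy_score(y_true, y_pred))  # both print 0.8333...
```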
PR Curve (Precision-Recall Curve)
A graph showing the trade-off between precision and recall for different thresholds. A high area under the curve represents both high recall and high precision.
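A short sketch of tracing the curve with scikit-learn and measuring the area under it; the scores are made up for illustration (average_precision_score is a common alternative summary):

```python
from sklearn.metrics import auc, precision_recall_curve

y_true = [0, 0, 1, 1, 1, 0, 1]                  # actual labels (illustrative)
y_score = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9]  # predicted probabilities

# one (recall, precision) point per candidate threshold
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
print(auc(recall, precision))  # area under the PR curve
```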
Log-Loss (Cross-Entropy Loss)
A performance metric that measures the penalty for incorrect predictions, where the penalty is logarithmically proportional to the inverse of the predicted probability for the actual class. For binary classification it is calculated as $\text{LogLoss} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log(p_i) + (1 - y_i)\log(1 - p_i)\right]$.
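A minimal sketch comparing the formula above against scikit-learn's log_loss; the probabilities are illustrative:

```python
import numpy as np
from sklearn.metrics import log_loss

y_true = np.array([1, 0, 1, 1])
p = np.array([0.9, 0.1, 0.8, 0.3])  # predicted P(class = 1), illustrative

# average negative log-probability assigned to the actual class
manual = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
print(manual, log_loss(y_true, p))  # both print approximately 0.4095
```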
Precision
The ratio of correctly predicted positive observations to the total predicted positive observations. Calculated as $\text{Precision} = \frac{TP}{TP + FP}$.
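A quick sketch with scikit-learn, on illustrative labels reused for the related metrics below:

```python
from sklearn.metrics import precision_score

y_true = [1, 0, 1, 1, 0, 0]  # actual labels (illustrative)
y_pred = [1, 1, 0, 1, 0, 1]  # predicted labels (illustrative)

# TP = 2, FP = 2, so precision = 2 / (2 + 2) = 0.5
print(precision_score(y_true, y_pred))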
Confusion Matrix
A table used to describe the performance of a classification model, showing the actual and predicted classifications. It helps to visualize true positives, false positives, true negatives, and false negatives.
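A minimal sketch of building the matrix with scikit-learn and reading off the four cell counts; the labels are illustrative:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0]  # actual labels (illustrative)
y_pred = [1, 1, 0, 1, 0, 1]  # predicted labels (illustrative)

# rows are actual classes, columns are predicted classes
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)  # 1 2 1 2
```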
Recall
The ratio of correctly predicted positive observations to all observations in the actual positive class, also known as the true positive rate or sensitivity. Calculated as $\text{Recall} = \frac{TP}{TP + FN}$.
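The same illustrative labels as above, scored with scikit-learn:

```python
from sklearn.metrics import recall_score

y_true = [1, 0, 1, 1, 0, 0]  # actual labels (illustrative)
y_pred = [1, 1, 0, 1, 0, 1]  # predicted labels (illustrative)

# TP = 2, FN = 1, so recall = 2 / (2 + 1) = 0.667 (approximately)
print(recall_score(y_true, y_pred))
```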
Receiver Operating Characteristic (ROC) Curve
A graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. The curve is generated by plotting the True Positive Rate (Recall) against the False Positive Rate at various threshold settings.
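A minimal sketch of generating the curve's points with scikit-learn; the scores are illustrative:

```python
from sklearn.metrics import roc_curve

y_true = [0, 0, 1, 1]            # actual labels (illustrative)
y_score = [0.1, 0.4, 0.35, 0.8]  # predicted probabilities

# one (FPR, TPR) point per candidate threshold
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(fpr, tpr)
```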
Area Under the Receiver Operating Characteristic Curve (AUC-ROC)
The area under the ROC curve. It provides an aggregate measure of performance across all possible classification thresholds. The value ranges from 0 to 1, where 1 implies a perfect model and 0.5 denotes a model with no discriminative ability.
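A one-call sketch with scikit-learn on the same illustrative scores as the ROC card:

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]            # actual labels (illustrative)
y_score = [0.1, 0.4, 0.35, 0.8]  # predicted probabilities

print(roc_auc_score(y_true, y_score))  # 0.75
```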
F1 Score
The harmonic mean of Precision and Recall. Calculated as $F_1 = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$.
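A quick sketch with scikit-learn, again on the illustrative labels from the Precision card:

```python
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 0]  # actual labels (illustrative)
y_pred = [1, 1, 0, 1, 0, 1]  # predicted labels (illustrative)

# precision = 0.5, recall = 2/3, so F1 = 2 * (0.5 * 2/3) / (0.5 + 2/3) = 0.571 (approximately)
print(f1_score(y_true, y_pred))
```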
Matthews Correlation Coefficient (MCC)
A coefficient that measures the quality of binary classifications, generating a value between -1 and +1. A coefficient of +1 represents a perfect prediction, 0 is no better than random prediction, and -1 indicates total disagreement between prediction and observation. Calculated by the formula $\text{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}$.
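A minimal sketch with scikit-learn; on the illustrative labels used above the numerator cancels, so MCC lands exactly at "no better than random":

```python
from sklearn.metrics import matthews_corrcoef

y_true = [1, 0, 1, 1, 0, 0]  # actual labels (illustrative)
y_pred = [1, 1, 0, 1, 0, 1]  # predicted labels (illustrative)

# TP = 2, TN = 1, FP = 2, FN = 1: numerator is 2*1 - 2*1 = 0, so MCC = 0.0
print(matthews_corrcoef(y_true, y_pred))
```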