by Michele Laurelli
Proportion of correct predictions out of all predictions made.
Accuracy = (TP + TN) / (TP + TN + FP + FN). A simple metric, but misleading on imbalanced datasets, where always predicting the majority class can still score highly.
Typical uses: overall classification performance, balanced-dataset evaluation, model comparison.
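The formula above can be sketched in plain Python; the counts in the two calls below are made-up illustrative values, with the second call showing how an always-negative classifier on an imbalanced dataset still scores high:

```python
def accuracy(tp, tn, fp, fn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)

# Balanced dataset: accuracy is informative.
print(accuracy(tp=45, tn=40, fp=10, fn=5))  # 0.85

# Imbalanced dataset (95 negatives, 5 positives): a model that
# predicts "negative" for everything still reaches 0.95.
print(accuracy(tp=0, tn=95, fp=0, fn=5))    # 0.95
```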
A supervised learning task where the goal is to predict discrete class labels for input data.
Metrics for classification: Precision = TP / (TP + FP), the fraction of predicted positives that are correct; Recall = TP / (TP + FN), the fraction of actual positives that are found.
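Both metrics follow directly from the counts; a minimal sketch with made-up counts (8 true positives, 2 false positives, 4 false negatives):

```python
def precision(tp, fp):
    """Fraction of predicted positives that are actually positive."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of actual positives that the model found."""
    return tp / (tp + fn)

print(precision(tp=8, fp=2))  # 0.8
print(recall(tp=8, fn=4))     # 0.666...
```

Note the trade-off: lowering the decision threshold raises recall (fewer missed positives) but typically lowers precision (more false alarms).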
A table used to evaluate classification model performance: each row corresponds to an actual class, each column to a predicted class, and each cell counts how often that (actual, predicted) pair occurred.
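A confusion matrix can be built from paired label lists with the standard library alone; this sketch uses a hypothetical two-class cat/dog example:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Rows = actual class, columns = predicted class."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(actual, predicted)] for predicted in labels]
            for actual in labels]

y_true = ["cat", "cat", "dog", "dog", "dog"]
y_pred = ["cat", "dog", "dog", "dog", "cat"]
print(confusion_matrix(y_true, y_pred, labels=["cat", "dog"]))
# [[1, 1],   <- actual cat: 1 predicted cat, 1 predicted dog
#  [1, 2]]   <- actual dog: 1 predicted cat, 2 predicted dog
```

The diagonal holds the correct predictions; off-diagonal cells show which classes the model confuses with which.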