by Michele Laurelli
Metrics for classification: precision = true positives / predicted positives; recall = true positives / actual positives.
Precision measures the accuracy of positive predictions (how well the model avoids false positives). Recall measures completeness (how well it avoids false negatives). The F1-score combines both into a single metric.
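The two formulas above can be sketched in plain Python; the labels here are hypothetical, with 1 as the positive class and 0 as the negative class:

```python
# Hypothetical ground-truth and predicted labels (1 = positive, 0 = negative).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Count true positives, false positives, and false negatives.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

precision = tp / (tp + fp)  # 3 / 4 = 0.75: of 4 predicted positives, 3 were right
recall = tp / (tp + fn)     # 3 / 4 = 0.75: of 4 actual positives, 3 were found
```

Note that precision is computed over the model's positive predictions, while recall is computed over the actual positives in the data, so the two denominators generally differ.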
Medical diagnosis (high recall: missing a true case is costly)
Spam detection (high precision: flagging legitimate mail is costly)
Information retrieval (balances both, often via F1)
Confusion matrix: a table used to evaluate classification model performance by showing true vs predicted classes.
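For a binary problem the table is 2×2, and can be built directly from label pairs; the labels below are hypothetical illustrations:

```python
from collections import Counter

# Hypothetical ground-truth and predicted labels (1 = positive, 0 = negative).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# counts[(actual, predicted)] -> number of examples with that combination.
counts = Counter(zip(y_true, y_pred))

# Rows are actual classes, columns are predicted classes:
# [[true negatives, false positives],
#  [false negatives, true positives]]
matrix = [[counts[(a, p)] for p in (0, 1)] for a in (0, 1)]
```

Every metric in this section (precision, recall, F1) can be read off the four cells of this table.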
F1-score: the harmonic mean of precision and recall, providing a single balanced metric.
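A minimal sketch of the harmonic mean; the function name is an assumption for illustration:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall.

    Unlike the arithmetic mean, the harmonic mean is pulled toward the
    smaller of the two values, so a model cannot score well on F1 by
    excelling at only one metric.
    """
    if precision + recall == 0:
        return 0.0  # avoid division by zero when both metrics are 0
    return 2 * precision * recall / (precision + recall)
```

For example, with precision 1.0 and recall 0.5 the arithmetic mean would be 0.75, but the F1-score is only about 0.667, reflecting the weak recall.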
Classification: a supervised learning task where the goal is to predict discrete class labels for input data.