
Metrics For Classification

Learn Metrics For Classification for free with explanations, exercises, and a quick test (for Data Scientists).

Published: January 1, 2026 | Updated: January 1, 2026

Why this matters

As a Data Scientist, you will regularly compare models, choose decision thresholds, and justify trade-offs to product, risk, or clinical teams. Choosing the right classification metrics is how you avoid shipping a "95% accurate" model that fails on the minority class your users care about.

  • Fraud/abuse: prioritize recall to catch more bad actors; watch precision to control manual review load.
  • Medical screening: minimize false negatives; communicate sensitivity (recall) and specificity.
  • Customer alerts: balance precision vs recall to keep trust and avoid alert fatigue.
  • Monitoring: track both threshold-free (AUC, log loss) and thresholded metrics (F1, precision/recall) for drift.

Concept explained simply

Think of a binary classifier as a gate that lets "positive" items through.

  • True Positive (TP): positive and correctly let through.
  • False Positive (FP): negative but mistakenly let through.
  • False Negative (FN): positive but held back.
  • True Negative (TN): negative and correctly held back.

From these four counts come the core metrics (a short code sketch follows the list):

  • Accuracy = (TP + TN) / (TP + FP + FN + TN)
  • Precision = TP / (TP + FP) — "When we say yes, how often are we right?"
  • Recall (Sensitivity, TPR) = TP / (TP + FN) — "How many actual positives did we catch?"
  • Specificity (TNR) = TN / (TN + FP) — "How many actual negatives did we correctly reject?"
  • F1 = 2 × Precision × Recall / (Precision + Recall) — harmonic mean; punishes imbalance between precision and recall.
  • Balanced Accuracy = (Recall + Specificity) / 2 — good for imbalance.
  • MCC (Matthews Corr. Coef.) — single score using all 4 confusion terms; robust to imbalance.
  • ROC-AUC — probability the model ranks a random positive above a random negative; threshold-independent.
  • PR-AUC — quality of ranking for the positive class; more informative under heavy class imbalance.
  • Log Loss (Cross-Entropy) — punishes overconfident wrong probabilities; measures calibration.
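
A minimal Python sketch of the count-based formulas above (ROC-AUC, PR-AUC, and log loss need scores rather than counts, so they are omitted here); the counts in the final call are illustrative only:

  import math

  def classification_metrics(tp, fp, fn, tn):
      """Compute the core thresholded metrics from the four confusion-matrix counts."""
      total = tp + fp + fn + tn
      accuracy = (tp + tn) / total
      precision = tp / (tp + fp)
      recall = tp / (tp + fn)                      # sensitivity, TPR
      specificity = tn / (tn + fp)                 # TNR
      f1 = 2 * precision * recall / (precision + recall)
      balanced_accuracy = (recall + specificity) / 2
      mcc = (tp * tn - fp * fn) / math.sqrt(
          (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
      )
      return {
          "accuracy": accuracy, "precision": precision, "recall": recall,
          "specificity": specificity, "f1": f1,
          "balanced_accuracy": balanced_accuracy, "mcc": mcc,
      }

  print(classification_metrics(tp=8, fp=2, fn=4, tn=6))  # illustrative counts
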
Key mental model

Two layers:

  1. Ranking layer (scores/probabilities): judged by ROC-AUC, PR-AUC, log loss.
  2. Decision layer (threshold at 0.5 or tuned): judged by precision, recall, F1, accuracy, specificity, MCC.

Select the model by ranking metrics; set the operating threshold by business costs using precision/recall trade-offs.
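
A sketch of the two layers, assuming scikit-learn is available; the labels and scores below are made up for illustration:

  import numpy as np
  from sklearn.metrics import (roc_auc_score, average_precision_score, log_loss,
                               precision_score, recall_score, f1_score)

  # Made-up validation labels and model scores, for illustration only.
  y_true  = np.array([1, 0, 1, 0, 1, 0, 0, 1, 0, 0])
  y_score = np.array([0.92, 0.80, 0.70, 0.61, 0.45, 0.38, 0.30, 0.25, 0.15, 0.05])

  # Ranking layer: judges the scores themselves, no threshold involved.
  print("ROC-AUC    :", roc_auc_score(y_true, y_score))
  print("PR-AUC (AP):", average_precision_score(y_true, y_score))
  print("Log loss   :", log_loss(y_true, y_score))

  # Decision layer: pick an operating threshold, then judge the hard predictions.
  threshold = 0.5
  y_pred = (y_score >= threshold).astype(int)
  print("Precision:", precision_score(y_true, y_pred))
  print("Recall   :", recall_score(y_true, y_pred))
  print("F1       :", f1_score(y_true, y_pred))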

When to use which metric

  • Class imbalance is mild and costs are symmetric: accuracy, F1, ROC-AUC are fine.
  • Positive class is rare: use PR-AUC, recall, precision, F1, MCC, balanced accuracy.
  • High cost of false negatives (e.g., disease): emphasize recall (sensitivity), track specificity.
  • High cost of false positives (e.g., manual review is expensive): emphasize precision/specificity.
  • You care about probability quality (for downstream decisions): log loss and calibration curves.

Worked examples

Example 1 — Compute core metrics from a confusion matrix

Confusion matrix: TP=40, FP=10, FN=20, TN=30. Total=100.

  • Accuracy = (40+30)/100 = 0.70
  • Precision = 40/(40+10) = 0.80
  • Recall = 40/(40+20) ≈ 0.67
  • F1 ≈ 2×0.80×0.67/(0.80+0.67) ≈ 0.73
  • Specificity = 30/(30+10) = 0.75
  • Balanced Accuracy = (0.67+0.75)/2 ≈ 0.71
  • MCC = (TP×TN − FP×FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)) = (1200 − 200) / √(50×60×40×50) ≈ 0.41

Takeaway: Accuracy 0.70 hides that recall is only ~0.67.
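
If scikit-learn is available, you can reproduce these numbers by expanding the confusion matrix into label arrays; a quick sketch:

  import numpy as np
  from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                               f1_score, balanced_accuracy_score, matthews_corrcoef)

  # Expand TP=40, FP=10, FN=20, TN=30 into true/predicted label arrays.
  tp, fp, fn, tn = 40, 10, 20, 30
  y_true = np.array([1] * tp + [0] * fp + [1] * fn + [0] * tn)
  y_pred = np.array([1] * tp + [1] * fp + [0] * fn + [0] * tn)

  print("Accuracy :", accuracy_score(y_true, y_pred))           # 0.70
  print("Precision:", precision_score(y_true, y_pred))          # 0.80
  print("Recall   :", recall_score(y_true, y_pred))             # ~0.67
  print("F1       :", f1_score(y_true, y_pred))                 # ~0.73
  print("Bal. acc :", balanced_accuracy_score(y_true, y_pred))  # ~0.71
  print("MCC      :", matthews_corrcoef(y_true, y_pred))        # ~0.41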

Example 2 — Imbalance trap

Dataset: 1,000 samples, 50 positives (5%).

Model A at a threshold: TP=25, FP=50, FN=25, TN=900.

  • Accuracy = (25+900)/1000 = 0.925
  • Precision = 25/(25+50) = 0.33
  • Recall = 25/50 = 0.50
  • F1 ≈ 0.40

Dummy "all negative" model: Accuracy = 950/1000 = 0.95 (higher!), but Recall = 0. This shows accuracy alone can be misleading with rare positives.

Example 3 — Threshold by cost

Costs: FP = 1, FN = 10.

Six validation cases as (predicted probability p, true label y): (0.9,1), (0.8,0), (0.7,1), (0.6,0), (0.4,1), (0.3,0)

  • Threshold 0.5 ⇒ TP=2, FP=2, FN=1, TN=1. Cost = 2×1 + 1×10 = 12
  • Threshold 0.8 ⇒ TP=1, FP=1, FN=2, TN=2. Cost = 1×1 + 2×10 = 21

Choose threshold 0.5 given these costs. This is how you align metrics with business impact.
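
A small sketch of the same cost calculation in plain Python, so you can swap in your own costs and cases:

  # FP costs 1, FN costs 10; cases are (predicted probability, true label).
  cases = [(0.9, 1), (0.8, 0), (0.7, 1), (0.6, 0), (0.4, 1), (0.3, 0)]
  COST_FP, COST_FN = 1, 10

  def total_cost(threshold):
      fp = sum(1 for p, y in cases if p >= threshold and y == 0)
      fn = sum(1 for p, y in cases if p < threshold and y == 1)
      return fp * COST_FP + fn * COST_FN

  for t in (0.5, 0.8):
      print(f"threshold {t}: cost = {total_cost(t)}")   # 12 and 21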

Hands-on: exercises

Do these before the quick test. Tip: write down TP, FP, FN, TN clearly.

Exercise 1 — Compute metrics

See the exercise block below for inputs and submit your answers. Then open the solution to self-check.

Exercise 2 — Choose a threshold

Estimate the better threshold based on precision/recall and costs. Open the solution only after you commit to an answer.

  • [ ] I computed at least precision, recall, F1, specificity.
  • [ ] I compared accuracy vs F1 on an imbalanced example.
  • [ ] I practiced choosing a threshold using costs.
  • [ ] I can explain when PR-AUC is preferred over ROC-AUC.

Common mistakes and how to self-check

  • Mistake: Using accuracy with rare positives. Self-check: What is the positive rate? If <10%, prefer PR-AUC/F1/recall.
  • Mistake: Reporting ROC-AUC only, then deploying at 0.5 threshold blindly. Self-check: Does your chosen threshold meet precision/recall targets?
  • Mistake: Ignoring prevalence drift. Self-check: Track calibration and PR metrics over time, not just ROC-AUC.
  • Mistake: Chasing F1 when costs are asymmetric. Self-check: Compute cost-weighted metrics or select threshold by expected cost.
  • Mistake: Overconfident probabilities. Self-check: Review log loss; apply calibration (e.g., Platt scaling or isotonic regression) if needed.
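
To see why log loss flags overconfidence, here is a tiny illustration with made-up probabilities: both sets of predictions get the same three of four cases right at a 0.5 threshold, but the overconfident one pays heavily for its single confident mistake.

  from sklearn.metrics import log_loss

  y_true        = [1, 0, 1, 0]
  overconfident = [0.99, 0.99, 0.99, 0.01]  # one very confident mistake
  hedged        = [0.80, 0.60, 0.70, 0.30]  # same hard predictions at 0.5

  print("log loss, overconfident:", log_loss(y_true, overconfident))  # ~1.16
  print("log loss, hedged       :", log_loss(y_true, hedged))         # ~0.46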

Practical projects

  • Build a fraud detector on an imbalanced dataset. Compare baselines by ROC-AUC and PR-AUC; choose threshold to keep manual reviews under a fixed budget while maximizing recall.
  • Medical screening simulation. Require recall of at least 95%, then maximize precision under that constraint; report specificity, balanced accuracy, and MCC.
  • Alerting system calibration. Optimize log loss; evaluate calibration by grouping predictions into bins and comparing predicted vs observed rates.
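
A sketch of the binning check from the last project, using synthetic data (scikit-learn's calibration_curve does the same grouping, if you prefer a library call):

  import numpy as np

  rng = np.random.default_rng(42)
  p_pred = rng.uniform(0.0, 1.0, size=2000)      # model's predicted probabilities
  y_true = rng.binomial(1, p_pred ** 1.5)        # labels from a miscalibrated truth

  bins = np.linspace(0.0, 1.0, 11)               # 10 equal-width probability bins
  bin_idx = np.digitize(p_pred, bins[1:-1])
  for b in range(10):
      mask = bin_idx == b
      if mask.any():
          print(f"bin {b}: mean predicted = {p_pred[mask].mean():.2f}, "
                f"observed positive rate = {y_true[mask].mean():.2f}")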

Learning path

  1. Master confusion matrix and core metrics (this page).
  2. Threshold tuning by business costs (next subskills).
  3. Model selection by ROC-AUC, PR-AUC, and log loss.
  4. Calibration and monitoring in production.

Who this is for

Junior to mid-level Data Scientists, ML Engineers, and analysts who build or evaluate binary/multi-class classifiers and must communicate trade-offs clearly.

Prerequisites

  • Basic probability and ratios.
  • Understanding of binary classification outputs (scores or probabilities).
  • Comfort with simple arithmetic.

Mini challenge

You have an email spam model. Current threshold yields Precision=0.92, Recall=0.60. Support requests show missed spam is hurting trust more than occasional false alarms.

  • What direction should you move the threshold? (Up or down?)
  • Which metric will you monitor primarily while adjusting?

Suggested approach

Lower the threshold to raise recall. Monitor recall and F1, and ensure precision remains acceptable.
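
One way to operationalize this, sketched with made-up validation data and a hypothetical precision floor of 0.85: sweep thresholds downward and keep the lowest one whose precision still clears the floor.

  import numpy as np
  from sklearn.metrics import precision_score, recall_score

  # Made-up validation labels and spam scores, for illustration only.
  rng = np.random.default_rng(7)
  y_true  = rng.integers(0, 2, size=1000)
  y_score = np.clip(0.65 * y_true + rng.normal(0.2, 0.2, size=1000), 0.0, 1.0)

  PRECISION_FLOOR = 0.85          # hypothetical business requirement
  best = None
  for t in np.arange(0.90, 0.05, -0.05):
      y_pred = (y_score >= t).astype(int)
      prec = precision_score(y_true, y_pred, zero_division=0)
      rec  = recall_score(y_true, y_pred)
      if prec >= PRECISION_FLOOR:
          best = (round(t, 2), round(prec, 3), round(rec, 3))  # lowest threshold so far that holds

  print("chosen (threshold, precision, recall):", best)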

Next steps

  • Repeat the exercises with your own confusion matrices.
  • Pick a live model and evaluate PR-AUC vs ROC-AUC.
  • Prepare a one-page decision memo: required recall/precision, chosen threshold, and monitoring plan.

Quick Test

Take the short test below to check understanding. Everyone can take it for free; logged-in users will have their progress saved automatically.

Practice Exercises

2 exercises to complete

Instructions

Given a binary classifier confusion matrix:

  • TP = 48
  • FP = 12
  • FN = 24
  • TN = 116

Tasks:

  1. Compute Accuracy, Precision, Recall, F1, Specificity, Balanced Accuracy.
  2. Compute MCC (Matthews Correlation Coefficient).
  3. In one sentence: would you deploy if the positive class is rare and missing positives is costly?

Expected Output

A list of metric values (to 2 decimal places) and a one-sentence decision justification.

Metrics For Classification — Quick Test

Test your knowledge with 8 questions. Pass with 70% or higher.
