CV Evaluation Metrics MCQ 15 Questions
Time: ~25 mins Intermediate


IoU for overlap, AP under PR curves for detection—pick metrics that match the task and failure costs.

Easy: 5 Q Medium: 6 Q Hard: 4 Q
Topics: IoU (overlap) · AP / mAP (PR curve) · Precision & Recall (tradeoff) · Confusion matrix

Metrics that match the task

Classification uses accuracy, precision, recall, F1, and ROC-AUC. Object detection first pairs predictions to ground truth using IoU thresholds, then averages precision across recall levels (AP) and across classes (mAP). Segmentation reports mean IoU across classes; ranking tasks call for NDCG. Always align the metric with the real cost of errors (false positives vs. misses).
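The classification metrics above all derive from the same confusion-matrix counts. A minimal sketch (the function name and example counts are illustrative, not from the quiz):

```python
def precision_recall_f1(tp, fp, fn):
    # Precision: of everything flagged positive, how much was right.
    precision = tp / (tp + fp) if tp + fp else 0.0
    # Recall: of everything actually positive, how much was found.
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1: harmonic mean, punishes imbalance between the two.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=4)
# p = 8/10 = 0.8, r = 8/12 ≈ 0.667, f1 = 8/11 ≈ 0.727
```

Note the guards against zero denominators: with no predicted positives, precision is conventionally reported as 0.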

Why mAP

Summarizes the quality of ranked detections across IoU cutoffs and recall—better than single-threshold accuracy.

Key ideas

IoU

Intersection over union for boxes or masks.
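For axis-aligned boxes, IoU reduces to a few max/min operations. A minimal sketch, assuming `(x1, y1, x2, y2)` corner format:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlap 1, union 7 → ≈ 0.143
```

For masks, the same ratio is computed over pixel sets instead of rectangles.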

Precision / Recall

TP/(TP+FP) vs TP/(TP+FN).

AP

Area under precision-recall curve after sorting by score.
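The area-under-the-PR-curve computation can be sketched with a simple rectangle rule (no interpolation, unlike PASCAL VOC or COCO conventions); detections are assumed pre-labeled as TP/FP:

```python
def average_precision(scores, is_tp, n_gt):
    # Sort detections by descending confidence, then sweep the PR curve.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for i in order:
        if is_tp[i]:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / n_gt
        ap += precision * (recall - prev_recall)  # rectangle under the curve
        prev_recall = recall
    return ap

ap = average_precision([0.9, 0.8, 0.7], [True, False, True], n_gt=2)
# 1.0 * 0.5 + 0.5 * 0.0 + (2/3) * 0.5 = 5/6 ≈ 0.833
```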

mAP

Mean AP over classes (and sometimes IoU thresholds).
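The per-class averaging is an unweighted mean. A tiny sketch (the class names and AP values below are hypothetical, for illustration only):

```python
def mean_ap(ap_per_class):
    # ap_per_class: dict mapping class name -> AP; mAP is the plain mean.
    return sum(ap_per_class.values()) / len(ap_per_class)

# Hypothetical per-class APs:
print(mean_ap({"person": 0.5, "car": 0.7}))  # → 0.6
```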

Detection match

NMS → sort by score → greedy match to GT with IoU ≥ threshold
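The match step above can be sketched as greedy one-to-one assignment: take detections in descending score order and let each claim the best still-unclaimed ground-truth box (helper names here are illustrative; NMS is assumed already applied):

```python
def iou(a, b):
    # Axis-aligned boxes as (x1, y1, x2, y2).
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def greedy_match(detections, gts, iou_thr=0.5):
    # detections: (score, box) pairs, assumed NMS-filtered.
    # Each GT box may be claimed at most once.
    matched, flags = set(), []
    for score, box in sorted(detections, key=lambda d: -d[0]):
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            v = iou(box, gt)
            if j not in matched and v > best_iou:
                best_iou, best_j = v, j
        if best_iou >= iou_thr:
            matched.add(best_j)
            flags.append(True)   # TP: claimed an unmatched GT
        else:
            flags.append(False)  # FP: duplicate or low-overlap detection
    return flags

flags = greedy_match([(0.9, (0, 0, 2, 2)), (0.8, (0, 0, 2, 2))], [(0, 0, 2, 2)])
# → [True, False]: the duplicate detection becomes a false positive
```

This is why duplicate detections hurt AP even when they overlap the object well.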

Pro tip: COCO mAP@[.5:.95] averages AP across IoU 0.50:0.05:0.95—stricter than AP@0.5 alone.
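The threshold sweep is just an average over ten IoU cutoffs. A sketch, assuming a caller-supplied `ap_at` hook (hypothetical, standing in for a full AP computation at one threshold):

```python
def coco_map(ap_at):
    # ap_at: callable mapping an IoU threshold to AP at that threshold.
    thresholds = [0.5 + 0.05 * i for i in range(10)]  # 0.50, 0.55, ..., 0.95
    return sum(ap_at(t) for t in thresholds) / len(thresholds)
```

Because the stricter thresholds demand tighter boxes, mAP@[.5:.95] rewards localization quality that AP@0.5 ignores.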