Slide 24
Evaluating
● Split the data into train / test sets. They should not overlap!
● Accuracy
○ What % of samples did it get right?
● Precision / Recall
○ True Positives, True Negatives, False Positives, False Negatives
○ Precision = TP / (TP + FP) (of everything the classifier labeled positive, what % actually was positive?)
○ Recall = TP / (TP + FN) (of all the actual positives, what % did the classifier find?)
○ F-measure (harmonic mean of precision and recall: 2 * Precision * Recall / (Precision + Recall)); see the sketches after this list
● Confusion matrix
● Many others
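As a sketch of how the formulas above fit together (plain Python; the counts are made up purely for illustration):

def precision_recall_f1(tp, fp, fn):
    # Precision: of everything the classifier labeled positive, the fraction that truly is positive.
    precision = tp / (tp + fp)
    # Recall: of all actual positives, the fraction the classifier found.
    recall = tp / (tp + fn)
    # F-measure: harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for illustration: 40 TP, 10 FP, 20 FN.
print(precision_recall_f1(tp=40, fp=10, fn=20))  # (0.8, 0.666..., 0.727...)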
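And a minimal end-to-end sketch of the evaluation loop, assuming scikit-learn and a toy dataset (the slides name neither); average="macro" is one way to extend precision/recall/F1 beyond two classes:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             f1_score, precision_score, recall_score)
from sklearn.model_selection import train_test_split

# Non-overlapping train / test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit on the train set only; evaluate on the held-out test set.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy: ", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred, average="macro"))
print("recall:   ", recall_score(y_test, y_pred, average="macro"))
print("F1:       ", f1_score(y_test, y_pred, average="macro"))
print(confusion_matrix(y_test, y_pred))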