Binary classification classifies samples as either 0 (negative) or 1 (positive). Depending on the class a sample actually belongs to and its prediction result, each sample falls into one cell of the following table.

| predicted \ actual | positive | negative |
|---|---|---|
| positive | true positive (TP) | false positive (FP) |
| negative | false negative (FN) | true negative (TN) |
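These four counts can be tallied directly from paired label lists. A minimal sketch, using made-up `y_true`/`y_pred` data:

```python
# Tally the four confusion-matrix cells from hypothetical label lists.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # actual classes (made-up data)
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]  # predicted classes

TP = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
FP = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
FN = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
TN = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

print(TP, FP, FN, TN)  # → 3 1 1 3
```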

We can then calculate different metrics, each of which measures the classification results from a different angle.

**True Positive Rate** (Sensitivity, hit rate, recall): Out of all the positive samples, what fraction is actually detected as positive.

TPR = TP / (TP + FN)

**True Negative Rate** (Specificity): Out of all the negative samples, what fraction is actually detected as negative.

TNR = TN / (TN + FP)
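As a quick check of the two formulas above, here is a sketch using hypothetical counts (TP = 40, FP = 10, FN = 20, TN = 30):

```python
# Sensitivity (TPR) and specificity (TNR) from hypothetical counts.
TP, FP, FN, TN = 40, 10, 20, 30

TPR = TP / (TP + FN)  # fraction of actual positives detected
TNR = TN / (TN + FP)  # fraction of actual negatives detected

print(TPR, TNR)  # → 0.6666666666666666 0.75
```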

**Positive Predictive Value** (Precision): Out of all samples predicted as positive, what fraction is actually positive.

PPV = TP / (TP + FP)

**Negative Predictive Value**: Out of all samples predicted as negative, what fraction is actually negative.

NPV = TN / (TN + FN)
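With the same hypothetical counts as before, the two predictive values come out as:

```python
# Precision (PPV) and NPV from hypothetical counts.
TP, FP, FN, TN = 40, 10, 20, 30

PPV = TP / (TP + FP)  # fraction of positive predictions that are right
NPV = TN / (TN + FN)  # fraction of negative predictions that are right

print(PPV, NPV)  # → 0.8 0.6
```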

**False Positive Rate** (Fall out): Out of all negative samples, what fraction is detected as positive by mistake.

FPR = FP / (FP + TN) = 1 - TNR

**False Discovery Rate**: Out of all samples predicted as positive, what fraction is actually negative.

FDR = FP / (FP + TP) = 1 - PPV
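The complement relations (FPR = 1 - TNR, FDR = 1 - PPV) can be verified numerically with hypothetical counts:

```python
# FPR and FDR are the complements of TNR and PPV, respectively.
TP, FP, FN, TN = 40, 10, 20, 30

FPR = FP / (FP + TN)
FDR = FP / (FP + TP)

assert abs(FPR - (1 - TN / (TN + FP))) < 1e-12  # FPR = 1 - TNR
assert abs(FDR - (1 - TP / (TP + FP))) < 1e-12  # FDR = 1 - PPV
print(FPR, FDR)  # → 0.25 0.2
```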

**Accuracy**: Out of all samples, what fraction is predicted correctly. That is, positive samples are predicted as positive, negative samples are predicted as negative.

Accuracy = (TP + TN) / (P + N)
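Here P and N denote the total numbers of actual positive and negative samples. A sketch with the same hypothetical counts:

```python
# Accuracy over all samples; P and N are totals of the actual classes.
TP, FP, FN, TN = 40, 10, 20, 30

P = TP + FN  # actual positives
N = TN + FP  # actual negatives
accuracy = (TP + TN) / (P + N)

print(accuracy)  # → 0.7
```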

The table below gives a nice overall view of the metrics mentioned above.

| predicted \ actual | positive | negative | |
|---|---|---|---|
| positive | true positive (TP) | false positive (FP) | PPV = TP / (TP + FP) |
| negative | false negative (FN) | true negative (TN) | NPV = TN / (FN + TN) |
| | TPR = TP / (TP + FN) | TNR = TN / (FP + TN) | Accuracy = (TP + TN) / (P + N) |

**F1 score**: the harmonic mean of precision and recall.

F1 = 2 * TPR * PPV / (TPR + PPV) = 2TP / (2TP + FP + FN)
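The two forms of the F1 formula agree, which a quick sketch with hypothetical counts confirms:

```python
# F1 score: harmonic-mean form vs. count form give the same value.
TP, FP, FN = 40, 10, 20

TPR = TP / (TP + FN)  # recall
PPV = TP / (TP + FP)  # precision

f1_harmonic = 2 * TPR * PPV / (TPR + PPV)
f1_counts = 2 * TP / (2 * TP + FP + FN)

assert abs(f1_harmonic - f1_counts) < 1e-12
print(f1_counts)  # ≈ 0.727
```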

**Matthews correlation coefficient**: It takes into account true and false positives and negatives, and is regarded as a balanced measure. It can be used even when the numbers of samples in the two classes differ drastically.

MCC = (TP * TN - FP * FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))

MCC returns a value between -1 and 1, where -1 indicates total disagreement between prediction and actual facts, 0 means no better than random guessing, and 1 indicates perfect prediction.
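A minimal sketch of the MCC computation, again with hypothetical counts:

```python
from math import sqrt

# MCC from hypothetical confusion-matrix counts; the result lies in [-1, 1].
TP, FP, FN, TN = 40, 10, 20, 30

mcc = (TP * TN - FP * FN) / sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))

print(mcc)  # ≈ 0.408
```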

**References:**

- Matthews correlation coefficient: http://en.wikipedia.org/wiki/Matthews_correlation_coefficient
- Sensitivity and specificity: http://en.wikipedia.org/wiki/Sensitivity_and_specificity