We can summarize the (label, prediction) pairs for a particular classifier on a particular dataset in a matrix.
Let $A$ be a nonempty set and $B = \set{-1, 1}$.
For a dataset $(a^1, b^1), \dots, (a^n, b^n)$
in $A \times B$ and a classifier $G: A \to B$,
the confusion matrix $C$, whose rows are indexed by the prediction and whose columns by the label (each ordered $-1$, $1$), is defined as
\[
C = \bmat{
\text{\# true negatives} & \text{\# false negatives} \\
\text{\# false positives} & \text{\# true positives}
} = \bmat{
C_{\text{tn}} & C_{\text{fn}} \\
C_{\text{fp}} & C_{\text{tp}}
}.
\]
The diagonal elements of the confusion matrix give the numbers of correct predictions. The off-diagonal entries give the numbers of incorrect predictions for the two types of errors (see \sheetref{classifier_errors}{Classifier Errors}).
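As a concrete illustration, suppose a classifier is evaluated on a hypothetical dataset with $n = 100$ examples (the numbers below are invented for illustration) and produces
\[
C = \bmat{
60 & 8 \\
12 & 20
}.
\]
This classifier is correct on the $60 + 20 = 80$ examples counted on the diagonal and incorrect on the remaining $8 + 12 = 20$.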
In this notation, the false positive rate is $C_{\text{fp}}/n$, the false negative rate is $C_{\text{fn}}/n$, and the error rate is the sum of these, $(C_{\text{fn}} + C_{\text{fp}})/n$.
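For the hypothetical matrix above, the false positive rate is $12/100 = 0.12$, the false negative rate is $8/100 = 0.08$, and the error rate is $0.12 + 0.08 = 0.20$.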
The true positive rate is $C_{\text{tp}} / (C_{\text{fn}} + C_{\text{tp}})$. The true negative rate is $C_{\text{tn}} / (C_{\text{tn}} + C_{\text{fp}})$. The false alarm rate is $C_{\text{fp}} / (C_{\text{tn}} + C_{\text{fp}})$. The precision is $C_{\text{tp}} / (C_{\text{tp}} + C_{\text{fp}})$.
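Continuing the hypothetical example, the true positive rate is $20/(8 + 20) \approx 0.71$, the true negative rate is $60/(60 + 12) \approx 0.83$, the false alarm rate is $12/(60 + 12) \approx 0.17$, and the precision is $20/(20 + 12) \approx 0.63$.

These quantities are also easy to compute numerically. The following Python/NumPy sketch is ours, not part of the text's development: it assumes the labels and predictions are stored in arrays \texttt{b} and \texttt{bhat} with entries in $\set{-1, 1}$, and both function names are invented for this illustration.
\begin{verbatim}
import numpy as np

def confusion_matrix(b, bhat):
    # Rows index the prediction and columns the label, each ordered
    # (-1, +1), matching the (tn, fn; fp, tp) layout used above.
    tn = np.sum((b == -1) & (bhat == -1))
    fn = np.sum((b == +1) & (bhat == -1))
    fp = np.sum((b == -1) & (bhat == +1))
    tp = np.sum((b == +1) & (bhat == +1))
    return np.array([[tn, fn], [fp, tp]])

def rates(C):
    # Derive all of the rates defined above from a 2x2 confusion matrix.
    (tn, fn), (fp, tp) = C
    n = C.sum()
    return {
        "false positive rate": fp / n,
        "false negative rate": fn / n,
        "error rate": (fn + fp) / n,
        "true positive rate": tp / (fn + tp),
        "true negative rate": tn / (tn + fp),
        "false alarm rate": fp / (tn + fp),
        "precision": tp / (tp + fp),
    }
\end{verbatim}
For the hypothetical matrix above, \texttt{rates(np.array([[60, 8], [12, 20]]))} reproduces the numbers computed by hand.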