Is it sufficient for the mutual independence of a set of events that each pair of events in the set is independent? The surprising answer is no.
Suppose $P$ is a probability function on
a finite sample space $\Omega $.
The events $A_1, \dots , A_n$ are
pairwise independent if
\[
P(A_{i} \cap A_{j})
=
P(A_{i}) P(A_{j})
\]
for all indices $i \neq j$.
They are mutually independent if
\[
P(A_{i_1} \cap \dots \cap A_{i_k})
=
P(A_{i_1}) \cdots P(A_{i_k})
\]
for every choice of distinct indices $i_1, \dots, i_k$.
Mutual independence implies pairwise independence, but the following example shows that the converse fails.
As usual, model tossing a coin twice with the
sample space $\Omega = \set{0,1}^2$.
Put a distribution $p: \Omega \to [0,1]$ so
that $p(\omega ) = 1/4$ for all $\omega \in
\Omega $.
Define $A = \set{(1,0),(1,1)}$ (the first toss
is heads), $B = \set{(0,1),(1,1)}$ (the second
toss is heads), and $C = \set{(0,0),(1,1)}$ (the
two tosses agree).
Then $P(A) = P(B) = P(C) = 1/2$, and
\[
P(A \cap B) = P(B \cap C) = P(A \cap C) = 1/4.
\]
so the three events are pairwise independent. However,
\[
P(A \cap B \cap C) = P(\set{(1,1)}) = 1/4 \neq 1/8 = P(A)P(B)P(C),
\]
so they are not mutually independent.
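The counterexample can be checked by direct enumeration. The sketch below (in Python, with event and distribution names chosen here for illustration) computes each probability as a sum over outcomes and confirms that the pairwise products match while the triple product does not:

```python
from itertools import product

# Sample space for two coin tosses; 1 = heads, 0 = tails.
omega = list(product([0, 1], repeat=2))
p = {w: 1 / 4 for w in omega}  # uniform distribution on Omega

def prob(event):
    # P(E) is the sum of the point masses of the outcomes in E.
    return sum(p[w] for w in event)

A = {(1, 0), (1, 1)}  # first toss is heads
B = {(0, 1), (1, 1)}  # second toss is heads
C = {(0, 0), (1, 1)}  # the two tosses agree

# Pairwise independence: P(X ∩ Y) = P(X) P(Y) for each pair.
for X, Y in [(A, B), (B, C), (A, C)]:
    assert prob(X & Y) == prob(X) * prob(Y)

# The triple product fails: P(A ∩ B ∩ C) = 1/4, but P(A)P(B)P(C) = 1/8.
print(prob(A & B & C), prob(A) * prob(B) * prob(C))  # 0.25 0.125
```

Each intersection here is the single outcome $\set{(1,1)}$, which is why all three pairwise probabilities, and the triple probability, equal $1/4$.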