Needs:
Outcome Probabilities
Real Vectors
Real Inner Product
Outcome Variable Expectation
Needed by:
None.

Probability Vectors

Why

We can identify probability distributions with vectors.

Definition

Let $p: \Omega \to \R $ be a probability distribution on a finite set $\Omega = \set{\omega _1, \dots , \omega _n}$. We can associate $p$ with the vector $x \in \R ^n$ defined by $x_i = p(\omega _i)$ for $i = 1, \dots , n$. We call this vector $x$ the probability vector associated with $p$. The conditions on $p$ mean that (1) $\ip{1, x} = 1$, where $1$ denotes the all-ones vector, and (2) $x \geq 0$ (i.e., $x_i \geq 0$ for all $i = 1, \dots , n$).

Conversely, suppose $z \in \R ^n$ satisfies (1) and (2). Then $q: \Omega \to \R $ defined by $q(\omega _i) = z_i$ for $i = 1, \dots , n$ is a probability distribution. For this reason, we call a vector $z$ satisfying these conditions a distribution vector. Notice that implicit in this correspondence is a numbering $\omega : \set{1, \dots , n} \to \Omega $ of the set of outcomes $\Omega $.
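
This correspondence is easy to check numerically. Below is a minimal sketch in Python with NumPy; the outcome set and the distribution are hypothetical, chosen only for illustration.

import numpy as np

# Hypothetical finite outcome set, listed in a fixed numbering omega_1, ..., omega_n.
outcomes = ["heads", "tails", "edge"]
# Hypothetical distribution p on the outcomes.
p = {"heads": 0.49, "tails": 0.49, "edge": 0.02}

# Probability vector associated with p: x_i = p(omega_i).
x = np.array([p[w] for w in outcomes])

# Condition (1): <1, x> = 1, where 1 is the all-ones vector.
assert np.isclose(np.dot(np.ones(len(x)), x), 1.0)
# Condition (2): x >= 0 componentwise.
assert np.all(x >= 0)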

Expectation

Suppose $\rho \in \R ^n$ is a distribution vector and $p: \Omega \to \R $ is its corresponding distribution. Let $x: \Omega \to \R $ be an outcome variable and (similar to $\rho $) define $\xi \in \R ^n$ by $\xi _i = x(\omega _i)$ for $i = 1, \dots , n$. Then the expectation of $x$ under $p$ is

\[ \E x = \sum _{i=1}^{n} x(\omega _i) p(\omega _i) = \sum _{i=1}^{n} \xi _i \rho _i = \ip{\rho , \xi }. \]

Example

For $\rho = (0.1, 0.15, 0.1, 0.25, 0.4)$ and $\xi = (-1, -1, 1, 1, 2)$,

\[ \E x = \ip{\rho , \xi } = -1(0.1) - 1(0.15) + 1(0.1) + 1(0.25) + 2(0.4) = 0.9. \]
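
A minimal numerical check of this example in Python with NumPy (the variable names simply mirror the notation above):

import numpy as np

rho = np.array([0.1, 0.15, 0.1, 0.25, 0.4])  # distribution vector
xi = np.array([-1.0, -1.0, 1.0, 1.0, 2.0])   # values xi_i = x(omega_i)

# Expectation as an inner product: E x = <rho, xi>.
assert np.isclose(np.dot(rho, xi), 0.9)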

Copyright © 2023 The Bourbaki Authors — All rights reserved — Version 13a6779cc