\(\DeclarePairedDelimiterX{\Set}[2]{\{}{\}}{#1 \nonscript\;\delimsize\vert\nonscript\; #2}\) \( \DeclarePairedDelimiter{\set}{\{}{\}}\) \( \DeclarePairedDelimiter{\parens}{\left(}{\right)}\) \(\DeclarePairedDelimiterX{\innerproduct}[1]{\langle}{\rangle}{#1}\) \(\newcommand{\ip}[1]{\innerproduct{#1}}\) \(\newcommand{\bmat}[1]{\left[\hspace{2.0pt}\begin{matrix}#1\end{matrix}\hspace{2.0pt}\right]}\) \(\newcommand{\barray}[1]{\left[\hspace{2.0pt}\begin{matrix}#1\end{matrix}\hspace{2.0pt}\right]}\) \(\newcommand{\mat}[1]{\begin{matrix}#1\end{matrix}}\) \(\newcommand{\pmat}[1]{\begin{pmatrix}#1\end{pmatrix}}\) \(\newcommand{\mathword}[1]{\mathop{\textup{#1}}}\)
Needs:
Normal Random Functions
Affine Transformations
Needed by:
Normal Random Function Regressors
Links:
Sheet PDF
Graph PDF

Normal Random Function Predictive Densities

Why

We use a normal random function model to make a regressor.

Definition

Let $F: \Omega \to (A \to \R )$ be a normal random function with mean function $m: A \to \R $ and covariance function $k: A \times A \to \R $ over the probability space $(\Omega , \mathcal{A} , \mathbfsf{P} )$. Let the family of random variables (or stochastic process) of $F$ be $f: A \to (\Omega \to \R )$.

Let $e$ be a normal random vector with mean zero and covariance $\Sigma _{e} \in \R ^{n \times n}$. Let $a^1, \dots , a^n \in A$. We sometimes call the sequence $a^1, \dots , a^n$ the design. Define $y: \Omega \to \R ^{n}$ by

\[ y_i = f(a^i) + e_i, \quad i = 1, \dots , n. \]

We call $y$ the observation vector or observation random vector. We call $e$ the error vector or noise vector. In this context, $f(a^i)$ is sometimes called the signal.
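For concreteness, the observation model can be simulated. The following is a minimal sketch (the numbers are assumptions, not from the sheet) with $n = 3$, an explicit covariance matrix standing in for the values $k(a^i, a^j)$, and isotropic noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example values: a 3-point design with an explicit mean vector
# and covariance matrix standing in for the abstract m and k.
m_a = np.zeros(3)
Sigma_a = np.array([[1.0, 0.5, 0.2],
                    [0.5, 1.0, 0.5],
                    [0.2, 0.5, 1.0]])
Sigma_e = 0.1 * np.eye(3)

signal = rng.multivariate_normal(m_a, Sigma_a)            # (f(a^1), ..., f(a^n))
e = rng.multivariate_normal(np.zeros(3), Sigma_e)         # noise vector
y = signal + e                                            # observation vector
```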

Let $b^1, \dots , b^m \in A$. Define $z: \Omega \to \R ^m$ by $z_i = f(b^i)$ for $i = 1, \dots , m$. So $z_i$ is the random variable corresponding to the family at index $b^i \in A$. Then $(y, z)$ is normal. We call the conditional density of $z$ given $y$ the predictive density for $b^1, \dots , b^m$ given the design $a^1, \dots , a^n$.

Define $m_{a} \in \R ^{n}$ by $\transpose{(m(a^1),\cdots,m(a^n))}$ and define $m_{b}$ by $\transpose{(m(b^1), \cdots, m(b^m))}$.1 Define $\Sigma _a \in \R ^{n \times n}$ by

\[ \pmat{ k(a^1, a^1) & \cdots & k(a^1, a^n) \\ \vdots & \ddots & \vdots \\ k(a^n, a^1) & \cdots & k(a^n, a^n) \\ } \]

and define $\Sigma _{ba} \in \R ^{m \times n}$ by

\[ \pmat{ k(b^1, a^1) & \cdots & k(b^1, a^n) \\ \vdots & \ddots & \vdots \\ k(b^m, a^1) & \cdots & k(b^m, a^n) \\ }. \]

Define $\Sigma _{b} \in \R ^{m \times m}$ analogously, with $ij$th entry $k(b^i, b^j)$, and let $\Sigma _{ab} = \transpose{\Sigma _{ba}} \in \R ^{n \times m}$.
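The matrices $\Sigma _a$ and $\Sigma _{ba}$ can be assembled directly from their entrywise definitions. A minimal sketch, assuming $A = \R$ and a hypothetical squared-exponential covariance function (the sheet leaves $k$ abstract):

```python
import numpy as np

def k(s, t, length_scale=1.0):
    # Assumed example covariance function; any valid k works here.
    return np.exp(-0.5 * ((s - t) / length_scale) ** 2)

def cov_matrix(u, v):
    """Matrix whose (i, j) entry is k(u[i], v[j])."""
    return np.array([[k(s, t) for t in v] for s in u])

a = np.array([0.0, 0.5, 1.0])   # design a^1, ..., a^n
b = np.array([0.25, 0.75])      # prediction points b^1, ..., b^m

Sigma_a = cov_matrix(a, a)      # n x n
Sigma_ba = cov_matrix(b, a)     # m x n
```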

The predictive density $g_{z \mid y}(\cdot , \gamma ): \R ^m \to \R $ of $b^1, \dots , b^m$ for the design $a^1, \dots , a^n$ is normal with mean

\[ m_b + \Sigma _{ba}\inv{(\Sigma _{a} + \Sigma _{e})}(\gamma - m_a) \]

and covariance

\[ \Sigma _{b} - \Sigma _{ba}\inv{(\Sigma _{a} + \Sigma _{e})}\Sigma _{ab}. \]
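The mean and covariance above can be computed numerically. A minimal sketch (not from the sheet), taking $\Sigma _{ab} = \transpose{\Sigma _{ba}}$ and solving linear systems rather than forming the inverse explicitly:

```python
import numpy as np

def predictive(gamma, m_a, m_b, Sigma_a, Sigma_b, Sigma_ba, Sigma_e):
    """Mean and covariance of the conditional density of z given y = gamma."""
    S = Sigma_a + Sigma_e
    mean = m_b + Sigma_ba @ np.linalg.solve(S, gamma - m_a)
    cov = Sigma_b - Sigma_ba @ np.linalg.solve(S, Sigma_ba.T)
    return mean, cov

# Sanity check in the near-noise-free limit: predicting at a design point
# should (nearly) reproduce the observation with (nearly) zero variance.
mean, cov = predictive(
    gamma=np.array([1.0]),
    m_a=np.zeros(1), m_b=np.zeros(1),
    Sigma_a=np.array([[1.0]]), Sigma_b=np.array([[1.0]]),
    Sigma_ba=np.array([[1.0]]), Sigma_e=np.array([[1e-10]]),
)
```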


  1. Future editions will fix the re-use of the symbol $m$. ↩︎
Copyright © 2023 The Bourbaki Authors — All rights reserved — Version 13a6779cc