\(\DeclarePairedDelimiterX{\Set}[2]{\{}{\}}{#1 \nonscript\;\delimsize\vert\nonscript\; #2}\) \( \DeclarePairedDelimiter{\set}{\{}{\}}\) \( \DeclarePairedDelimiter{\parens}{\left(}{\right)}\) \(\DeclarePairedDelimiterX{\innerproduct}[1]{\langle}{\rangle}{#1}\) \(\newcommand{\ip}[1]{\innerproduct{#1}}\) \(\newcommand{\bmat}[1]{\left[\hspace{2.0pt}\begin{matrix}#1\end{matrix}\hspace{2.0pt}\right]}\) \(\newcommand{\barray}[1]{\left[\hspace{2.0pt}\begin{matrix}#1\end{matrix}\hspace{2.0pt}\right]}\) \(\newcommand{\mat}[1]{\begin{matrix}#1\end{matrix}}\) \(\newcommand{\pmat}[1]{\begin{pmatrix}#1\end{pmatrix}}\) \(\newcommand{\mathword}[1]{\mathop{\textup{#1}}}\)
Needs:
Rooted Trees
Random Real Vectors
Covariance Matrix
Real Matrix-Matrix Products
Needed by:
None.
Links:
Sheet PDF
Graph PDF

Rooted Tree Linear Cascades

Why

It is natural to look for a class of structural equation models with favorable identifiability properties.

Definition

A $d$-dimensional rooted tree linear cascade is a sequence of four objects: a tree on $\set{1, \dots , d}$, a vertex of the tree, a family of real numbers indexed by the edges of the tree, and a $d$-dimensional random vector whose covariance matrix is the identity matrix. The cascade is called “$d$-dimensional” because we associate with it a random vector (defined below in terms of these four objects) whose codomain is $\R ^d$.

The tree together with the vertex form a rooted tree. The graph associated with the rooted tree and the family of real numbers together form a weighted graph.

The idea is to use the weights and the tree structure to recursively define a random vector in terms of the elements of the given random vector. Let $C = (T, i, w, e)$ be a $d$-dimensional rooted tree linear cascade. So $T$ is a tree on $\set{1, \dots , d}$, $i \in \set{1, \dots , d}$, $w: T \to \R $, and $e: A \to \R ^d$ for some probability space $(A, \mathcal{A} , \mathbfsf{P} )$. The random vector associated with $C$ is the random variable $x: A \to \R ^d$ defined by

\[ x_i = e_i \quad \text{and} \quad x_j = w_{\set{\pa{j}, j}}x_{\pa{j}} + e_{j} \text{ for } j \neq i, \]

where $\pa{j}$ denotes the parent of vertex $j$ in the tree rooted at $i$.

In other words,

\[ e = Ax \]

where $A$ is lower triangular (after ordering the vertices so that each parent precedes its children) and extremely sparse.1
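The recursion and the matrix form can be checked numerically. Here is a minimal sketch, assuming a particular four-vertex tree with illustrative weights (the vertices, weights, and variable names are assumptions, not from the text; vertices are 0-indexed):

```python
import numpy as np

# Illustrative rooted tree on {0, 1, 2, 3} with root 0:
# pa(j) for each non-root vertex j, and a weight per edge (pa(j), j).
d = 4
root = 0
parent = {1: 0, 2: 0, 3: 1}
w = {(0, 1): 0.5, (0, 2): -1.0, (1, 3): 2.0}

rng = np.random.default_rng(0)
e = rng.standard_normal(d)  # stand-in for the noise vector e

# Recursive definition: x_root = e_root, x_j = w_{pa(j), j} x_{pa(j)} + e_j.
x = np.empty(d)
x[root] = e[root]
for j in sorted(parent):    # this ordering visits each parent before its children
    i = parent[j]
    x[j] = w[(i, j)] * x[i] + e[j]

# Equivalently e = A x with A = I - W, where W[j, pa(j)] = w_{pa(j), j}.
A = np.eye(d)
for j, i in parent.items():
    A[j, i] = -w[(i, j)]

assert np.allclose(A @ x, e)
```

Since $A$ is triangular with unit diagonal it is invertible, so the cascade can also be read in the direction $x = A^{-1}e$.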

Notation

Let $(A, \mathcal{A} , \mathbfsf{P} )$ be a probability space. Let $e: A \to \R ^d$ be a random vector, and let $T$ be a tree on $\set{1, \dots , d}$ with $a_{ij} = a_{ji}$ the weight on edge $\set{i, j} \in T$.


  1. Future editions will clarify the meaning of the term sparse. ↩︎
Copyright © 2023 The Bourbaki Authors — All rights reserved — Version 13a6779cc