\(\DeclarePairedDelimiterX{\Set}[2]{\{}{\}}{#1 \nonscript\;\delimsize\vert\nonscript\; #2}\) \( \DeclarePairedDelimiter{\set}{\{}{\}}\) \( \DeclarePairedDelimiter{\parens}{\left(}{\right)}\) \(\DeclarePairedDelimiterX{\innerproduct}[1]{\langle}{\rangle}{#1}\) \(\newcommand{\ip}[1]{\innerproduct{#1}}\) \(\newcommand{\bmat}[1]{\left[\hspace{2.0pt}\begin{matrix}#1\end{matrix}\hspace{2.0pt}\right]}\) \(\newcommand{\barray}[1]{\left[\hspace{2.0pt}\begin{matrix}#1\end{matrix}\hspace{2.0pt}\right]}\) \(\newcommand{\mat}[1]{\begin{matrix}#1\end{matrix}}\) \(\newcommand{\pmat}[1]{\begin{pmatrix}#1\end{pmatrix}}\) \(\newcommand{\mathword}[1]{\mathop{\textup{#1}}}\)
Needs:
Eigenvalues and Eigenvectors
Matrix Transpose
Orthogonal Matrices
Needed by:
Circulant Matrix Eigendecompositions
Eigenvalues and Definiteness
Normal Matrices

Eigenvalue Decomposition

Why

We discuss a decomposition using eigenvalues and eigenvectors.1

Defining result

An eigenvalue decomposition of a matrix $A \in \R ^{n \times n}$ is an ordered pair $(X, \Lambda )$ in which $X$ is invertible, $\Lambda $ is diagonal, and $A = X\Lambda X^{-1}$.

In this case, $AX = X\Lambda $, in other words,

\[ \bmat{ && \\ & A & \\ && } \bmat{x_1 & \cdots & x_n} = \bmat{x_1 & \cdots & x_n} \bmat{\lambda _1 && \\ & \ddots & \\&& \lambda _n}. \]

in which $x_i$ is the $i$th column of $X$ and $\lambda _i$ is the $i$th diagonal element of $\Lambda $. We have $Ax_i = \lambda _ix_i$ for $i = 1, \dots , n$. In other words, the $i$th column of $X$ is an eigenvector of $A$ and the $i$th diagonal entry of $\Lambda $ is the corresponding eigenvalue.
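As a small numerical sketch (assuming NumPy), we can check the defining identity $AX = X\Lambda$ on a concrete matrix; `np.linalg.eig` produces candidate eigenvalues and eigenvectors:

```python
import numpy as np

# An example matrix; the specific values are an illustrative choice.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eig returns the eigenvalues and a matrix X whose columns are eigenvectors.
eigenvalues, X = np.linalg.eig(A)
Lam = np.diag(eigenvalues)

# The defining identity A X = X Lambda ...
assert np.allclose(A @ X, X @ Lam)
# ... and the factorization A = X Lambda X^{-1}.
assert np.allclose(A, X @ Lam @ np.linalg.inv(X))
```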

If $X$ is orthonormal, so that $X^{-1} = X^\top $, then we can interpret such a decomposition as a change of basis to eigenvector coordinates. If $Ax = b$ and $A = X\Lambda X^{-1}$, then $X^{-1}b = \Lambda (X^{-1}x)$. Here, $X^{-1}x$ expands $x$ in the basis of columns of $X$. So to compute $Ax$, we first expand $x$ in the basis of columns of $X$, scale the resulting coefficients by the diagonal entries of $\Lambda $, and then interpret the result as the coefficients of a linear combination of the columns of $X$.
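The three steps above can be sketched numerically (assuming NumPy). For a symmetric matrix, `np.linalg.eigh` returns an orthonormal $X$, so $X^{-1} = X^\top$ and the expansion step is a single transpose-multiply:

```python
import numpy as np

# A symmetric example matrix and an arbitrary vector x.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, -3.0])

# eigh gives an orthonormal X for a symmetric matrix.
eigenvalues, X = np.linalg.eigh(A)

coeffs = X.T @ x               # expand x in the eigenvector basis (X^{-1} = X^T)
scaled = eigenvalues * coeffs  # scale by the diagonal entries of Lambda
b = X @ scaled                 # recombine the columns of X with these coefficients

# The three steps reproduce the direct product A x.
assert np.allclose(b, A @ x)
```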

In the case that $A = X\Lambda X^\top $ for an eigenvalue decomposition $(X, \Lambda )$ of $A$, we can also write

\[ A = X\Lambda X^\top = \sum_{i = 1}^{n} \Lambda _{ii}x_ix_i^\top . \]
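This rank-one expansion can be verified numerically (a sketch assuming NumPy, again using `np.linalg.eigh` on a symmetric example):

```python
import numpy as np

# A symmetric example matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, X = np.linalg.eigh(A)

# Sum the rank-one matrices Lambda_ii x_i x_i^T over the columns x_i of X.
A_sum = sum(eigenvalues[i] * np.outer(X[:, i], X[:, i])
            for i in range(A.shape[0]))

assert np.allclose(A_sum, A)
```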

Every real symmetric matrix has an eigenvalue decomposition $(X, \Lambda )$ in which $X$ is orthonormal.2

  1. Future editions will expand. ↩︎
  2. In future editions, this may be the motivating result for the definition of eigenvalues. ↩︎
Copyright © 2023 The Bourbaki Authors — All rights reserved — Version 13a6779cc