\(\DeclarePairedDelimiterX{\Set}[2]{\{}{\}}{#1 \nonscript\;\delimsize\vert\nonscript\; #2}\) \( \DeclarePairedDelimiter{\set}{\{}{\}}\) \( \DeclarePairedDelimiter{\parens}{\left(}{\right)}\) \(\DeclarePairedDelimiterX{\innerproduct}[1]{\langle}{\rangle}{#1}\) \(\newcommand{\ip}[1]{\innerproduct{#1}}\) \(\newcommand{\bmat}[1]{\left[\hspace{2.0pt}\begin{matrix}#1\end{matrix}\hspace{2.0pt}\right]}\) \(\newcommand{\barray}[1]{\left[\hspace{2.0pt}\begin{matrix}#1\end{matrix}\hspace{2.0pt}\right]}\) \(\newcommand{\mat}[1]{\begin{matrix}#1\end{matrix}}\) \(\newcommand{\pmat}[1]{\begin{pmatrix}#1\end{pmatrix}}\) \(\newcommand{\mathword}[1]{\mathop{\textup{#1}}}\)
Dynamical Systems
State Representations
Directed Graphs
Needed by:
Dynamic Optimization Problems

Controlled Dynamical Systems


We want to talk about influencing natural phenomena.1


Let $\mathcal{X} _0, \mathcal{X} _1, \dots , \mathcal{X} _{T}$ and $\mathcal{U} _0, \mathcal{U} _1, \dots , \mathcal{U} _{T-1}$ be sets. For $t = 0$, $\dots $, $T-1$, let $f_{t}: \mathcal{X} _t \times \mathcal{U} _t \to \mathcal{X} _{t+1}$ be a function. We call the triple

\[ ((\mathcal{X} _t)_{t = 0}^{T}, (\mathcal{U} _t)_{t=0}^{T-1}, (f_t)_{t=0}^{T-1}) \]

a controlled deterministic discrete-time dynamical system. We call the index $t$ the epoch (or stage, period).

Let $x_0 \in \mathcal{X} _0$. Let $u_0 \in \mathcal{U} _0, \dots , u_{T-1} \in \mathcal{U} _{T-1}$. Define a state sequence $x_1 \in \mathcal{X} _1, \dots , x_T \in \mathcal{X} _T$ by

\[ x_{t+1} = f_t(x_t, u_t) \quad \text{for } t = 0, \dots, T-1. \]

In this case we call $x_0$ the initial state. We call the $x_t$ the states. We call the $u_t$ a sequence of inputs (or actions, decisions, choices, or controls). We call $f_t$ the transition function (or dynamics function).

We call $T$ the horizon. In the case that we have an infinite sequence of state sets, input sets, and dynamics, we refer to an infinite-horizon dynamical system. In contrast with this case, we refer to the dynamical system when $T$ is finite as a finite-horizon dynamical system.
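The definitions above can be made concrete by rolling out the state sequence: iterate the transition functions on the initial state and the inputs. A minimal sketch, in which the names `rollout` and `dynamics` and the accumulator example are illustrative, not from the sheet:

```python
def rollout(x0, inputs, dynamics):
    """Return the state sequence x_0, ..., x_T given the initial state x0,
    the inputs u_0, ..., u_{T-1}, and the transition functions f_0, ..., f_{T-1}."""
    states = [x0]
    for t, u_t in enumerate(inputs):
        # x_{t+1} = f_t(x_t, u_t)
        states.append(dynamics[t](states[-1], u_t))
    return states

# Example: horizon T = 3 with accumulator dynamics f_t(x, u) = x + u.
T = 3
fs = [lambda x, u: x + u] * T
print(rollout(0, [1, 2, 3], fs))  # [0, 1, 3, 6]
```

Here the state sequence has length $T + 1$ while the input sequence has length $T$, matching the index ranges in the definition.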


The current action $u_t$ affects future states $x_{s}$ for $s > t$, but not the current or past states. The current state $x_t$ depends on the initial state $x_0$ and the sequence of past actions $u_0, \dots , u_{t-1}$. So the state is a “link” between the past and the future. Given $x_t$ and $u_t, \dots , u_{s-1}$, for $s > t$, we can compute $x_s$. In other words, the prior actions $u_0, \dots , u_{t-1}$ are not relevant.

This irrelevance of prior actions and prior states simplifies the sequential decision problem (see Sequential Decisions).
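The "link" property can be checked numerically: rolling out from $x_t$ with the actions $u_t, \dots, u_{s-1}$ gives the same $x_s$ as rolling out from $x_0$ with all the actions. A small self-contained sketch, with illustrative dynamics not taken from the sheet:

```python
def step(fs, x, us):
    """Apply the transition functions fs to state x under inputs us."""
    for f, u in zip(fs, us):
        x = f(x, u)
    return x

# Illustrative time-invariant dynamics f(x, u) = 2x + u over 4 epochs.
fs = [lambda x, u: 2 * x + u] * 4
us = [1, 0, 2, 3]
x0 = 1

x2 = step(fs[:2], x0, us[:2])           # compute x_2 from x_0 and u_0, u_1
x4_direct = step(fs, x0, us)            # x_4 from the full history
x4_from_x2 = step(fs[2:], x2, us[2:])   # x_4 from x_2 and u_2, u_3 only
assert x4_direct == x4_from_x2          # prior actions are not needed
```

Only $x_2$ is needed to continue the rollout; how it was reached is immaterial.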


The dynamical system is called time-invariant if $\mathcal{X} _{t}$, $\mathcal{U} _t$ and $f_t$ do not depend on $t$. A simple variation is that $\mathcal{U} _t$ depends on $x_t$.2


Finite dynamical system

A dynamical system is finite if the state and action sets are finite. For example, $\mathcal{X} = \set{1, \dots , n}$ and $\mathcal{U} = \set{1, \dots , m}$. Then $f_t: \mathcal{X} \times \mathcal{U} \to \mathcal{X} $ is called a transition map.

Alternatively, let $(V, E)$ be a directed graph. Then $\mathcal{X} = V$, $\mathcal{U} _{x_t} = \Set{(u, v) \in E}{u = x_t}$, and $f_t(x_t, u_t) = v$ when $u_t = (x_t, v)$ define a dynamical system. Roughly, this system models “moving” on the graph.
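A sketch of this graph system: states are vertices, the actions available at $x_t$ are the outgoing edges of $x_t$, and taking edge $(x_t, v)$ moves the system to $v$. The graph and names here are illustrative, not from the sheet:

```python
# Edge set of a small directed graph on vertices a, b, c.
E = {("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")}

def actions(x):
    """U_x: the outgoing edges of vertex x."""
    return {(u, v) for (u, v) in E if u == x}

def f(x, action):
    """Transition map: following edge (x, v) moves the state to v."""
    assert action in actions(x), "action must be an outgoing edge of x"
    u, v = action
    return v

# Walk a -> b -> c -> a by choosing one outgoing edge per epoch.
x = "a"
for a in [("a", "b"), ("b", "c"), ("c", "a")]:
    x = f(x, a)
print(x)  # "a"
```

Note that the available action set depends on the current state, as in the variation mentioned above.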

Discrete-time linear dynamical system

Let $\mathcal{X} = \R ^n$ and $\mathcal{U} = \R ^m$. Define $f_t: \mathcal{X} \times \mathcal{U} \to \mathcal{X} $ by

\[ x_{t+1} = f_t(x_t, u_t) = A_t x_t + B_t u_t + c_t \]

for $t = 0, \dots , T-1$, where $A_t \in \R ^{n \times n}$, $B_t \in \R ^{n \times m}$, and $c_t \in \R ^n$.3
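Simulating the linear case is matrix-vector arithmetic. A minimal sketch of the time-invariant version $x_{t+1} = A x_t + B u_t + c$, with illustrative matrices chosen here (not from the sheet):

```python
import numpy as np

# Illustrative time-invariant linear dynamics with n = 2, m = 1.
n, m, T = 2, 1, 3
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])    # A in R^{n x n}
B = np.array([[0.0],
              [1.0]])         # B in R^{n x m}
c = np.zeros(n)               # offset in R^n

x = np.zeros(n)               # initial state x_0 = 0
for t in range(T):
    u = np.array([1.0])       # constant input u_t = 1
    x = A @ x + B @ u + c     # x_{t+1} = A x_t + B u_t + c
print(x)                      # final state x_T
```

With these particular matrices the system is a double integrator driven by a unit input, so the second coordinate counts epochs and the first accumulates it.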

  1. Future editions will modify, and may restore former editions language: “We want to talk about making decisions over time.” Though this language may also be used in a sheet on finite controlled dynamical systems. ↩︎
  2. Future editions will say more here. ↩︎
  3. This very special form of dynamics arises in many applications. Future editions will say more. ↩︎
Copyright © 2023 The Bourbaki Authors — All rights reserved — Version 13a6779cc