Needs:
Controlled Dynamical Systems
Needed by:
Stochastic Dynamical Systems

Dynamic Optimization Problems

Definition

Let $\mathcal{D} = ((\mathcal{X} _t)_{t = 0}^{T}, (\mathcal{U} _t)_{t=0}^{T-1}, (f_t)_{t=0}^{T-1})$ be a dynamical system. Let $g_t: \mathcal{X} _t \times \mathcal{U} _t \to \R \cup \set{\infty}$ for $t = 0$, $\dots $, $T-1$ and let $g_{T}: \mathcal{X} _T \to \R \cup \set{\infty}$. Let $x_0 \in \mathcal{X} _0$.

We call the sequence $(x_0, \mathcal{D} , (g_t)_{t = 0}^{T})$ a deterministic dynamic optimization problem. We call $x_0$ the initial state. We call $g_t$ the stage cost function for stage $t$ and call $g_T$ the terminal cost function.
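For example, take $\mathcal{X} _t = \mathcal{U} _t = \R$ for all $t$, dynamics $f_t(x, u) = x + u$, stage costs $g_t(x, u) = x^2 + u^2$, terminal cost $g_T(x) = x^2$, and initial state $x_0 = 1$. This data is a deterministic dynamic optimization problem (a simple scalar instance chosen here only for illustration).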

A deterministic dynamic optimization problem corresponds to an optimization problem with variables $u_0 \in \mathcal{U} _0, \dots , u_{T-1} \in \mathcal{U} _{T-1}$. Define $U = \mathcal{U} _0 \times \mathcal{U} _1 \times \cdots \times \mathcal{U} _{T-1}$. Define $J: U \to \R \cup \set{\infty}$ by

\[ J(u) = \sum_{t = 0}^{T-1} g_t(x_t, u_t) + g_T(x_T) \]

in which $x_{t+1} = f_t(x_t, u_t)$ for $t = 0, \dots , T-1$. The optimization problem is $(U, J)$. So a dynamic optimization problem is just a (possibly big) optimization problem. We call $\sum_{t = 0}^{T-1} g_t(x_t, u_t)$ the total stage cost and we call $g_T(x_T)$ the terminal cost.
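This correspondence can be made concrete in a short sketch: $J(u)$ is evaluated by simulating the dynamics forward from $x_0$ and accumulating the stage costs, then adding the terminal cost. The Python sketch below assumes the dynamics and costs are supplied as plain callables; the names `evaluate_cost`, `f`, `g`, and `g_T` are illustrative, not part of the definition above.

```python
def evaluate_cost(x0, f, g, g_T, u):
    """Evaluate J(u) by rolling the state forward through the dynamics.

    x0  : initial state
    f   : list of dynamics functions, f[t](x, u) for t = 0, ..., T-1
    g   : list of stage cost functions, g[t](x, u) for t = 0, ..., T-1
    g_T : terminal cost function g_T(x)
    u   : list of controls u[0], ..., u[T-1]
    """
    x = x0
    total = 0.0
    for t in range(len(u)):
        total += g[t](x, u[t])   # stage cost g_t(x_t, u_t)
        x = f[t](x, u[t])        # state update x_{t+1} = f_t(x_t, u_t)
    return total + g_T(x)        # terminal cost g_T(x_T)


# Example: scalar system x_{t+1} = x_t + u_t with quadratic costs and T = 3.
T = 3
f = [lambda x, u: x + u for _ in range(T)]
g = [lambda x, u: x**2 + u**2 for _ in range(T)]
print(evaluate_cost(1.0, f, g, lambda x: x**2, [-0.5, -0.25, -0.125]))
```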

Notation

We often write this problem as

\[ \begin{aligned} \text{minimize}\quad & \sum_{t = 0}^{T-1} g_t(x_t, u_t) + g_T(x_T) \\ \text{subject to}\quad & x_{t+1} = f_t(x_t, u_t), \quad t = 0, \dots , T-1. \end{aligned} \]
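For instance, with the scalar data from the example above (dynamics $x_{t+1} = x_t + u_t$ and quadratic costs, an illustrative instance rather than part of the general definition), the problem reads

\[ \begin{aligned} \text{minimize}\quad & \sum_{t = 0}^{T-1} (x_t^2 + u_t^2) + x_T^2 \\ \text{subject to}\quad & x_{t+1} = x_t + u_t, \quad t = 0, \dots , T-1. \end{aligned} \]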

Other terminology and comments

Dynamic optimization problems are frequently called deterministic optimal control problems, or classical or open-loop control problems. These problems are said to address the dynamic effect of actions across time. Although these models include no notion of “uncertainty” (or “uncertain outcomes”; see Uncertain Outcomes), they are frequently applied to situations with uncertain outcomes by simply ignoring the uncertainty.
