\(\DeclarePairedDelimiterX{\Set}[2]{\{}{\}}{#1 \nonscript\;\delimsize\vert\nonscript\; #2}\) \( \DeclarePairedDelimiter{\set}{\{}{\}}\) \( \DeclarePairedDelimiter{\parens}{\left(}{\right)}\) \(\DeclarePairedDelimiterX{\innerproduct}[1]{\langle}{\rangle}{#1}\) \(\newcommand{\ip}[1]{\innerproduct{#1}}\) \(\newcommand{\bmat}[1]{\left[\hspace{2.0pt}\begin{matrix}#1\end{matrix}\hspace{2.0pt}\right]}\) \(\newcommand{\barray}[1]{\left[\hspace{2.0pt}\begin{matrix}#1\end{matrix}\hspace{2.0pt}\right]}\) \(\newcommand{\mat}[1]{\begin{matrix}#1\end{matrix}}\) \(\newcommand{\pmat}[1]{\begin{pmatrix}#1\end{pmatrix}}\) \(\newcommand{\mathword}[1]{\mathop{\textup{#1}}}\)
Needs:
Optimization Problems
Real Polyhedra
Inner Product Linear Functional Representations
Needed by:
Linear Optimization Solutions

Linear Optimization Problems

Definition

An optimization problem $(\mathcal{X} , f)$ is called linear (a linear optimization problem) if $\mathcal{X} \subset \R ^n$ is a polyhedron and $f: \R ^n \to \Rbar$ is a linear function.

Problem data

Recall that $f$ is linear means there exists $c \in \R ^n$ such that

\[ f(x) = c^\top x \quad \text{for all } x \in \R ^n \]

Also, $\mathcal{X} $ polyhedral means there exists $A \in \R ^{m \times n}$ and $b \in \R ^{m}$ such that

\[ \mathcal{X} = \Set{x \in \R ^n}{Ax \leq b} \]

For this reason, the problem data $(A, b, c)$ is sufficient to specify a linear optimization problem. Recall that $Ax \leq b$ means element-wise inequality (i.e., that the inequality holds in each component).
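For concreteness, here is a minimal sketch (assuming NumPy is available; the specific numbers are invented purely for illustration) of storing problem data $(A, b, c)$ and checking the element-wise inequality $Ax \leq b$ at a candidate point.

    import numpy as np

    # Invented problem data: m = 3 inequality constraints in n = 2 variables.
    A = np.array([[1.0, 1.0],
                  [-1.0, 0.0],
                  [0.0, -1.0]])      # A in R^{3 x 2}
    b = np.array([1.0, 0.0, 0.0])    # b in R^3
    c = np.array([1.0, 2.0])         # c in R^2

    x = np.array([0.25, 0.25])           # a candidate point
    feasible = bool(np.all(A @ x <= b))  # does every component inequality hold?
    value = float(c @ x)                 # the objective value c^T x
    print(feasible, value)               # True 0.75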

Task

Given data $A \in \R ^{m \times n}$, $b \in \R ^m$, $c \in \R ^n$, we want to find $x \in \R ^n$ to

\[ \begin{aligned} \text{minimize} &\quad c^\top x \\ \text{subject to} &\quad Ax \leq b \end{aligned} \]

We either want $x^\star \in \R ^n$ so that $Ax^\star \leq b$ and $c^\top x^\star \leq c^\top x$ for all $x \in \R ^n$ with $Ax \leq b$, or we want to know that $\Set{x}{Ax \leq b} = \varnothing$, or we want to know that for all $\alpha \in \R $, there is an $x \in \R ^n$ satisfying $Ax \leq b$ and $c^\top x \leq \alpha $. This problem is commonly called linear programming (a linear program). Many authors define this problem with maximization as the goal: of course, minimizing $c^\top x$ is equivalent to (has the same set of optimal solutions as) maximizing $-c^\top x$.
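As an illustration of these three outcomes, the following sketch (assuming SciPy is installed; it reuses the invented data from the previous snippet) calls scipy.optimize.linprog and branches on the reported status codes, which in current SciPy are 0 for an optimum found, 2 for infeasible, and 3 for unbounded.

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
    b = np.array([1.0, 0.0, 0.0])
    c = np.array([1.0, 2.0])

    # bounds=(None, None) removes linprog's default constraint x >= 0,
    # so the problem solved is exactly: minimize c^T x subject to Ax <= b.
    res = linprog(c, A_ub=A, b_ub=b, bounds=(None, None))

    if res.status == 0:
        print("optimal solution", res.x, "with value", res.fun)
    elif res.status == 2:
        print("infeasible: {x : Ax <= b} is empty")
    elif res.status == 3:
        print("unbounded: c^T x can be made arbitrarily small over the feasible set")
    else:
        print("solver stopped early:", res.message)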

Notation

In the context of linear optimization, $c^\top x$ is often abbreviated $cx$. As usual, $x \in \R ^n$ is called feasible (a feasible solution) if $Ax \leq b$. A feasible $x^\star \in \R ^n$ is called optimal (an optimal solution, optimum solution) if $c^\top x^\star \leq c^\top x$ for all feasible $x \in \R ^n$. We sometimes denote the rows of $A$ by $\bar{a}_i^\top \in \R ^n$ for $i = 1, \dots , m$, i.e.,

\[ A = \bmat{- & \bar{a}_1^\top & - \\ &\vdots& \\ - & \bar{a}_m^\top & - } \in \R ^{m \times n} \]

and refer to the inequality $\bar{a}_i^\top x \leq b_i$ as an inequality constraint for $i = 1, \dots , m$. The system $Ax \leq b$ as a whole is sometimes called the inequality constraints of the problem. For this reason, we can say in English that linear optimization deals with maximizing or minimizing a linear objective function of finitely many variables subject to finitely many linear inequality constraints.

The problem $(A, b, c)$ is infeasible if the polyhedron

\[ P = \Set{x \in \R ^n}{Ax \leq b} \]

is empty. In symbols, if $P = \varnothing$. The problem is unbounded if for all $\alpha \in \R $, there exists $x \in P$ with $c^\top x < \alpha $.

Here’s an infeasible instance. Define

\[ A = \bmat{1 \\ -1} \quad b = \bmat{-1 \\ -1} \quad c = \bmat{1} \]

We want $x \in \R ^1$ so that $x \leq -1$ and $x \geq 1$. There is no such $x$.

Here’s an unbounded instance: drop the second inequality constraint. Then we are interested in finding $x_1$ to minimize $x_1$ subject to $x_1 \leq -1$. Given $\alpha \in \R $, if $\alpha > -1$ pick $x_1 = -1$, else pick $x_1 = 2\alpha $; in either case $x_1 \leq -1$ and $x_1 < \alpha $. This problem is unbounded.

Here’s a simple example with an optimal solution. Keep $A$ and $c$ from the infeasible instance, but modify $b$ to $\bmat{1 & 0}^\top $. Now we want to find $x_1$ so that

\[ x_1 \leq 1 \quad \text{ and } x_1 \geq 0 \]

and we minimize $x_1$. Clearly $x_1 = 0$ is an optimal solution. Indeed, it is the unique optimal solution in this case.
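To tie the three instances together, here is a small sketch (again assuming SciPy, with the same caveats about status codes as above) that feeds each one to linprog; the reported statuses should come back as infeasible, unbounded, and optimal with $x_1^\star = 0$, respectively.

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([1.0])
    instances = {
        "infeasible": (np.array([[1.0], [-1.0]]), np.array([-1.0, -1.0])),
        "unbounded":  (np.array([[1.0]]),         np.array([-1.0])),
        "optimal":    (np.array([[1.0], [-1.0]]), np.array([1.0, 0.0])),
    }
    for name, (A, b) in instances.items():
        res = linprog(c, A_ub=A, b_ub=b, bounds=(None, None))
        print(name, "status:", res.status, "x:", res.x)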
