We are frequently interested in finding minimizers of real functions.
An optimization problem (or extremum problem) is a pair $(\mathcal{X}, f)$ in which $\mathcal{X}$ is a nonempty set called the constraint set and $f: \mathcal{X} \to \R$ is called the objective (or cost) function.
If $\mathcal{X} $ is finite we call the optimization problem discrete. If $\mathcal{X} \subset \R ^d$ we call the optimization problem continuous.
We refer to all elements of the constraint set as feasible. We refer to an element $x \in \mathcal{X} $ of the constraint set as optimal if $f(x) = \inf_{z \in \mathcal{X} }f(z)$. We also refer to optimal elements as solutions of the optimization problem.
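For example, take $\mathcal{X} = \{-1, 0, 1, 2\}$ and $f(x) = x^2$. Every element of $\mathcal{X}$ is feasible, and $x = 0$ is the unique optimal element, since $f(0) = 0 = \inf_{z \in \mathcal{X}} f(z)$.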
It is common for $f$ and $\mathcal{X}$ to depend on other given, known objects. In this case, these objects are often called parameters or problem data.
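For instance, for a fixed $a \in \R$ we might take $\mathcal{X} = \R$ and $f(x) = (x - a)^2$; here $a$ is the problem data, and each value of $a$ determines a different optimization problem.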
We often write optimization problems as
\[
\begin{aligned}
\text{minimize}\quad & f(x) \\
\text{subject to}\quad & x \in \mathcal{X} .
\end{aligned}
\]
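In this notation, a concrete continuous instance is
\[
\begin{aligned}
\text{minimize}\quad & (x - 3)^2 \\
\text{subject to}\quad & x \in [0, 1],
\end{aligned}
\]
whose unique solution is $x = 1$, since the objective is decreasing on $[0, 1]$.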
It is common to let $f: \mathcal{X} \to \Rbar$ and to allow points $x \in \mathcal{X}$ for which $f(x) = +\infty$. This technique can be used to embed further constraints in the objective: we interpret $f(x) = +\infty$ to mean that $x$ is infeasible.
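Concretely, if we wish to restrict attention to a nonempty subset $\mathcal{C} \subset \mathcal{X}$, we can define $\tilde{f}: \mathcal{X} \to \Rbar$ by
\[
\tilde{f}(x) =
\begin{cases}
f(x) & x \in \mathcal{C} \\
+\infty & x \notin \mathcal{C},
\end{cases}
\]
so that minimizing $\tilde{f}$ over $\mathcal{X}$ is equivalent to minimizing $f$ over $\mathcal{C}$.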
If we have some function $g: \mathcal{X} \to \Rbar$ that we wish to maximize, we can always convert the maximization to an optimization problem by defining $f: \mathcal{X} \to \Rbar$ by $f(x) = -g(x)$. In this case $g$ is often called a reward (or utility, profit).
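Nothing is lost in this conversion, since
\[
\sup_{x \in \mathcal{X}} g(x) = -\inf_{x \in \mathcal{X}} \bigl( -g(x) \bigr),
\]
and the maximizers of $g$ are exactly the optimal elements of $(\mathcal{X}, -g)$.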
A solver (or solution method, solution algorithm) for a family of optimization problems is a function $S$ mapping each optimization problem in the family to a solution of that problem.
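When $\mathcal{X}$ is finite, exhaustive enumeration yields a valid (if often impractical) solver. The Python sketch below illustrates this; the names \texttt{solve}, \texttt{X}, and \texttt{f} are illustrative choices, not part of the formal definition.
\begin{verbatim}
def solve(X, f):
    """Brute-force solver for a finite optimization problem (X, f).

    Evaluates f at every feasible point and returns an optimal
    element, i.e., one attaining the minimum of f over X.
    """
    return min(X, key=f)

# Usage: the discrete example above, X = {-1, 0, 1, 2}, f(x) = x^2.
assert solve({-1, 0, 1, 2}, lambda x: x**2) == 0
\end{verbatim}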
Loosely speaking, the difficulty of “solving” the optimization problem $(\mathcal{X}, f)$ depends on the properties of $\mathcal{X}$ and $f$ and on the problem “size”. For example, when $\mathcal{X} \subset \R^d$ the difficulty is related to the dimension $d$ of the elements $x \in \mathcal{X}$.