Convex optimization

Convex minimization, a subfield of optimization, studies the problem of minimizing convex functions over convex sets. Given a real vector space X together with a convex, real-valued function

f:\mathcal{X}\to \mathbb{R}

defined on a convex subset \mathcal{X} of X, the problem is to find a point x^\ast in \mathcal{X} for which the number f(x^\ast) is smallest, i.e., a point x^\ast such that

f(x^\ast) \le f(x) for all x \in \mathcal{X}. Advanced treatments also consider convex functions that can attain positive infinity; for example, the indicator function of convex analysis is zero for every x \in \mathcal{X} and positive infinity otherwise.
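
For example, writing I_{\mathcal{X}} for the indicator function of the feasible set, the constrained problem becomes an unconstrained one over all of X:

\begin{align}
I_{\mathcal{X}}(x) &= \begin{cases} 0, & x \in \mathcal{X} \\ +\infty, & x \notin \mathcal{X} \end{cases} \\
\min_{x \in \mathcal{X}} f(x) &= \min_{x \in X} \bigl( f(x) + I_{\mathcal{X}}(x) \bigr).
\end{align}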

The convexity of f makes the powerful tools of convex analysis applicable: the Hahn–Banach theorem and the theory of subgradients lead to a particularly satisfying theory of necessary and sufficient conditions for optimality, a duality theory generalizing that for linear programming, and effective computational methods.

Convex minimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modeling, statistics (optimal design), and finance. With recent improvements in computing and in optimization theory, convex minimization is nearly as straightforward as linear programming. Many optimization problems can be reformulated as convex minimization problems. For example, the problem of maximizing a concave function f can be re-formulated equivalently as a problem of minimizing the function -f, which is convex.
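
Written out, this last reformulation is simply

\max_{x \in \mathcal{X}} f(x) = -\min_{x \in \mathcal{X}} \bigl( -f(x) \bigr),

and the two problems have the same set of optimal points.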

Besides convex minimization, the field of convex optimization also considers the far more difficult problem of maximizing convex functions: when a convex function attains a maximum over a compact convex set, the maximum is attained at an extreme point of the set.[1]

Extensions of convex functions include pseudo-convex and quasi-convex functions. Partial extensions of the theory of convex analysis and iterative methods for approximately solving non-convex minimization problems occur in the field of generalized convexity ("abstract convex analysis").

Theory

The following statements are true about the convex minimization problem:

  • If a local minimum exists, then it is a global minimum.
  • The set of all (global) minima is convex.
  • If f is strictly convex and has a minimum, then the minimum is unique.

These results are used by the theory of convex minimization along with geometric notions from functional analysis such as the Hilbert projection theorem, the separating hyperplane theorem, and Farkas' lemma.
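
A one-line argument for the second statement: if x and y are both global minima with common value p^\ast, then for any \theta \in [0,1] convexity gives

f\bigl(\theta x + (1-\theta) y\bigr) \le \theta f(x) + (1-\theta) f(y) = p^\ast,

and the reverse inequality holds because p^\ast is the global minimum, so every convex combination of minimizers is itself a minimizer.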

Standard form

Standard form is the usual and most intuitive form of describing a convex minimization problem. It consists of the following three parts:

  • A convex function f: \mathbb{R}^n \to \mathbb{R} to be minimized over the variable x;
  • Inequality constraints of the form g_i(x) \leq 0, where the functions g_i are convex;
  • Equality constraints of the form h_i(x) = 0, where the functions h_i are affine. In practice, the terms "linear" and "affine" are often used interchangeably. Such constraints can be expressed in the form h_i(x) = a_i^T x + b_i, where a_i is a column vector and b_i a scalar.

A convex minimization problem is thus written as

\begin{align}
&\underset{x}{\operatorname{minimize}}& & f(x) \\
&\operatorname{subject\;to}
& &g_i(x) \leq 0, \quad i = 1,\dots,m \\
&&&h_i(x) = 0, \quad i = 1, \dots,p.
\end{align}

Note that every equality constraint h(x) = 0 can be equivalently replaced by a pair of inequality constraints h(x)\leq 0 and -h(x)\leq 0. Therefore, for theoretical purposes, equality constraints are redundant; however, it can be beneficial to treat them specially in practice.

Following from this fact, it is easy to see why h_i(x) = 0 has to be affine rather than merely convex. If h_i is convex, the constraint h_i(x) \leq 0 defines a convex set, but the constraint -h_i(x) \leq 0 involves the concave function -h_i and in general does not. Therefore, the only way for h_i(x) = 0 to define a convex set is for h_i to be affine.
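
For instance, nonnegative least squares fits this template with the convex objective f(x) = \|Ax - b\|_2^2, inequality constraints g_i(x) = -x_i, and no equality constraints:

\begin{align}
&\underset{x}{\operatorname{minimize}}& & \|Ax - b\|_2^2 \\
&\operatorname{subject\;to}
& & -x_i \leq 0, \quad i = 1,\dots,n.
\end{align}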

Examples

The following problems are all convex minimization problems, or can be transformed into convex minimization problems via a change of variables (an example of such a transformation follows the list):

  • Least squares
  • Linear programming
  • Convex quadratic minimization with linear constraints
  • Conic optimization
  • Geometric programming
  • Second-order cone programming
  • Semidefinite programming
  • Entropy maximization with appropriate constraints
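
As an example of such a change of variables, a monomial constraint c\, x_1^{a_1} \cdots x_n^{a_n} \leq 1 (with c > 0 and each x_i > 0) from geometric programming is not convex in x, but the substitution y_i = \log x_i turns it into the affine, hence convex, constraint

\log c + a_1 y_1 + \cdots + a_n y_n \leq 0.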

Lagrange multipliers

Consider a convex minimization problem given in standard form by a cost function f(x) and inequality constraints g_i(x) \leq 0 for i = 1, \ldots, m. Then the domain \mathcal{X} is:

\mathcal{X} = \left\{ x \in X \mid g_1(x) \le 0, \ldots, g_m(x) \le 0 \right\}.

The Lagrangian function for the problem is

L(x, \lambda_0, \ldots, \lambda_m) = \lambda_0 f(x) + \lambda_1 g_1(x) + \cdots + \lambda_m g_m(x).

For each point x in X that minimizes f over X, there exist real numbers \lambda_0, \ldots, \lambda_m, called Lagrange multipliers, that satisfy these conditions simultaneously:

  1. x minimizes L(y, \lambda_0, \ldots, \lambda_m) over all y in X,
  2. \lambda_0 \ge 0, \ldots, \lambda_m \ge 0, with at least one \lambda_k > 0,
  3. \lambda_1 g_1(x) = 0, \ldots, \lambda_m g_m(x) = 0 (complementary slackness).

If there exists a "strictly feasible point", i.e., a point z satisfying

g_1(z) < 0, \ldots, g_m(z) < 0,

then the statement above can be strengthened to assert that \lambda_0 = 1.

Conversely, if some x in X satisfies conditions 1–3 for scalars \lambda_0, \ldots, \lambda_m with \lambda_0 = 1, then x is certain to minimize f over X.
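
A one-dimensional worked example: minimize f(x) = x^2 subject to g_1(x) = 1 - x \leq 0. The point z = 2 is strictly feasible, so we may take \lambda_0 = 1, and conditions 1–3 become

\begin{align}
L(x, 1, \lambda_1) &= x^2 + \lambda_1 (1 - x), \\
2x - \lambda_1 &= 0, \\
\lambda_1 (1 - x) &= 0.
\end{align}

Taking \lambda_1 > 0, complementary slackness forces x = 1 and hence \lambda_1 = 2; indeed x^2 + 2(1 - x) = (x - 1)^2 + 1 is minimized over all of \mathbb{R} at x = 1, confirming that the constrained minimizer is x = 1.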

Methods

Convex minimization problems can be solved by the following contemporary methods:[2]

  • "Bundle methods" (Wolfe, Lemaréchal), and
  • Subgradient projection methods (Polyak),
  • Interior-point methods (Nemirovskii and Nesterov).

Other methods of interest include:

  • Cutting-plane methods
  • The ellipsoid method
  • Subgradient methods and dual subgradient methods

Subgradient methods can be implemented simply and so are widely used.[3]
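
As a minimal sketch of the subgradient projection idea (an illustration under assumed problem data and step sizes, not code from the cited references), consider minimizing the nondifferentiable convex function f(x) = \|Ax - b\|_1 over the nonnegative orthant:

    import numpy as np

    def projected_subgradient(A, b, steps=500):
        """Minimize f(x) = ||Ax - b||_1 subject to x >= 0 by taking
        subgradient steps and projecting back onto the orthant."""
        x = np.zeros(A.shape[1])
        best_x, best_f = x.copy(), float(np.abs(A @ x - b).sum())
        for k in range(steps):
            g = A.T @ np.sign(A @ x - b)   # a subgradient of f at x
            x = x - g / np.sqrt(k + 1)     # diminishing step size
            x = np.maximum(x, 0.0)         # projection onto {x >= 0}
            f = float(np.abs(A @ x - b).sum())
            if f < best_f:                 # f need not decrease monotonically,
                best_x, best_f = x.copy(), f  # so track the best iterate seen
        return best_x, best_f

Because subgradient methods are not descent methods, the best iterate is tracked explicitly; with diminishing step sizes the best objective value converges to the optimum.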

Maximizing convex functions

Besides convex minimization, the field of convex optimization also considers the far more difficult problem of maximizing convex functions: when a convex function attains a maximum over a compact convex set, the maximum is attained at an extreme point of the set.[4]

Solving even close-to-convex problems can be computationally difficult. The problem of minimizing a quadratic multivariate polynomial over a cube is NP-hard.[5] In fact, quadratic minimization is already NP-hard when the matrix of the quadratic form has just one negative eigenvalue.[6]

Generalized convexity

Extensions of convex functions include pseudo-convex and quasi-convex functions. Partial extensions of the theory of convex analysis and iterative methods for approximately solving non-convex minimization problems occur in the field of generalized convexity ("abstract convex analysis").
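
For reference, f is quasi-convex when

f\bigl(\theta x + (1 - \theta) y\bigr) \le \max\{ f(x), f(y) \}, \quad \theta \in [0, 1],

equivalently, when every sublevel set \{ x \mid f(x) \le \alpha \} is convex. For example, f(x) = \sqrt{|x|} is quasi-convex on \mathbb{R} but not convex.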

Software

Although most general-purpose nonlinear optimization solvers, such as LSSOL, LOQO, MINOS, and Lancelot, handle convex minimization problems well, many software packages dealing exclusively with convex minimization problems are also available:

Convex programming languages
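
For illustration, here is how the standard form translates into CVXPY, one such modeling language (a sketch with arbitrary problem data, not an example from the original article):

    import cvxpy as cp
    import numpy as np

    # Arbitrary illustrative data
    A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    b = np.array([1.0, 2.0, 3.0])

    x = cp.Variable(2)
    objective = cp.Minimize(cp.sum_squares(A @ x - b))  # convex f(x)
    constraints = [x >= 0,          # g_i(x) = -x_i <= 0
                   cp.sum(x) == 1]  # affine equality h(x) = 0
    problem = cp.Problem(objective, constraints)
    problem.solve()
    print(x.value, problem.value)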

Convex minimization solvers

  • MOSEK (commercial, stand-alone software and Matlab interface)
  • solver.com (commercial)
  • SeDuMi (GPLv2, Matlab package)
  • SDPT3 (GPLv2, Matlab package)
  • OBOE

References

  1. ^ Theorem 32.1 in Rockafellar's Convex Analysis states this maximum principle for extended real-valued functions.
  2. ^ For methods for convex minimization, see the volumes by Hiriart-Urruty and Lemaréchal (bundle) and the textbooks by Ruszczynski and Boyd and Vandenberghe (interior point).
  3. ^ Bertsekas
  4. ^ Theorem 32.1 in Rockafellar's Convex Analysis states this maximum principle for extended real-valued functions.
  5. ^ Sahni, S. (1974). "Computationally related problems". SIAM Journal on Computing 3: 262–279.
  6. ^ Pardalos, Panos M.; Vavasis, Stephen A. (1991). "Quadratic programming with one negative eigenvalue is NP-hard". Journal of Global Optimization 1 (1): 15–22.
  • Borwein, Jonathan, and Lewis, Adrian. (2000). Convex Analysis and Nonlinear Optimization. Springer.
  • Hiriart-Urruty, Jean-Baptiste, and Lemaréchal, Claude. (2004). Fundamentals of Convex Analysis. Berlin: Springer.
  • Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1993). Convex analysis and minimization algorithms, Volume I: Fundamentals. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. 305. Berlin: Springer-Verlag. pp. xviii+417. ISBN 3-540-56850-6. 
  • Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1993). Convex analysis and minimization algorithms, Volume II: Advanced theory and bundle methods. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. 306. Berlin: Springer-Verlag. pp. xviii+346. ISBN 3-540-56852-2. 
  • Kiwiel, Krzysztof C. (1985). Methods of Descent for Nondifferentiable Optimization. Lecture Notes in Mathematics. New York: Springer-Verlag. ISBN 9783540156420, ISBN 3540156429. 
  • Lemaréchal, Claude (2001). "Lagrangian relaxation". In Michael Jünger and Denis Naddef. Computational combinatorial optimization: Papers from the Spring School held in Schloß Dagstuhl, May 15–19, 2000. Lecture Notes in Computer Science. 2241. Berlin: Springer-Verlag. pp. 112–156. doi:10.1007/3-540-45586-8_4. ISBN 3-540-42877-1. MR1900016. 
  • Nesterov, Y., and Nemirovsky, A. (1994). Interior-Point Polynomial Algorithms in Convex Programming. SIAM.
  • Nesterov, Yurii. (2004). Introductory Lectures on Convex Optimization, Kluwer Academic Publishers
  • Rockafellar, R. T. (1970). Convex analysis. Princeton: Princeton University Press. 
  • Ruszczyński, Andrzej (2006). Nonlinear Optimization. Princeton University Press. 
