Nonlinear Optimization Examples

Kuhn-Tucker Conditions

The nonlinear programming (NLP) problem with one objective function f and m constraint functions c_i, which are continuously differentiable, is defined as follows:

\text{minimize } f(x), \quad x \in {\cal R}^n, \quad \text{subject to} \\
c_i(x) = 0, \quad i = 1, \ldots, m_e \\
c_i(x) \ge 0, \quad i = m_e+1, \ldots, m
In the preceding notation, n is the number of variables (the dimension of the point x), and m_e is the number of equality constraints.
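To make the formulation concrete, here is a minimal sketch in Python, assuming SciPy is available; the two-variable objective, the constraints, and all names are hypothetical placeholders, not part of the original text. SciPy's "ineq" constraints use the same sign convention as above, c_i(x) \ge 0:

```python
# A minimal sketch of the NLP formulation above, solved with SciPy's SLSQP
# method. The objective and constraints are hypothetical placeholders;
# SciPy's "ineq" constraints follow the same convention as the text,
# c_i(x) >= 0.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Hypothetical objective: squared distance to the point (1, 2)
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

constraints = [
    # One equality constraint c_1(x) = 0, so m_e = 1
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 2.0},
    # One inequality constraint c_2(x) >= 0 (i = m_e+1, ..., m)
    {"type": "ineq", "fun": lambda x: x[0]},
]

result = minimize(f, x0=np.array([0.5, 0.5]), method="SLSQP",
                  constraints=constraints)
print(result.x)   # approximate local minimizer x*
```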
The linear combination of objective and constraint functions

l(x,\lambda) = f(x) - \sum_{i=1}^m \lambda_i c_i(x)

is the Lagrange function, and the coefficients \lambda_i are the Lagrange multipliers.
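A direct transcription of the Lagrange function and its gradient might look like the following sketch; all argument names are placeholders, where f, c_list, grad_f, and grad_c_list stand for your own problem functions and their analytic gradients:

```python
import numpy as np

def lagrangian(x, lam, f, c_list):
    # l(x, lambda) = f(x) - sum_i lambda_i * c_i(x)
    return f(x) - sum(l_i * c(x) for l_i, c in zip(lam, c_list))

def lagrangian_grad(x, lam, grad_f, grad_c_list):
    # grad_x l(x, lambda) = grad f(x) - sum_i lambda_i * grad c_i(x)
    g = np.asarray(grad_f(x), dtype=float)
    for l_i, gc in zip(lam, grad_c_list):
        g = g - l_i * np.asarray(gc(x), dtype=float)
    return g
```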

If the functions f and c_i are twice differentiable, the point x^* is an isolated local minimizer of the NLP problem if there exists a vector \lambda^* = (\lambda_1^*, \ldots, \lambda_m^*) that meets the following conditions:

- Kuhn-Tucker conditions:

c_i(x^*) = 0, \quad i = 1, \ldots, m_e
c_i(x^*) \ge 0, \quad \lambda_i^* \ge 0, \quad \lambda_i^* c_i(x^*) = 0, \quad i = m_e+1, \ldots, m
\nabla_x l(x^*,\lambda^*) = 0

- Second-order condition: each nonzero vector y \in {\cal R}^n with

y^T \nabla_x c_i(x^*) = 0 \quad \text{for } i = 1, \ldots, m_e \text{ and for all } i \in \{m_e+1, \ldots, m\} \text{ with } \lambda_i^* > 0

satisfies

y^T \nabla_x^2 l(x^*,\lambda^*) y > 0
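A numerical check of this second-order condition can be sketched as follows, assuming the Hessian of the Lagrangian at (x^*, \lambda^*) and the gradients of the equality and strongly active constraints are available (all names here are hypothetical): project the Hessian onto the null space of the active constraint gradients and test that the reduced matrix is positive definite.

```python
import numpy as np
from scipy.linalg import null_space

def second_order_condition_holds(hess_l, active_grads, tol=1e-8):
    # hess_l: Hessian of the Lagrangian at (x*, lambda*), shape (n, n)
    # active_grads: rows are grad c_i(x*) for the equality constraints and
    #               for the inequality constraints with lambda_i* > 0
    n = hess_l.shape[0]
    A = np.atleast_2d(np.asarray(active_grads, dtype=float))
    if A.size == 0:
        Z = np.eye(n)              # no active constraints: test all of R^n
    else:
        Z = null_space(A)          # basis for {y : y^T grad c_i(x*) = 0}
    if Z.size == 0:
        return True                # only y = 0 satisfies the constraints
    reduced = Z.T @ hess_l @ Z     # projected (reduced) Hessian
    return bool(np.linalg.eigvalsh(reduced).min() > tol)
```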

In practice, you cannot expect the constraint functions c_i(x^*) to vanish within machine precision, and determining the set of active constraints at the solution x^* might not be simple.
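One common heuristic, shown in the sketch below (not taken from the original text), is to treat an inequality constraint as active when its value falls below a tolerance tied to machine precision:

```python
import numpy as np

def active_set(c_values, m_e, tol=None):
    # c_values: constraint values c_i(x*) at the candidate solution
    # m_e: number of equality constraints (active by definition)
    # tol: absolute tolerance; by default a modest multiple of machine epsilon
    if tol is None:
        tol = 1e4 * np.finfo(float).eps
    c = np.asarray(c_values, dtype=float)
    active = list(range(m_e))
    active += [i for i in range(m_e, len(c)) if c[i] <= tol]
    return active
```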
