NLPQUA Call

CALL NLPQUA (rc, xr, quad, x0 <*>, opt <*>, blc <*>, tc <*>, par <*>, "ptit" <*>, lin);

The NLPQUA subroutine computes an optimum value of a quadratic objective function.

See the section Nonlinear Optimization and Related Subroutines for a listing of all NLP subroutines. See Chapter 14 for a description of the arguments of NLP subroutines.

The NLPQUA subroutine uses a fast algorithm for maximizing or minimizing the quadratic objective function

\[  \frac{1}{2} x^T \mathbf{G} x + g^T x + \mathit{con}  \]

subject to boundary constraints and general linear equality and inequality constraints. For problems that include general linear constraints, the algorithm can require a large amount of memory.

The matrix $\mathbf{G}$ must be symmetric but not necessarily positive definite (or negative definite for maximization problems). The constant term $\mathit{con}$ affects only the value of the objective function, not its derivatives or the optimal point $x^*$.
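
The following sketch (not part of the NLPQUA interface; the names G, g, con, and x are illustrative) evaluates this objective function at a point in PROC IML:

proc iml;
/* Sketch: evaluate f(x) = 0.5*x`*G*x + g`*x + con at a point.
   All names here are illustrative, not NLPQUA arguments.        */
G   = {2 1, 1 4};      /* symmetric Hessian of the quadratic term */
g   = {-1, -3};        /* linear term (column vector)             */
con = 5;               /* constant term: shifts f only            */
x   = {1, 2};          /* evaluation point                        */
f   = 0.5*x`*G*x + g`*x + con;
print f;               /* 0.5*22 - 7 + 5 = 9                      */
quit;

Changing con changes the value of f but leaves the gradient Gx + g, and hence the optimal point, unchanged.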

The algorithm is an active-set method in which the updates of active boundary constraints and active linear constraints are performed separately. The $QT$ decomposition of the matrix $A_k$ of active linear constraints is updated iteratively (Gill et al., 1984). If $n_f$ is the number of free parameters (that is, $n$ minus the number of active boundary constraints) and $n_a$ is the number of active linear constraints, then $\mathbf{Q}$ is an $n_f \times n_f$ orthogonal matrix that contains the null-space basis $Z$ in its first $n_f - n_a$ columns and the range-space basis $Y$ in its last $n_a$ columns. The matrix $\mathbf{T}$ is an $n_a \times n_a$ triangular matrix of the form $t_{ij}=0$ for $i < n-j$. The Cholesky factor of the projected Hessian matrix $Z_k^T \mathbf{G} Z_k$ is updated simultaneously with the $QT$ decomposition when the active set changes.
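
The following sketch illustrates the quantities involved; it is not the subroutine's internal update (which revises the QT factors iteratively), and all names are illustrative. A QR decomposition of the transposed active constraint matrix yields a null-space basis Z, from which the projected Hessian is formed:

proc iml;
/* Sketch of the null-space quantities (not NLPQUA's iterative
   QT update): build a basis Z for the null space of the active
   constraint matrix A, then form the projected Hessian Z`*G*Z.  */
G  = {2 0 0,
      0 4 0,
      0 0 6};                    /* Hessian of the quadratic term */
A  = {1 1 1};                    /* one active linear constraint  */
na = nrow(A);                    /* number of active constraints  */
call qr(q, r, piv, lindep, A`);  /* QR of A`: the trailing columns
                                    of q span the null space of A */
Z  = q[, na+1:ncol(q)];          /* null-space basis              */
projHess = Z`*G*Z;               /* projected Hessian             */
print projHess;
quit;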

The objective function is specified by the input arguments quad and lin, as follows:

  • The quad argument specifies the symmetric $n \times n$ Hessian matrix, $\mathbf{G}$, of the quadratic term. The input can be in dense or sparse form. In dense form, all $n^2$ entries of the quad matrix must be specified. If $n\leq 3$, the dense specification must be used. The sparse specification can be useful when $\mathbf{G}$ has many zero elements. You can specify an $nn \times 3$ matrix in which each row represents one of the $nn$ nonzero elements of $\mathbf{G}$: the first column specifies the row location in $\mathbf{G}$, the second column specifies the column location, and the third column specifies the value of the nonzero element. (A sketch that contrasts the two forms follows this list.)

  • The lin argument specifies the linear part of the quadratic optimization problem. It must be a vector of length $n$ or $n+1$. If lin is a vector of length $n$, it specifies the vector $g$ of the linear term, and the constant term $\mathit{con}$ is considered zero. If lin is a vector of length $n+1$, then the first $n$ elements of the argument specify the vector $g$ and the last element specifies the constant term $\mathit{con}$ of the objective function.
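
For example, the following sketch specifies the same Hessian in both forms. (Whether a symmetric pair of off-diagonal elements can be listed once is not stated here, so the sketch lists every nonzero element to be safe.)

proc iml;
/* Dense and sparse specifications of the same 4 x 4 Hessian G.
   Both symmetric off-diagonal nonzeros are listed explicitly.   */
Gdense  = {2 0 0 1,
           0 2 0 0,
           0 0 2 0,
           1 0 0 2};
Gsparse = {1 1 2,        /* row, column, value of each nonzero   */
           2 2 2,
           3 3 2,
           4 4 2,
           1 4 1,
           4 1 1};
quit;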

As in the other optimization subroutines, you can use the blc argument to specify boundary and general linear constraints, and you must provide a starting point x0 to determine the number of parameters. If x0 is not feasible, a feasible initial point is computed by linear programming, and the elements of x0 can be missing values.
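
For example, the following sketch (with illustrative problem data) passes an x0 that consists entirely of missing values:

proc iml;
/* Sketch: x0 contains only missing values; NLPQUA determines n
   from its length and computes a feasible starting point.       */
quad = {2 0, 0 2};           /* G                                 */
lin  = {-2 -4 0};            /* g = (-2,-4), con = 0              */
c    = {0 0,                 /* lower bounds: x >= 0              */
        . .};                /* no upper bounds                   */
x0   = {. .};                /* both elements missing             */
opt  = {0 1};                /* minimize; short printed output    */
call nlpqua(rc, xr, quad, x0, opt, c, , , , lin);
print xr;                    /* the minimum (1, 2) is feasible    */
quit;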

If the only constraints are nonnegativity constraints $x \geq 0$, the quadratic optimization problem can also be solved with the LCP call, which solves the linear complementarity problem.
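
For instance, the Karush-Kuhn-Tucker conditions of such a problem form the complementarity system $w = \mathbf{G}x + g$, $w \geq 0$, $x \geq 0$, $x^T w = 0$. The following sketch (with illustrative data) solves it directly:

proc iml;
/* Sketch: QP with only x >= 0, recast as an LCP with M = G, q = g */
G = {2 0, 0 2};              /* positive definite Hessian          */
g = {-2, -4};                /* linear term                        */
call lcp(rc, w, z, G, g);    /* z solves the QP                    */
print z;                     /* expected: z = {1, 2}, where w = 0  */
quit;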

Choosing a sparse (or dense) input form of the quad argument does not mean that the algorithm used in the NLPQUA subroutine is necessarily sparse (or dense). If the following conditions are satisfied, the NLPQUA algorithm stores and processes the matrix $\mathbf{G}$ as sparse:

  • No general linear constraints are specified.

  • The memory needed for the sparse storage of $\mathbf{G}$ is less than 80% of the memory needed for dense storage (see the sketch that follows this discussion).

  • $\mathbf{G}$ is not a diagonal matrix. If $\mathbf{G}$ is diagonal, it is stored and processed as a diagonal matrix.

The sparse NLPQUA algorithm uses a modified form of minimum degree Cholesky factorization (George and Liu, 1981).
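
The following rough sketch illustrates the 80% rule from the list above; it assumes the sparse form costs three numbers per nonzero element, which mirrors the $nn \times 3$ input format (the subroutine's actual internal layout is not documented here):

proc iml;
/* Rough sketch (assumed triplet storage) of the 80% memory rule  */
n  = 100;                     /* number of parameters              */
nn = 500;                     /* number of nonzero elements of G   */
useSparse = (3*nn < 0.8*n*n); /* sparse cheaper than dense n*n?    */
print useSparse;              /* 1500 < 8000, so the flag is 1     */
quit;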

In addition to the standard iteration history, the NLPQUA subroutine prints the following information:

  • The heading alpha is the step size, $\alpha$, computed with the line-search algorithm.

  • The heading slope refers to $g^T s$, the slope of the search direction at the current parameter iterate $x^{(k)}$. For minimization, this value should be significantly smaller than zero. Otherwise, the line-search algorithm has difficulty reducing the function value sufficiently.

The Betts problem (see the section Constrained Betts Function) can be expressed as a quadratic problem in the following way:

\[  x = \left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right], \quad \mathbf{G} = \left[ \begin{array}{cc} 0.02 & 0 \\ 0 & 2 \end{array} \right], \quad g = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right], \quad \mathit{con} = -100  \]

Then

\[  \frac{1}{2} x^T \mathbf{G} x + g^T x + \mathit{con} = 0.5 \left[ 0.02 x_1^2 + 2 x_2^2 \right] - 100 = 0.01 x_1^2 + x_2^2 - 100  \]

The following statements use the NLPQUA subroutine to solve the Betts problem:

lin  = { 0. 0. -100};            /* g = (0, 0) and con = -100           */
quad = {  0.02   0.0 ,           /* Hessian matrix G                    */
          0.0    2.0 };
c    = {  2. -50.  .   .,        /* lower bounds:  2 <= x1, -50 <= x2   */
         50.  50.  .   .,        /* upper bounds: x1 <= 50,  x2 <= 50   */
         10.  -1. 1. 10.};       /* linear constraint: 10*x1 - x2 >= 10 */
x    = { -1. -1.};               /* infeasible start; a feasible point
                                    is computed automatically           */
opt  = {0 2};                    /* minimize; print iteration history   */
call nlpqua(rc, xres, quad, x, opt, c, , , , lin);

The quad argument specifies the $\mathbf{G}$ matrix, and the lin argument specifies the $g$ vector with the value of $\mathit{con}$ appended as the last element. The matrix c specifies the boundary constraints and the general linear constraint.
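
Continuing the same PROC IML session, the following sketch (not part of the original example) evaluates the objective at the returned solution as a check:

/* Sketch: recompute the objective at the solution xres returned
   above; NLPQUA returns xres as a row vector.                    */
g   = lin[, 1:2]`;            /* linear term as a column vector   */
con = lin[, 3];               /* constant term                    */
xs  = xres`;                  /* solution as a column vector      */
f   = 0.5*xs`*quad*xs + g`*xs + con;
print f;                      /* matches the reported -99.96      */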

The iteration history follows.

Figure 24.240: Quadratic Optimization

Note: Initial point was changed to be feasible for boundary and linear constraints.

Optimization Start
Parameter Estimates

                                Gradient        Lower           Upper
                                Objective       Bound           Bound
 N  Parameter     Estimate      Function        Constraint      Constraint
 1  X1            6.800000      0.136000          2.000000       50.000000
 2  X2           -1.000000     -2.000000        -50.000000       50.000000

Value of Objective Function = -98.5376

Linear Constraints
 1   59.00000 :   10.0000 <= + 10.0000 * X1 - 1.0000 * X2

Null Space Method of Quadratic Problem

Parameter Estimates       2
Lower Bounds              2
Upper Bounds              2
Linear Constraints        1
Using Sparse Hessian

Optimization Start

Active Constraints             0    Objective Function       -98.5376
Max Abs Gradient Element       2

                                             Objective   Max Abs
                Function   Active  Objective  Function   Gradient    Step   Slope of Search
 Iter  Restarts  Calls    Constr.  Function    Change    Element     Size   Direction
    1         0      2          1  -99.87349   1.3359     0.5882    0.706   -2.925
    2         0      3          1  -99.96000   0.0865     0         1.000   -0.173

Optimization Results

Iterations                     2    Function Calls                  4
Gradient Calls                 3    Active Constraints              1
Objective Function        -99.96    Max Abs Gradient Element        0
Slope of Search Direction   -0.173010381

ABSGCONV convergence criterion satisfied.

Optimization Results
Parameter Estimates

                                Gradient        Active
                                Objective       Bound
 N  Parameter     Estimate      Function        Constraint
 1  X1            2.000000      0.040000        Lower BC
 2  X2            0             0

Value of Objective Function = -99.96

Linear Constraints Evaluated at Solution
 1   10.00000 = -10.0000 + 10.0000 * X1 - 1.0000 * X2