The Nonlinear Programming Solver

Interior Point Algorithm

Primal-dual interior point methods can be classified into two categories: feasible and infeasible. The first category requires the starting point and all subsequent iterates of the algorithm to strictly satisfy all the inequality constraints. The second category relaxes this requirement and allows the iterates to violate some or all of the inequality constraints during the course of the minimization. The NLP solver implements an infeasible algorithm; this section concentrates on that type of algorithm.

To make the notation less cluttered and the fundamentals of interior point methods easier to understand, consider without loss of generality the following simpler NLP problem:

\[  \begin{array}{ll} \displaystyle \mathop \textrm{minimize}&  f(x) \\ \textrm{subject\  to}&  g_{i}(x) \ge 0, i \in \mathcal{I} = \{  1,2, \ldots , q \}  \end{array}  \]

Note that the equality and bound constraints have been omitted from the preceding problem. Initially, slack variables are added to the inequality constraints, giving rise to the problem

\[  \begin{array}{ll} \displaystyle \mathop \textrm{minimize}&  f(x) \\ \textrm{subject\  to}&  g_{i}(x) - s_{i} = 0, i \in \mathcal{I} \\ &  s \ge 0 \end{array}  \]

where $s = (s_{1}, \ldots , s_{q})^\mr {T}$ is the vector of slack variables, which are required to be nonnegative. Next, the nonnegativity constraints on the slack variables are eliminated by incorporating them into the objective function by means of a logarithmic barrier term. This gives rise to the equality-constrained NLP problem

\[  \begin{array}{ll} \displaystyle \mathop \textrm{minimize}&  B(x,s) = f(x) - \mu \sum _{i\in \mathcal{I}} \ln (s_{i}) \\ \textrm{subject\  to}&  g_{i}(x) - s_{i} = 0, i \in \mathcal{I} \end{array}  \]

where $\mu $ is a positive parameter. The nonnegativity constraints on the slack variables are implicitly enforced by the logarithmic functions, since the logarithm prohibits each component of $s$ from taking zero or negative values.
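The barrier effect can be illustrated numerically: as a slack variable approaches zero from above, the term $-\mu \ln (s_{i})$ grows without bound, so a minimizer of the barrier objective keeps every $s_{i}$ strictly positive. A small Python illustration (the values of $\mu $ and $s$ are arbitrary choices, not from the text):

```python
import math

mu = 0.1  # barrier parameter (arbitrary value for illustration)

# The barrier term -mu*ln(s) grows without bound as s -> 0+,
# which is what keeps the slack variables strictly positive.
for s in [1.0, 0.1, 0.01, 1e-6]:
    print(f"s = {s:8.6g}   -mu*ln(s) = {-mu * math.log(s):.6f}")
```

The printed values increase monotonically as $s$ shrinks, which is exactly the behavior that prevents a descent method from driving a slack variable to zero.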

Next, the equality constraints can be absorbed by using a quadratic penalty function to obtain the following:

\[  \begin{array}{ll} \displaystyle \mathop \textrm{minimize}&  \mathcal{M}(x,s) = f(x) + \dfrac {1}{2\mu } \| g(x) -s\| _2^2 - \mu \sum _{i\in \mathcal{I}} \ln (s_{i}) \end{array}  \]

The preceding unconstrained problem is often called the penalty-barrier subproblem. Depending on the size of the parameter $\mu $, a local minimum of the barrier problem provides an approximation to the local minimum of the original NLP problem. The smaller the size of $\mu $, the better the approximation becomes. Infeasible primal-dual interior point algorithms repeatedly solve the penalty-barrier problem for different values of $\mu $ that progressively go to zero, in order to get as close as possible to a local minimum of the original NLP problem.
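As a concrete sketch, the penalty-barrier function $\mathcal{M}(x,s)$ can be written down for a small hypothetical problem: minimize $f(x) = x_{1}^{2} + x_{2}^{2}$ subject to $g_{1}(x) = x_{1} + x_{2} - 1 \ge 0$. The problem and the trial point below are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Hypothetical example problem (an assumption for illustration):
#   minimize f(x) = x1^2 + x2^2   subject to g(x) = x1 + x2 - 1 >= 0
def f(x):
    return x[0]**2 + x[1]**2

def g(x):
    return np.array([x[0] + x[1] - 1.0])

def M(x, s, mu):
    """Penalty-barrier function: f(x) + ||g(x)-s||^2/(2*mu) - mu*sum(ln s_i)."""
    r = g(x) - s
    return f(x) + np.dot(r, r) / (2.0 * mu) - mu * np.sum(np.log(s))

x = np.array([0.6, 0.6])
s = np.array([0.2])          # slack must be strictly positive
for mu in [1.0, 0.1, 0.01]:  # progressively smaller barrier parameters
    print(f"mu = {mu:5g}   M(x, s) = {M(x, s, mu):.6f}")
```

As $\mu $ decreases, the penalty term weights the constraint residual $g(x)-s$ more heavily and the barrier term fades, which is how the subproblem minimizers approach a local minimum of the original NLP problem.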

An unconstrained minimizer of the penalty-barrier problem must satisfy the equations

\[  \begin{array}{ccccc} \nabla f(x) - J(x)^\mr {T} z & =&  0 \\ z -\mu S^{-1} e & =&  0 \end{array}  \]

where $z= -(g(x) - s)/\mu ,$ $J(x)$ is the Jacobian matrix of the vector function $g(x)$, $S$ is the diagonal matrix whose elements are the elements of the vector $s$ (that is, $S = \mbox{diag}\{  s_{1}, \ldots , s_{q} \} $), and $e$ is a vector of all ones. Multiplying the second equation by $S$ and adding the definition of $z$ as a third equation produces the following equivalent nonlinear system:

\[  F^{\mu }(x,s,z) = \left(\begin{array}{c} \nabla f(x) - J(x)^\mr {T} z \\ Sz - \mu e \\ g(x) - s + \mu z \end{array}\right) = 0  \]
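The residual $F^{\mu }(x,s,z)$ can be evaluated directly. The following Python sketch uses a small hypothetical problem (an assumption for illustration, not from the text): minimize $f(x) = x_{1}^{2} + x_{2}^{2}$ subject to $g_{1}(x) = x_{1} + x_{2} - 1 \ge 0$:

```python
import numpy as np

# Hypothetical problem: f(x) = x1^2 + x2^2, g(x) = x1 + x2 - 1 >= 0 (q = 1).
def grad_f(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

def g(x):
    return np.array([x[0] + x[1] - 1.0])

def jac_g(x):
    return np.array([[1.0, 1.0]])   # 1 x 2 Jacobian of g

def F(x, s, z, mu):
    """Residual of the perturbed optimality system F^mu(x, s, z)."""
    r1 = grad_f(x) - jac_g(x).T @ z   # stationarity
    r2 = s * z - mu                   # perturbed complementarity  S z - mu e
    r3 = g(x) - s + mu * z            # relaxed primal feasibility
    return np.concatenate([r1, r2, r3])

# At mu = 0 the system reduces to the KKT conditions of the slack-augmented
# problem; for this example the KKT point is x = (0.5, 0.5), s = 0, z = 1.
print(F(np.array([0.5, 0.5]), np.array([0.0]), np.array([1.0]), 0.0))
```

The residual vanishes at that point, consistent with the observation that setting $\mu = 0$ recovers the optimality conditions of the original problem.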

Note that if $\mu = 0$, the preceding conditions represent the optimality conditions of the original optimization problem, after slack variables are added. One of the main aims of the algorithm is to gradually reduce the value of $\mu $ to zero, so that it converges to a local optimum of the original NLP problem. The rate at which $\mu $ approaches zero affects the overall efficiency of the algorithm. Algorithms that treat $z$ as an additional variable are considered primal-dual, whereas those that enforce the definition $z=-(g(x)-s)/\mu $ at each iteration are considered purely primal approaches.

At iteration k, the infeasible primal-dual interior point algorithm approximately solves the preceding system by using Newton’s method. The Newton system is

\[  \left[\begin{array}{ccc} H_{\mathcal{L}}(x^{k}, z^{k}) &  0 &  -J(x^{k})^\mr {T} \\ 0 &  Z^{k} &  S^{k} \\ J(x^{k}) &  -I &  \mu I \end{array}\right] \left[\begin{array}{c} \Delta x^{k} \\ \Delta s^{k} \\ \Delta z^{k} \end{array}\right] = - \left[\begin{array}{c} \nabla _{x} f(x^{k}) - J(x^{k})^\mr {T} z^{k} \\ -\mu e + S^{k} z^{k} \\ g(x^{k}) - s^{k} + \mu z^{k} \end{array}\right]  \]

where $H_{\mathcal{L}}$ is the Hessian matrix of the Lagrangian function $\mathcal{L}(x,z) = f(x) - z^\mr {T}g(x)$ of the original NLP problem; that is,

\[  H_{\mathcal{L}} (x,z) = \nabla ^{2} f(x) - \sum _{i\in \mathcal{I}} z_{i} \nabla ^{2} g_{i}(x)  \]
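The Newton system can be assembled and solved directly for a small hypothetical problem (an illustrative assumption, not from the text): minimize $f(x) = x_{1}^{2} + x_{2}^{2}$ subject to $g_{1}(x) = x_{1} + x_{2} - 1 \ge 0$. Because $g$ is linear here, $H_{\mathcal{L}}$ reduces to $\nabla ^{2} f$. This is a sketch, not the solver's implementation:

```python
import numpy as np

# One Newton step on F^mu for the hypothetical problem
#   f(x) = x1^2 + x2^2,  g(x) = x1 + x2 - 1 >= 0   (n = 2 variables, q = 1).
def newton_step(x, s, z, mu):
    grad_f = np.array([2.0 * x[0], 2.0 * x[1]])
    J = np.array([[1.0, 1.0]])      # Jacobian of g
    H = np.diag([2.0, 2.0])         # Hessian of the Lagrangian
                                    # (g is linear, so only f contributes)
    S, Z = np.diag(s), np.diag(z)
    n, q = x.size, s.size
    # Block matrix [[H, 0, -J^T], [0, Z, S], [J, -I, mu*I]]
    K = np.block([
        [H,                np.zeros((n, q)), -J.T],
        [np.zeros((q, n)), Z,                S],
        [J,                -np.eye(q),       mu * np.eye(q)],
    ])
    rhs = -np.concatenate([
        grad_f - J.T @ z,                       # stationarity residual
        S @ z - mu * np.ones(q),                # complementarity residual
        np.array([x[0] + x[1] - 1.0]) - s + mu * z,
    ])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:n + q], d[n + q:]
```

Because the first and third block rows of this example are linear in $(x,s,z)$, a full Newton step eliminates those residuals exactly, and only the bilinear complementarity term leaves a (much smaller) residual behind.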

The solution $(\Delta x^{k}, \Delta s^{k}, \Delta z^{k})$ of the Newton system provides a direction to move from the current iteration $(x^{k},s^{k},z^{k})$ to the next,

\[  (x^{k+1},s^{k+1},z^{k+1}) = (x^{k},s^{k},z^{k}) + \alpha (\Delta x^{k}, \Delta s^{k}, \Delta z^{k})  \]

where $\alpha $ is the step length along the Newton direction. The step length is determined through a line-search procedure that ensures sufficient decrease of a merit function based on the augmented Lagrangian function of the barrier problem. The role of the merit function and the line-search procedure is to ensure that the objective and the infeasibility decrease sufficiently at every iteration and that the iterates approach a local minimum of the original NLP problem.
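One common way to implement such a line search is backtracking combined with a fraction-to-boundary rule that keeps the slack variables strictly positive. The sketch below is a simplification: it accepts a generic merit function from the caller and uses a crude sufficient-decrease test, whereas the solver's actual merit function is based on the augmented Lagrangian of the barrier problem:

```python
import numpy as np

def line_search(merit, x, s, dx, ds, tau=0.995, beta=0.5, c=1e-4, max_iter=30):
    """Backtracking line-search sketch.

    Starts from the largest step that keeps the slacks strictly positive
    (fraction-to-boundary rule with safety factor tau), then halves the step
    until the merit function decreases sufficiently.  The decrease test below
    is a crude stand-in for a proper Armijo condition with a directional
    derivative; it is kept simple for illustration.
    """
    # Largest alpha in (0, 1] such that s + alpha*ds >= (1 - tau)*s > 0
    alpha = 1.0
    for si, dsi in zip(s, ds):
        if dsi < 0:
            alpha = min(alpha, -tau * si / dsi)
    m0 = merit(x, s)
    for _ in range(max_iter):
        if merit(x + alpha * dx, s + alpha * ds) <= m0 - c * alpha:
            return alpha
        alpha *= beta   # backtrack
    return alpha
```

With a descent direction, the fraction-to-boundary rule caps the step so no slack crosses zero, and the backtracking loop then shrinks $\alpha $ until the merit value drops.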