Active-set methods differ from interior point methods in that no barrier term is used to ensure that the algorithm remains interior with respect to the inequality constraints. Instead, attempts are made to learn the true active set. For simplicity, use the same initial slack formulation used by the interior point method description,
\[
\begin{array}{ll}
\displaystyle \min_{x,\,s} & f(x) \\
\text{subject to} & c(x) - s = 0 \\
 & s \ge 0
\end{array}
\]
where $s = (s_1, \dots, s_m)^{\mathsf T}$ is the vector of slack variables, which are required to be nonnegative. Begin by absorbing the equality constraints $c(x) - s = 0$ as before into a penalty function, but keep the slack bound constraints explicitly:
\[
\min_{x,\,s} \; \phi_{\mu}(x,s) = f(x) + \frac{1}{2\mu}\,\bigl\| c(x) - s \bigr\|_2^2 \quad \text{subject to } s \ge 0
\]
where $\mu$ is a positive parameter. Given a solution pair $(\bar{x}, \bar{s})$ for the preceding problem, you can define the active-set projection matrix $P$ as follows:
\[
P_{ij} = \begin{cases}
1 & \text{if } i = j \text{ and } \bar{s}_i = 0 \\
0 & \text{otherwise}
\end{cases}
\]
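As a concrete illustration, this diagonal projection matrix can be built directly from the slack vector. The following is a minimal numpy sketch; the function name and the tolerance are assumptions, not part of the source:

```python
import numpy as np

# Hedged sketch: build the active-set projection matrix P from a slack
# vector s_bar, where P[i][j] = 1 if i == j and s_bar[i] = 0, else 0.
# The tolerance `tol` (an assumption) treats numerically tiny slacks as active.
def active_set_projection(s_bar, tol=1e-10):
    return np.diag((np.asarray(s_bar, dtype=float) <= tol).astype(float))

P = active_set_projection([0.0, 2.5, 0.0])
# The first and third slacks are zero, so P = diag(1, 0, 1);
# P @ s zeroes out the inactive (strictly positive) components.
```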
Then $(\bar{x}, \bar{s})$ is also a solution of the equality constraint subproblem:
\[
\min_{x,\,s} \; \phi_{\mu}(x,s) \quad \text{subject to } P s = 0
\]
The minimizer of the preceding subproblem must be a stationary point of the Lagrangian function
\[
\mathcal{L}_{\mu}(x, s, \lambda) = \phi_{\mu}(x,s) - \lambda^{\mathsf T} P s
\]
which gives the optimality equations
\[
\begin{aligned}
\nabla f(x) + \frac{1}{\mu}\, J(x)^{\mathsf T} \bigl( c(x) - s \bigr) &= 0 \\
-\frac{1}{\mu} \bigl( c(x) - s \bigr) - P \lambda &= 0 \\
P s &= 0
\end{aligned}
\]
where $J(x)$ denotes the Jacobian matrix of the constraint function $c(x)$. Using the second equation, you can simplify the preceding equations to get the following optimality conditions for the bound-constrained penalty subproblem:
\[
\begin{aligned}
\nabla f(x) - J(x)^{\mathsf T} P \lambda &= 0 \\
c(x) - s + \mu P \lambda &= 0 \\
P s &= 0
\end{aligned}
\]
Using the third equation directly, you can eliminate the slack variables and reduce the system further to
\[
\begin{aligned}
\nabla f(x) - J(x)^{\mathsf T} P \lambda &= 0 \\
P c(x) + \mu \lambda &= 0
\end{aligned}
\]
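The reduced system can be viewed as a residual function whose root is sought. Here is a minimal numpy sketch; the callables `grad_f`, `c`, and `J` and the function name are assumptions supplied for illustration, not part of the source:

```python
import numpy as np

# Hedged sketch: residual of the reduced optimality system
#   F(x, lam) = [ grad_f(x) - J(x)^T P lam ,  P c(x) + mu lam ]
# The callables grad_f, c, and J are assumed to be supplied by the user.
def reduced_residual(x, lam, grad_f, c, J, P, mu):
    r_stat = grad_f(x) - J(x).T @ (P @ lam)  # stationarity residual
    r_act = P @ c(x) + mu * lam              # active-constraint residual
    return np.concatenate([r_stat, r_act])
```

A zero residual recovers the two conditions above; note that the $\mu \lambda$ term also drives the multipliers of inactive constraints to zero, since the corresponding rows of $P c(x)$ vanish.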
At iteration k, the primal-dual active-set algorithm approximately solves the preceding system by using Newton’s method. The Newton system is
\[
\begin{bmatrix}
H(x^{k}, \lambda^{k}) & -J(x^{k})^{\mathsf T} P \\
P\, J(x^{k}) & \mu I
\end{bmatrix}
\begin{bmatrix}
\Delta x^{k} \\ \Delta \lambda^{k}
\end{bmatrix}
= -
\begin{bmatrix}
\nabla f(x^{k}) - J(x^{k})^{\mathsf T} P \lambda^{k} \\
P c(x^{k}) + \mu \lambda^{k}
\end{bmatrix}
\]
where $J(x)$ denotes the Jacobian matrix of $c(x)$ and $H(x, \lambda)$ denotes the Hessian with respect to $x$ of the Lagrangian function $f(x) - \lambda^{\mathsf T} P c(x)$. The solution of the Newton system provides a direction to move from the current iterate to the next,
\[
\bigl( x^{k+1}, \lambda^{k+1} \bigr) = \bigl( x^{k}, \lambda^{k} \bigr) + \alpha^{k} \bigl( \Delta x^{k}, \Delta \lambda^{k} \bigr)
\]
where $\alpha^{k}$ is the step length along the Newton direction. The corresponding slack variable update $s^{k+1}$ is defined as the solution to the following subproblem, whose solution can be computed analytically:
\[
s^{k+1} = \arg\min_{s \ge 0} \; \bigl\| s - \bigl( c(x^{k+1}) + \mu P \lambda^{k+1} \bigr) \bigr\|_2^2
= \max\bigl\{ 0, \; c(x^{k+1}) + \mu P \lambda^{k+1} \bigr\}
\]
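To make one iteration concrete, the following minimal numpy sketch assembles the Newton system, takes a unit step, and applies the analytic slack update. The toy problem, minimize $(x_1-1)^2 + (x_2+1)^2$ subject to $c(x) = x \ge 0$, is my own illustrative choice rather than an example from the source; for it $J(x) = I$ and the Hessian of the Lagrangian is $2I$:

```python
import numpy as np

# Hedged sketch of one active-set iteration on an assumed toy problem:
#   minimize (x1 - 1)^2 + (x2 + 1)^2  subject to  c(x) = x >= 0
mu = 1e-2                                   # penalty parameter
x = np.array([0.5, 0.0])                    # current iterate
lam = np.zeros(2)                           # current multiplier estimate
P = np.diag((x <= 1e-12).astype(float))     # second constraint is active

grad_f = 2.0 * (x - np.array([1.0, -1.0]))  # gradient of the objective
J = np.eye(2)                               # Jacobian of c(x) = x
H = 2.0 * np.eye(2)                         # Hessian of the Lagrangian

# Assemble and solve the block Newton system
KKT = np.block([[H, -J.T @ P], [P @ J, mu * np.eye(2)]])
rhs = -np.concatenate([grad_f - J.T @ (P @ lam), P @ x + mu * lam])
d = np.linalg.solve(KKT, rhs)

alpha = 1.0                                 # a line search would choose this
x_new = x + alpha * d[:2]
lam_new = lam + alpha * d[2:]

# Analytic slack update: projection onto the nonnegative orthant
s_new = np.maximum(0.0, x_new + mu * (P @ lam_new))
```

For this step the active multiplier comes out as $2/(1 + 2\mu) \approx 1.96$, which tends to the true multiplier $2$ as $\mu \to 0$, and the slack of the active bound remains at zero.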
The step length $\alpha^{k}$ is then determined in a manner similar to the preceding interior point approach. At each iteration, the definition of the active-set projection matrix $P$ is updated with respect to the new value of the constraint function $c(x^{k+1})$. For large-scale NLP, the computational bottleneck typically arises in solving the Newton system. Thus active-set methods can achieve substantial computational savings when the number of active constraints is much smaller than the total number of inequality constraints; however, convergence can be slow if the active-set estimate changes combinatorially from one iteration to the next. Further, the active-set algorithm is often the superior algorithm when only bound constraints are present. In practice, both the interior point and active-set approaches incorporate more sophisticated merit functions than those described in the preceding sections; however, their description is beyond the scope of this document. See Gill and Robinson (2010) for further reading.