The Unconstrained Nonlinear Programming Solver

Line-Search Algorithm

The NLPU solver implements an inexact line-search algorithm. Given a search direction d\in\mathbb{R}^n (produced, for example, by a quasi-Newton method), each iteration of the line search selects a step length \alpha that approximates \alpha^\ast, an optimal solution to the following one-dimensional problem:

\alpha^\ast = \arg\;\min_{\alpha\ge 0}\; f(x + \alpha d)

Let us define \phi(\alpha) as

\phi(\alpha)\equiv f(x + \alpha d)

During subsequent line-search iterations, the objective function values \phi(\alpha_{i - 1}), \phi(\alpha_i) and their derivatives \phi'(\alpha_{i - 1}), \phi'(\alpha_i) (where \phi'(\alpha) = \langle f'(x + \alpha d), d\rangle by the chain rule) are used to construct a cubic polynomial interpolant, whose minimizer \alpha_{i + 1} over [\alpha_{i - 1}, \alpha_{i}] gives the step length for the next iteration.
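The documentation does not spell out the interpolation formula. A standard closed form for the minimizer of the cubic Hermite fit through the two trial points can be sketched in Python as follows; the function name cubic_step is illustrative, and the sketch computes the unconstrained minimizer of the interpolant, which a practical line search would then safeguard (for example, by clamping it to the bracketing interval):

```python
import math

def cubic_step(a0, phi0, dphi0, a1, phi1, dphi1):
    """Minimizer of the cubic Hermite interpolant matching
    phi and phi' at the two trial step lengths a0 and a1."""
    d1 = dphi0 + dphi1 - 3.0 * (phi0 - phi1) / (a0 - a1)
    # The radicand is nonnegative whenever the cubic has an interior minimizer.
    d2 = math.copysign(math.sqrt(d1 * d1 - dphi0 * dphi1), a1 - a0)
    return a1 - (a1 - a0) * (dphi1 + d2 - d1) / (dphi1 - dphi0 + 2.0 * d2)

# For a quadratic phi(alpha) = (alpha - 2)^2 the cubic fit is exact,
# so a single interpolation step lands on the true minimizer alpha = 2.
phi = lambda a: (a - 2.0) ** 2
dphi = lambda a: 2.0 * (a - 2.0)
step = cubic_step(0.0, phi(0.0), dphi(0.0), 1.0, phi(1.0), dphi(1.0))
print(step)  # 2.0
```

Because the interpolant matches both function values and both derivatives, it reproduces any quadratic or cubic \phi exactly, which is why the step converges quickly once the bracket is small.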

An early (economical) line-search termination criterion is given by the strong Wolfe conditions:

f(x + \alpha d) \le f(x) + c_1 \alpha \langle f'(x), d\rangle

| \langle f'(x + \alpha d), d\rangle | \le c_2 | \langle f'(x), d\rangle |
where c_1 is the sufficient decrease condition constant, known as Armijo's constant, and c_2 is the strong curvature condition constant, known as Wolfe's constant. If f(\cdot) is bounded below and d is a descent direction at x (that is, \langle f'(x), d\rangle < 0), then there always exists a step length \alpha^\ast that satisfies the strong Wolfe conditions above.
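As a sketch of how these conditions act as a termination test, the following Python snippet checks them for a simple quadratic; the function name strong_wolfe and the default values c1 = 1e-4 and c2 = 0.9 are illustrative choices, not the solver's actual settings:

```python
def strong_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Return True if step length alpha satisfies the strong Wolfe conditions."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    xa = [xi + alpha * di for xi, di in zip(x, d)]
    g0 = dot(grad(x), d)  # directional derivative at alpha = 0 (negative for descent)
    decrease = f(xa) <= f(x) + c1 * alpha * g0
    curvature = abs(dot(grad(xa), d)) <= c2 * abs(g0)
    return decrease and curvature

# f(x) = x1^2 + x2^2 with the steepest-descent direction d = -f'(x)
f = lambda x: sum(xi * xi for xi in x)
grad = lambda x: [2.0 * xi for xi in x]
x = [1.0, 1.0]
d = [-2.0, -2.0]
print(strong_wolfe(f, grad, x, d, 0.5))   # True: exact minimizer along d
print(strong_wolfe(f, grad, x, d, 0.01))  # False: curvature condition fails
```

The tiny step alpha = 0.01 achieves sufficient decrease but is rejected by the curvature condition, which is exactly the role of the second inequality: it rules out steps so short that the directional derivative has barely changed.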
