The Interior Point Nonlinear Programming Solver -- Experimental

Basic Definitions and Notation

The gradient of a function f: \mathbb{R}^n \mapsto \mathbb{R} is the vector of all the first partial derivatives of f and is denoted by

\nabla f(x) = \left( \frac{\partial f}{\partial x_{1}}, \frac{\partial f}{\partial x_{2}}, \dots, \frac{\partial f}{\partial x_{n}} \right)^{\rm T}
where the superscript T denotes the transpose of a vector.
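
For concreteness, the gradient can be approximated numerically by central finite differences. The following minimal Python sketch is illustrative only; the helper name gradient and the sample function f(x) = x_1^2 + 3 x_1 x_2, whose exact gradient is (2x_1 + 3x_2, 3x_1)^T, are assumptions and not part of the solver:

    import numpy as np

    def gradient(f, x, eps=1e-6):
        """Approximate the gradient of f at x by central differences."""
        x = np.asarray(x, dtype=float)
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = eps
            g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
        return g

    # Hypothetical sample function: f(x) = x_1^2 + 3 x_1 x_2
    f = lambda x: x[0]**2 + 3.0 * x[0] * x[1]
    print(gradient(f, [1.0, 2.0]))  # approximately [8. 3.]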

The Hessian matrix of f, denoted by \nabla^2 f(x), or simply by H(x), is an n \times n symmetric matrix whose (i, j) element is the second partial derivative of f(x) with respect to x_{i} and x_{j}. That is, H_{i,j}(x) = \frac{\partial^2 f(x)}{\partial x_{i} \partial x_{j}}.
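
A similar central-difference sketch approximates H(x); again, the helper hessian is a hypothetical illustration rather than the solver's method. For the same sample f as above, the exact Hessian is the constant matrix [[2, 3], [3, 0]]:

    import numpy as np

    def hessian(f, x, eps=1e-5):
        """Approximate the symmetric Hessian H(x) by central differences."""
        x = np.asarray(x, dtype=float)
        n = x.size
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                ei = np.zeros(n); ei[i] = eps
                ej = np.zeros(n); ej[j] = eps
                H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                           - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * eps**2)
        return H

    # Hypothetical sample function: f(x) = x_1^2 + 3 x_1 x_2
    f = lambda x: x[0]**2 + 3.0 * x[0] * x[1]
    print(hessian(f, [1.0, 2.0]))  # approximately [[2. 3.] [3. 0.]]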

Consider the vector function c: \mathbb{R}^n \mapsto \mathbb{R}^{p+q}, whose first p elements are the equality constraint functions h_{i}(x), i = 1, 2, \dots, p, and whose last q elements are the inequality constraint functions g_{i}(x), i = 1, 2, \dots, q. That is,

c(x) = (h(x) : g(x))^{\rm T} = (h_{1}(x), \dots, h_{p}(x) : g_{1}(x), \dots, g_{q}(x))^{\rm T}

The n \times (p+q) matrix whose ith column is the gradient of the ith element of c(x) is called the Jacobian matrix of c(x) (or simply the Jacobian of the NLP problem) and is denoted by J(x). We can also use J_{h}(x) to denote the n \times p Jacobian matrix of the equality constraints and J_{g}(x) to denote the n \times q Jacobian matrix of the inequality constraints. It is easy to see that J(x) = (J_{h}(x) : J_{g}(x)).
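
Following this column convention, J(x) can be assembled by stacking the constraint gradients as columns. The sketch below is again illustrative only; the helper jacobian and the sample constraints h_1 and g_1 are assumptions, not taken from the solver:

    import numpy as np

    def jacobian(c_funcs, x, eps=1e-6):
        """Build the n x (p+q) Jacobian whose ith column is the
        central-difference gradient of the ith function in c_funcs."""
        x = np.asarray(x, dtype=float)
        cols = []
        for ci in c_funcs:
            g = np.zeros_like(x)
            for k in range(x.size):
                e = np.zeros_like(x)
                e[k] = eps
                g[k] = (ci(x + e) - ci(x - e)) / (2.0 * eps)
            cols.append(g)
        return np.column_stack(cols)

    # Hypothetical constraints: one equality h_1 and one inequality g_1
    h1 = lambda x: x[0] + x[1] - 1.0        # gradient (1, 1)
    g1 = lambda x: x[0]**2 + x[1]**2 - 4.0  # gradient (2 x_1, 2 x_2)
    J = jacobian([h1, g1], [1.0, 2.0])      # columns: J_h(x), then J_g(x)
    print(J)  # approximately [[1. 2.] [1. 4.]]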
