The Nonlinear Programming Solver

Basic Definitions and Notation

The gradient of a function $f: \mathbb{R}^{n} \mapsto \mathbb{R}$ is the vector of all first partial derivatives of $f$ and is denoted by

\[  \nabla f(x) = \left(\frac{\partial f}{\partial x_{1}}, \frac{\partial f}{\partial x_{2}}, \ldots , \frac{\partial f}{\partial x_{n}} \right)^{\mathrm{T}}  \]

where the superscript T denotes the transpose of a vector.
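As a concrete illustration of this definition, the following sketch approximates the gradient by central finite differences. This is not the solver's own derivative machinery; the function names and the sample objective are purely illustrative.

```python
import numpy as np

def gradient(f, x, h=1e-6):
    """Approximate the gradient of f at x by central finite differences:
    the i-th entry is (f(x + h*e_i) - f(x - h*e_i)) / (2h)."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Example: f(x) = x1^2 + 3*x1*x2 has gradient (2*x1 + 3*x2, 3*x1)
f = lambda x: x[0]**2 + 3 * x[0] * x[1]
print(gradient(f, [1.0, 2.0]))  # approximately [8. 3.]
```

At $x = (1, 2)^{\mathrm{T}}$ the exact gradient is $(2 \cdot 1 + 3 \cdot 2,\ 3 \cdot 1)^{\mathrm{T}} = (8, 3)^{\mathrm{T}}$, which the finite-difference approximation matches to several digits.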

The Hessian matrix of f, denoted by $\nabla ^{2} f(x)$, or simply by $H(x)$, is an $n \times n$ symmetric matrix whose $(i, j)$ element is the second partial derivative of $f(x)$ with respect to $x_{i}$ and $x_{j}$. That is, $H_{i,j}(x) = \frac{\partial ^2 f(x)}{\partial x_{i} \partial x_{j}}$.
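The Hessian definition above can likewise be sketched with central differences, each entry $H_{i,j}$ approximated by a four-point second-difference formula. Again this is only an illustrative approximation, not the solver's internal method.

```python
import numpy as np

def hessian(f, x, h=1e-4):
    """Approximate the n x n Hessian of f at x; entry (i, j) uses the
    central second difference of f along directions e_i and e_j."""
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

# Example: f(x) = x1^2 + 3*x1*x2 has the constant Hessian [[2, 3], [3, 0]]
f = lambda x: x[0]**2 + 3 * x[0] * x[1]
print(hessian(f, [1.0, 2.0]))  # approximately [[2. 3.] [3. 0.]]
```

Note that the result is symmetric, as the definition requires: $H_{1,2} = H_{2,1} = 3$ because the mixed partials of a twice continuously differentiable function agree.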

Consider the vector function $c: \mathbb{R}^{n} \mapsto \mathbb{R}^{p+q}$ whose first p elements are the equality constraint functions $h_{i}(x), i=1,2,\ldots ,p$, and whose last q elements are the inequality constraint functions $g_{i}(x), i=1,2,\ldots ,q$. That is,

\[  c(x) = (h(x) : g(x))^{\mathrm{T}} = (h_{1}(x), \ldots , h_{p}(x) : g_{1}(x), \ldots , g_{q}(x))^{\mathrm{T}}  \]

The $(p+q) \times n$ matrix whose ith row is the gradient of the ith element of $c(x)$ is called the Jacobian matrix of $c(x)$ (or simply the Jacobian of the NLP problem) and is denoted by $J(x)$. Let $J_{h}(x)$ denote the $p \times n$ Jacobian matrix of the equality constraints, and let $J_{g}(x)$ denote the $q \times n$ Jacobian matrix of the inequality constraints. It is easy to see that

\[  J(x) = \begin{pmatrix} J_{h}(x) \\ J_{g}(x) \end{pmatrix}  \]
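The stacked structure of the Jacobian can be sketched numerically as follows: each row is the finite-difference gradient of one constraint function, with the equality rows $J_h$ sitting above the inequality rows $J_g$. The constraint functions here are hypothetical examples, chosen only to make the rows easy to verify.

```python
import numpy as np

def jacobian(c, x, h=1e-6):
    """Approximate the Jacobian of a vector function c at x by central
    finite differences; row i is the gradient of the i-th element of c."""
    x = np.asarray(x, dtype=float)
    m = np.asarray(c(x)).size
    J = np.zeros((m, x.size))
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        J[:, j] = (np.asarray(c(x + e)) - np.asarray(c(x - e))) / (2 * h)
    return J

# One equality and one inequality constraint (p = q = 1), for illustration:
h1 = lambda x: x[0] + x[1] - 1.0        # gradient (1, 1)
g1 = lambda x: x[0]**2 + x[1]**2 - 4.0  # gradient (2*x1, 2*x2)
c = lambda x: np.array([h1(x), g1(x)])  # c(x) stacks h(x) over g(x)

J = jacobian(c, np.array([1.0, 2.0]))
print(J)  # row 0 is J_h, row 1 is J_g; approximately [[1. 1.] [2. 4.]]
```

At $x = (1, 2)^{\mathrm{T}}$, the first row reproduces $\nabla h_{1}(x)^{\mathrm{T}} = (1, 1)$ and the second reproduces $\nabla g_{1}(x)^{\mathrm{T}} = (2, 4)$, matching the stacked form of $J(x)$.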