General Statistics Examples


Example 9.2 Newton’s Method for Solving Nonlinear Systems of Equations

This example solves a nonlinear system of equations by Newton’s method. Let the nonlinear system be represented by

\[  F(\mb{x}) = 0  \]

where $\mb{x}$ is a vector and $F$ is a vector-valued nonlinear function.

Newton’s method is an iterative technique: starting from an initial estimate $\mb{x}_0$ of the root, it refines the estimate iteratively in an attempt to find a root of $F$. Given an estimate $\mb{x}_n$, the next estimate is given by

\[  \mb{x}_{n+1} = \mb{x}_n - \mb{J}^{-1}(\mb{x}_n) F(\mb{x}_n)  \]

where $\mb{J}(\mb{x})$ is the Jacobian matrix of partial derivatives of $F$ with respect to $\mb{x}$. (For more efficient computations, use the built-in NLPNRA subroutine.)

For optimization problems, the same method is used, where $F(\mb{x})$ is the gradient of the objective function and $\mb{J}(\mb{x})$ becomes the Hessian (Newton-Raphson).
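Written out, the optimization iteration has the same form:

\[  \mb{x}_{n+1} = \mb{x}_n - \mb{H}^{-1}(\mb{x}_n) \, g(\mb{x}_n)  \]

where $g(\mb{x})$ denotes the gradient and $\mb{H}(\mb{x})$ the Hessian of the objective function.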

In this example, the system to be solved is

\begin{eqnarray*}  x_1 + x_2 - x_1 x_2 + 2 &  = &  0 \\ x_1 \exp (-x_2) - 1 &  = &  0  \end{eqnarray*}
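For this system, the entries of the Jacobian required by Newton’s method follow directly by partial differentiation; they are the formulas coded in the DERIV module below:

\[  \mb{J}(\mb{x}) = \begin{pmatrix} \partial f_1/\partial x_1 & \partial f_1/\partial x_2 \\ \partial f_2/\partial x_1 & \partial f_2/\partial x_2 \end{pmatrix} = \begin{pmatrix} 1 - x_2 & 1 - x_1 \\ \exp(-x_2) & -x_1 \exp(-x_2) \end{pmatrix}  \]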

The following statements are organized into three modules, NEWTONMETHOD, FUN, and DERIV:

/**********************************************************/
/*     Newton's Method to Solve a Nonlinear Function      */
/* The user must supply initial values for x              */
/* and the FUN and DERIV modules.                         */
/* On entry: FUN evaluates the function f in terms of x;  */
/*           DERIV evaluates the Jacobian J.              */
/* Tuning parameters: CONVERGE, MAXITER.                  */
/* On exit: return x such that FUN(x) is close to 0.      */
/**********************************************************/
proc iml;
/* User-supplied function evaluation */
start Fun(x);
   x1 = x[1];  x2 = x[2];           /* extract components */
   f1 = x1 + x2 - x1*x2 + 2;
   f2 = x1*exp(-x2) - 1;
   return( f1 // f2 );
finish Fun;

/* User-supplied derivatives of the function */
start Deriv(x);
   x1 = x[1]; x2 = x[2];
   df1dx1 = 1 - x2;
   df1dx2 = 1 - x1;
   df2dx1 = exp(-x2);
   df2dx2 = -x1 * exp(-x2);
   J = (df1dx1 || df1dx2 ) //
       (df2dx1 || df2dx2 );
   return( J );                    /* Jacobian matrix */
finish Deriv;

/* Implementation of Newton's method with default arguments */
/* By default, maximum iterations (maxiter) is 25           */
/*             convergence criterion (converge) is 1e-6     */
start NewtonMethod(x0, maxIter=25, converge=1e-6);
   x = x0;
   f = Fun(x);       /* evaluate function at starting values */
   do iter = 1 to maxiter           /* iterate until maxiter */
      while(max(abs(f))>converge);         /* or convergence */
      J = Deriv(x);                  /* evaluate derivatives */
      delta = -solve(J, f);   /* solve for correction vector */
      x = x + delta;                /* the new approximation */
      f = fun(x);                   /* evaluate the function */
   end;
   /* return missing values if the iteration did not converge */
   if max(abs(f)) > converge then
      x = j(nrow(x0), ncol(x0), .);
   return( x );
finish NewtonMethod;

print "Solve the system: X1+X2-X1*X2+2=0, X1*EXP(-X2)-1=0" ,;
x0 = {.1, -2};     /* starting values */
x = NewtonMethod(x0);
f = Fun(x);
print x f;

Output 9.2.1: Newton’s Method: Results

Solve the system: X1+X2-X1*X2+2=0, X1*EXP(-X2)-1=0

        x          f
0.0977731  5.3523E-9
-2.325106  6.1501E-8

The results are shown in Output 9.2.1. Notice that the NEWTONMETHOD module was called with a single argument, which causes it to use the default maximum number of iterations and the default convergence criterion. To change those values, call the module with additional arguments, as follows:

x = NewtonMethod(x0, 15, 0.001);
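As a cross-check outside SAS/IML, the same iteration can be sketched in plain Python. This is a minimal illustration, not part of the SAS example; the names fun, jacobian, and newton are chosen here for clarity, and the 2x2 linear solve is done by Cramer’s rule instead of the SOLVE function:

```python
import math

def fun(x1, x2):
    """Residuals of the system solved in the example."""
    return [x1 + x2 - x1*x2 + 2,
            x1 * math.exp(-x2) - 1]

def jacobian(x1, x2):
    """Partial derivatives, matching the formulas in the DERIV module."""
    return [[1 - x2,          1 - x1],
            [math.exp(-x2),  -x1 * math.exp(-x2)]]

def newton(x0, max_iter=25, converge=1e-6):
    """Newton iteration with the same defaults as the NEWTONMETHOD module."""
    x1, x2 = x0
    for _ in range(max_iter):
        f1, f2 = fun(x1, x2)
        if max(abs(f1), abs(f2)) <= converge:      # converged
            return x1, x2
        (a, b), (c, d) = jacobian(x1, x2)
        det = a*d - b*c
        # Solve J * delta = -f for the 2x2 case by Cramer's rule
        x1 += (-f1*d + f2*b) / det
        x2 += (-f2*a + f1*c) / det
    return None                                    # no convergence

print(newton((0.1, -2.0)))   # close to the SAS result (0.0977731, -2.325106)
```

Starting from the same initial values, the Python sketch reproduces the root reported in Output 9.2.1 to within the convergence tolerance.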