The MODEL Procedure

Numerical Solution Methods

If the SINGLE option is not used, PROC MODEL computes values that simultaneously satisfy the model equations for the variables named in the SOLVE statement. PROC MODEL provides three iterative methods, Newton, Jacobi, and Seidel, for computing a simultaneous solution of the system of nonlinear equations.

For normalized form equation systems, the solution either can simultaneously satisfy all the equations or can be computed for each equation separately, by using the actual values of the solution variables in the current period to compute each predicted value. By default, PROC MODEL computes a simultaneous solution. The SINGLE option in the SOLVE statement selects single-equation solutions.

Single-equation simulations are often made to produce residuals (which estimate the random terms of the stochastic equations) rather than the predicted values themselves. If the input data and range are the same as those used for parameter estimation, a static single-equation simulation reproduces the residuals of the estimation.

The NEWTON option in the SOLVE statement requests Newton’s method to simultaneously solve the equations for each observation. Newton’s method is the default solution method. Newton’s method is an iterative scheme that uses the derivatives of the equation errors $\mathbf{q}$ with respect to the solution variables, $\mathbf{J} = \partial \mathbf{q} / \partial \mathbf{y}$, to compute a change vector $\Delta \mathbf{y}$ as

$$\Delta \mathbf{y} = -\,\mathbf{J}^{-1} \mathbf{q}(\mathbf{y}^{i}, \mathbf{x}, \boldsymbol{\theta})$$

PROC MODEL builds and solves $\mathbf{J}$ by using efficient sparse matrix techniques. The solution variables $\mathbf{y}^{i}$ at the *i*th iteration are then updated as

$$\mathbf{y}^{i+1} = \mathbf{y}^{i} + d \cdot \Delta \mathbf{y}$$

where *d* is a damping factor between 0 and 1 chosen iteratively so that

$$\left\| \mathbf{q}(\mathbf{y}^{i+1}, \mathbf{x}, \boldsymbol{\theta}) \right\| < \left\| \mathbf{q}(\mathbf{y}^{i}, \mathbf{x}, \boldsymbol{\theta}) \right\|$$

The number of subiterations allowed for finding a suitable *d* is controlled by the MAXSUBITER= option. The number of iterations of Newton’s method allowed for each observation is controlled by the MAXITER= option. See Ortega and Rheinboldt (1970) for more details.
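As a conceptual sketch (not the procedure’s sparse-matrix implementation), the damped Newton scheme described above can be written as follows; the two-equation example system and all function names are invented for illustration:

```python
import numpy as np

def newton_solve(q, jac, y0, max_iter=40, max_subiter=10, tol=1e-8):
    """Damped Newton iteration: y <- y - d * J^{-1} q(y), where the
    damping factor d is halved until the residual norm decreases."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        r = q(y)
        if np.linalg.norm(r) < tol:
            return y
        step = np.linalg.solve(jac(y), r)   # J^{-1} q, dense stand-in
        d = 1.0
        for _ in range(max_subiter):        # analogous to MAXSUBITER=
            y_new = y - d * step
            if np.linalg.norm(q(y_new)) < np.linalg.norm(r):
                break
            d /= 2.0                        # damp and retry
        y = y_new
    return y

# Invented normalized-form system: y1 = 0.5*y2 + 1, y2 = exp(-y1).
# Residuals q(y) = f(y) - y and their Jacobian:
q = lambda y: np.array([0.5 * y[1] + 1.0 - y[0], np.exp(-y[0]) - y[1]])
jac = lambda y: np.array([[-1.0, 0.5], [-np.exp(-y[0]), -1.0]])

sol = newton_solve(q, jac, [0.0, 0.0])
```

The inner loop over the damping factor mirrors the subiteration search for a suitable *d*; a full Newton step is always tried first.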

The JACOBI option in the SOLVE statement selects a matrix-free alternative to Newton’s method. This method is the traditional nonlinear Jacobi method found in the literature. The Jacobi method as implemented in PROC MODEL substitutes predicted values for the endogenous variables and iterates until a fixed point is reached. The necessary derivatives are computed only for the diagonal elements of the Jacobian, **J**.

If the normalized form equation is

$$\mathbf{y} = \mathbf{f}(\mathbf{y}, \mathbf{x}, \boldsymbol{\theta})$$

the Jacobi iteration has the form

$$\mathbf{y}^{i+1} = \mathbf{f}(\mathbf{y}^{i}, \mathbf{x}, \boldsymbol{\theta})$$
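The Jacobi fixed-point iteration can be sketched in the same generic spirit (the two-equation example system is invented and nothing here reflects PROC MODEL internals):

```python
import numpy as np

def jacobi_solve(f, y0, max_iter=200, tol=1e-8):
    """Nonlinear Jacobi iteration: every equation is evaluated at the
    previous iterate before any solution value is updated."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        y_new = f(y)                      # all equations use the old y
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# Invented normalized-form system: y1 = 0.5*y2 + 1, y2 = exp(-y1)
f = lambda y: np.array([0.5 * y[1] + 1.0, np.exp(-y[0])])

sol = jacobi_solve(f, [0.0, 0.0])
```

No derivatives are needed for this normalized-form case; the iteration simply substitutes predicted values until a fixed point is reached.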

The Seidel method is an order-dependent alternative to the Jacobi method. The Seidel method is selected by the SEIDEL option in the SOLVE statement. The Seidel method is like the Jacobi method except that in the Seidel method the model is further edited to substitute the predicted values into the solution variables immediately after they are computed. Seidel thus differs from the other methods in that the values of the solution variables are not fixed within an iteration. With the other methods, the order of the equations in the model program makes no difference, but the Seidel method might work much differently when the equations are specified in a different sequence. Note that this fixed point method is the traditional nonlinear Seidel method found in the literature.

The iteration has the form

$$y_j^{i+1} = f_j(\hat{\mathbf{y}}, \mathbf{x}, \boldsymbol{\theta})$$

where $y_j^{i+1}$ is the *j*th equation variable at the *i*th iteration and

$$\hat{\mathbf{y}} = (y_1^{i+1},\, y_2^{i+1},\, \ldots,\, y_{j-1}^{i+1},\, y_j^{i},\, y_{j+1}^{i},\, \ldots,\, y_g^{i})'$$

If the model is recursive, and if the equations are in recursive order, the Seidel method converges at once. If the model is block-recursive, the Seidel method might converge faster if the equations are grouped by block and the blocks are placed in block-recursive order. The BLOCK option can be used to determine the block-recursive form.
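A minimal sketch of the Seidel scheme, using an invented recursive three-equation system written in recursive order, shows why immediate substitution makes the method order-dependent and why such a model converges at once:

```python
import numpy as np

def seidel_solve(eqs, y0, max_iter=200, tol=1e-8):
    """Nonlinear Seidel iteration: each equation's new value is written
    into y immediately, so later equations in the same pass see it
    (hence the result can depend on equation order)."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        y_old = y.copy()
        for j, f_j in enumerate(eqs):
            y[j] = f_j(y)                 # immediate substitution
        if np.linalg.norm(y - y_old) < tol:
            return y
    return y

# Invented recursive system, listed in recursive order:
#   y1 = 2,  y2 = y1 + 1,  y3 = y1 * y2
eqs = [lambda y: 2.0,
       lambda y: y[0] + 1.0,
       lambda y: y[0] * y[1]]

sol = seidel_solve(eqs, [0.0, 0.0, 0.0])  # exact after a single pass
```

Because each equation sees the freshly computed values of the equations before it, the first pass already produces the exact solution; the Jacobi scheme, by contrast, would need several passes on the same system.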

The Jacobi and Seidel solution methods support general form equations.

There are two cases where derivatives are (automatically) computed. The first case is for equations with the solution variable on the right-hand side and on the left-hand side of the equation:

$$y = f(y, \mathbf{x}, \boldsymbol{\theta})$$

In this case the derivative of ERROR.$y$ with respect to $y$ is computed, and the new approximation is computed as

$$y^{i+1} = y^{i} - \frac{\mathrm{ERROR.}y}{\partial\, \mathrm{ERROR.}y / \partial y}$$

The second case is a system of equations that contains one or more EQ. equations. In this case, a heuristic algorithm is used to make the assignment of a unique solution variable to each general form equation. Use the DETAILS option in the SOLVE statement to print a listing of the assigned variables.

Once the assignment is made, the new approximation is computed as

$$y^{i+1} = y^{i} - \frac{\mathrm{EQ.}y}{\partial\, \mathrm{EQ.}y / \partial y}$$

If $g$ is the number of general form equations, then $g$ derivatives are required.
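The per-equation update above amounts to a one-variable Newton iteration. A minimal sketch, with an assumed example equation $0 = y^2 - 2$ standing in for an EQ. equation and invented function names:

```python
import math

def general_form_update(eq, deq_dy, y, max_iter=50, tol=1e-10):
    """One-variable Newton iteration applied to a single general-form
    equation: y <- y - EQ/(dEQ/dy), where eq(y) is the equation value."""
    for _ in range(max_iter):
        r = eq(y)
        if abs(r) < tol:
            return y
        y = y - r / deq_dy(y)
    return y

# Assumed example general-form equation EQ.y: 0 = y**2 - 2
eq  = lambda y: y**2 - 2.0
deq = lambda y: 2.0 * y

root = general_form_update(eq, deq, 1.0)   # converges toward sqrt(2)
```

Only the single derivative of each assigned equation with respect to its assigned variable is needed, one per general form equation.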

The convergence properties of the Jacobi and Seidel solution methods are significantly poorer than those of the default Newton’s method.

Newton’s method is the default and should work better than the others for most small- to medium-sized models. The Seidel method is always faster than the Jacobi method for recursive models with equations in recursive order. For very large models and some highly nonlinear smaller models, the Jacobi or Seidel methods can sometimes be faster. Newton’s method uses more memory than the Jacobi or Seidel methods.

Both Newton’s method and the Jacobi method are order-invariant in the sense that the order in which equations are specified in the model program has no effect on the operation of the iterative solution process. In order-invariant methods, the values of the solution variables are fixed for the entire execution of the model program. Assignments to model variables are automatically changed to assignments to corresponding equation variables. Only after the model program has completed execution are the results used to compute the new solution values for the next iteration.

In solving a simultaneous nonlinear dynamic model, you might encounter some of the following problems.

For SOLVE tasks, there can be no missing parameter values. Missing right-hand-side variables result in missing left-hand-side variables for that observation.

A solution might exist but be unstable. An unstable system can cause the Jacobi and Seidel methods to diverge.

A model might have well-behaved solutions at each observation but be dynamically unstable. The solution might oscillate wildly or grow rapidly with time.

During the solution process, solution variables can take on values that cause computational errors. For example, a solution variable that appears in a LOG function might be positive at the solution but might be given a negative value during one of the iterations. When computational errors occur, missing values are generated and propagated, and the solution process might collapse.

The following items can cause convergence problems:

- There are illegal function values (for example, $\sqrt{-1}$).
- There are local minima in the model equation.
- No solution exists.
- Multiple solutions exist.
- Initial values are too far from the solution.
- The CONVERGE= value is too small.

When PROC MODEL fails to find a solution to the system, the current iteration information and the program data vector are printed. The simulation halts if actual values are not available for the simulation to proceed. Consider the following program, which produces the output shown in Figure 18.82:

   data test1;
      do t = 1 to 50;
         x1 = sqrt(t);
         y = .;
         output;
      end;
   run;

   proc model data=test1;
      exogenous x1;
      control a1 -1 b1 -29 c1 -4;
      y = a1 * sqrt(y) + b1 * x1 * x1 + c1 * lag(x1);
      solve y / out=sim forecast dynamic;
   run;

ERROR: Could not reduce norm of residuals in 10 subiterations.

ERROR: The solution failed because 1 equations are missing or have extreme values for observation 1 at NEWTON iteration 1.

NOTE: Additional information on the values of the variables at this observation, which may be helpful in determining the cause of the failure of the solution process, is printed below.

Observation 1    Iteration 1    CC -1.000000    Missing 1

At the first observation, a solution to the following equation is attempted:

$$y = -\sqrt{y} - 62.01$$

There is no solution to this problem: the right-hand side is negative wherever $\sqrt{y}$ is defined, so no nonnegative $y$ can satisfy the equation. The iterative solution process got as close as it could to making Y negative while still being able to evaluate the model. This problem can be avoided in this case by altering the equation.

In other models, the problem of missing values can be avoided by either altering the data set to provide better starting values for the solution variables or by altering the equations.

You should be aware that, in general, a nonlinear system can have any number of solutions and the solution found might not be the one that you want. When multiple solutions exist, the solution that is found is usually determined by the starting values for the iterations. If the value from the input data set for a solution variable is missing, the starting value for it is taken from the solution of the last period (if nonmissing) or else the solution estimate is started at 0.
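The dependence of the solution found on the starting values can be illustrated with a scalar equation that has two roots (an invented example, solved with a plain undamped Newton iteration):

```python
def newton_scalar(f, df, y0, n=60):
    """Plain Newton iteration; which root it finds depends on y0."""
    y = y0
    for _ in range(n):
        y = y - f(y) / df(y)
    return y

# Invented equation with two solutions, y = -2 and y = 3:
#   0 = (y + 2)(y - 3) = y**2 - y - 6
f  = lambda y: (y + 2.0) * (y - 3.0)
df = lambda y: 2.0 * y - 1.0

root_a = newton_scalar(f, df, -5.0)   # starting below: finds y = -2
root_b = newton_scalar(f, df,  5.0)   # starting above: finds y =  3
```

Both runs converge, but to different solutions; in the same way, the starting values taken from the input data set (or from the previous period’s solution) determine which solution of a multi-solution system PROC MODEL reports.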

The iteration output, produced by the ITPRINT option, is useful in determining the cause of a convergence problem. The ITPRINT option forces the printing of the solution approximation and equation errors at each iteration for each observation. A portion of the ITPRINT output from the following statements is shown in Figure 18.83.

   proc model data=test1;
      exogenous x1;
      control a1 -1 b1 -29 c1 -4;
      y = a1 * sqrt(abs(y)) + b1 * x1 * x1 + c1 * lag(x1);
      solve y / out=sim forecast dynamic itprint;
   run;

For each iteration, the equation with the largest error is listed in parentheses after the Newton convergence criteria measure. From this output you can determine which equation or equations in the system are not converging well.

Observation 1    Iteration 0    CC 613961.39    ERROR.y -62.01010

Copyright © 2008 by SAS Institute Inc., Cary, NC, USA. All rights reserved.