If the SINGLE option is not used, PROC MODEL computes values that simultaneously satisfy the model equations for the variables named in the SOLVE statement. PROC MODEL provides three iterative methods, Newton, Jacobi, and Seidel, for computing a simultaneous solution of the system of nonlinear equations.
For normalized form equation systems, the solution either can simultaneously satisfy all the equations or can be computed for each equation separately, by using the actual values of the solution variables in the current period to compute each predicted value. By default, PROC MODEL computes a simultaneous solution. The SINGLE option in the SOLVE statement selects single-equation solutions.
Single-equation simulations are often used to produce residuals (which estimate the random terms of the stochastic equations) rather than the predicted values themselves. If the input data and range are the same as those used for parameter estimation, a static single-equation simulation reproduces the residuals of the estimation.
The NEWTON option in the SOLVE statement requests Newton's method to simultaneously solve the equations for each observation. Newton's method is the default solution method. Newton's method is an iterative scheme that uses the derivatives of the equations with respect to the solution variables, J, to compute the change vector Δy as

Δy = J^(-1) q(y^i, x, θ)

PROC MODEL builds and solves J by using efficient sparse matrix techniques. The solution variables y at the ith iteration are then updated as

y^(i+1) = y^i − d Δy

where d is a damping factor between 0 and 1 chosen iteratively so that

||q(y^(i+1), x, θ)|| < ||q(y^i, x, θ)||
The number of subiterations that are allowed for finding a suitable d is controlled by the MAXSUBITER= option. The number of iterations of Newton's method that are allowed for each observation is controlled by the MAXITER= option. See Ortega and Rheinboldt (1970) for more details.
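As an illustration of the damped Newton scheme described above, the following Python sketch solves a small hypothetical two-equation system q(y) = 0. The system, starting values, and iteration limits are invented for this example; PROC MODEL itself builds J with sparse matrix techniques rather than the explicit 2x2 solve shown here.

```python
import math

def q(y):
    # Residuals of a hypothetical two-equation system; a solution makes both zero
    y1, y2 = y
    return [y1**2 + y2 - 3.0, y1 + y2**2 - 5.0]

def jac(y):
    # Analytic Jacobian J = dq/dy
    y1, y2 = y
    return [[2.0 * y1, 1.0], [1.0, 2.0 * y2]]

def norm(r):
    return math.sqrt(sum(e * e for e in r))

def newton_solve(y, maxiter=50, maxsubiter=10, tol=1e-10):
    for _ in range(maxiter):
        r = q(y)
        if norm(r) < tol:
            break
        (a, b), (c, d2) = jac(y)
        det = a * d2 - b * c
        # Change vector: dy = J^(-1) q(y), written out for the 2x2 case
        dy = [(d2 * r[0] - b * r[1]) / det, (a * r[1] - c * r[0]) / det]
        # Damping subiterations: halve d until the residual norm decreases
        d = 1.0
        for _ in range(maxsubiter):
            trial = [y[0] - d * dy[0], y[1] - d * dy[1]]
            if norm(q(trial)) < norm(r):
                break
            d /= 2.0
        y = trial
    return y

y = newton_solve([2.0, 1.0])
```

The damping loop plays the role of the MAXSUBITER= subiterations: a full Newton step is accepted only if it reduces the residual norm; otherwise the step is repeatedly halved.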
The OPTIMIZE option in the SOLVE statement requests that an optimization algorithm be used to minimize a norm of the equation errors subject to constraints on the solution variables. The OPTIMIZE method is the only solution method that supports constraints on solution variables that are specified by using the BOUNDS and RESTRICT statements; the other solution methods ignore such constraints. The OPTIMIZE method performs the following optimization:

minimize_y   ||ε||
subject to   ε = q(y, x, θ)
             l ≤ y ≤ u
             g(y) ≥ 0

where l and u are the bounds imposed by the BOUNDS statement and g(y) represents the constraints imposed by the RESTRICT statement.
The norm used in the minimization process is

||ε|| = ε′ S^(-1) ε

where the matrix S is the covariance of equation errors that is specified by the SDATA= option in the SOLVE statement. If no SDATA= option is specified, the identity matrix is used. Both strict inequality and inequality constraints on the solution variables can be imposed by using the BOUNDS or RESTRICT statement. For bounded problems, each lower and upper strict inequality is transformed into an inequality by using the equations

y > l  becomes  y ≥ l + δ(1 + |l|)

y < u  becomes  y ≤ u − δ(1 + |u|)
When strict inequality expressions are imposed by using the RESTRICT statement, these expressions are transformed into an inequality by using the equation

h(y) ≥ δ

where h(y) > 0 is a nonlinear strict inequality constraint. The tolerance δ is controlled by the EPSILON= option in the SOLVE statement. To achieve the best performance from the minimization algorithm, both the first and second analytic derivatives of the equation errors with respect to the solution variables are used to compute the gradient and second derivatives of the objective function, O(y) = ε′ S^(-1) ε. Analytic derivatives of the restriction expressions that are used to specify constraints are also used in the minimization. The gradient of the objective function is

∇O(y) = 2 J′ S^(-1) ε

The matrix of second derivatives of the objective function with respect to the solution variables is

∇²O(y) = 2 J′ S^(-1) J + 2 Σ_{k=1}^{g} (S^(-1) ε)_k ∇²ε_k

where g is the number of equations.
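The objective and gradient expressions above can be checked on a small example. The following Python sketch uses invented two-equation residuals and takes S = I (as when no SDATA= option is given); it computes O(y) = ε′S⁻¹ε and the gradient 2J′S⁻¹ε, then verifies the analytic gradient against central finite differences.

```python
def q(y):
    # Hypothetical two-equation residuals eps = q(y)
    y1, y2 = y
    return [y1**2 + y2 - 3.0, y1 + y2**2 - 5.0]

def jac(y):
    # Analytic Jacobian J = d eps / d y
    y1, y2 = y
    return [[2.0 * y1, 1.0], [1.0, 2.0 * y2]]

def objective(y):
    # O(y) = eps' S^(-1) eps with S = I
    return sum(e * e for e in q(y))

def gradient(y):
    # grad O(y) = 2 J' S^(-1) eps with S = I
    r = q(y)
    J = jac(y)
    return [2.0 * (J[0][0] * r[0] + J[1][0] * r[1]),
            2.0 * (J[0][1] * r[0] + J[1][1] * r[1])]

# Central finite-difference check of the analytic gradient
y0 = [1.5, 0.5]
h = 1e-6
fd = [(objective([y0[0] + h, y0[1]]) - objective([y0[0] - h, y0[1]])) / (2 * h),
      (objective([y0[0], y0[1] + h]) - objective([y0[0], y0[1] - h])) / (2 * h)]
g = gradient(y0)
```

The finite-difference and analytic gradients agree to within the discretization error, which is the kind of consistency the minimization algorithm relies on.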
The algorithm that is used to find a minimum of O(y) subject to bounds on the solution variables employs the interior point technique for nonlinear optimization problems. For further information about this optimization method, see Chapter 8: The Nonlinear Programming Solver in SAS/OR 12.1 User's Guide: Mathematical Programming.
When constraints are active in a solution, the minimum value of the objective function, O(y), is typically greater than 0. The diagnostic quantities that are produced by the OUTOBJVALS and OUTVIOLATIONS options are available to help identify and characterize solutions that have active bounds constraints. The following program contains a boundary constraint that becomes active in steps 6, 8, 10, 12, 13, and 16 of a Monte Carlo simulation:
proc model data=d sdata=s;
   dependent rate stock;
   parms theta 0.2 kappa 0.002 sigma 0.4 sinit 1 vol .1;
   id i;
   bounds rate >= 0;
   rate = zlag(rate) + kappa*(theta - zlag(rate));
   h.rate = sigma**2 * zlag(rate);
   eq.stock = log(stock/sinit) - (rate + vol*vol/2);
   h.stock = vol**2;
   solve / optimize converge=1e-6 seed=1 random=1 out=o
           outobjvals outviolations;
quit;

proc print data=o(where=(_objval_>1e-6));
run;
Figure 19.84 shows how the OUTOBJVALS option can be used to identify simulation steps with an active bounds constraint, and how the OUTVIOLATIONS option can be used to determine that the RATE equation is not satisfied for those steps.
Figure 19.84: Objective Function and Violation Values
Obs    i   _TYPE_    _MODE_     _REP_   _ERRORS_   _OBJVAL_      rate      stock

 51    6   PREDICT   SIMULATE       1          0   .000363415   0.000027   1.03050
 52    6   VIOL      SIMULATE       1          0   .000363415  -0.019073   0.00000
 55    8   PREDICT   SIMULATE       1          0   .000123866   0.000045   1.08828
 56    8   VIOL      SIMULATE       1          0   .000123866  -0.011151   0.00000
 59   10   PREDICT   SIMULATE       1          0   .000330766   0.000028   0.96248
 60   10   VIOL      SIMULATE       1          0   .000330766  -0.018207   0.00000
 63   12   PREDICT   SIMULATE       1          0   .000034095   0.000086   0.85526
 64   12   VIOL      SIMULATE       1          0   .000034095  -0.005895   0.00000
 65   13   PREDICT   SIMULATE       1          0   .000011997   0.000141   1.10514
 66   13   VIOL      SIMULATE       1          0   .000011997  -0.003573   0.00000
 71   16   PREDICT   SIMULATE       1          0   .000118982   0.000046   1.07103
 72   16   VIOL      SIMULATE       1          0   .000118982  -0.010931   0.00000
The JACOBI option in the SOLVE statement selects a matrix-free alternative to Newton's method. This method is the traditional nonlinear Jacobi method found in the literature. The Jacobi method as implemented in PROC MODEL substitutes predicted values for the endogenous variables and iterates until a fixed point is reached. The necessary derivatives are computed only for the diagonal elements of the Jacobian, J.
If the normalized form equation is

y = f(y, x, θ)

the Jacobi iteration has the form

y^(i+1) = f(y^i, x, θ)
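As a sketch of this fixed-point iteration, the following Python snippet applies Jacobi iteration to an invented two-variable normalized form system y = f(y), chosen to be a contraction so that the iteration converges:

```python
def f(y):
    # Hypothetical normalized form system y = f(y); a contraction mapping
    y1, y2 = y
    return [0.5 * y2 + 1.0, 0.25 * y1 + 0.5]

def jacobi(y, maxiter=200, tol=1e-12):
    # All right-hand-side values come from the previous iterate
    for _ in range(maxiter):
        y_new = f(y)
        if max(abs(a - b) for a, b in zip(y_new, y)) < tol:
            return y_new
        y = y_new
    return y

y = jacobi([0.0, 0.0])
```

Because every right-hand-side value is taken from the previous iterate, the result is independent of the order in which the equations are written.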
The Seidel method is an order-dependent alternative to the Jacobi method. You select the Seidel method by specifying the SEIDEL option in the SOLVE statement. The Seidel method is like the Jacobi method, except that in the Seidel method the model is further edited to substitute the predicted values into the solution variables immediately after they are computed. The Seidel method thus differs from the other methods in that the values of the solution variables are not fixed within an iteration. With the other methods, the order of the equations in the model program makes no difference, but the Seidel method might work quite differently when the equations are specified in a different sequence. This fixed-point method is the traditional nonlinear Seidel method found in the literature.
The iteration has the form

y_j^(i+1) = f_j(ŷ^i, x, θ)

where y_j^(i+1) is the jth equation variable at the ith iteration and

ŷ^i = (y_1^(i+1), y_2^(i+1), ..., y_(j−1)^(i+1), y_j^i, y_(j+1)^i, ..., y_g^i)′
If the model is recursive, and if the equations are in recursive order, the Seidel method converges at once. If the model is block-recursive, the Seidel method might converge faster if the equations are grouped by block and the blocks are placed in block-recursive order. The BLOCK option can be used to determine the block-recursive form.
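A corresponding sketch of the Seidel iteration on the same kind of invented system makes the in-place substitution explicit: each newly computed value is used immediately within the current sweep.

```python
def seidel(y, maxiter=200, tol=1e-12):
    # Hypothetical normalized form system: y1 = 0.5*y2 + 1, y2 = 0.25*y1 + 0.5
    y = list(y)
    for _ in range(maxiter):
        old = list(y)
        y[0] = 0.5 * y[1] + 1.0    # uses y2 from the previous sweep
        y[1] = 0.25 * y[0] + 0.5   # uses the freshly updated y1
        if max(abs(a - b) for a, b in zip(y, old)) < tol:
            return y
    return y

y = seidel([0.0, 0.0])
```

On this system the Seidel sweep contracts faster than the Jacobi sweep because the second equation already sees the updated first variable; reversing the equation order would change the iteration path, which is the order dependence described above.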
The Jacobi and Seidel solution methods support general form equations.
There are two cases where derivatives are automatically computed. The first case is for equations that have the solution variable on both the right-hand side and the left-hand side of the equation:

y_j^i = f(y^i)

In this case, the derivative of ERROR.y_j with respect to y_j is computed, and the new approximation is computed as

y_j^(i+1) = y_j^i − ERROR.y_j / (∂ERROR.y_j / ∂y_j)
The second case is a system of equations that contains one or more EQ. equations. In this case, the MODEL procedure assigns a unique solution variable to each equation if such an assignment exists. Use the DETAILS option in the SOLVE statement to print a listing of the assigned variables.
Once the assignment is made, the new approximation is computed as

y_j^(i+1) = y_j^i − EQ.g_j / (∂EQ.g_j / ∂y_j)

where g_j is the equation that is assigned to the solution variable y_j. If g is the number of general form equations, then g derivatives are required.
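Each update above is an ordinary one-dimensional Newton step on the equation error. A minimal Python sketch, using a hypothetical general form equation error EQ.y: log(y) + y − 3 = 0 and its analytic derivative:

```python
import math

def eq(y):
    # Hypothetical general form equation error EQ.y
    return math.log(y) + y - 3.0

def deq(y):
    # Analytic derivative of EQ.y with respect to the assigned variable y
    return 1.0 / y + 1.0

y = 2.0
for _ in range(50):
    # y^(i+1) = y^i - EQ.y / (dEQ.y/dy)
    y = y - eq(y) / deq(y)
```

Only the single derivative of each equation error with respect to its assigned variable is needed, which is why g derivatives suffice for g general form equations.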
The convergence properties of the Jacobi and Seidel solution methods are significantly poorer than those of the default Newton's method.
Newton's method is the default and should work better than the others for most small to medium-sized models. The Seidel method is always faster than the Jacobi method for recursive models that have equations in recursive order. For very large models and some highly nonlinear smaller models, the Jacobi or Seidel method can sometimes be faster. Newton's method uses more memory than the Jacobi or Seidel methods.
Both Newton's method and the Jacobi method are order-invariant, in the sense that the order in which equations are specified in the model program has no effect on the operation of the iterative solution process. In order-invariant methods, the values of the solution variables are fixed for the entire execution of the model program. Assignments to model variables are automatically changed to assignments to corresponding equation variables. Only after the model program has completed execution are the results used to compute the new solution values for the next iteration.
In solving a simultaneous nonlinear dynamic model, you might encounter some of the following problems:
For SOLVE tasks, there can be no missing parameter values. Missing right-hand-side variables result in missing left-hand-side variables for that observation.
A solution might exist but be unstable. An unstable system can cause the Jacobi and Seidel methods to diverge.
A model might have wellbehaved solutions at each observation but be dynamically unstable. The solution might oscillate wildly or grow rapidly with time.
During the solution process, solution variables can take on values that cause computational errors. For example, a solution variable that appears in a LOG function might be positive at the solution but might be given a negative value during one of the iterations. When computational errors occur, missing values are generated and propagated, and the solution process might collapse.
The following items can cause convergence problems:
There are illegal function values (for example, the square root or logarithm of a negative number).
There are local minima in the model equation.
No solution exists.
Multiple solutions exist.
Initial values are too far from the solution.
The CONVERGE= value is too small.
When PROC MODEL fails to find a solution to the system, the current iteration information and the program data vector are printed. The simulation halts if actual values are not available for the simulation to proceed. Consider the following program, which produces the output shown in Figure 19.85:
data test1;
   do t = 1 to 50;
      x1 = sqrt(t);
      y = .;
      output;
   end;
run;

proc model data=test1;
   exogenous x1;
   control a1 -1 b1 -29 c1 -4;
   y = a1 * sqrt(y) + b1 * x1 * x1 + c1 * lag(x1);
   solve y / out=sim forecast dynamic;
run;
Figure 19.85: SOLVE Convergence Problems
Could not reduce norm of residuals in 10 subiterations.

The solution failed because 1 equations are missing or have extreme values for observation 1 at NEWTON iteration 1.

Note: Additional information on the values of the variables at this observation, which may be helpful in determining the cause of the failure of the solution process, is printed below.

Observation  1   Iteration  1   CC  -1.000000
Missing  1

The MODEL Procedure
Dynamic Single-Equation Forecast

Listing of Program Data Vector
_N_: 12          ACTUAL.x1: 1.41421   ACTUAL.y: .
ERROR.y: .       PRED.y: .            a1: -1
b1: -29          c1: -4               x1: 1.41421
y: -0.00109      @PRED.y/@y: .        @ERROR.y/@y: .

Note: Check for missing input data or uninitialized lags.
Note: Simulation aborted.
At the first observation, a solution to the following equation is attempted:

y = −√y − 62
There is no solution to this problem. The iterative solution process got as close as it could to making Y negative while still being able to evaluate the model. This problem can be avoided in this case by altering the equation.
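Taking the first-observation equation as y = −√y − 62 (reconstructed here from the control values a1 = −1, b1 = −29, c1 = −4 and x1 = √2), a quick Python scan illustrates why no solution exists: for every y ≥ 0, the residual −√y − 62 − y is at most −62 and so can never reach zero. The scan range is arbitrary.

```python
import math

def residual(y):
    # Residual of y = -sqrt(y) - 62; a solution would make this zero
    return -math.sqrt(y) - 62.0 - y

# Scan y in [0, 100]; the residual is decreasing in y and peaks at y = 0
worst = max(residual(k / 10.0) for k in range(0, 1001))
```

The maximum residual on the scan is −62 (attained at y = 0), confirming that the equation has no root in the domain where sqrt(y) can be evaluated.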
In other models, the problem of missing values can be avoided by either altering the data set to provide better starting values for the solution variables or by altering the equations.
You should be aware that, in general, a nonlinear system can have any number of solutions and the solution found might not be the one that you want. When multiple solutions exist, the solution that is found is usually determined by the starting values for the iterations. If the value from the input data set for a solution variable is missing, the starting value for it is taken from the solution of the last period (if nonmissing) or else the solution estimate is started at 0.
The iteration output, produced by the ITPRINT option, is useful in determining the cause of a convergence problem. The ITPRINT option forces the printing of the solution approximation and equation errors at each iteration for each observation. A portion of the ITPRINT output from the following statements is shown in Figure 19.86.
proc model data=test1;
   exogenous x1;
   control a1 -1 b1 -29 c1 -4;
   y = a1 * sqrt(abs(y)) + b1 * x1 * x1 + c1 * lag(x1);
   solve y / out=sim forecast dynamic itprint;
run;
For each iteration, the equation with the largest error is listed in parentheses after the Newton convergence criteria measure. From this output you can determine which equation or equations in the system are not converging well.
Figure 19.86: SOLVE, ITPRINT Output
Observation  1   Iteration  0   CC  613961.39   ERROR.y  -62.01010

Predicted Values

          y
  0.0001000

Iteration Errors

          y
  -62.01010

Observation  1   Iteration  1   CC  50.902771   ERROR.y  -61.88684

Predicted Values

          y
  -1.215784

Iteration Errors

          y
  -61.88684

Observation  1   Iteration  2   CC  0.364806   ERROR.y  41.752112

Predicted Values

          y
  -114.4503

Iteration Errors

          y
   41.75211