The NLPC Nonlinear Optimization Solver
The following introductory examples illustrate how to get started using the NLPC solver and also provide basic information about the use of PROC OPTMODEL.
Consider the following example of minimizing the Rosenbrock function (Rosenbrock 1960):

   minimize   f(x) = 100 (x_2 - x_1^2)^2 + (1 - x_1)^2

where x = (x_1, x_2).
The following PROC OPTMODEL statements can be used to solve this problem:
proc optmodel;
   number a = 100;
   var x{1..2};
   min f = a*(x[2] - x[1]^2)^2 + (1 - x[1])^2;
   solve with nlpc / tech=newtyp;
   print x;
quit;
The VAR statement declares the decision variables x[1] and x[2]. The MIN statement identifies the symbol f that defines the objective function in terms of x[1] and x[2]. The TECH=NEWTYP option in the SOLVE statement specifies that the Newton-type method with line search is used to solve this problem. Finally, the PRINT statement displays the solution.
The output that summarizes the problem characteristics and the solution obtained by the
solver is displayed in Figure 10.1. Note that the solution has x[1] = 1 and x[2] = 1,
and an objective value very close to 0.
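If you also want to display the final objective value, you can add another PRINT statement for the objective symbol before the QUIT statement. The extra statement below is an optional addition to the preceding example, not part of the original program:

   print f;   /* displays the objective value at the solution */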
Decision variables often have bound constraints of the form

   l_i <= x_i <= u_i,   i = 1, ..., n

where l_i and u_i are the lower and upper bounds on the variable x_i.
Consider the following bound constrained problem (Hock and Schittkowski 1981, Example 5):
Given a starting point at x = (0, 0), there is a local minimum at x* = (-pi/3 + 1/2, -pi/3 - 1/2), approximately (-0.5472, -1.5472), with an objective value of -sqrt(3)/2 - pi/3, approximately -1.9132. You can use the following PROC OPTMODEL statements to formulate and solve this problem:
proc optmodel;
   set S = 1..2;
   number lb{S} = [-1.5 -3];
   number ub{S} = [4 3];
   number x0{S} = [0 0];
   var x{i in S} >= lb[i] <= ub[i] init x0[i];
   min obj = sin(x[1] + x[2]) + (x[1] - x[2])^2
             - 1.5*x[1] + 2.5*x[2] + 1;
   solve with nlpc / printfreq=1;
   print x;
quit;
The starting point is specified with the keyword INIT in the VAR statement. As usual, the MIN statement identifies the objective function. Since no optimization technique is explicitly specified (with the TECH= option), the NLPC solver uses the trust region method, which is the default algorithm for problems of this size. The PRINTFREQ= option displays the iteration log during the optimization process.
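If you prefer to name the default method explicitly rather than relying on the default, you can specify it in the SOLVE statement. The following is a minimal sketch; it assumes that TRUREG is the TECH= keyword for the trust region method in your release of the NLPC solver:

   /* assumed equivalent to omitting TECH=; the keyword TRUREG */
   /* for the trust region method is an assumption             */
   solve with nlpc / tech=trureg printfreq=1;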
In Figure 10.2, the problem is summarized at the top. Then, the details of the
iterations are displayed. A message is printed to indicate that the default optimality
criteria (ABSOPTTOL=0.001, RELOPTTOL=1.0E-6) were satisfied. (See the section "Optimality Control"
for more information.) A summary of the solution shows that the trust region method was
used for the optimization and that the solution found is optimal. It also shows the optimal
objective value and the number of iterations taken to find the solution. The optimal
solution is displayed at the end.
More general linear equality or inequality constraints have the form

   sum_{j=1}^{n} a_{ij} x_j  {<=, =, >=}  b_i,   i = 1, ..., m

where the a_{ij} are the constraint coefficients and the b_i are the right-hand sides.
Consider, for example, Rosenbrock's post office problem (Schittkowski 1987, p. 74):

   minimize    f(x) = -x_1 x_2 x_3
   subject to  x_1 + 2 x_2 + 2 x_3 >= 0
               72 - x_1 - 2 x_2 - 2 x_3 >= 0
               0 <= x_1 <= 20,  0 <= x_2 <= 11,  0 <= x_3 <= 42
Starting from x = (10, 10, 10), you can reach a minimum at x* = (20, 11, 15), with a corresponding objective value of f(x*) = -3300. You can use the following SAS code to formulate and solve this problem:
proc optmodel;
   number ub{1..3} = [20 11 42];
   var x{i in 1..3} >= 0 <= ub[i] init 10;
   min f = -1*x[1]*x[2]*x[3];
   con c1: x[1] + 2*x[2] + 2*x[3] >= 0;
   con c2: 72 - x[1] - 2*x[2] - 2*x[3] >= 0;
   solve with nlpc / tech=congra printfreq=1;
   print x;
quit;
As usual, the VAR statement specifies the bounds on the variables and the starting point, and the MIN statement identifies the objective function. In addition, the two CON statements describe the linear constraints x[1] + 2*x[2] + 2*x[3] >= 0 and 72 - x[1] - 2*x[2] - 2*x[3] >= 0. To solve this problem, select the conjugate gradient optimization technique by specifying the TECH=CONGRA option. The PRINTFREQ= option displays the iteration log.
In Figure 10.3, you can find a problem summary and the iteration log. The solution
summary and the solution are printed at the bottom.
You can formulate linear constraints in a more compact manner. Consider the following example (Hock and Schittkowski 1981, test example 24):

   minimize    f(x) = ((x_1 - 3)^2 - 9) x_2^3 / (27 sqrt(3))
   subject to  x_1/sqrt(3) - x_2 >= 0
               x_1 + sqrt(3) x_2 >= 0
               -x_1 - sqrt(3) x_2 >= -6
               x_1 >= 0,  x_2 >= 0
The minimum function value is f(x*) = -1 at x* = (3, sqrt(3)), approximately (3, 1.7321). Assume a feasible starting point, x = (1, 0.5).
You can specify this model by using the following PROC OPTMODEL statements:
proc optmodel;
   number a{1..3, 1..2} = [ .57735  -1
                            1        1.732
                           -1       -1.732 ];
   number b{1..3} = [0 0 -6];
   number x0{1..2} = [1 .5];
   var x{i in 1..2} >= 0 init x0[i];
   min f = ((x[1] - 3)^2 - 9) * x[2]^3 / (27*sqrt(3));
   con cc {i in 1..3}:
      sum{j in 1..2} a[i,j]*x[j] >= b[i];
   solve with nlpc / printfreq=1;
   print x;
quit;
Note that instead of writing three individual linear constraints as in Rosenbrock's post office problem, we use a two-dimensional array to represent the coefficient matrix of the linear constraints and a one-dimensional array for the right-hand side. Consequently, all three linear constraints are represented in a single CON statement. This method is especially useful for larger models and for models in which the constraint coefficients are subject to change.
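For example, because the constraint coefficients and right-hand sides are stored in the arrays a and b, you can change a value and solve the modified problem again without restating the constraints. The following is a minimal sketch of such a what-if step; it assumes the statements are placed before the QUIT statement of the preceding example, and the new right-hand side value is purely illustrative:

   /* illustrative change to the third right-hand side */
   b[3] = -5;
   /* re-solve the modified problem with the same constraint declarations */
   solve with nlpc / printfreq=1;
   print x;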
The output showing the problem summary, the iteration log, the solution summary, and the
solution is displayed in Figure 10.4.
General nonlinear equality or inequality constraints have the form

   c_i(x)  {<=, =, >=}  0,   i = 1, ..., m

where each c_i is a smooth nonlinear function of the decision variables.
Consider the following nonlinearly constrained problem (Avriel 1976, p. 456):

   minimize    f(x) = (x_1 - 4)^2 + (x_2 - 4)^2
   subject to  3 x_1^2 + x_2^2 - 2 x_1 x_2 - 4 x_1 <= 12
               3 x_1 + 4 x_2 <= 28

You can use the following PROC OPTMODEL statements to formulate and solve this problem:
proc optmodel;
   num x0{1..2} = [2 0];
   var x{i in 1..2} init x0[i];
   min f = (x[1] - 4)^2 + (x[2] - 4)^2;
   con c1: 3*x[1]^2 + x[2]^2 - 2*x[1]*x[2] - 4*x[1] <= 12;
   con c2: 3*x[1] + 4*x[2] <= 28;
   solve with nlpc / tech=qne printfreq=1;
   print x;
quit;
Note that c1 is a nonlinear constraint and c2 is a linear constraint. Both can be specified by using the CON statement; the PROC OPTMODEL modeling language automatically recognizes the type of each constraint. The TECH=QNE option requests the experimental quasi-Newton method to solve this problem.
A problem summary is shown in Figure 10.5. Figure 10.6 displays the
iteration log. Note that the quasi-Newton method is an infeasible-point algorithm; that is,
the iterates remain infeasible with respect to the nonlinear constraints until the optimal
solution is found. The "Maximum Constraint Violation" column displays this infeasibility.
The solution summary and the solution are shown in Figure 10.7.
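Because the iterates can violate the nonlinear constraint before convergence, you might want to verify how well the constraints are satisfied at the returned solution. One way to do this (an optional addition, not part of the original example) is to print the constraint body values by using the .body suffix before the QUIT statement:

   /* left-hand-side values of c1 and c2 at the solution; */
   /* compare them with the right-hand sides 12 and 28    */
   print c1.body c2.body;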