The Quadratic Programming Solver -- Experimental

Example 12.1: Linear Least-Squares Problem

The linear least-squares problem arises in determining a solution to an over-determined set of linear equations. In practice, such systems arise in data fitting and estimation problems. An over-determined system of linear equations can be defined as

\mathbf{A}\mathbf{x} = \mathbf{b}
where \mathbf{A} \in \mathbb{R}^{m \times n}, \mathbf{x} \in \mathbb{R}^n, \mathbf{b} \in \mathbb{R}^m, and m > n. Since this system usually does not have a solution, we need to be satisfied with some sort of approximate solution. The most widely used approximation is the least-squares solution, which minimizes \|\mathbf{A}\mathbf{x} - \mathbf{b}\|_2^2.
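
Expanding the squared norm shows why this is a quadratic programming problem:

\|\mathbf{A}\mathbf{x} - \mathbf{b}\|_2^2 = \mathbf{x}^T\mathbf{A}^T\mathbf{A}\,\mathbf{x} - 2\,\mathbf{b}^T\mathbf{A}\,\mathbf{x} + \mathbf{b}^T\mathbf{b}

The objective is a quadratic function of \mathbf{x} with positive semidefinite Hessian 2\mathbf{A}^T\mathbf{A}, so the problem is a convex QP of the kind the QP solver handles.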

This problem is called a least-squares problem for the following reason. Let \mathbf{A}, \mathbf{x}, and \mathbf{b} be defined as previously. Let k_i(x) be the ith component of the vector \mathbf{A}\mathbf{x} - \mathbf{b}:

k_i(x) = a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n - b_i, \quad i = 1, 2, \ldots, m
By definition of the Euclidean norm, the objective function can be expressed as follows:
\|\mathbf{A}\mathbf{x} - \mathbf{b}\|_2^2 = \sum \limits_{i = 1}^m k_i(x)^2
Therefore the function we minimize is the sum of squares of m terms k_i(x); hence the term least-squares. The following example is an illustration of the linear least-squares problem; i.e., each of the terms k_i is a linear function of x. Consider the least-squares problem defined by
\mathbf{A} = \begin{bmatrix} 4 & 0 \\ -1 & 1 \\ 3 & 2 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}
This translates to the following set of linear equations:
4x_1 = 1, \quad -x_1 + x_2 = 0, \quad 3x_1 + 2x_2 = 1
Note that this system has no exact solution: the first two equations give x_1 = 0.25 and x_2 = 0.25, but then 3x_1 + 2x_2 = 1.25 \neq 1.
The corresponding least-squares problem is
\text{minimize} \quad (4x_1 - 1)^2 + (-x_1 + x_2)^2 + (3x_1 + 2x_2 - 1)^2
The preceding objective function can be expanded to
\text{minimize} \quad 26x_1^2 + 5x_2^2 + 10x_1x_2 - 14x_1 - 4x_2 + 2
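For reference, the expansion can be verified term by term:

(4x_1 - 1)^2 = 16x_1^2 - 8x_1 + 1
(-x_1 + x_2)^2 = x_1^2 - 2x_1x_2 + x_2^2
(3x_1 + 2x_2 - 1)^2 = 9x_1^2 + 4x_2^2 + 12x_1x_2 - 6x_1 - 4x_2 + 1

and summing the three right-hand sides gives the quadratic above.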
In addition, we impose the following constraint so that the equation 3x_1 + 2x_2 = 1 is satisfied within a tolerance of 0.1:
0.9 \leq 3x_1 + 2x_2 \leq 1.1
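The code below enforces this ranged constraint by introducing a slack variable w; the two forms are equivalent:

0.9 \leq 3x_1 + 2x_2 \leq 1.1 \quad \Longleftrightarrow \quad 3x_1 + 2x_2 - w = 0.9, \quad 0 \leq w \leq 0.2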

You can use the following SAS code to solve the least-squares problem:

    /* example 1: linear least-squares problem */
    proc optmodel;
       var x1; /* declare free (no explicit bounds) variable x1 */
       var x2; /* declare free (no explicit bounds) variable x2 */
       /* declare slack variable for ranged constraint */
       var w >= 0 <= 0.2;
       
       /* objective function: minimize the sum of squares */
       minimize f = 26 * x1 * x1 + 5 * x2 * x2 + 10 * x1 * x2
           - 14 * x1 - 4 * x2 + 2;
       
       /* subject to the following constraint */
       con L: 3 * x1 + 2 * x2 - w = 0.9;
       
       solve with qp;
       
       /* print the optimal solution */
       print x1 x2;
    quit;
 

The output is shown in Output 12.1.1.

Output 12.1.1: Summaries and Optimal Solution
The OPTMODEL Procedure

Problem Summary

Objective Sense              Minimization
Objective Function           f
Objective Type               Quadratic

Number of Variables          3
Bounded Above                0
Bounded Below                0
Bounded Below and Above      1
Free                         2
Fixed                        0

Number of Constraints        1
Linear LE (<=)               0
Linear EQ (=)                1
Linear GE (>=)               0
Linear Range                 0



The OPTMODEL Procedure

Solution Summary

Solver                       QP
Objective Function           f
Solution Status              Optimal
Objective Value              0.0095238096
Iterations                   11

Primal Infeasibility         1.449273E-11
Dual Infeasibility           0
Bound Infeasibility          0
Duality Gap                  4.3804698E-7
Complementarity              1.3805642E-6

x1        x2
0.23809   0.1619
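
As a check on this solution (an added verification, not part of the original output): the range constraint is inactive at the optimum, since 3(0.2381) + 2(0.1619) \approx 1.038 lies strictly inside [0.9, 1.1]. The reported solution therefore satisfies the unconstrained normal equations:

\mathbf{A}^T\mathbf{A}\,\mathbf{x} = \mathbf{A}^T\mathbf{b}, \quad \begin{bmatrix} 26 & 5 \\ 5 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 7 \\ 2 \end{bmatrix}

whose exact solution is x_1 = 5/21 \approx 0.2381 and x_2 = 17/105 \approx 0.1619, with objective value 1/105 \approx 0.0095238, matching the solution summary.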



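For larger systems, expanding \|\mathbf{A}\mathbf{x} - \mathbf{b}\|_2^2 by hand is tedious and error-prone. The following variant is a sketch, not part of the original example: it relies on PROC OPTMODEL's array parameters and ranged-constraint syntax, builds the objective directly from the data \mathbf{A} and \mathbf{b}, and writes the range constraint without an explicit slack variable:

    /* a sketch: build the least-squares objective directly from A and b */
    proc optmodel;
       num m = 3;  /* number of equations */
       num n = 2;  /* number of unknowns  */
       /* matrix A and vector b, entered in row-major order */
       num A{1..m, 1..n} = [4 0  -1 1  3 2];
       num b{1..m} = [1 0 1];
       var x{1..n};  /* free variables */
       
       /* minimize the sum of squared residuals */
       minimize f = sum{i in 1..m} (sum{j in 1..n} A[i,j]*x[j] - b[i])^2;
       
       /* the ranged constraint, written directly */
       con L: 0.9 <= 3*x[1] + 2*x[2] <= 1.1;
       
       solve with qp;
       print x;
    quit;

With this formulation, the constraint would be counted under Linear Range in the problem summary rather than as an equality with a slack variable; the optimal x is unchanged.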