The linear least squares problem arises in the context of determining a solution to an overdetermined set of linear equations. In practice, these equations could arise in data fitting and estimation problems. An overdetermined system of linear equations can be defined as
![\[ \mathbf{Ax} = \mathbf{b} \]](images/ormpug_qpsolver0065.png)
where $\mathbf{A} \in \mathbb{R}^{m \times n}$, $\mathbf{x} \in \mathbb{R}^{n}$, $\mathbf{b} \in \mathbb{R}^{m}$, and $m > n$. Since this system usually does not have a solution, you need to be satisfied with some sort of approximate solution. The most widely used approximation is the least squares solution, which minimizes $\|\mathbf{Ax} - \mathbf{b}\|_2^2$.
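A minimal sketch of how such a least squares solution can be computed via the normal equations $\mathbf{A}^{\mathsf{T}}\mathbf{A}\mathbf{x} = \mathbf{A}^{\mathsf{T}}\mathbf{b}$ (in Python rather than SAS, with illustrative data not taken from the text):

```python
# Minimal sketch: least squares solution of an overdetermined system Ax = b
# via the normal equations A^T A x = A^T b (pure Python, two unknowns).
# The data below are illustrative, not from the text.

def lstsq_2col(A, b):
    """Minimize ||Ax - b||_2^2 for an m x 2 matrix A, by Cramer's rule."""
    # Form A^T A (2x2) and A^T b (2-vector).
    ata = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
    atb = [sum(r[i] * bi for r, bi in zip(A, b)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x1 = (atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det
    x2 = (ata[0][0] * atb[1] - atb[0] * ata[1][0]) / det
    return [x1, x2]

A = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]  # 3 equations, 2 unknowns
b = [1.0, 2.0, 2.0]
x = lstsq_2col(A, b)
# Sum of squared residuals at the minimizer.
ssq = sum((sum(aij * xj for aij, xj in zip(row, x)) - bi) ** 2
          for row, bi in zip(A, b))
print(x, ssq)
```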
This problem is called a least squares problem for the following reason. Let $\mathbf{A}$, $\mathbf{x}$, and $\mathbf{b}$ be defined as previously. Let $k_i(\mathbf{x})$ be the $i$th component of the vector $\mathbf{Ax} - \mathbf{b}$:
![\[ k_ i(x) = a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_ n - b_ i,\; i = 1,2,\ldots , m \]](images/ormpug_qpsolver0073.png)
By definition of the Euclidean norm, the objective function can be expressed as follows:
![\[ \| \mathbf{Ax} - \mathbf{b}\| _2^2 = \sum \limits _{i = 1}^ m k_ i(x)^2 \]](images/ormpug_qpsolver0074.png)
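This identity can be checked numerically. A small sketch in Python (using the example matrix and right-hand side introduced later in this section, at an arbitrary test point):

```python
# Check that ||Ax - b||_2^2 equals the sum of the squared components k_i(x).
A = [[4.0, 0.0], [-1.0, 1.0], [3.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = [0.3, 0.1]  # arbitrary test point

def k(i, x):
    """k_i(x): the ith component of Ax - b."""
    return sum(A[i][j] * x[j] for j in range(2)) - b[i]

# Squared Euclidean norm of the residual vector Ax - b.
norm_sq = sum((sum(A[i][j] * x[j] for j in range(2)) - b[i]) ** 2
              for i in range(3))
# Sum of squares of the m = 3 terms k_i(x).
sum_sq = sum(k(i, x) ** 2 for i in range(3))
print(norm_sq, sum_sq)
```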
Therefore, the function you minimize is the sum of squares of the $m$ terms $k_i(\mathbf{x})$; hence the term least squares. The following example is an illustration of the linear least squares problem; that is, each of the terms $k_i(\mathbf{x})$ is a linear function of $\mathbf{x}$.
Consider the following least squares problem defined by
![\[ \mathbf{A} = \left[\begin{array}{rc} 4 & 0 \\ -1 & 1 \\ 3 & 2 \end{array}\right],\quad \mathbf{b} = \left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] \]](images/ormpug_qpsolver0076.png)
This translates to the following set of linear equations:
![\[ 4x_1 = 1, \quad -x_1 + x_2 = 0, \quad 3x_1 + 2x_2 = 1 \]](images/ormpug_qpsolver0077.png)
The corresponding least squares problem is:
![\[ \mbox{minimize} \quad (4x_1 -1)^2 + (-x_1 + x_2)^2 + (3x_1 + 2x_2 -1)^2 \]](images/ormpug_qpsolver0078.png)
The preceding objective function can be expanded to:
![\[ \mbox{minimize} \quad 26x_1^2 + 5x_2^2 + 10 x_1x_2 - 14 x_1 -4x_2 + 2 \]](images/ormpug_qpsolver0079.png)
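The expansion can be verified numerically; a quick Python check that the factored and expanded forms of the objective agree at random points:

```python
# Check that the expanded quadratic matches the original sum of squares.
import random

def f_factored(x1, x2):
    return (4*x1 - 1)**2 + (-x1 + x2)**2 + (3*x1 + 2*x2 - 1)**2

def f_expanded(x1, x2):
    return 26*x1**2 + 5*x2**2 + 10*x1*x2 - 14*x1 - 4*x2 + 2

random.seed(0)
for _ in range(5):
    x1, x2 = random.uniform(-2, 2), random.uniform(-2, 2)
    assert abs(f_factored(x1, x2) - f_expanded(x1, x2)) < 1e-9
print("expansion verified")
```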
In addition, you impose the following constraint so that the equation $3x_1 + 2x_2 = 1$ is satisfied within a tolerance of 0.1:
![\[ 0.9 \leq 3x_1 + 2x_2 \leq 1.1 \]](images/ormpug_qpsolver0081.png)
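The SAS program below models this ranged constraint with a slack variable $w \in [0, 0.2]$ and the equality $3x_1 + 2x_2 - w = 0.9$. A quick Python check (illustrative test points) that this reformulation is equivalent to the two-sided inequality:

```python
# The ranged constraint 0.9 <= 3*x1 + 2*x2 <= 1.1 can be modeled with a
# slack variable w in [0, 0.2] and the equality 3*x1 + 2*x2 - w = 0.9.
# Verify the equivalence at a few sample points.

def in_range(x1, x2):
    return 0.9 <= 3*x1 + 2*x2 <= 1.1

def has_feasible_slack(x1, x2):
    w = 3*x1 + 2*x2 - 0.9  # the unique w satisfying the equality
    return 0.0 <= w <= 0.2

for x1, x2 in [(0.3, 0.05), (0.2, 0.2), (0.0, 0.0), (0.5, 0.0)]:
    assert in_range(x1, x2) == has_feasible_slack(x1, x2)
print("slack reformulation equivalent at test points")
```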
You can use the following SAS statements to solve the least squares problem:
```
/* example 1: linear least-squares problem */
proc optmodel;
   var x1; /* declare free (no explicit bounds) variable x1 */
   var x2; /* declare free (no explicit bounds) variable x2 */
   /* declare slack variable for the ranged constraint */
   var w >= 0 <= 0.2;
   /* objective function: minimize the sum of squares */
   minimize f = 26 * x1 * x1 + 5 * x2 * x2 + 10 * x1 * x2
                - 14 * x1 - 4 * x2 + 2;
   /* subject to the following constraint */
   con L: 3 * x1 + 2 * x2 - w = 0.9;
   solve with qp;
   /* print the optimal solution */
   print x1 x2;
quit;
```
The output is shown in Output 11.1.1.
Output 11.1.1: Summaries and Optimal Solution
| Problem Summary | |
|---|---|
| Objective Sense | Minimization |
| Objective Function | f |
| Objective Type | Quadratic |
| Number of Variables | 3 |
| Bounded Above | 0 |
| Bounded Below | 0 |
| Bounded Below and Above | 1 |
| Free | 2 |
| Fixed | 0 |
| Number of Constraints | 1 |
| Linear LE (<=) | 0 |
| Linear EQ (=) | 1 |
| Linear GE (>=) | 0 |
| Linear Range | 0 |
| Constraint Coefficients | 3 |
| Solution Summary | |
|---|---|
| Solver | QP |
| Algorithm | Interior Point |
| Objective Function | f |
| Solution Status | Optimal |
| Objective Value | 0.0095238095 |
| Primal Infeasibility | 1.926295E-12 |
| Dual Infeasibility | 1.3056209E-9 |
| Bound Infeasibility | 0 |
| Duality Gap | 4.3533121E-8 |
| Complementarity | 1.3603398E-7 |
| Iterations | 3 |
| Presolve Time | 0.00 |
| Solution Time | 0.01 |
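The reported objective value can be cross-checked by hand (a sketch, not the QP solver's interior point method): the unconstrained least squares solution from the normal equations already satisfies the ranged constraint, so it is also optimal for the constrained problem, and its objective value is $1/105 \approx 0.0095238$, matching the solution summary.

```python
# Cross-check of the reported optimum (a sketch, not the QP solver's method):
# solve the normal equations A^T A x = A^T b for the unconstrained problem,
# confirm the ranged constraint is inactive there, and evaluate the objective.

ata = [[26.0, 5.0], [5.0, 5.0]]  # A^T A for A = [[4,0],[-1,1],[3,2]]
atb = [7.0, 2.0]                 # A^T b for b = [1,0,1]
det = ata[0][0]*ata[1][1] - ata[0][1]*ata[1][0]
x1 = (atb[0]*ata[1][1] - ata[0][1]*atb[1]) / det  # = 5/21
x2 = (ata[0][0]*atb[1] - atb[0]*ata[1][0]) / det  # = 17/105

assert 0.9 <= 3*x1 + 2*x2 <= 1.1  # constraint inactive at the LS solution
obj = 26*x1**2 + 5*x2**2 + 10*x1*x2 - 14*x1 - 4*x2 + 2
print(x1, x2, obj)  # obj is about 0.0095238 (= 1/105)
```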