The Decomposition Algorithm

Overview: Decomposition Algorithm

The SAS/OR decomposition algorithm (DECOMP) provides an alternative method for solving linear programs (LPs) and mixed integer linear programs (MILPs) by exploiting the ability to efficiently solve a relaxation of the original problem. The algorithm is available as an option in the OPTMODEL, OPTLP, and OPTMILP procedures and is based on the methodology described in Galati (2009).

A standard linear or mixed integer linear program has the formulation

\[  \begin{array}{rllllll} \mbox{minimize} &  \mathbf{c}^{\top } \mathbf{x} &  + &  \mathbf{f}^{\top } \mathbf{y} \\ \mbox{subject to} &  \mathbf{D x} &  + &  \mathbf{B y} &  \{ \geq , =, \leq \}  &  \mathbf{d} &  \mbox{(master)} \\ &  \mathbf{A x} & & &  \{ \geq , =, \leq \}  &  \mathbf{b} &  \mbox{(subproblem)} \\ &  \multicolumn{5}{l}{l^ x_ i \leq x_ i \leq u^ x_ i, \; \;  x_ i \in \mathbb {Z} \; \;  i \in \mathcal{S}^ x} \\ &  \multicolumn{5}{l}{l^ y_ i \leq y_ i \leq u^ y_ i, \; \;  y_ i \in \mathbb {Z} \; \;  i \in \mathcal{S}^ y} \\ \end{array}  \]

where

$\mathbf{x} \in \mathbb {R}^ n$

is the vector of structural variables

$\mathbf{y} \in \mathbb {R}^ s$

is the vector of master-only structural variables

$\mathbf{c} \in \mathbb {R}^ n$

is the vector of objective function coefficients that are associated with variables $\mathbf{x}$

$\mathbf{f} \in \mathbb {R}^ s$

is the vector of objective function coefficients that are associated with variables $\mathbf{y}$

$\mathbf{D} \in \mathbb {R}^{t \times n}$

is the matrix of master constraint coefficients that are associated with variables $\mathbf{x}$

$\mathbf{B} \in \mathbb {R}^{t \times s}$

is the matrix of master constraint coefficients that are associated with variables $\mathbf{y}$

$\mathbf{A} \in \mathbb {R}^{m \times n}$

is the matrix of subproblem constraint coefficients

$\mathbf{d} \in \mathbb {R}^{t}$

is the vector of master constraints’ right-hand sides

$\mathbf{b} \in \mathbb {R}^{m}$

is the vector of subproblem constraints’ right-hand sides

$\mathbf{l^ x} \in \mathbb {R}^{n}$

is the vector of lower bounds on variables $\mathbf{x}$

$\mathbf{u^ x} \in \mathbb {R}^{n}$

is the vector of upper bounds on variables $\mathbf{x}$

$\mathbf{l^ y} \in \mathbb {R}^{s}$

is the vector of lower bounds on variables $\mathbf{y}$

$\mathbf{u^ y} \in \mathbb {R}^{s}$

is the vector of upper bounds on variables $\mathbf{y}$

${\mathcal S^ x}$

is a subset of the set $\{ 1,\dots ,n\} $ of indices on variables $\mathbf{x}$

${\mathcal S^ y}$

is a subset of the set $\{ 1,\dots ,s\} $ of indices on variables $\mathbf{y}$

A relaxation of the preceding mathematical program can be formed by removing the master constraints, which are defined by the matrices $\mathbf{D}$ and $\mathbf{B}$. The resulting constraint system, defined by the matrix $\mathbf{A}$, forms the subproblem, which can often be solved much more efficiently than the entire original problem. This is one of the key motivators for using the decomposition algorithm.
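For reference, the relaxation obtained by removing the master constraints can be written as follows (the master-only variables $\mathbf{y}$ appear only in the master constraints and objective, so they are omitted here for clarity):

\[  \begin{array}{rlll} \mbox{minimize} &  \mathbf{c}^{\top } \mathbf{x} \\ \mbox{subject to} &  \mathbf{A x} &  \{ \geq , =, \leq \}  &  \mathbf{b} \\ &  \multicolumn{3}{l}{l^ x_ i \leq x_ i \leq u^ x_ i, \; \;  x_ i \in \mathbb {Z} \; \;  i \in \mathcal{S}^ x} \\ \end{array}  \]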

The decomposition algorithm works by finding convex combinations of extreme points of the subproblem polyhedron that satisfy the constraints defined in the master. For MILP subproblems, the strength of the relaxation is another important motivator for using this method. If the subproblem polyhedron defines feasible solutions that are close to the original feasible space, the chance of success for the algorithm increases.
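To make this concrete, the Dantzig-Wolfe reformulation that underlies this approach can be sketched as follows. Assuming the subproblem polyhedron is bounded (so that extreme rays can be ignored), let $E$ index its extreme points $\mathbf{x}_ p$. The reformulated master problem then seeks convex-combination weights $\lambda _ p$:

\[  \begin{array}{rll} \mbox{minimize} &  \displaystyle \sum _{p \in E} \left( \mathbf{c}^{\top } \mathbf{x}_ p \right) \lambda _ p \; + \;  \mathbf{f}^{\top } \mathbf{y} \\ \mbox{subject to} &  \displaystyle \sum _{p \in E} \left( \mathbf{D} \mathbf{x}_ p \right) \lambda _ p \; + \;  \mathbf{B} \mathbf{y} \; \;  \{ \geq , =, \leq \}  \; \;  \mathbf{d} \\ &  \displaystyle \sum _{p \in E} \lambda _ p = 1, \quad \lambda _ p \geq 0 \; \;  \mbox{for all } p \in E \\ \end{array}  \]

Because $E$ is typically far too large to enumerate, such a master problem is solved by column generation: the subproblem is solved repeatedly to price out new extreme points on demand.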

The region that defines the subproblem space is often separable. That is, the formulation of the preceding mathematical program can be written in block-angular form as follows:

\[  \begin{array}{rlllllllllll} \mbox{minimize} &  {\mathbf{c}^1}^{\top } \mathbf{x}^1 &  + &  {\mathbf{c}^2}^{\top } \mathbf{x}^2 &  + &  \ldots &  + &  {\mathbf{c}^{\kappa }}^{\top } \mathbf{x}^{\kappa } &  + &  \mathbf{f}^{\top } \mathbf{y} \\ \mbox{subject to} &  \mathbf{D}^1 \mathbf{x}^1 &  + &  \mathbf{D}^2 \mathbf{x}^2 &  + &  \ldots &  + &  \mathbf{D}^{\kappa } \mathbf{x}^{\kappa } &  + &  \mathbf{B} \mathbf{y} &  \{ \geq , =, \leq \}  &  \mathbf{d} \\ &  \mathbf{A}^1 \mathbf{x}^1 & & & & & & & & &  \{ \geq , =, \leq \}  &  \mathbf{b}^1 \\ & & &  \mathbf{A}^2 \mathbf{x}^2 & & & & & & &  \{ \geq , =, \leq \}  &  \mathbf{b}^2 \\ & & & & &  \ddots & & & & &  \{ \geq , =, \leq \}  &  \vdots \\ & & & & & & &  \mathbf{A}^{\kappa } \mathbf{x}^{\kappa } & & &  \{ \geq , =, \leq \}  &  \mathbf{b}^{\kappa } \\ &  \multicolumn{11}{l}{l_ i^ x \leq x_ i \leq u_ i^ x, \; \;  x_ i \in \mathbb {Z} \; \;  i \in \mathcal{S}^ x} \\ &  \multicolumn{11}{l}{l_ i^ y \leq y_ i \leq u_ i^ y, \; \;  y_ i \in \mathbb {Z} \; \;  i \in \mathcal{S}^ y} \\ \end{array}  \]

where $K = \{ 1,\dots ,\kappa \} $ defines a partition of the constraints (and variables) into independent subproblems (blocks) such that $\mathbf{A} = [\mathbf{A}^1 \dots \mathbf{A}^\kappa ], \mathbf{D} = [\mathbf{D}^1 \dots \mathbf{D}^\kappa ], \mathbf{c} = [\mathbf{c}^1 \dots \mathbf{c}^\kappa ]$, and $\mathbf{x}= [\mathbf{x}^1 \dots \mathbf{x}^\kappa ]$. This type of structure is fairly common in modeling mathematical programs. For example, consider a model that defines a workplace with separate departmental restrictions (defined as the subproblem constraints), which are coupled together by a company-wide budget across departments (defined as the master constraint). By relaxing the budget (master) constraint, the decomposition algorithm can take advantage of the fact that the decoupled subproblems are separable, and it can process them in parallel. A special case of block-angular form, called block-diagonal, occurs when the set of master constraints is empty. In this special case, the subproblem matrices define the entire original problem.
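The notion of separability can be illustrated with a small sketch. The following Python function (a hypothetical helper for illustration only, not part of any SAS/OR interface; the constraint and variable names are invented) partitions a set of subproblem constraints into independent blocks by treating constraints that share a variable, directly or through a chain of other constraints, as belonging to the same block:

```python
from collections import defaultdict

def find_blocks(constraints):
    """Partition constraints into independent blocks.

    `constraints` maps each constraint name to the (nonempty) set of
    variable names it involves. Two constraints land in the same block
    if they share a variable, directly or transitively.
    """
    # Union-find over variables, with path halving in find().
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Link all variables that appear together in a constraint.
    for vars_ in constraints.values():
        for v in vars_:
            parent.setdefault(v, v)
        first = next(iter(vars_))
        for v in vars_:
            union(first, v)

    # Group constraints by the root of any of their variables.
    blocks = defaultdict(list)
    for name, vars_ in constraints.items():
        blocks[find(next(iter(vars_)))].append(name)
    return sorted(sorted(b) for b in blocks.values())

# Example: two departments whose constraints use disjoint variables
# form two independent blocks.
subproblem = {
    "dept1_capacity": {"x1", "x2"},
    "dept1_labor":    {"x2", "x3"},
    "dept2_capacity": {"x4", "x5"},
}
print(find_blocks(subproblem))
# [['dept1_capacity', 'dept1_labor'], ['dept2_capacity']]
```

In the workplace example above, the two departments' constraints would come out as separate blocks because they share no variables; the company-wide budget constraint, which touches variables from every department, is exactly what must be lifted into the master for the remainder to separate.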

An important indicator of whether a problem is well suited for decomposition is the amount by which the subproblems cover the variables and constraints of the original presolved model. This value, expressed as a percentage of the original model, is known as the coverage. For LPs, the decomposition algorithm usually performs better than standard approaches only if the subproblems cover a significant amount of the original problem. For MILPs, the correlation between performance and coverage is more difficult to determine, because the strength of the subproblem with respect to integrality is not always proportional to the size of the system. Regardless, the decomposition algorithm is unlikely to outperform more standard methods (such as branch-and-cut) on problems with small coverage.
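As a worked example of the coverage computation (the function name and figures here are illustrative, not part of any SAS/OR interface):

```python
def coverage(n_vars_total, n_cons_total, n_vars_in_blocks, n_cons_in_blocks):
    """Return (variable coverage %, constraint coverage %) of the
    subproblem blocks relative to the presolved original model."""
    return (100.0 * n_vars_in_blocks / n_vars_total,
            100.0 * n_cons_in_blocks / n_cons_total)

# Example: the blocks cover 90 of 100 variables and 60 of 80 constraints.
var_cov, con_cov = coverage(100, 80, 90, 60)
print(f"variable coverage: {var_cov:.0f}%, constraint coverage: {con_cov:.0f}%")
# variable coverage: 90%, constraint coverage: 75%
```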

The primary input and output for the decomposition algorithm are identical to those needed and produced by the OPTLP, OPTMILP, and OPTMODEL procedures. For more information, see the sections Data Input and Output, Details: LP Solver, and Details: MILP Solver. The only additional input that can be provided for the decomposition algorithm is an explicit definition of the partition of the subproblem constraints. The following section gives a simple example of providing this input for both PROC OPTMILP and PROC OPTMODEL.