This section contains information useful when you want to solve large constrained network problems. However, much of this information is also useful if you have a large linear programming problem. All of the options described in this section, even those whose names refer to network concepts (options such as ARCS_ONLY_ARCDATA, ARC_SINGLE_OBS, NNODES=, and NARCS=), can be specified to improve the speed at which LP data are read.
Many of the models presented to PROC NETFLOW are enormous. They can be considered large even by linear programming standards: problems with thousands of variables and constraints. When dealing with side-constrained network programming problems, models can have not only a linear programming component of that magnitude, but also a larger, possibly much larger, network component.
The majority of a network problem’s decision variables are arcs. Like an LP decision variable, an arc has an objective function coefficient, upper and lower value bounds, and a name. Arcs can have coefficients in constraints. Therefore, an arc is quite similar to an LP variable and places the same memory demands on optimization software as an LP variable. But a typical network model has many more arcs and nonarc variables than the typical LP model has variables. And arcs have tail and head nodes. Storing and processing node names require huge amounts of memory. To make matters worse, node names occupy memory at times when a large amount of other data should also reside in memory.
While memory requirements are lower for a model with an embedded network component than for the equivalent LP once optimization starts, the same is usually not true during the data read. Even though nodal flow conservation constraints in the LP should not be specified in the constrained network formulation, the memory requirements to read the latter are greater because each arc (unlike an LP variable) originates at one node and is directed toward another.
PROC NETFLOW has facilities to read data when the available memory is insufficient to store all the data at once. PROC NETFLOW does this by allocating memory for different purposes, for example, to store an array or receive data read from an input SAS data set. After that memory has filled, the information is sent to disk and PROC NETFLOW can resume filling that memory with new information. Often, information must be retrieved from disk so that data previously read can be examined or checked for consistency. Sometimes, to prevent any data from being lost, or to retain any changes made to the information in memory, the contents of the memory must be sent to disk before other information can take its place. This process of swapping information to and from disk is called paging. Paging can be very time-consuming, so it is crucial to minimize the amount of paging performed.
There are several steps you can take to make PROC NETFLOW read the data of network and linear programming models more efficiently, particularly when memory is scarce and the amount of paging must be reduced. PROC NETFLOW will then be able to tackle large problems in what can be considered reasonable amounts of time.
PROC NETFLOW is quite flexible in the ways data can be supplied to it. Data can be given by any reasonable means. PROC NETFLOW has convenient defaults that can save you work when generating the data. There can be several ways to supply the same piece of data, and some pieces of data can be given more than once. PROC NETFLOW reads everything, then merges it all together. However, this flexibility and convenience come at a price: PROC NETFLOW may not assume the data have a characteristic that, if present, could save time and memory during the data read. There are several options that indicate the data have some exploitable characteristic.
For example, an arc cost can be specified once or several times in the ARCDATA= or CONDATA= data set, or both. Every time it is given in ARCDATA, a check is made to ensure that the new value is the same as any corresponding value read in a previous observation of ARCDATA. Every time it is given in CONDATA, a check is made to ensure that the new value is the same as the value read in a previous observation of CONDATA, or previously in ARCDATA. It would save PROC NETFLOW time to know that arc cost data will be encountered only once while reading ARCDATA, making the time-consuming consistency check unnecessary. Also, if you indicate that CONDATA contains data for constraints only, PROC NETFLOW will not expect any arc information, so memory will not be allocated to receive such data while reading CONDATA. This memory is used for other purposes, and this might lead to a reduction in paging. If applicable, use the ARC_SINGLE_OBS or CON_SINGLE_OBS option, or both, and the NON_REPLIC=COEFS specification to improve how ARCDATA and CONDATA are read.
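For instance, when each arc's data is complete in a single ARCDATA observation and CONDATA holds constraint data only, these options can be combined in the PROC NETFLOW statement. A sketch follows; the data set names (arcs, cons) and variable names (from, to, cost) are hypothetical:

```sas
proc netflow
   arcdata=arcs condata=cons
   arc_single_obs      /* each arc appears in exactly one ARCDATA observation */
   con_single_obs      /* each constraint row appears in one CONDATA observation */
   non_replic=coefs;   /* no coefficient is specified more than once */
   tail from; head to; cost cost;
   run;
```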
PROC NETFLOW allows the observations in input data sets to be in any order. However, major time savings can result if you are prepared to order observations in particular ways. Time spent by the SORT procedure to sort the input data sets, particularly the CONDATA= data set, can be more than made up for when PROC NETFLOW reads them, because PROC NETFLOW retains in memory information that may have been used when the previous observation was read. PROC NETFLOW can assume that a piece of data is either similar to that of the last observation read or is new. In the first case, valuable information such as an arc or nonarc variable number or a constraint number is retained from the previous observation. In the second case, checking the data against what has been read previously is not necessary.
Even if you do not sort the CONDATA= data set, grouping observations that contain data for the same arc or nonarc variable or the same row pays off. PROC NETFLOW establishes whether an observation being read is similar to the observation just read.
In practice, several input data sets for PROC NETFLOW might have this characteristic, because it is natural for data for each constraint to be grouped together (dense format of CONDATA) or for data for each column to be grouped together (sparse format of CONDATA). If data for each arc or nonarc variable are spread over more than one observation of the ARCDATA= data set, it is natural to group these observations together.
Use the GROUPED= option to indicate whether observations of the ARCDATA= data set, CONDATA= data set, or both are grouped in a way that can be exploited during data read.
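As a sketch, sorting a sparse-format CONDATA= data set by its COLUMN list variable groups each column's coefficients together, which GROUPED= can then exploit. Data set and variable names below are hypothetical:

```sas
proc sort data=cons;
   by _col_;              /* group all coefficients of each column together */
run;

proc netflow
   arcdata=arcs
   condata=cons sparsecondata
   grouped=condata;       /* tell PROC NETFLOW CONDATA observations are grouped */
   tail from; head to; cost cost;
   column _col_; row _row_; coef _coef_;
   run;
```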
Time is saved if the type data for each row appear near the top of the CONDATA= data set, especially if it has the sparse format. Otherwise, when reading an observation, if PROC NETFLOW does not know whether a row is a constraint or a special row, the data are set aside. Once the data set has been completely read, PROC NETFLOW must reprocess the data it set aside. By then, it knows the type of each constraint or row or, if a type was not provided, a default type is assumed.
In order for PROC NETFLOW to make better use of available memory, you can specify options that indicate the approximate size of the model. PROC NETFLOW then knows what to expect. For example, if you indicate that the problem has no nonarc variables, PROC NETFLOW will not allocate memory to store nonarc data. That memory is then put to better use for other purposes. Memory is often allocated to receive or store data of some type. If you indicate that the model does not have much data of a particular type, the memory that would otherwise have been allocated to receive or store that data can be used to receive or store data of another type.
These options, such as NNODES=, NARCS=, and NNAS=, are sometimes referred to as Nxxxx= options.
You do not need to specify all these options for the model, but the more you do, the better. If you do not specify some or all of these options, PROC NETFLOW guesses the size of the problem by using what it already knows about the model. Sometimes PROC NETFLOW guesses the size of the model by looking at the number of observations in the ARCDATA= and CONDATA= data sets. However, PROC NETFLOW uses rough rules of thumb: that typical models are proportioned in certain ways (for example, if there are constraints, then arcs and nonarc variables usually have about five constraint coefficients each). If your model has an unusual shape or structure, you are encouraged to use these options.
If you use these options and do not know the exact values to specify, overestimate them. For example, if you specify NARCS=10000 but the model has 10100 arcs, then when dealing with the last 100 arcs, PROC NETFLOW might have to page out data for 10000 arcs each time one of those arcs is processed. Memory could have been allocated for all 10100 arcs without affecting (much) the rest of the data read, so specifying NARCS=10000 could be more of a hindrance than a help.
The point of these Nxxxx= options is to indicate the model size when PROC NETFLOW does not know it. When PROC NETFLOW knows the "real" value, that value is used instead of Nxxxx= .
When PROC NETFLOW is given a constrained-solution warm start, it knows all model size parameters from the warm start information, so the Nxxxx= options are not used. When an unconstrained warm start is used and the SAME_NONARC_DATA option is specified, PROC NETFLOW knows the number of nonarc variables, so that value is used instead of the value of the NNAS= option.
The ARCS_ONLY_ARCDATA option indicates that the ARCDATA= data set contains data for arcs only. Memory is then not wasted receiving data for nonarc and LP variables.
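A hedged sketch of how these sizing options might be supplied together; the counts and data set names are hypothetical, and it is safer to overestimate slightly if unsure:

```sas
proc netflow
   arcdata=arcs condata=cons
   arcs_only_arcdata          /* ARCDATA contains arc data only         */
   nnodes=12000 narcs=42000   /* approximate network dimensions         */
   nnas=0;                    /* this model has no nonarc variables     */
   tail from; head to; cost cost;
   run;
```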
Use the memory usage parameters:
The BYTES= option specifies the size of PROC NETFLOW main working memory in number of bytes.
The MAXARRAYBYTES= option specifies the maximum number of bytes that an array can occupy.
The MEMREP option indicates that a memory usage report is to be displayed on the SAS log.
Specifying the BYTES= parameter is particularly important. Specify as large a number as possible, but not one so large that it causes PROC NETFLOW (rather, the SAS System running underneath PROC NETFLOW) to run out of memory. Use the MAXARRAYBYTES= option if the model is very large or "disproportionate"; try increasing or decreasing its value. Limiting the amount of memory for use by big arrays is good if they would otherwise take up too much memory to the detriment of smaller arrays, buffers, and other things that require memory. However, too small a MAXARRAYBYTES= value might cause PROC NETFLOW to page a big array excessively. Never specify a MAXARRAYBYTES= value smaller than the main node length array; PROC NETFLOW reports the size of this array on the SAS log if you specify the MEMREP option. The MAXARRAYBYTES= option influences paging not only during the data read, but also during optimization. It is often better for optimization to be performed as fast as possible, even if the read is made slower as a consequence.
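These three options might be combined as in the following sketch; the byte counts and data set names are hypothetical and would need tuning for your system:

```sas
proc netflow
   arcdata=arcs condata=cons
   bytes=100000000            /* main working memory: about 100 MB       */
   maxarraybytes=30000000     /* cap the size any single array can reach */
   memrep;                    /* write a memory usage report to the log  */
   tail from; head to; cost cost;
   run;
```

The MEMREP output can then guide a second run: if the log shows a big array being paged heavily, raise MAXARRAYBYTES=; if small arrays and buffers are starved, lower it.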
Use the parameters that specify default values as much as possible. For example, if several arcs have the same cost value c, specify DEFCOST=c and use missing values in the COST variable in ARCDATA for those arcs. PROC NETFLOW ignores missing values, but it must read, store, and process nonmissing values, even if they equal a default option value or would have equaled one had it been specified. Sometimes, using default parameters eliminates the need for some SAS variables in the ARCDATA= and CONDATA= data sets, or reduces the quantity of data that must be read. The default options are
DEFCOST= default cost of arcs, objective function of nonarc variables or LP variables
DEFMINFLOW= default lower flow bound of arcs, lower bound of nonarc variables or LP variables
DEFCAPACITY= default capacity of arcs, upper bound of nonarc variables or LP variables
DEFCONTYPE= default constraint type (LE, <=, EQ, =, GE, or >=)
The default options themselves have defaults. For example, you do not need to specify DEFCOST=0 in the PROC NETFLOW statement, but you should still use missing values in the COST variable in ARCDATA for arcs that have zero costs.
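For example, if most arcs cost 5 and most have capacity 100, the defaults let you leave those values missing in ARCDATA rather than repeating them. A sketch with hypothetical data set and variable names:

```sas
proc netflow
   arcdata=arcs
   defcost=5 defcapacity=100;   /* missing COST/CAPAC values fall back to these */
   tail from; head to;
   cost cost; capac cap;
   run;
```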
If the network has only one supply node, one demand node, or both, use the SOURCE= (and SUPPLY=) options, the SINK= (and DEMAND=) options, or both, rather than supplying that information in a data set.
Do not specify zero constraint right-hand-side values; zero is the default. The only time it might be practical to specify a zero right-hand side is in observations of CONDATA read early, so that PROC NETFLOW can infer that a row is a constraint. This could prevent coefficient data from being set aside because PROC NETFLOW did not know the row was a constraint.
To cut data read time and memory requirements, reduce the number of bytes in the longest node name, longest arc name, and longest constraint name to 8 bytes or less. The longer a name, the more bytes must be stored and compared with other names.
If an arc has no constraint coefficients, do not give it a name in the NAME list variable in the ARCDATA= data set. Names for such arcs serve no purpose.
PROC NETFLOW can have a default name for each arc. If an arc is directed from node tailname toward node headname, the default name for that arc is tailname_headname. If you do not want PROC NETFLOW to use these default arc names, specify NAMECTRL= 1. Otherwise, PROC NETFLOW must use memory for storing node names and these node names must be searched often.
If you want to use the default tailname_headname name, that is, NAMECTRL=2 or NAMECTRL=3, do not use underscores in node names. If CONDATA has a dense format and has a variable in the VAR list named A_B_C_D, or if the value A_B_C_D is encountered as a value of the COLUMN list variable when reading CONDATA that has the sparse format, PROC NETFLOW first looks for a node named A. If it finds it, it looks for a node called B_C_D. It then looks for a node with the name A_B and possibly a node with the name C_D. A search for a node named A_B_C and possibly a node named D is done. Underscores could cause PROC NETFLOW to look unnecessarily for nonexistent nodes.

Searching for node names can be expensive, and the amount of memory needed to store node names can be large. It might be better to assign the name A_B_C_D directly to an arc by having that value as a NAME list variable value for that arc in ARCDATA and to specify NAMECTRL=1.
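Putting that advice together: name the arc explicitly in ARCDATA and specify NAMECTRL=1 so PROC NETFLOW never has to parse underscores as possible node-name separators. The node names, cost, and arc name below are hypothetical:

```sas
data arcs;
   input from $ to $ cost _name_ $;
   datalines;
A B 10 A_B_C_D
;

proc netflow
   arcdata=arcs
   namectrl=1;                 /* do not construct or parse default arc names */
   tail from; head to; cost cost;
   name _name_;                /* arc name supplied directly                  */
   run;
```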
Use warm starts as much as possible.
WARM indicates that the input SAS data sets contain a warm start.
The data read of a warm start is much faster than a cold start data read. The model size is known before the read starts, the NODEDATA= or DUALIN= data sets have observations ordered by node name and constraint name, and information is stored directly in the data structures used by PROC NETFLOW. For a cold start, much preprocessing must be performed before the information can be stored in the same way. Using a warm start can also greatly reduce the time PROC NETFLOW spends doing optimization.
SAME_NONARC_DATA is an option that excludes data from processing.
This option indicates that the warm start nonarc variable data in ARCDATA are read and that any nonarc variable data in CONDATA are to be ignored. Use this option when it is applicable: when CONDATA has no nonarc variable data, or when such data duplicate what is in ARCDATA. ARCDATA is always read before CONDATA.
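A sketch of saving and reusing a constrained warm start; the data set names are hypothetical, and the exact variable lists carried in the saved data sets depend on the original model:

```sas
/* first run: solve and save a constrained warm start */
proc netflow
   arcdata=arcs condata=cons
   future2                     /* save data needed to warm start later */
   conout=arcsol dualout=duals;
   tail from; head to; cost cost;
   run;

/* later run: restart from the saved solution */
proc netflow
   warm                        /* input data sets contain a warm start */
   arcdata=arcsol dualin=duals condata=cons
   same_nonarc_data;           /* ignore nonarc data in CONDATA        */
   tail from; head to; cost cost;
   run;
```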
Arcs and nonarc variables can have associated with them values or quantities that have no bearing on the optimization. This information is given in ARCDATA in the ID list variables. For example, in a distribution problem, information such as truck number and driver's name can be associated with each arc. This is useful when a solution is saved in an output SAS data set. However, PROC NETFLOW needs to reserve memory to process this information while data are being read. For large problems, when memory is scarce, it might be better to remove ancillary data from ARCDATA. After PROC NETFLOW runs, use SAS software to merge this information into the output data sets that contain the optimal solution.