Usually, there is no advantage to executing high-performance analytical procedures in asymmetric mode on one appliance, because data might have to be unnecessarily moved between nodes. The following example demonstrates the more typical use of asymmetric mode. In this example, the specified grid host “compute_appliance.sas.com” is a computing appliance that has 15 compute nodes, and it is a different appliance from the 24-node data appliance “data_appliance.sas.com,” which houses the Teradata DBMS where the data reside.
The advantage of using different computing and data appliances is that the data appliance is not affected by the execution of high-performance analytical procedures except during the initial parallel data transfer. A potential disadvantage of this asymmetric mode of execution is that the performance can be limited by the bandwidth with which data can be moved between the appliances. However, because this data movement takes place in parallel from the nodes of the data appliance to the nodes of the computing appliance, this potential performance bottleneck can be overcome with appropriately provisioned hardware. The following statements show how this is done:
proc hplogistic data=dataLib.simData;
   class a b c;
   model y = a b c x1 x2 x3;
   performance host="compute_appliance.sas.com" gridmode=asym;
run;
Figure 3.9 shows the “Performance Information” table.
Figure 3.9: Asymmetric Mode with Distinct Data and Computing Appliances
| Performance Information | |
| --- | --- |
| Host Node | compute_appliance.sas.com |
| Execution Mode | Distributed |
| Grid Mode | Asymmetric |
| Number of Compute Nodes | 15 |
| Number of Threads per Node | 24 |
PROC HPLOGISTIC ran on the 15 nodes of the computing appliance, even though the data are partitioned across the 24 nodes of the data appliance. The numeric results are not reproduced here, but they agree with the previous analyses shown in Figure 3.1 and Figure 3.2.
Every time you run a high-performance analytical procedure in asymmetric mode that uses different computing and data appliances, data are transferred between these appliances. If you plan to make repeated use of the same data, then it might be advantageous to temporarily persist the data that you need on the computing appliance. One way to persist the data is to store them as a table in a SAS LASR Analytic Server that runs on the computing appliance. By running PROC LASR in asymmetric mode, you can load the data in parallel from the data appliance nodes to the nodes on which the LASR Analytic Server runs on the computing appliance. You can then use a LIBNAME statement that associates a SAS libref with tables on the LASR Analytic Server. The following statements show how you do this:
proc lasr port=54321 data=dataLib.simData path="/tmp/";
   performance host="compute_appliance.sas.com" gridmode=asym;
run;

libname MyLasr sasiola tag="dataLib" port=54321 host="compute_appliance.sas.com";
Figure 3.10 shows the “Performance Information” table.
Figure 3.10: PROC LASR Running in Asymmetric Mode
| Performance Information | |
| --- | --- |
| Host Node | compute_appliance.sas.com |
| Execution Mode | Distributed |
| Grid Mode | Asymmetric |
| Number of Compute Nodes | 15 |
PROC LASR ran in asymmetric mode on the computing appliance, which has 15 compute nodes. In this mode, the data are loaded in parallel from the 24 data appliance nodes to the 15 compute nodes on the computing appliance. By default, all the nodes on the computing appliance are used. You can use the NODES= option in the PERFORMANCE statement to run the LASR Analytic Server on a subset of the nodes on the computing appliance, as illustrated after this paragraph. If you omit the GRIDMODE=ASYM option from the PERFORMANCE statement, PROC LASR still runs successfully but much less efficiently: the Teradata access engine transfers the simData data set to a temporary table on the client, and the High-Performance Analytics infrastructure then transfers these data from the temporary table on the client to the grid nodes on the computing appliance.
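For example, the following statements are a minimal sketch of loading the data onto only a subset of the compute nodes; the node count of 10 is illustrative and is not part of the original example:

proc lasr port=54321 data=dataLib.simData path="/tmp/";
   /* NODES=10 restricts the LASR Analytic Server to 10 of the 15 compute nodes (illustrative value) */
   performance host="compute_appliance.sas.com" gridmode=asym nodes=10;
run;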
After the data are loaded into a LASR Analytic Server that runs on the computing appliance, you can run high-performance analytical procedures alongside this LASR Analytic Server. Because these procedures run on the same computing appliance where the LASR Analytic Server is running, it is best to run these procedures in symmetric mode, which is the default or can be explicitly specified in the GRIDMODE=SYM option in the PERFORMANCE statement. The following statements provide an example. The OUTPUT statement creates an output data set that is held in memory by the LASR Analytic Server. The data appliance has no role in executing these statements.
proc hplogistic data=MyLasr.simData;
   class a b c;
   model y = a b c x1 x2 x3;
   output out=MyLasr.myOutputData pred=myPred;
   performance host="compute_appliance.sas.com";
run;
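If you prefer to state the mode explicitly rather than rely on the default, the PERFORMANCE statement in the preceding step would read as follows; the only change is the explicit option:

   performance host="compute_appliance.sas.com" gridmode=sym;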
The following note, which appears in the SAS log, confirms that the output data set is created successfully:
NOTE: The table DATALIB.MYOUTPUTDATA has been added to the LASR Analytic Server with port 54321. The Libname is MYLASR.
You can use the dataLib libref that you used to load the data onto the data appliance to create an output data set on the data appliance. In order for this output to be directly written in parallel from the nodes of the computing appliance to the nodes of the data appliance, you need to run the HPLOGISTIC procedure in asymmetric mode by specifying the GRIDMODE=ASYM option in the PERFORMANCE statement as follows:
proc hplogistic data=MyLasr.simData;
   class a b c;
   model y = a b c x1 x2 x3;
   output out=dataLib.myOutputData pred=myPred;
   performance host="compute_appliance.sas.com" gridmode=asym;
run;
The following note, which appears in the SAS log, confirms that the output data set is created successfully on the data appliance:
NOTE: The data set DATALIB.myOutputData has 100000 observations and 1 variables.
When you run a high-performance analytical procedure on a computing appliance and either read data from or write data to a different data appliance, it is important to run the procedure in asymmetric mode so that the Read and Write operations take place in parallel without any movement of data to and from the SAS client. If you did not run the preceding PROC HPLOGISTIC step in asymmetric mode, the output data set would be created much less efficiently: the output data would be moved sequentially to a temporary table on the client, after which the Teradata access engine would write this table sequentially to the data appliance.
When you no longer need the data in the SAS LASR Analytic Server, you should terminate the server instance as follows:
proc lasr term port=54321;
   performance host="compute_appliance.sas.com";
run;
If you configured Hadoop on the computing appliance, then you can create output data tables that are stored in the HDFS on the computing appliance. You can do this by using the SASHDAT engine as described in the section Alongside-HDFS Execution.
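The following statements are a minimal sketch of that approach, assuming HDFS is configured as described in that section; the libref myHdfs and the HDFS path "/user/hps" are illustrative, and your installation might require additional SASHDAT LIBNAME options to identify the grid:

/* illustrative libref and HDFS path on the computing appliance */
libname myHdfs sashdat path="/user/hps";

proc hplogistic data=MyLasr.simData;
   class a b c;
   model y = a b c x1 x2 x3;
   /* the output table is written to HDFS through the SASHDAT engine */
   output out=myHdfs.myOutputData pred=myPred;
   performance host="compute_appliance.sas.com";
run;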