hadoop/lib directory (for backward compatibility only).
installs the SAS Embedded Process.
Requirement | If you have sudo access, the script automatically retrieves the list of data nodes from the Hadoop configuration. If you do not have sudo access, you must use the -x argument and either the -hostfile or -host argument. |
Tip | If you add nodes to the cluster, you can specify the hosts on which you want to install the SAS Embedded Process by using the -hostfile or -host option. The -hostfile or -host options are mutually exclusive. |
See | -hostfile and -host option |
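A minimal sketch of the -add action, assuming the commands are run from the directory that contains sasep-admin.sh and that the host names are placeholders:
    # With sudo access: the data node list is read from the Hadoop configuration
    ./sasep-admin.sh -add
    # Without sudo access: -x must come first, and an explicit host list is required
    ./sasep-admin.sh -x -add -host "server1 server2 server3"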
generates a new SAS Embedded Process configuration file in the EPInstallDir/SASEPHome/conf directory of the local file system.
Requirement | If you do not have sudo access, you must use the -x argument. |
Interactions | When used without the -x argument, the script creates the ep-config.xml configuration file and writes it to both the EPInstallDir/SASEPHome/conf directory on the local file system and the /sas/ep/config/ directory on HDFS. You can change the filename and HDFS location by using the HDFS-filename argument. HDFS-filename must be the fully qualified HDFS pathname where the configuration file is located. |
When used with the -x argument, the script does not write the configuration file to HDFS. You must manually copy the file to HDFS. | |
Note | The -genconfig argument creates two identical configuration files under EPInstallDir/SASEPHome/conf/ on the local file system: ep-config.xml and sasep-site.xml. The sasep-site.xml file might be copied to the client side under a folder that is in the classpath. When the sasep-site.xml file is loaded from the classpath, the configuration file on the HDFS location is not used. However, if sasep-site.xml is not found in the classpath, a configuration file must exist on HDFS, either on the default HDFS location /sas/ep/config/ep-config.xml or in the location that is set in the sas.ep.config.file property. |
Tips | Use the -genconfig argument to generate a new SAS Embedded Process configuration file if you upgrade your Hadoop installation, you install or upgrade your Hive or HCatalog services, or you upgrade the JDK or JRE that is used by the Hadoop processes. |
This argument generates an updated ep-config.xml file. Use the -force argument to overwrite the existing configuration file. | |
Use the HDFS-filename argument to specify another location and configuration filename. If you decide to generate the configuration file in a non-default HDFS location, you must set the sas.ep.config.file property in the mapred-site.xml file to the value that you specify in the -genconfig option. | |
See | -epconfig config-filename |
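A minimal sketch of regenerating the configuration file, assuming the script is run from its own directory; the argument order is illustrative, the HDFS path shown is the documented default, EPInstallDir stands for the installation directory used throughout this section, and hadoop fs -put is one standard way to perform the manual copy that a -x run requires:
    # Regenerate the configuration file and overwrite the existing one
    ./sasep-admin.sh -genconfig /sas/ep/config/ep-config.xml -force
    # After a run with -x, copy the generated file to HDFS manually
    hadoop fs -put EPInstallDir/SASEPHome/conf/ep-config.xml /sas/ep/config/ep-config.xml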
distributes a hot fix package.
Requirements | Hot fixes must be installed by the same user ID that performed the initial software installation. |
Hot fixes should be installed by following the installation instructions provided by SAS Technical Support. | |
removes the SAS Embedded Process.
Requirement | If you do not have sudo access, you must use the -x argument and either the -hostfile or -host argument. The -hostfile or -host options are mutually exclusive. |
Interactions | When the script is used without the -x argument and you have sudo access, it automatically retrieves the list of data nodes from the Hadoop configuration. In addition, it automatically removes the ep-config.xml file from HDFS. |
When used with the -x argument, the SAS Embedded Process is removed from all hosts that you specify. However, the ep-config.xml file must be removed manually from HDFS. | |
See | -hostfile and -host option |
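A minimal sketch of the -remove action, with placeholder host names; hadoop fs -rm is one standard way to delete the HDFS configuration file after a -x run:
    # With sudo access: nodes are detected and the HDFS configuration file is removed automatically
    ./sasep-admin.sh -remove
    # Without sudo access: remove from the listed hosts, then clean up HDFS manually
    ./sasep-admin.sh -x -remove -host "server1 server2 server3"
    hadoop fs -rm /sas/ep/config/ep-config.xml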
creates SAS Hadoop MapReduce JAR file symbolic links in the hadoop/lib folder.
Restriction | This argument should be used only for backward compatibility (that is, when you install the SAS Embedded Process from the July 2015 release of SAS 9.4 on a client that runs the second maintenance release of SAS 9.4). |
Requirement | If you use this argument, you must restart the MapReduce service, the YARN service, or both after the SAS Embedded Process is installed. |
Interaction | Use the -linklib argument after the SAS Embedded Process is already installed to create the symbolic links. Use the -link argument in conjunction with the -add argument to force the creation of the symbolic links. |
See | Backward Compatibility |
-link |
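A minimal sketch of the -linklib action on an existing installation, run from the script directory (remember to restart the MapReduce service, the YARN service, or both afterward):
    # Create the symbolic links for backward compatibility
    ./sasep-admin.sh -linklib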
removes SAS Hadoop MapReduce JAR file symbolic links in the hadoop/lib folder.
Restriction | This argument should be used only for backward compatibility (that is, when you install the SAS Embedded Process from the July 2015 release of SAS 9.4 on a client that runs the second maintenance release of SAS 9.4). |
Requirement | If you use this argument, you must restart the MapReduce service, the YARN service, or both after the SAS Embedded Process is installed. |
See | Backward Compatibility |
checks whether the SAS Embedded Process is installed correctly on all data nodes.
Requirement | If you ran the sasep-admin.sh script with the -x argument, you must specify the hosts for which you want to check the SAS Embedded Process by using the -hostfile or -host option. The -hostfile or -host options are mutually exclusive. |
See | -hostfile and -host option |
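A minimal sketch of the -check action, with placeholder host names; add -x first if your installation requires it (see the -x argument):
    # With sudo access: all data nodes are checked automatically
    ./sasep-admin.sh -check
    # After an installation performed with -x, list the hosts explicitly
    ./sasep-admin.sh -check -host "server1 server2 server3"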
displays the SAS Embedded Process install script and the Hadoop configuration environment.
displays the Hadoop version information for the cluster.
displays all live DataNodes on the cluster.
Requirement | sudo access is required. |
displays the version of the SAS Embedded Process that is installed.
runs the script under the current user's credentials only. Use this argument if you do not have sudo access.
Requirements | This option must be the first argument passed to the script. |
A list of hosts must be provided with either the -hostfile or -host argument. | |
If you do not have sudo access, you must use the -x argument. | |
Interaction | If you use the -x argument to install the SAS Embedded Process (that is, with the -add argument), you must also use the -x argument in any other sasep-admin.sh script action that supports it. |
See | -hostfile and -host option |
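A minimal sketch of the -x argument, assuming a placeholder host file path; note that -x is passed first:
    # Run entirely under the current user's credentials with an explicit host list
    ./sasep-admin.sh -x -add -hostfile /opt/sas/hosts.txt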
forces the creation of SAS Hadoop MapReduce JAR file symbolic links in the hadoop/lib folder during the installation of the SAS Embedded Process.
Restriction | This argument should be used only for backward compatibility (that is, when you install the SAS Embedded Process from the July 2015 release of SAS 9.4 on a client that runs the second maintenance release of SAS 9.4). |
Requirement | If you use this argument, you must restart the MapReduce service, the YARN service, or both after the SAS Embedded Process is installed. |
Interactions | Use this argument in conjunction with the -add argument to force the creation of the symbolic links. |
Use the -linklib argument after the SAS Embedded Process is already installed to create the symbolic links. | |
See | Backward Compatibility |
-linklib |
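A minimal sketch of forcing link creation at install time with -link (restart the MapReduce service, the YARN service, or both afterward):
    # Install the SAS Embedded Process and force creation of the symbolic links
    ./sasep-admin.sh -add -link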
generates the SAS Embedded Process configuration file in the specified location.
Default | If the -epconfig argument is not specified, the install script creates the SAS Embedded Process configuration file in the default location /sas/ep/config/ep-config.xml. |
Requirement | If the -epconfig argument is specified, a configuration file location must be provided. If you choose a non-default location, you must set the sas.ep.config.file property in the mapred-site.xml file that is on your client machine to the non-default location. |
Interaction | Use the -epconfig argument in conjunction with the -add or -remove argument to specify the HDFS location of the configuration file. |
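A minimal sketch of the -epconfig argument with a placeholder non-default HDFS location; the same path must then be set as the sas.ep.config.file property in mapred-site.xml on the client machine:
    # Install and write the configuration file to a non-default HDFS location
    ./sasep-admin.sh -add -epconfig /user/sasep/ep-config.xml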
specifies the maximum number of parallel copies between the master and data nodes.
Default | 10 |
Interaction | Use this argument in conjunction with the -add or -hotfix argument. |
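A minimal sketch, assuming this argument is named -maxscp (the name does not appear in this extract, so verify it against your installed script) and that it takes the number of copies as its value:
    # Raise the parallel copy limit from the default of 10 during installation
    ./sasep-admin.sh -add -maxscp 20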
specifies the full path of a file that contains the list of hosts where the SAS Embedded Process is installed or removed.
Requirement | The -hostfile or -host argument is required if you do not have sudo access. |
Interaction | Use the -hostfile argument in conjunction with the -add, -hotfix, -check, or -remove argument. |
See | -hdfsuser user-id |
Example | -hostfile |
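A minimal sketch of the -hostfile argument, assuming a placeholder path to a plain-text file that lists the target hosts:
    # Remove the SAS Embedded Process from the hosts listed in the file, running without sudo access
    ./sasep-admin.sh -x -remove -hostfile /opt/sas/hosts.txt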
specifies the target host or host list where the SAS Embedded Process is installed or removed.
Requirements | If you specify more than one host, the hosts must be enclosed in double quotation marks and separated by spaces or commas. |
The -host or -hostfile argument is required if you do not have sudo access. | |
Interaction | Use the -host argument in conjunction with the -add, -hotfix, -check, or -remove argument. |
See | -hdfsuser user-id |
Example | -host "server1 server2 server3" -host bluesvr -host "blue1, blue2, blue3" |
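A minimal sketch showing the quoting rules for -host, with placeholder host names:
    # Hosts separated by spaces must be enclosed in double quotation marks
    ./sasep-admin.sh -x -add -host "server1 server2 server3"
    # A single host needs no quotation marks
    ./sasep-admin.sh -x -add -host bluesvr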
specifies the user ID that has Write access to the HDFS root directory.
Default | hdfs for Cloudera, Hortonworks, Pivotal HD, and IBM BigInsights |
mapr for MapR | |
Interactions | This argument has no effect if you use the -x argument. |
Use the -hdfsuser argument in conjunction with the -add, -check, or -remove argument to change, check, or remove the HDFS user ID. | |
Note | The user ID is used to copy the SAS Embedded Process configuration files to HDFS. |
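A minimal sketch of the -hdfsuser argument, assuming a placeholder user ID that has Write access to the HDFS root directory:
    # Install and copy the configuration files to HDFS as a specific HDFS user
    ./sasep-admin.sh -add -hdfsuser hdfsadmin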