SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.*/bin/sasep-servers.sh -stop -hostfile host-list-filename | -host <"host-list">

SASEPHome is the master node where you installed the SAS Embedded Process.
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.*/bin/sasep-servers.sh -remove -hostfile host-list-filename | -host <"host-list"> -mrhome dir
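For illustration only, here is a sketch of concrete -stop and -remove invocations. The SAS Embedded Process home /sasep, the host list file /tmp/hosts.txt, and the MapReduce home /usr/lib/hadoop-mapreduce are assumptions, not values prescribed by this guide:

# Assumed SASEPHome is /sasep; /tmp/hosts.txt is a hypothetical file with one host name per line.
cd /sasep/SAS/SASTKInDatabaseServerForHadoop/9.42/bin

# Stop the SAS Embedded Process on the listed hosts.
./sasep-servers.sh -stop -hostfile /tmp/hosts.txt

# Remove the SAS Embedded Process from the listed hosts.
# /usr/lib/hadoop-mapreduce is assumed here as the MapReduce home (-mrhome).
./sasep-servers.sh -remove -hostfile /tmp/hosts.txt -mrhome /usr/lib/hadoop-mapreduce

Running -remove on the node that holds the local installation also produces warnings like the ones shown next.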
localhost WARN: Apparently, you are trying to uninstall SAS Embedded Process for Hadoop from the local node.
                The binary files located at local_node/SAS/SASTKInDatabaseServerForHadoop and local_node/SAS/SASACCESStoHadoopMapReduceJARFiles will not be removed.
localhost WARN: The init script will be removed from /etc/init.d and the SAS Map Reduce JAR files will be removed from /usr/lib/hadoop-mapreduce/lib.
localhost WARN: The binary files located at local_node/SAS should be removed manually.
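As the last warning notes, the binary files on the local node must be removed by hand. A minimal cleanup sketch, assuming the local SAS Embedded Process home is /sasep (confirm the paths before deleting anything):

# Manual cleanup on the local node; /sasep is an assumed SASEPHome.
rm -rf /sasep/SAS/SASTKInDatabaseServerForHadoop
rm -rf /sasep/SAS/SASACCESStoHadoopMapReduceJARFiles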
Create a new directory that is not part of an existing directory structure, such as /sasep. Do not use an existing system directory such as /opt or /usr. This new directory becomes the SAS Embedded Process home and is referred to as SASEPHome throughout this chapter.
The SAS Embedded Process install script is located in the SAS-installation-directory/SASTKInDatabaseServer/9.4/HadooponLinuxx64/ directory. Copy it to the SASEPHome directory on the Hadoop master node, for example with secure copy:
scp tkindbsrv-9.42-n_lax.sh username@hadoop:/SASEPHome
The SAS Hadoop MapReduce JAR file install script is located in the SAS-installation-directory/SASACCESStoHadoopMapReduceJARFiles/9.41 directory. Copy it to SASEPHome in the same way:
scp hadoopmrjars-9.42-n_lax.sh username@hadoop:/SASEPHome
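After both files are copied, run them on the master node. The commands below are a sketch that assumes SASEPHome is /sasep and that both files are self-extracting shell scripts that unpack into SASEPHome; the directory and file listings that follow show the layout that the installation produces:

cd /sasep                                     # assumed SASEPHome
chmod +x tkindbsrv-9.42-n_lax.sh hadoopmrjars-9.42-n_lax.sh
./tkindbsrv-9.42-n_lax.sh                     # SAS Embedded Process install script
./hadoopmrjars-9.42-n_lax.sh                  # SAS Hadoop MapReduce JAR file install script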
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/bin
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/misc
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/sasexe
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/utilities
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/build
The contents of the SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/bin directory should look similar to this:

SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/bin/sas.ep4hadoop.template
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/bin/sasep-servers.sh
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/bin/sasep-common.sh
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/bin/sasep-server-start.sh
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/bin/sasep-server-status.sh
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/bin/sasep-server-stop.sh
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/bin/InstallTKIndbsrv.sh
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/bin/MANIFEST.MF
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/bin/qkbpush.sh
SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/bin/sas.tools.qkb.hadoop.jar
The contents of the SASEPHome/SAS/SASACCESStoHadoopMapReduceJARFiles/9.42/lib directory should look similar to this:

SASEPHome/SAS/SASACCESStoHadoopMapReduceJARFiles/9.42/lib/ep-config.xml
SASEPHome/SAS/SASACCESStoHadoopMapReduceJARFiles/9.42/lib/sas.hadoop.ep.apache023.jar
SASEPHome/SAS/SASACCESStoHadoopMapReduceJARFiles/9.42/lib/sas.hadoop.ep.apache023.nls.jar
SASEPHome/SAS/SASACCESStoHadoopMapReduceJARFiles/9.42/lib/sas.hadoop.ep.apache121.jar
SASEPHome/SAS/SASACCESStoHadoopMapReduceJARFiles/9.42/lib/sas.hadoop.ep.apache121.nls.jar
SASEPHome/SAS/SASACCESStoHadoopMapReduceJARFiles/9.42/lib/sas.hadoop.ep.apache205.jar
SASEPHome/SAS/SASACCESStoHadoopMapReduceJARFiles/9.42/lib/sas.hadoop.ep.apache205.nls.jar
Run the sasep-servers.sh -add script to deploy the SAS Embedded Process installation across all nodes. The SAS Embedded Process is installed as a Linux service.
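As an illustration, a hedged sketch of one way to run the deployment script, assuming SASEPHome is /sasep:

cd /sasep/SAS/SASTKInDatabaseServerForHadoop/9.42/bin
# Deploy the SAS Embedded Process across the nodes of the cluster.
./sasep-servers.sh -add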
sudo su - root
su - hdfs | hdfs-userid
kinit -kt location-of-keytab-file user-for-which-you-are-requesting-a-ticket
exit
sudo su - root
su - hdfs
kinit -kt hdfs.keytab hdfs
exit
The default HDFS user ID is hdfs. You can specify a different user ID with the -hdfsuser argument when you run the sasep-servers.sh -add script.
A valid Kerberos ticket is required for the user that runs the sasep-servers.sh -add script. You can verify the ticket by running klist. Here is an example of the output:

klist
Ticket cache: FILE:/tmp/krb5cc_493
Default principal: hdfs@HOST.COMPANY.COM

Valid starting       Expires              Service principal
06/20/14 09:51:26    06/27/14 09:51:26    krbtgt/HOST.COMPANY.COM@HOST.COMPANY.COM
        renew until 06/22/14 09:51:26

During the installation, you are asked whether you want to start the SAS Embedded Process. If you choose Y or y,
the SAS Embedded Process is started on all nodes after the install
is complete. If you choose N or n,
you can start the SAS Embedded Process later by running ./sasep-servers.sh
-start.
When you run the sasep-servers.sh -add script, a user and group named sasep are created. You can specify a different user and group name with the -epuser and -epgroup arguments when you run the sasep-servers.sh -add script.
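For example, a sketch that combines these arguments; the user ID, user name, and group name shown are placeholders, not documented defaults:

# -hdfsuser, -epuser, and -epgroup override the defaults described above.
# hdfsadmin, sasepuser, and sasepgroup are hypothetical names.
./sasep-servers.sh -add -hdfsuser hdfsadmin -epuser sasepuser -epgroup sasepgroup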
If you run the sasep-servers.sh script with the -host <hosts> option, an ssh_exchange_identification: Connection closed by remote host SSHD error might occur. To work around the problem, edit the /etc/ssh/sshd_config file, change the MaxStartups option to a number that accommodates your cluster, and save the file. Then, reload the SSHD daemon by running the /etc/init.d/sshd reload command.
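A sketch of the workaround; the MaxStartups value is only an example and should be sized to your cluster:

# In /etc/ssh/sshd_config, raise the MaxStartups limit (100 is an arbitrary example value):
#     MaxStartups 100
# Then reload the SSHD daemon so that the change takes effect:
/etc/init.d/sshd reload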
You can verify that the SAS Embedded Process is installed and running by running the sasep-servers.sh script with the -status option:

cd SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/bin
./sasep-servers.sh -status

You can also verify that the SAS Embedded Process init script was added to the /etc/init.d directory and that the configuration directory was written to HDFS:
hadoop fs -ls /sas/ep/config
The /sas/ep/config directory is created automatically when you run the install script.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
   <property>
      <name>hive.metastore.local</name>
      <value>false</value>
   </property>
   <!-- lines omitted for sake of brevity -->
   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value></value>
   </property>
</configuration>
<property>
   <name>sas.ep.superreader.proactive.reader.capacity</name>
   <value>10</value>
</property>
When you run the sasep-servers.sh -add script to install the SAS Embedded Process, the script detects the Hadoop distribution and creates a HADOOP_JARS.zip file in the SASEPHome/SAS/SASTKInDatabaseServerForHadoop/9.42/bin/ directory. This file contains the common and core Hadoop JAR files that are required for the SAS Embedded Process. For more information, see "Installing the SAS Embedded Process and SAS Hadoop MapReduce JAR Files." You can run sasep-servers.sh -getjars at any time to create a new ZIP file and refresh the JAR file list.
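For example, a hedged sketch of refreshing the ZIP file and inspecting its contents, again assuming SASEPHome is /sasep:

cd /sasep/SAS/SASTKInDatabaseServerForHadoop/9.42/bin
./sasep-servers.sh -getjars          # re-detect the distribution and rebuild HADOOP_JARS.zip
unzip -l HADOOP_JARS.zip             # list the Hadoop JAR files that were collected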