Prerequisites for Using the SAS Deployment Manager to Deploy the In-Database Deployment Package

The following prerequisites must be met before you can use the SAS Deployment Manager:
  • You must have passwordless SSH access from the master node to the slave nodes.
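A typical way to set up and verify passwordless SSH is with a key pair; a minimal sketch, where the account name and host name (sasuser, slave1.example.com) are placeholders for your own values:

```shell
# Generate an RSA key pair on the master node if one does not
# already exist (-N "" creates the key without a passphrase)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Copy the public key to each slave node (repeat for every host)
ssh-copy-id sasuser@slave1.example.com

# Verify: BatchMode disables password prompts, so this command
# fails unless key-based login is working
ssh -o BatchMode=yes sasuser@slave1.example.com hostname
```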
  • If your cluster is secured with Kerberos, then in addition to having a valid ticket on the client, a valid Kerberos ticket must exist on the node that is running Hive. This is the node that you specify when using the SAS Deployment Manager.
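You can check for a valid ticket on the Hive node with the standard Kerberos client tools; a sketch, where the principal sasuser@EXAMPLE.COM is a placeholder:

```shell
# List the current Kerberos tickets; a valid, unexpired ticket
# must be present on the node that is running Hive
klist

# If no valid ticket exists, obtain one for the deployment account
# (replace sasuser@EXAMPLE.COM with your own principal)
kinit sasuser@EXAMPLE.COM
```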
  • If you are using Cloudera, the SSH account must have Write permission to these directories:
    /opt/cloudera
    /opt/cloudera/csd
    /opt/cloudera/parcels
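You can confirm these permissions before starting the deployment with a quick check run under the SSH account; a sketch, assuming the directories already exist on the node:

```shell
# Report whether the current (SSH) account can write to each
# directory that the Cloudera deployment uses
for dir in /opt/cloudera /opt/cloudera/csd /opt/cloudera/parcels; do
    if [ -w "$dir" ]; then
        echo "writable:     $dir"
    else
        echo "NOT writable: $dir"
    fi
done
```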
  • You cannot customize the install location of the SAS Embedded Process on the cluster. By default, the SAS Deployment Manager deploys the SAS Embedded Process in the /opt/cloudera/parcels directory for Cloudera and the /opt/sasep_stack directory for Hortonworks, IBM BigInsights, and Pivotal HD.
  • If you are using Cloudera, the Java jar command and the gzip command must be available on the cluster.
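A quick way to verify that both commands are on the PATH of the SSH account:

```shell
# Check that the jar and gzip commands the deployment relies on
# can be found on the PATH; report anything that is missing
for cmd in jar gzip; do
    if command -v "$cmd" >/dev/null 2>&1; then
        echo "found:   $cmd"
    else
        echo "missing: $cmd"
    fi
done
```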
  • If you are using Hortonworks, the requiretty option is enabled, and the SAS Embedded Process is installed with the SAS Deployment Manager, then you must restart the Ambari server after the deployment. Otherwise, the SASEP service does not appear in the Ambari list of services. It is recommended that you disable the requiretty option until the deployment is complete.
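One way to handle this is to disable requiretty for the deployment account in the sudoers file (always edit it with visudo) and restart Ambari once the deployment finishes; a sketch, where "sasuser" is a placeholder account name:

```shell
# In /etc/sudoers (edit via visudo), disable requiretty for the
# deployment account -- shown here for a hypothetical user "sasuser":
#
#   Defaults:sasuser !requiretty
#
# After the SAS Embedded Process is deployed, restart the Ambari
# server so that the SASEP service appears in the list of services:
ambari-server restart
```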
  • The following information is required:
    • host name and port of the cluster manager
    • credentials (account name and password) for the Hadoop cluster manager
    • Hive service host name
    • Oozie service host name (if required by your software)
    • Impala service host name (if required by your software)
    • credentials of the UNIX user account that has SSH access to the Hadoop cluster manager
Last updated: February 9, 2017