Edit SAS Hadoop Configuration Properties File

In the unzipped file structure, you must edit the file Admin/etc/sas_hadoop_config.properties to supply certain information that cannot be obtained automatically. The file also contains optional settings that you might want to enable.
For the following section:
hadoop.client.config.filepath=<replace with full path>/User/SASWorkspace/hadoop/conf
hadoop.client.jar.filepath=<replace with full path>/User/SASWorkspace/hadoop/lib
hadoop.client.repository.path=<replace with full path>/User/SASWorkspace/hadoop/repository/
hadoop.client.configfile.repository=<replace with full path>/User/SASWorkspace/hadoop/repository
Replace <replace with full path> with the full path to the location where the ZIP file was unzipped.
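For example, if the ZIP file was unzipped to /opt/sas/hadoopconfig (a hypothetical location; substitute your own path), the entries would look like this:
# Hypothetical unzip location: /opt/sas/hadoopconfig
hadoop.client.config.filepath=/opt/sas/hadoopconfig/User/SASWorkspace/hadoop/conf
hadoop.client.jar.filepath=/opt/sas/hadoopconfig/User/SASWorkspace/hadoop/lib
hadoop.client.repository.path=/opt/sas/hadoopconfig/User/SASWorkspace/hadoop/repository/
hadoop.client.configfile.repository=/opt/sas/hadoopconfig/User/SASWorkspace/hadoop/repository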
For the following section:
hadoop.cluster.manager.hostname=
hadoop.cluster.manager.port=
hadoop.cluster.hivenode.admin.account=
hadoop.cluster.manager.admin.account=
Set hadoop.cluster.manager.hostname to the host name of the machine where either Cloudera Manager or Ambari is running.
Set hadoop.cluster.manager.port to the port on which Cloudera Manager or Ambari listens. Default values are provided.
Set hadoop.cluster.hivenode.admin.account to a valid account on the machine where the HiveServer2 service is running.
Set hadoop.cluster.manager.admin.account to a valid Cloudera Manager or Ambari account.
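As a hypothetical example, a cluster managed by Cloudera Manager might use entries like the following (by default, Cloudera Manager listens on port 7180 and Ambari listens on port 8080):
# Hypothetical host and account names; substitute values from your own cluster.
hadoop.cluster.manager.hostname=cmhost.example.com
hadoop.cluster.manager.port=7180
hadoop.cluster.hivenode.admin.account=hiveadmin
hadoop.cluster.manager.admin.account=admin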
For the following section:
hadoop.client.sasconfig.logfile.path=logs
hadoop.client.sasconfig.logfile.name=logs/sashadoopconfig/sashadoopconfig.log
hadoop.client.config.log.level=0
The default values create the log directory Admin/logs and the log file sashadoopconfig.log, respectively. You can change both values if you prefer.
You can set the value of hadoop.client.config.log.level to 3 to increase the amount of information logged.
Note: If your distribution is secured with Kerberos:
  • set hadoop.cluster.hivenode.credential.type=kerberos
  • set hadoop.client.config.log.level=3
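In that case, the relevant entries would look like this:
# Required when the cluster is secured with Kerberos.
hadoop.cluster.hivenode.credential.type=kerberos
hadoop.client.config.log.level=3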
If you use Cloudera Manager and it manages multiple clusters, set hadoop.cluster.manager.clustername to the name of the cluster to use.
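For example, if the cluster to use is named Cluster 1 (a hypothetical name), you would set:
# Needed only when Cloudera Manager manages more than one cluster.
hadoop.cluster.manager.clustername=Cluster 1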