The qkb_push.sh file is created in the EPInstallDir/SASEPHome/bin directory by the SAS Data Quality Accelerator install script (sepdqacchadp). You must execute qkb_push.sh from this directory.
By default, qkb_push.sh automatically discovers all nodes in the cluster and deploys the specified QKB on them. The script also generates an index file from the contents of the QKB and pushes this index file to HDFS.
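As a sketch, a default deployment might look like the following. The EPInstallDir value and the QKB source path shown here are assumptions for illustration; substitute the values for your site, and consult the script's usage help for the exact argument form in your release:

```shell
# Run from the bin directory of the SAS Embedded Process install.
# (EPInstallDir is site-specific; /opt/SASEPHome is an assumed example.)
cd /opt/SASEPHome/bin

# Push the QKB to every discovered node and build the index in HDFS.
# The QKB source path is an assumed example, not a documented default.
./qkb_push.sh /opt/sas/qkb/CI
```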
Flags are provided to enable you to deploy the QKB to specific nodes or a group of nodes. If you are expanding your Hadoop cluster by adding new nodes after the initial deployment, you might want to use one of these flags to deploy the QKB to those nodes rather than redeploying to the entire cluster. Flags are also available to suppress index creation or to perform only index creation. If users have a problem viewing QKB definitions from within Data Loader, you might want to re-create the index file.
Note: Only one QKB and one index file are supported in the Hadoop framework at a time. For example, you cannot have a QKB for Contact Information and a QKB for Product Data in the Hadoop framework at the same time. A subsequent QKB or index push replaces the previous one, unless the QKB that you are pushing is an earlier version than the installed one or has a different name. In either of those cases, you must remove the old QKB from the cluster before deploying the new one.
For more information, see Removing the QKB from the Hadoop Cluster.
Run qkb_push.sh as the root user. The script becomes the HDFS user or MapR user, as appropriate, in order to detect the nodes in the cluster. A flag is available to specify the HDFS user name if a name other than the default was configured.
To simplify maintenance, the source QKB directory is copied to a fixed location (/opt/qkb/default) on each node. The QKB index file is created in the /sas/qkb directory in HDFS. If a QKB or QKB index file already exists in the target location, the new QKB or QKB index file overwrites it.
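Given the default locations above, you can spot-check a deployment with standard commands. This is a sketch: it assumes the hdfs client is on the PATH of the node where you run it, and that your site uses the documented default paths:

```shell
# Confirm the local QKB copy on a worker node
# (fixed location documented above).
ls /opt/qkb/default

# Confirm the index file in HDFS
# (default directory documented above).
hdfs dfs -ls /sas/qkb
```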