In SAS Data Loader for Hadoop, a job is a program that is executed on a specified Hadoop cluster. A job consists of code that accesses specified data sources at specified times. The execution of a job in a directive follows these steps:
- Build the job using the transformation pages in a directive, and then select a target table.
- On the Result page of the directive, click Start transforming data.
- Processing begins in the vApp, where Hadoop code is generated.
- The code is submitted to the Hadoop cluster for execution. The Result page displays the Code and Log icons. (A simplified sketch of this submit-and-monitor pattern appears after these steps.)
- During execution in Hadoop, the vApp collects status messages that are sent by the Hadoop cluster.
- When job execution is complete, the target table is written to disk, the job completion time is displayed, and log entries are added to the log file. Click View Results to display the target table in the SAS Table Viewer.
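SAS Data Loader performs the code generation, submission, and monitoring automatically inside the vApp. As a rough illustration of the general submit-and-monitor pattern only, and not of the product's internal mechanism, the following Python sketch submits a hypothetical generated HiveQL file to a cluster with the standard beeline client and prints status messages as they arrive. The connection URL, file name, and table names are assumptions, not values from the product.

    import subprocess

    # Hypothetical generated code for the job, for example:
    #   CREATE TABLE target_table AS SELECT ... FROM source_table;
    GENERATED_HQL = "example_job.hql"                            # assumed file name
    HIVE_URL = "jdbc:hive2://hadoop-cluster:10000/default"       # assumed HiveServer2 endpoint

    def run_job():
        """Submit the generated HiveQL to the cluster and stream its status messages."""
        proc = subprocess.Popen(
            ["beeline", "-u", HIVE_URL, "-f", GENERATED_HQL],
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            text=True,
        )
        # Print status messages as the cluster reports them, analogous to the
        # messages that appear on the Result page during execution.
        for line in proc.stdout:
            print(line.rstrip())
        return proc.wait()  # a nonzero exit code indicates a failed job

    if __name__ == "__main__":
        exit_code = run_job()
        print("Job finished with exit code", exit_code)

In the product itself, these details are handled for you; the sketch only shows how generated code might be handed to a cluster and how its status output can be collected into a log.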
In addition to executing jobs from within their own directives, you can also execute jobs from the Saved Directives and Run Status directives.