About the Execution of Jobs

In SAS Data Loader for Hadoop, a job is a program that runs on a specified Hadoop cluster. The job consists of code that accesses specified data sources at specified times.
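To make the idea concrete, the following is a minimal sketch of what a job's code might look like: a HiveQL transformation that reads a source table and writes a target table, submitted here through the open-source PyHive client. This is an illustration only, not code that SAS Data Loader generates; the host, table, and column names are hypothetical.

    # Minimal sketch of a Hadoop "job": code that reads a source table and
    # writes a target table. SAS Data Loader generates and submits such code
    # for you; this example uses the third-party PyHive client, and the
    # host, database, table, and column names are all assumptions.
    from pyhive import hive

    conn = hive.Connection(host="hadoop-cluster.example.com", port=10000)
    cursor = conn.cursor()

    # Create the target table from a filtered view of the source table.
    cursor.execute("""
        CREATE TABLE sales_target AS
        SELECT customer_id, region, amount
        FROM sales_source
        WHERE amount > 1000
    """)
    conn.close()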
The execution of a job in a directive follows these steps:
  1. Build the job using the transformation pages in a directive, and then select a target table.
  2. On the Result page of the directive, click Start transforming data.
  3. Processing begins in the vApp, where Hadoop code is generated.
  4. The code is submitted to the Hadoop cluster for execution. The Result page displays the Code and Log icons.
  5. During execution in Hadoop, the vApp collects status messages that are sent by the Hadoop cluster (a generic sketch of this submit-and-poll pattern follows this list).
  6. When job execution is complete, the target table is written to disk, the job completion time is displayed, and log entries are added to the log file. Click View Results to display the target table in the SAS Table Viewer.
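Steps 4 through 6 follow a common submit-and-poll pattern: code is submitted to the cluster, status messages are collected until the job finishes, and the results are then reported. The sketch below illustrates that pattern against the standard YARN ResourceManager REST API; it is a generic illustration rather than the mechanism that the vApp uses, and the ResourceManager host and application ID are hypothetical.

    # Generic sketch of the submit-and-poll pattern described in steps 4-6:
    # after a job is submitted, its status is polled until it completes,
    # and the final status is then reported. This is NOT the SAS Data Loader
    # API; the ResourceManager host and application ID are assumptions.
    import time
    import requests

    RM = "http://resourcemanager.example.com:8088"   # YARN ResourceManager (assumed host)
    app_id = "application_1700000000000_0001"        # ID returned at submission (hypothetical)

    # Poll the YARN ResourceManager REST API until the application finishes.
    while True:
        app = requests.get(f"{RM}/ws/v1/cluster/apps/{app_id}").json()["app"]
        state = app["state"]                         # e.g. ACCEPTED, RUNNING, FINISHED
        print(f"status message: {state}")
        if state in ("FINISHED", "FAILED", "KILLED"):
            break
        time.sleep(10)

    # On completion, report the final status and completion time,
    # analogous to what the Result page displays.
    print(f"final status: {app['finalStatus']}, finished at {app['finishedTime']}")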
In addition to running jobs from within their own directives, you can also run jobs from the Saved Directives and Run Status directives.