You can access data implicitly in the context of a job.
When code is generated for a job, it is generated in the current context.
The context includes the default SAS Application Server at the time
the code is generated, the credentials of the person who generated the code,
and other information. The context of a job affects the way that data
is accessed when the job is executed.
To access data in the context of a job, you need to understand the
distinction between local data and remote data. Local data is addressable
by the SAS Application Server when code is generated for the job.
Remote data is not addressable by the SAS Application Server when
code is generated for the job.
For example, the following data is considered local in the context of a job:
- data that can be accessed as if it were on one or more of the same computers as the SAS Workspace Server components of the default SAS Application Server
- data that is accessed with a SAS/ACCESS engine (used by the default SAS Application Server); see the sketch after this list
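For example, a table in a third-party DBMS counts as local data when the default SAS Application Server reads it through a SAS/ACCESS LIBNAME engine. The following is a minimal sketch, assuming SAS/ACCESS Interface to Oracle; the libref dwora, the path orapath, the schema, and the credentials are hypothetical placeholders, not values generated by SAS Data Integration Studio.

   /* Sketch only: assign a library through the SAS/ACCESS Interface    */
   /* to Oracle so that DBMS tables can be read as if they were local   */
   /* to the SAS Application Server. All connection values are          */
   /* placeholders.                                                      */
   libname dwora oracle
      user=dw_user password=XXXXXXXX
      path=orapath schema=warehouse;

   /* After the libref is assigned, the DBMS table is addressed like    */
   /* any other SAS table.                                               */
   proc print data=dwora.customers (obs=10);
   run;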
The following data is considered remote in a SAS Data Integration Studio job:
- data that cannot be accessed as if it were on one or more of the same computers as the SAS Workspace Server components of the default SAS Application Server
- data that exists in a different operating environment from the SAS Workspace Server components of the default SAS Application Server (such as MVS data that is accessed by servers running under Microsoft Windows)
Note: Avoid or minimize
remote data access in the context of a SAS Data Integration Studio
job.
Remote data has to be moved because it is not addressable by the relevant
components in the default SAS Application Server at the time that
the code is generated. SAS Data Integration Studio uses SAS/CONNECT
and the UPLOAD and DOWNLOAD procedures to move data. Accordingly,
it can take longer to access remote data than local data, especially
for large data sets. It is especially important to understand where
the data is located when you use advanced techniques such as parallel
processing, because the UPLOAD and DOWNLOAD procedures run in each
iteration of the parallel process.
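The following is a minimal sketch of the kind of data movement described above, assuming a SAS/CONNECT connection to a hypothetical remote host named rmthost and a hypothetical remote library path; it is not the exact code that SAS Data Integration Studio generates.

   /* Sketch only: move a remote table to the local SAS Application    */
   /* Server with SAS/CONNECT. Host name and paths are placeholders.   */
   options comamid=tcp remote=rmthost;
   signon;                    /* start the remote SAS/CONNECT session  */

   rsubmit;
      libname remlib '/remote/data';    /* library on the remote host  */

      /* PROC DOWNLOAD runs in the remote session and transfers the    */
      /* table to the local WORK library.                               */
      proc download data=remlib.sales out=work.sales;
      run;
   endrsubmit;

   signoff;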
For information
about accessing remote data in the context of a job, administrators
should see the section on "Multi-Tier Environments" in the "SAS Data
Integration Studio" chapter of the
SAS Intelligence Platform:
Desktop Application Administration Guide. Administrators
should also see
Using Deploy for Scheduling to Execute Jobs on a Remote Host. For details about the code that is generated for local and remote
jobs, see the subheadings about LIBNAME statements and remote connection
statements in
Common Code Generated for a Job.