Managing Process Data

Problem

You want to optimize a process flow that is running too slowly or generating intermediate files that are clogging your file storage system.

Solution

You can perform the following tasks to manage process data effectively:

Tasks

Manage Views and Physical Tables

In general, each step in a process flow creates an output table that becomes the input for the next step in the flow. Consider what format is best for transferring data between steps in the flow. There are two choices:
  • Write the output for a step to disk (in the form of SAS data files or RDBMS tables).
  • Create views that process input and pass the output directly to the next step, with the intent of bypassing some writes to disk.
SAS supports two types of views: SQL views and DATA step views. The two types can behave differently. Switching from views to physical tables, or from tables to views, sometimes makes little difference in a process flow; at other times, the improvement can be significant. The following tips are useful:
  • If the data that is defined by a view is referenced only once in a process flow, then a view is usually appropriate.
  • If the data that is defined by a view is referenced multiple times in a process flow, then putting the data into a physical table will likely improve overall performance. When data is in a view, SAS must execute the underlying code repeatedly each time the view is accessed.
  • If the view is referenced once in a process flow, but the reference is a resource-intensive procedure that performs multiple passes of the input, then consider using a physical table.
  • If the view is SQL and is referenced once, but the reference is another SQL view, then consider using a physical table. SAS SQL optimization can be less effective when views are nested. This is especially true if the steps involve joins or RDBMS sources.
  • If the view is SQL and involves a multi-way join, it is subject to performance limitations and disk space considerations.
Assess the overall impact on your process flow before you make changes based on these tips. In some circumstances, you might find that you have to sacrifice performance in order to conserve disk space.
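For example, the following sketch contrasts the two formats, using hypothetical input tables work.orders and work.regions. A view stores only the query, so the join runs again on every reference; a physical table stores the joined rows once.

/* View: the join is re-executed each time work.staged_v is read. */
proc sql;
   create view work.staged_v as
   select o.order_id, o.amount, r.region_name
   from work.orders o inner join work.regions r
      on o.region_id = r.region_id;
quit;

/* Physical table: the join runs once; later steps read the stored rows. */
proc sql;
   create table work.staged_t as
   select o.order_id, o.amount, r.region_name
   from work.orders o inner join work.regions r
      on o.region_id = r.region_id;
quit;

If work.staged_v is read several times in the flow, each read repeats the join; replacing the view with work.staged_t pays the join cost only once.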
You can right-click a temporary output table in the Job Editor window and then select or deselect the Create as View option to switch between a physical table and a view. In this way, you can test the performance of a process flow as you alternate between tables and views.
In some cases, you can switch the format of a permanent output table between a physical table and a view. Right-click the permanent output table in the Job Editor window, select Properties, click the Physical Storage tab, and then select or deselect the Create as view option for the table. If the transformation that creates the table can create views, the table is created as a view. Transformations that do not support views might ignore the setting.

Delete Intermediate Files

Transformations in a SAS Data Integration Studio job can produce the following types of intermediate files:
  • procedure utility files that are created by the SORT and SUMMARY procedures when these procedures are used in the transformation
  • transformation temporary files that are created by the transformation as it is working
  • transformation output tables that are created by the transformation when it produces its result; the output for a transformation becomes the input to the next transformation in the flow
By default, procedure utility files, transformation temporary files, and transformation output tables are created in the WORK library. You can use the -WORK invocation option to force all intermediate files to a specified location. You can use the -UTILLOC invocation option to force only utility files to a separate location.
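For example, on a UNIX system, a batch SAS session for a job might be started with invocation options like the following (the paths and file names are hypothetical):

sas -work /fastdisk/saswork -utilloc /scratch/sasutil -sysin /jobs/etl_job.sas

Here, all intermediate files go to /fastdisk/saswork, except utility files, which go to /scratch/sasutil.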
Knowledge of intermediate files helps you to perform the following tasks:
  • View or analyze the output tables for a transformation and verify that the output is correct.
  • Estimate the disk space that is needed for intermediate files.
These intermediate files are usually deleted after they have served their purpose. However, it is possible that some intermediate files might be retained longer than desired in a particular process flow. For example, some user-written transformations might not delete the temporary files that they create.
Utility files are deleted by the SAS procedure that created them. Transformation temporary files are deleted by the transformation that created them. When a SAS Data Integration Studio job is executed in batch, transformation output tables are deleted when the process flow ends or the current server session ends.
When a job is executed interactively in SAS Data Integration Studio, transformation output tables are retained until the Job Editor window is closed or the current server session is ended in some other way (for example, by selecting Actions, then Stop, from the menu). For information about how transformation output tables can be used to debug the transformations in a job, see Reviewing Temporary Output Tables. However, as long as you keep the job open in the Job Editor window, the output tables remain in the WORK library on the SAS Workspace Server that executed the job. If this is not what you want, you can manually delete the output tables, or you can close the Job Editor window and open it again, which deletes all intermediate files.
Here is a post-processing macro that can be incorporated into a process flow. It uses the DATASETS procedure to delete all data sets in the Work library, including any intermediate files that have been saved to the Work library.
%macro clear_work;
   %local work_members num_members n;

   /* Build a comma-separated list of all data sets in the Work library. */
   proc sql noprint;
      select memname
         into :work_members separated by ","
         from dictionary.tables
         where libname = "WORK" and
               memtype = "DATA";
   quit;

   /* Split the list into one macro variable per member name.
      SQLOBS holds the number of rows that the query returned. */
   data _null_;
      work_members = symget("work_members");
      num_members = input(symget("sqlobs"), best.);
      do n = 1 to num_members;
         this_member = scan(work_members, n, ",");
         call symput("member" || trim(left(put(n, best.))), trim(this_member));
      end;
      call symput("num_members", trim(left(put(num_members, best.))));
   run;

   /* Delete each member with the DATASETS procedure. */
   %if &num_members gt 0 %then %do;
      proc datasets library=work nolist;
         %do n = 1 %to &num_members;
            delete &&member&n;
         %end;
      quit;
   %end;
%mend clear_work;
%clear_work
Note: The previous macro deletes all data sets in the Work library.
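A terser alternative, if catalogs and other member types can also be removed, is the KILL option of the DATASETS procedure:

proc datasets library=work kill nolist;
quit;

The KILL option deletes all members of the library immediately, without listing them or prompting for confirmation.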
For details about adding a post process to a SAS Data Integration Studio job, see Specifying Options for Jobs.
The transformation output tables for a process flow remain until the SAS session that is associated with the flow is terminated. Analyze the process flow and determine whether any output tables are not being used, especially large ones. If so, you can add transformations to the flow that delete these output tables and free up valuable disk space and memory. For example, you can add a generated transformation that deletes output tables at a certain point in the flow. For details about generated transformations, see Creating and Using a Generated Transformation.
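For example, such a transformation could issue a step like the following sketch, where work.stage1_out and work.stage2_out stand in for large output tables that no downstream step still reads:

/* Drop intermediate tables that have no remaining downstream readers. */
proc datasets library=work nolist;
   delete stage1_out stage2_out;
quit;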

Cleanse and Validate Data

Cleanse and de-duplicate incoming data early in the process flow so that invalid or redundant data is caught and eliminated before it can cause downstream errors. This also reduces the volume of data that is sent through the process flow.
To cleanse the data, consider using the Sort transformation with the NODUPKEY option or the Data Validation transformation. The Data Validation transformation can perform missing-value detection and invalid-value validation in a single pass of the data. It is important to eliminate extra passes over the data, so try to code all of these validations into a single transformation. The Data Validation transformation also provides de-duplication capabilities and error-condition handling. For information, search for data validation in SAS Data Integration Studio Help.
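For example, the following sketch removes duplicate records in one pass, assuming a hypothetical input table work.customers keyed on cust_id:

/* Keep one record per cust_id; duplicates on the BY key are dropped. */
proc sort data=work.customers out=work.customers_dedup nodupkey;
   by cust_id;
run;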

Minimize Remote Data Access

In a SAS Data Integration Studio job, data is considered local when it is directly accessible from the machine where the job executes, from a machine that is directly addressable from that machine, or through one of the SAS/ACCESS methods. Otherwise, the data is considered remote.
Remote data must be copied locally because it is not accessible to the relevant components in the default SAS Application Server at the time that the code is generated. SAS uses SAS/CONNECT and the UPLOAD and DOWNLOAD procedures to move the data. Accessing remote data can take much longer than accessing local data, especially for large data sets.
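The generated code for remote data resembles the following sketch, assuming that a SAS/CONNECT session named rmt has already been signed on and that work.sales is a table on the remote host:

/* Copy the remote table work.sales to the local Work library. */
rsubmit rmt;
   proc download data=work.sales out=work.sales_local;
   run;
endrsubmit;

Each such transfer adds network time that grows with the size of the data, which is why minimizing remote access matters.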
Avoid or minimize remote data access in a process flow. For information about accessing remote data or executing a job on a remote host, administrators should see "Multi-Tier Environments" in the SAS Data Integration Studio chapter of the SAS Intelligence Platform: Desktop Application Administration Guide.