Processing Large Files of Data

If the file that you want to process contains a large amount of data, you might achieve better performance by dividing that data among several smaller files. Many of the tools that collect raw data can split their output into several smaller files; for example, tools that extract SMF data have this capability. For the database-oriented collectors, you can instead run the staging job more frequently so that each run processes less data.
SMF data is typically collected and written to an output file by an IBM utility called IFASMFDP, also known as the SMF data set dump program. In a single pass of the input, IFASMFDP can produce multiple output SMF files, selecting the records for each output according to the SMF record types that you specify in its control statements. For example, you can generate three separate SMF files: one with DB2 data, one with CICS data, and one with everything else. You could then run three staging jobs concurrently, one against each of the three SMF files, and afterward run their associated aggregation jobs.
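
The following JCL is a minimal sketch of such a split, assuming that DB2 writes SMF record types 100 through 102 and CICS writes type 110. The data set names, space allocations, and record-type ranges are placeholders that you would adjust for your installation.

//SMFSPLIT EXEC PGM=IFASMFDP
//SYSPRINT DD SYSOUT=*
//* Input: the SMF dump data set to be split
//INDD1    DD DISP=SHR,DSN=SMF.DAILY.DUMP
//* Three output SMF files: DB2 records, CICS records, everything else
//DB2OUT   DD DISP=(NEW,CATLG),DSN=SMF.SPLIT.DB2,
//            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE)
//CICSOUT  DD DISP=(NEW,CATLG),DSN=SMF.SPLIT.CICS,
//            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE)
//OTHEROUT DD DISP=(NEW,CATLG),DSN=SMF.SPLIT.OTHER,
//            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE)
//SYSIN    DD *
  INDD(INDD1,OPTIONS(DUMP))
  OUTDD(DB2OUT,TYPE(100:102))
  OUTDD(CICSOUT,TYPE(110))
  OUTDD(OTHEROUT,TYPE(0:99,103:109,111:255))
/*

Because IFASMFDP writes all three outputs in one pass of the input, the split adds little elapsed time compared with reading the dump data set three times, and each output file can then be read by its own staging job running in parallel with the others.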