The I/O block size determines the amount of data that is physically transferred together in an I/O operation. The larger the block size, the fewer I/O operations are needed. The SPD Engine uses blocks in memory to collect the observations to be written to or read from a data component file. The IOBLOCKSIZE= data set option specifies the size of the block. (The actual size is computed to accommodate the largest number of observations that fit in the specified size of n bytes. Therefore, the actual size is a multiple of the observation length.)

The block size affects I/O operations for compressed, uncompressed, and encrypted data sets. However, the effects are different and depend on the I/O operation.
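As a sketch, the option can be set when a data set is created. The library reference, data set name, and block size value below are illustrative, not from the source:

```sas
/* SPDELIB is assumed to be an SPD Engine library.
   65536 is an illustrative block size in bytes; the engine rounds
   the actual size to a multiple of the observation length. */
data spdelib.sales (ioblocksize=65536);
   set work.sales;
run;
```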
- For a compressed data set, the block size determines how many observations are compressed together, which determines the amount of data that is physically transferred for both Read and Write operations. The block size is a permanent attribute of the file. To specify a different block size, you must copy the data set to a new data set, and then specify a new block size for the output file. For a compressed data set, a larger block size can improve performance for both Read and Write operations.
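Because the block size is a permanent attribute of a compressed file, changing it means copying the data. A minimal sketch, with illustrative names and values:

```sas
/* Copy an existing compressed data set to a new data set with a
   larger block size. SPDELIB, the data set names, and the
   131072-byte value are illustrative assumptions. */
data spdelib.sales_big (compress=yes ioblocksize=131072);
   set spdelib.sales;
run;
```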
- For an encrypted data set, the block size is a permanent attribute of the file.
- For an uncompressed data set, the block size determines the size of the blocks that are used to read the data from disk to memory. The block size has no effect when writing data to disk. For an uncompressed data set, the block size is not a permanent attribute of the file. That is, you can specify a different block size based on the Read operation that you are performing. For example, reading data that is randomly distributed or reading a subset of the data calls for a smaller block size because accessing smaller blocks is faster than accessing larger blocks. In contrast, reading data that is uniformly or sequentially distributed or that requires a full data set scan works better with a larger block size.
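Since the block size is not permanent for uncompressed data, it can be tuned per Read operation. A hedged sketch of the two cases described above, with illustrative library, data set, and variable names:

```sas
/* Subset read: a smaller block size (8192 here, illustrative)
   can be faster when only scattered observations are needed. */
proc print data=spdelib.sales (ioblocksize=8192);
   where custid = 12345;
run;

/* Full data set scan: a larger block size (262144 here,
   illustrative) can reduce the number of I/O operations. */
proc means data=spdelib.sales (ioblocksize=262144);
   var amount;
run;
```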