Configure Jobs and Services

Overview

Jobs and services are configured using the following configuration files, all of which are stored in install-path/etc:
app.cfg
Specifies options that determine how job nodes interface with the resources on the Data Management Server. Options in app.cfg specify how job nodes send e-mail, use a Quality Knowledge Base, and access address verification software. Most of these options are commented out by default; enable an option only when your jobs need to use that particular resource.
Real-time data services, real-time process services, batch jobs, and profile jobs are all developed and tested in DataFlux Data Management Studio. When you upload those jobs to DataFlux Data Management Server, the job execution environment must enable the same configuration options that were used to develop and test the jobs. For this reason, the options that are enabled on the Data Management Server should closely match the options that are enabled in DataFlux Data Management Studio. Option values differ primarily when they reference storage locations.
For more information about the app.cfg file, see the DataFlux Data Management Studio Installation and Configuration Guide.
service.cfg
Specifies options that apply to real-time data services and real-time process services. This file currently supports one option, BASE/LOGCONFIG_PATH, which specifies the path to the log file directory that is used by service jobs.
batch.cfg
Specifies options that apply to batch jobs. This file provides an alternate value for the BASE/LOGCONFIG_PATH option.
macros.cfg
Specifies options (none by default) and macros that apply to all jobs and real-time services. For information about using macros, see Define Macros.
Options are resolved in order of precedence, starting with the job’s advanced properties. If an option is not specified in the job, then the server checks for a value in macros.cfg, followed by either service.cfg or batch.cfg. If the option is not specified in any of these locations, then its default value is used.
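For example, a minimal sketch of how a single option can take different values for services and for batch jobs, using the BASE/LOGCONFIG_PATH option described above (the directory paths are illustrative only):

In service.cfg:
BASE/LOGCONFIG_PATH=C:\dmserver\logs\services

In batch.cfg:
BASE/LOGCONFIG_PATH=C:\dmserver\logs\batch

A value set in a job's advanced properties or in macros.cfg would take precedence over both of these settings.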

Grant Job Permissions

You can configure user permissions for batch jobs by using the administrative interface in DataFlux Data Management Studio. Unlike batch jobs, profile jobs do not have job-level security functions, so they cannot be secured at the job level. You can still grant permissions for profile jobs at the user or group level.

QKB Memory Usage for Jobs and Services

The DataFlux Quality Knowledge Base (QKB) is loaded into memory in different ways to support batch jobs, profile jobs, real-time data services, and real-time process services. For batch and profile jobs, the definitions in the QKB are loaded into memory individually, as the job needs them. The definitions remain in memory until the end of the job. To change this default behavior, set the configuration option QKB/ON_DEMAND=NO in the app.cfg configuration file. Specifying NO loads the entire QKB into memory each time you run a batch job or profile job.
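For example, to load the entire QKB at the start of each batch or profile job, add the following line to app.cfg (this simply applies the option described above):

QKB/ON_DEMAND=NO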
Real-time data services and real-time process services always load the entire QKB into memory for each new service. Similarly, existing services load the entire QKB into memory for each new thread. The QKB remains in memory until the service or thread terminates. Loading the QKB into memory can affect the performance of real-time services, and the memory used by the QKB can be a factor when you preload real-time services.

Configure Bulk Loading

Bulk loading enhances the performance of jobs that monitor business rules when those jobs include row-logging events. You can optimize performance for your implementation by changing the number of rows in each bulk load. By default, the number of rows per load is 1000. To change the default, set the app.cfg option MONITOR/BULK_ROW_SIZE.
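For example, to double the number of rows in each bulk load, you might set the following in app.cfg (the value 2000 is illustrative only; tune it for your implementation):

MONITOR/BULK_ROW_SIZE=2000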

Configure Storage for Temporary Jobs

The business rules monitor creates and executes temporary jobs. Those jobs are normally kept in memory and are not stored on disk. When a directory is specified for temporary jobs, the Monitor stores temporary jobs in that location and leaves them in place after the job is complete. To specify a directory for temporary jobs, create the directory and set the path of that directory as the value of the app.cfg option MONITOR/DUMP_JOB_DIR. By default, this option is not set and the Monitor does not store temporary jobs on disk.
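For example, assuming that you have created a hypothetical directory named C:\dmserver\monitor_jobs, the app.cfg entry might look like this:

MONITOR/DUMP_JOB_DIR=C:\dmserver\monitor_jobs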

Support for Remote-Access Clients

Remote-access clients are published applications (rather than streamed applications) that are run from software such as Citrix or Microsoft RemoteApp. Your client applications and DataFlux Data Management Studio can be run as remote-access clients.
Remote-access clients require additional support to ensure that the cancellation of jobs results in the termination of all remote child processes. To effectively cancel remote processes, set the following option in install-path/etc/app.cfg:
BASE/MAINTAIN_GROUP=YES
If you do not set the MAINTAIN_GROUP option, then the cancellation of jobs can allow child processes to persist on remote-access clients. These rogue processes can become associated with a new group or job.
If you set the MAINTAIN_GROUP option and remote child processes still persist, then you might have to restart the remote-access client to terminate those processes.

Resolve Out-of-Memory Errors When Using Sun JVM

You can encounter out-of-memory errors in jobs that include a SOAP Request node or an HTTP Request node when you are using a Sun Java Virtual Machine (JVM). To resolve this error, add the following options to the Java start command that is specified in the app.cfg file:
-XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled
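For example, a sketch of the resulting app.cfg entry, assuming that the Java start command is specified with the JAVA/COMMAND option and that java is on the system path (verify the option name and your existing command in your installation before editing):

JAVA/COMMAND=java -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled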