Overview of Jobs and Services

Types of Jobs and Services

The types of jobs and services that you can store and run on a DataFlux Data Management Server include real-time data services, real-time process services, batch jobs, and profile jobs.
Real-Time Data Services
Real-time data services are designed to respond quickly to a request from a client application. A real-time data service can process a small amount of data input from a client, or it can retrieve and deliver a small amount of data from a database. Real-time data services are executed by the DFWSVC process, as defined in Manage the DFWSVC Process.
Real-Time Process Services
Real-time process services accept input parameters only from clients, to trigger events or change a display. Real-time process services are executed by the DFWFPROC process, which runs a WorkFlow Engine (WFE), as defined in Manage the DFWFPROC Process Services.
About Real-Time Data Services and Real-Time Process Services
If a real-time data service or real-time process service fails to terminate normally, the service is terminated when the client connection times out.
To maximize performance, logging is not enabled for real-time services. To activate logging for debugging purposes, see Administer Data Service Log Files.
Real-time services are stored in install-path\var\data_services and install-path\var\process_services.
Batch Jobs
Batch jobs are designed to be run at specified times to collect data and generate reports. Batch jobs are not intended to provide real-time responses to client requests.
All batch jobs are logged in dmserver.log. For more information, see Administer Log Files for Batch and Profile Jobs.
Batch jobs are stored in install-path\var\batch_jobs.
Batch jobs, like real-time process services, are run by the DFWFPROC process. You can pass input parameters into batch jobs, but you cannot pass actual data.
Profile Jobs
Profile jobs are designed to analyze the quality of specified data sets. Profile jobs are handled as repository objects. They are required to reside in the Data Management Repository. When you run a profile job, the server finds the job in the repository and then starts a new instance of the DFWFPROC process. The requested profile is then run by ProfileExec.djf, which resides in the same directory as the repository. For more information about the Data Management Repository, see About the Repository.
Unlike batch jobs, profile jobs do not support unique user permissions, because they do not have associated object-level access control. To learn more about permissions, see Manage Permissions.
When you install a new version of the DataFlux Data Management Server, you are required to import all of your profile jobs into a new Data Management Repository. For more information about importing profile jobs, see Post-Installation Tasks.

Usage Notes for Jobs and Services

In the Windows operating environment, it is possible to save a DataFlux Data Management Server job to a directory, and then not be able to see that job in that directory. To resolve this issue, save your jobs in a location that does not use mapped drives. A Windows service is not able to access mapped drives, even if the service is started under a user account that maps those drives.
DataFlux Data Management Server cannot run a remote job or service whose location is specified with a mapped drive letter, such as Z:\path\remote_job.ddf. The server runs as a service under a SYSTEM account and no mapped drives are available to such an account. Use a UNC path to specify the location of a remote job or service, such as \\ServerHostName.MyDomain.com\path\remote_job.ddf.
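As an illustration of the rule above, a deployment script could reject mapped-drive locations before submitting a job to the server. This is a minimal hypothetical sketch, not part of the product; the helper name is an assumption chosen for illustration:

```python
def is_unc_path(path: str) -> bool:
    """Hypothetical pre-deployment check: True for UNC paths such as
    \\\\host\\share\\job.ddf, False for drive-letter paths such as
    Z:\\path\\remote_job.ddf, which the server cannot resolve."""
    # A UNC path always begins with two backslashes.
    return path.startswith("\\\\") and len(path) > 2

print(is_unc_path(r"\\ServerHostName.MyDomain.com\path\remote_job.ddf"))  # True
print(is_unc_path(r"Z:\path\remote_job.ddf"))                             # False
```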
The following restrictions apply to the name of a job that will be deployed to DataFlux Data Management Server. You should follow these restrictions for all jobs. A job name can contain alphanumeric characters, white space, and any characters from the following list:
,.'[]{}()+=_-^%$@!~/\
The maximum length of a job name is 8,192 bytes. DataFlux Data Management Server will not upload, list, or run a job whose name contains characters other than those cited above.
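The naming rules above can be checked before a job is uploaded. The following is a hypothetical sketch, not part of the product; the helper name and the use of Python's Unicode-aware character tests are assumptions:

```python
# Punctuation characters permitted in job names, per the documented list.
ALLOWED_PUNCT = set(",.'[]{}()+=_-^%$@!~/\\")

def is_valid_job_name(name: str) -> bool:
    """Hypothetical pre-upload check against the documented naming rules."""
    # The maximum length is 8,192 bytes (not characters).
    if len(name.encode("utf-8")) > 8192:
        return False
    # Alphanumeric characters, white space, and the listed punctuation only.
    return all(c.isalnum() or c.isspace() or c in ALLOWED_PUNCT for c in name)

print(is_valid_job_name("address_verify (US).ddf"))  # True
print(is_valid_job_name("bad*name"))                 # False
```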
In UNIX or Linux, to run a shell command in a job, use the execute() function, as shown in the following examples. To run the command directly:
execute("/bin/chmod", "777", "file.txt")
To run a command through a shell:
execute("/bin/sh", "-c", "chmod 777 file.txt")
The preceding examples grant read, write, and execute permissions (mode 777) on a text file.
Last updated: June 16, 2017