You can use data jobs
and process jobs to perform data integration tasks in DataFlux Data
Management Studio. For example, the data inputs data job node category
contains nodes that perform tasks such as running SQL queries, extracting
table metadata, and processing XML input. Similarly, the data output
nodes support tasks such as deleting records and generating an HTML-formatted
report from the results of a data job. These nodes can also produce
a report that lists the duplicate records identified by the match
criteria that you have specified.
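For example, a data input node that runs an SQL query might issue
a statement like the following sketch. The customers table and its
columns are hypothetical and stand in for whatever source table
your query reads.

   /* A minimal sketch of a query that an SQL-based data input
      node might run; the table and columns are hypothetical. */
   SELECT customer_id,
          customer_name,
          state
   FROM customers
   WHERE state = 'NC'
   ORDER BY customer_name;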
The nodes in the data
integration category support a range of tasks. These tasks include
sorting and joining your data, combining the data from two data sets,
and performing SQL lookups and SQL execution. You can also use data integration nodes
to issue SOAP and HTTP requests.
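An SQL lookup typically matches each incoming record against a
reference table. The following sketch shows the kind of join that
such a lookup performs; the orders and customers tables and their
columns are hypothetical.

   /* A minimal sketch of the join behind an SQL lookup; the
      tables and columns shown are hypothetical. */
   SELECT o.order_id,
          o.customer_id,
          c.customer_name
   FROM orders o
   LEFT JOIN customers c
          ON o.customer_id = c.customer_id;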
You can use the process job SQL nodes to perform tasks such as
running SQL in parallel and managing custom SQL scripts. You can
also write your own SQL to create tables or insert content into them.
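For example, a custom SQL script placed in one of these nodes might
look like the following sketch. The customer_summary and customers
tables and their columns are hypothetical.

   /* A minimal sketch of a custom SQL script that creates a
      table and inserts summary rows into it; the tables and
      columns shown are hypothetical. */
   CREATE TABLE customer_summary (
       state          CHAR(2),
       customer_count INTEGER
   );

   INSERT INTO customer_summary (state, customer_count)
   SELECT state, COUNT(*)
   FROM customers
   GROUP BY state;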
The process job data integration nodes are useful when you need to
embed code in a node or point to a file that contains SAS code. For
example, the SAS Code Reference process job node enables you to point
to a SAS code file on the file system or in a DataFlux repository.
You can then execute that code as part of a process job. A Data
Integration license is required to use these nodes.