DataFlux Data Management Studio 2.6: User Guide
A process job combines data processing with conditional processing. The process flow in the job supports logical decisions, looping, events, and other features that are not available in a data job flow. Data job nodes can be added to a process flow to encapsulate all of the data-processing power of a data job into a node in the process flow.
While process job nodes can consume and produce data through work tables, they are not designed for large data set transformations. They are mainly used to launch processes and make decisions: they execute based on input parameters, produce output parameters, and decide which node to execute next.
Process job nodes are divided into the following categories:

- Data Job nodes
- Address Update nodes
- Utilities nodes
- SQL nodes
- SAS Code nodes
- JMS nodes
The tables below briefly describe each node. To display the online Help for a process job node, click the node name in the tables below. You can also open a process job in the process job editor, select a node in the Nodes tree, and then click the Help link in the pane at the bottom of the Nodes tree.
You can use data job nodes to encapsulate data processing flows in process jobs. Data jobs are the main way to process data in DataFlux Data Management Studio.
Name | Description |
---|---|
Data Job Node | Encapsulates a data job within a process job. Enables you to add a new set of data-processing operations that are appropriate for the current process job. For an example of how to use this node, see Creating a Process Job. |
Data Job (reference) | Points to a data job file (*.dds file) on the file system. Used to include a data job within a process job. Enables you to point to an existing data job that has a set of data-processing operations that are appropriate for the current process job. |
You can use these address update nodes to perform various tasks that are related to the National Change of Address (NCOA) service, which makes address update information available to mailers to help reduce undeliverable mail.
Name | Description |
---|---|
Address Update Audit Report | Enables you to run an audit report against the NCOALink® process. The software needs to be able to generate this audit report for certification and auditing purposes. For more information, see Using the Address Update Add-On. |
Address Update Monthly Reports | Enables you to run a series of required monthly reports against the NCOALink® process. These reports detail each mailing list that is processed and who it is processed for. For more information, see Using the Address Update Add-On. |
Address Update Process Summary Report | Enables you to run a subset of the monthly customer service log report against the NCOALink® process that is presented in a simple text format. For more information, see Using the Address Update Add-On. |
You can use the utilities nodes to help you manage your processes. For example, the Event Listen node can listen for events, and the Process Flow Work Table Reader node can read work tables generated by other flows. All of these nodes operate on process-related data or react to process information.
Name | Description |
---|---|
Echo | Echoes its input to its output, so that the node can collect the outputs of one or more other nodes and expose them as a single binding point. For an example of how to use this node, see Creating a Process Job. |
Fork | Enables you to launch multiple processes that run in parallel within a single process in a process job. |
Parallel Iterator | Enables you to launch a single process that runs in parallel a specified number of times. |
Process Flow Work Table Reader | Reads work tables and publishes the data to the output. For an example of how to use this node, see Creating a Process Job. |
Process Job Reference | Adds a reference to a process job to another process job. The data in the referenced process job is available to other nodes in the current process job. |
Event Listen | Enables you to specify nodes and events to be monitored in a process job. If a specified event is found, then the Event Listen node executes any downstream node to which it is connected. |
Expression | Provides an Expression properties tab that you can use to create an expression. For an example of how to use this node, see Deploying a Process Job as a Real-Time Service. |
Global Get/Set | Reads a global job variable and enables you to set the variable to another value. In a process job, the Global Get/Set node is the only way to get and set string variables for nested jobs. Variables on a nested job make the nested job reusable. For an example of how to use this node, see Deploying a Process Job as a Real-Time Service. |
If Then | Creates an IF THEN expression in a job. For an example of how to use this node, see Creating a Process Job. |
Terminate Job | Controls how a job is terminated. |
Profile Reference | Runs an existing profile in the context of a process job. For example, you could combine data from different data sources into a single table and then run an existing profile against that table. You also could run the data through some data quality steps and then profile it to measure the quality level. |
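The Fork and Parallel Iterator nodes follow a common fan-out pattern: launch several units of work in parallel, then collect their results. A minimal sketch of that pattern in Python, not DataFlux code; the function names here are invented, and a thread pool stands in for the separate processes that the nodes actually launch:

```python
from concurrent.futures import ThreadPoolExecutor

def run_step(run_id: int) -> str:
    # Stand-in for one iteration of a process flow.
    return f"iteration {run_id} finished"

def parallel_iterate(iterations: int) -> list[str]:
    # Like Parallel Iterator: run the same step in parallel a
    # specified number of times and collect the results in order.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_step, range(iterations)))

results = parallel_iterate(4)
print(results)
```

Fork differs only in that the parallel branches are different steps rather than repetitions of the same one; the join-and-collect shape is the same.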
You can use the SQL nodes to run SQL in parallel, manage custom SQL scripts, write your own SQL, and create tables or insert content into them. Although the SQL nodes are primarily data-processing nodes, they can run any SQL statement (including DDL), and a single node can submit multiple SQL statements.
You can use the SQL nodes in a process job to perform the following functions:
Name | Description |
---|---|
Create Table (query reference) | Provides an SQL query that you can use as a template for creating a new table from the results of a reusable query. Then, you can add all of the query results to the new table. |
Create Table (select) | Provides an SQL query that you can use as a template for creating tables with a SELECT statement. The table is added to the process job. |
Insert Rows (query reference) | Provides an SQL query that you can use as a template for retrieving rows from a reusable query and inserting them into an existing table. |
Insert Rows (select) | Provides an SQL query that you can use as a template for inserting rows into a table with a SELECT statement. The table is added to the process job. |
SQL Execute | Enables you to explicitly define, import, or export user-written SQL code. |
Update Rows | Enables you to construct an update against a target table. Use the Update Rows tab to define the target table, the affected fields, and the WHERE clauses. For an example of a job that contains an Update Rows node, see Updating Rows in a Target Table. |
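The statements that these nodes submit are ordinary SQL, and a single SQL Execute node can send several statements, DDL and DML mixed. As a rough sketch of the patterns the table describes, here is an equivalent sequence run against an in-memory SQLite database (the `src` and `tgt` tables and their columns are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Set up a small source table to query against.
conn.execute("CREATE TABLE src (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [(1, "alpha"), (2, "beta"), (3, "gamma")])

# Create Table (select): create a new table from a SELECT statement.
conn.execute("CREATE TABLE tgt AS SELECT id, name FROM src WHERE id > 1")

# Insert Rows (select): insert query results into an existing table.
conn.execute("INSERT INTO tgt SELECT id, name FROM src WHERE id = 1")

# Update Rows: target table, affected fields, and a WHERE clause.
conn.execute("UPDATE tgt SET name = 'ALPHA' WHERE id = 1")

rows = sorted(conn.execute("SELECT id, name FROM tgt").fetchall())
# rows == [(1, 'ALPHA'), (2, 'beta'), (3, 'gamma')]
```

The query-reference variants of Create Table and Insert Rows differ only in that the SELECT comes from a reusable saved query rather than being written in the node itself.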
You can use the SAS code nodes to write SAS code directly in a node or to point to a file that contains SAS code. A Data Integration license is required to use these nodes.
Name | Description |
---|---|
SAS Code | Provides a process job node where you can write and store SAS code. You can then execute that code as part of a process job. |
SAS Code Reference | Provides a process job node where you can point to a SAS code file on the file system or in a DataFlux repository. You can then execute that code as part of a process job. |
You can use the JMS nodes to exchange messages between a process job and a Java Message Service (JMS).

Name | Description |
---|---|
JMS Reader | Reads information from a Java Message Service (JMS) and makes this information available in a process job. See Overview of JMS Nodes. |
JMS Writer | Writes information from a process job to a Java Message Service (JMS). See Overview of JMS Nodes. |
Documentation Feedback: yourturn@sas.com

Doc ID: dfU_ProcessFlowNodes.html