DataFlux Data Management Studio 2.6: User Guide
Advanced Properties enable advanced users to select and configure properties that meet specific data needs for the nodes included in data and process jobs. To access the Advanced Properties for a selected data node, right-click the node and click Open Advanced Properties in the pop-up menu. The Advanced Properties dialog includes the following elements:
Enable serial target processing - When selected, specifies that the target nodes (the nodes at the end of each chain) are each processed in their entirety before the next one starts. When serial processing is not enabled and you have two target nodes after a branch, both nodes are processed at the same time (one row from A, followed by one row from B, and then back to A, and so on). When serial processing is turned on, all of the rows are read from A, followed by all of the rows from B.
Set as default target node - When selected, designates the node that will provide the output for the entire job. Use this when you build a process job that will be called from another job using the Embedded Job step. (This function becomes particularly important when you construct a job with an External Data Provider step, because it has many different outputs.)
Processing order - Specifies the processing order for a particular node.
The toolbar in the Advanced Properties dialog enables you to edit a default value for a selected property, clear or delete a selected property, clear the default value for a selected property, and refresh the list of properties.
The following table provides an alphabetical list of the advanced properties available in DataFlux Data Management Studio.
Note: Not every advanced property is available in every node. See specific nodes for details.
Advanced Properties | Descriptions |
_ERRORMSG | |
_WARNING | |
_ELAPSED | |
_START_TIME | |
_END_TIME | |
_SUMMARY | |
1ST_INROW_STARTCLID | If set to true, the first row that enters the node does not contain data to be clustered but contains the cluster number at which clustering should begin. |
1ST_OUTROW_MAXCLID | If set to true, the first row that exits the node does not contain actual data to be clustered but contains the maximum cluster number from the data rows to appear in the output. |
3553_FILENAME | |
3553_LIST_ID | |
3553_LIST_PROCESSOR | |
3553_MAILER_ADDRESS1 | |
3553_MAILER_ADDRESS2 | |
3553_MAILER_LASTLINE | |
3553_MAILER_NAME | |
3553_NUM_LISTS | |
ABBREV_CITY | The abbreviation for the name of the city. |
ABBREV_STREET | The abbreviation for the name of the street. |
ACCEPT_MANY | When set to true, the local:gt() built-in function allows zero or many elements (element()*) as input. When set to false, the local:gt() built-in function allows zero or one element (element()?) as input. When set to true, there are side effects that can affect round-tripping XML. The default is false. |
ACTION | The action for the Row Validation condition set earlier. |
ACTION_COND | The condition you created with Field Name, Operation, and Value and added to the Audit Expression list. |
ACTION_NAME | |
ACTION_VALUE | |
ACTION_VALUE_TYPE | |
AD_MODE | Specifies how addresses are handled during address verification: Validate = 0; Parse = 1; Parse and Standardize = 2. The default is 0. |
ADD_FIELDS | The fields that will be made available for the next step in your job flow. |
ADDITIONAL_NOTES | Enables you to add a one-character note to an Address Update Lookup node. NCOA documentation states that the literal A in this field denotes that you have requested a longer processing period. |
ADD_LINEFEEDS | When true, linefeeds are inserted for every element. The default is false. |
ADDR_CA_LVR | If this flag is set, LVR (Large Volume Receiver) address correction is enabled for Canadian addresses. Using standard SERP rules, all LVR addresses are automatically considered valid. When this flag is set, the Daemon will attempt to correct them using the same process as non-LVR addresses. |
ADDR_CA_PASS_VALID | If this flag is set, Canadian addresses that are valid on input (require no corrections to be deliverable) will be passed through the function without any changes. Otherwise, changes that do not affect the validity (for example, changing Street to St) can be applied. |
ADDR_CA_RURAL | If this flag is set, rural address correction is enabled for Canadian addresses. Using standard SERP rules, all rural addresses are automatically considered valid. When this flag is set, the Daemon will attempt to correct the address using the same process as urban addresses. |
ADDR_GUESS_COUNTRY | |
ADDR_OUTPUT_LATIN | If set to true, the output from dfIntelliServer is forced to Latin instead of the native language (for example, kanji). Data in non-European languages will be transliterated. Default is set to false. |
ADDR_PROPERCASE | |
ADDR_RETURN_INVALID | If this flag is set, instead of raising an error when the input address is invalid (undeliverable), the function returns an indication of the invalid address along with error information. |
ADDR_US_CASS | If this flag is set, strict CASS rules are used for US addresses. By default, a more aggressive matching strategy is used, which can result in additional corrections that are not allowed using strict CASS rules. |
ADDR_US_DPV | |
ADDR_US_ELOT | If true, eLOT results are included for US address verification. Use this only when eLOT data is installed. This does not affect non-US addresses. |
ADDR_US_LACS | This does not affect non-US addresses. |
ADDR_US_RDI | This does not affect non-US addresses. |
ADDRESS_TYPE | |
ADDRESS_LINE_SEP | |
ALTERNATE | Reads rows from left and right instead of all from the left and then all from the right. With this setting, child rows are requested from both sides evenly so no data is cached. |
ALTERNATE_HANDLING | Controls the handling of variable characters such as spaces, punctuation, and symbols. With the default value, NON_IGNORABLE, differences among these variable characters are of the same importance as differences among letters. When SHIFTED is specified, these variable characters are of minor importance. The SHIFTED value is often used in combination with STRENGTH set to Quaternary. In that case, spaces, punctuation, and symbols are considered when comparing strings, but only if all other aspects of the strings (base letters, accents, and case) are identical. (See the collation sketch following this table.) |
APPEND | If set to true, appends the data to the file instead of overwriting the existing text file. This Advanced Property is set to false by default. |
APPEND_RESULT | |
AUDITFILENAME | The name of a file that will contain audit information about the duplicate elimination operation. |
BACKUPNUMROWS | |
BASE/SORTBYTES | Specifies the number of bytes to use when sorting. |
BASE/SORTMERGES | Enables merging during sorts. |
BASE/SORTTEMP | Specifies the temporary path for sorts. |
BASE/SORTTHREADS | Specifies the number of sort threads. |
BF_LOCALE | |
BF_PATH | The Blue Fusion path. |
BIND_USING_TABLE_TYPES | When this Advanced Property is set to false, the step Data Type is used for binding. Default is true. |
BLANKISNULL | |
BLANKTONULL | If true, blank fields will be treated as null values and they will not be considered. |
BLOCK_SIZE | Tells the node in what size blocks to return the input file's text as output. |
BUF_SIZE | The maximum number of messages to buffer before passing them from the C++ layer to Java. Defaults to 0 (no buffer). |
BUFFERCOUNT | The maximum number of buffers that are used to hold the XMLStreamWriter output that is then input into the XQJ support. The default is 5. |
BUFFERSIZE | The size of each buffer holding the XMLStreamWriter output that is then input into the XQJ support. The default is 8192. |
BULK_ROW_COUNT | This allows you to process a set number of rows at a time. Enter the number of rows to be processed in the Property Value field when you have thousands of records. This option will help your system run more efficiently. The default value for the BULK_ROW_COUNT option is 10,000. Note: Bulk row count is supported on databases such as Oracle, SQL Server, and DB2. There is no need to activate a bulkload option at the driver level in the data connection for the table. |
BY_GROUP | This tells the job to sort by the selected group. When a key differs, an indicator is returned informing the caller that the retrieved row begins a new group. This is useful when clustering by a single key or similar processing. |
BYTE_ORDER_MARK | This setting is used for UTF-8, UCS-2BE, or UCS-2LE encoding. Default is set to true. Only change to false if you do not have a need to process the byte order in your Microsoft Windows system. |
BYTES_TO_SKIP | This is the number of bytes to skip at the beginning of your data file. Default is 0. |
CANADA_PATH | |
CASE_FIRST | |
CASE_SOURCE | |
CATALOGNAME | Used for database connection. |
CHART_TITLE | The name for the chart file. |
CHUNKSIZE | Specifies the amount of data available to each sort thread (integer). When sorting is not involved, this specifies the size of the internal datafile buffer. |
CID_COL | Identifies the column containing the cluster ID. |
CLASSNAME | This is the Java class name created in Java. This Advanced Property is used with the Java Plugin. |
CLASSPATH | This Advanced Property indicates additional Java classpaths. When specified, the given set of classpaths is added to the set of classpaths used during the execution of the Java Step. |
CLUST_ID_FIELD | The input field that contains the numeric cluster identifier. |
CLUSTER_FIELD | The input field that contains the numeric cluster identifier. |
CLUSTER/BYTES | This is the total number of bytes used for clustering. |
CLUSTER/TEMP | This is the location of the temporary files for clustering. Also, the location where log files are written when logging is enabled. |
CLUSTIDFIELD | The input field that contains the numeric cluster identifier. |
COLLATION | Determines how values are collated. A value of UPPER sorts uppercase letters first, then the lowercase letters. A value of LOWER sorts lowercase letters first, then the uppercase letters. If you do not select a collation value, the user's locale-default collation is selected. Linguistic collation allows for a locale-appropriate collating sequence. |
COLLECTION_VAR | If the XML parameter defines a collection, the name of the external string that should be set to the value of the XML parameter can be identified in this parameter. |
COLUMNS | Represents the column values. |
COMBINED_VERIFY_METHODS | Improves address accuracy. This property is set to OFF by default, which is best for the performance of the Loqate node. If you want to increase the accuracy of Loqate processing, set this property to ON. |
COMMANDS | Enter the command information including Type and Arguments. |
COMMIT_EVERY | How often a commit command should be sent to the data source. |
COMMITMODE | Changes are committed each time this number of rows has been written. For example, 0 = every row, 1 = single transaction, and so on. |
COMPACT | |
COMPAT_EXACT_MATCH | This Advanced Property is available for jobs created prior to version 8.0. If your old job is loaded in version 8.0, this option is set to TRUE so you will see the previous unpadded match code. When a new job is loaded into version 8.0, the default is set to FALSE and you will see the new match codes. |
CONCAT_SOURCE | The fields and literal text you have specified for concatenation. |
CONCURRENT_PROC | Specifies whether concurrent processes are used. The available values are Y and N. |
CONDITIONS | |
CONNECTSTRING | The string used by the Entity Resolution File Output node to connect to the database so that it can apply changes interactively. |
CONNOPTS | Space-separated string of key=value pairs used to create the initial JNDI context. (JNDI-instance-specific options that might be required for a client to make a JNDI connection.) |
CONVERT_FIELD_MAP | This Advanced Property is available to convert field maps. |
CORR_ACCENTS | If true, accent characters are added to output fields. |
CREATE3553 | |
CREATE_SOA | If true, a Statement of Accuracy form is created. Default is false. |
CRLF_TERMINATE | If true, limits the field length to correspond with the carriage return (Boolean). |
CTXFACTORY | The fully qualified class name of the JNDI provider context factory. |
CURRENT | This flag indicates whether to show only the current version of the domain. If TRUE, only the latest domain version is listed; if FALSE, all domain versions are listed. The default is TRUE. |
DASH_IN_ZIP | |
DATA | Represents the data input for the node. |
DATA_CACHE_OVERRIDE | |
DATA_CACHE_PCT | |
DATA_FILENAME | This is the file path for the data file. |
DATAFIELD | |
DATAFILE_IS_ASCII | This Advanced Property refers to the Data format field. The options are ASCII or EBCDIC (Extended Binary Coded Decimal Interchange Code). Default is false. |
DATA_RETURNED | Specifies the type of data returned. For the values available in the Address Update Lookup node, see the Address Update Lookup node. |
DEBUG_LOG_FILE | Enter a pathname to create a debug log file. Default is _null. Note: The log file is appended each time the job is run or when you preview a job, so this file can become rather large quickly. |
DEBUG_STAGE | Special parameter for debugging only. |
DEFAULT_COUNTRY | |
DEFLOCALE | |
DEF_OVERRIDE | Enables you to override how the field is created (for example, "CHAR(255)"). This property can be set in two ways. One way is to click the Override button in the standard properties dialog for the Data Target (Insert) data job node. The other way is to display the advanced properties dialog for the Data Target (Insert) data job node, right-click the Fields property, and then select Edit Default Value. |
DELETEFLAGS | Delete flag field name for each primary key identified on the Primary Key and Edit Fields tab. |
DELETE_ROWS | If true, deletes all existing records before adding results. |
DELAYMEMALLOC | The DELAYMEMALLOC advanced property is closely tied to the memory allocation properties for clustering. If set to true, the clustering memory chunk is allocated only after the step obtains the first data row from the parent. When true, memory can be released and made available for later clustering node calls, but you might not find out that memory is over-allocated until halfway through processing the job. If set to false (the default), all clustering memory is allocated before the first row passes through the entire job, so if memory has been over-allocated, you will know prior to running the job or service. Note: For all sorting and clustering memory settings, you can create macros so memory settings for different job or service types can be independent of the default macros used globally. |
DELIMITER | The type of separator (delimiter) to use for separating data fields. |
DESCRIPTION | |
DESTINATION | |
DICTIONARY_ENTRIES | |
DIMENSION_NAME | |
DOM_TYPE | The type of domain input. This can either be a "value" or "field". |
DOM_VAL | If DOM_TYPE is "value", this is the name of the domain; if it is "field", it is the name of the parent field that contains the domain name. If DOM_TYPE is "value" and DOM_VAL is NULL, all domains and their items will be listed. |
DOMAIN | The Domain associated with the logged-in user for credentials lookup. |
DONTCLUSTFLD | |
DSN | The data source name (database name, directory, database driver, User ID, password, and more) that connects you to the database. |
DURABLE_CLID | This is checked only if DURABLE_NAME is configured. Sets a JMS client ID string for cases where JMS Provider does not have one. If client ID string is set on both ends or on neither end, the initialization will fail. In the vast majority of cases, the JMS Provider already has it configured. |
DURABLE_NAME | The unique name for a durable subscription to a topic. If durable subscription fails the node will not initialize (see log for error). |
EDITFIELDS | Fields that had Field Level rules applied to them in the SRI step and contain the content that will be salvaged from the records by the edit rules. |
ENABLE_DDL | This setting tells the node to verify that the table exists before trying to insert or update any values. The default value is true. Note that this only applies to insert and update nodes. |
ENCODING | This setting is used to specify encoding constants. Default is -1. |
END | The ending value for the FOR loop. |
END_DATE | The ending date for the row's interval. |
END_ID | The ending id for the row log. |
END_OF_LINE | The type of action at the end of the line, CR (carriage return), LF (line feed), CR+LF (carriage return plus line feed), or NEL (an IBM setting for new line). Default is 0. |
EOF_MODE | The type of action at the end of the line, CR (carriage return), LF (line feed), CR+LF (carriage return plus line feed), or NEL (an IBM setting for new line). Default is 0. |
EOF_VALUE | The type of action at the end of the line, CR (carriage return), LF (line feed), CR+LF (carriage return plus line feed), or NEL (an IBM setting for new line). Default is 0. |
EXEC_EXPRESSION | The main expression code. |
EXEC_ID | If the EXEC_TYPE is a value, it uses the field as a number to pass in. If the EXEC_TYPE is a field, it takes a field name from the parent. |
EXECROWID_VAR | An output variable used to generate a Process Summary report in the Address Update Lookup node. |
EXEC_TYPE | The input can be a value, macro, or used as a parent input field. |
EXECUTION_PROPERTIES | Enables you to specify values for properties that are applied when the node is run. |
EXCEPTION_ACTION | |
EXCEPTION_INFO | |
EXCLUDE_SOURCE_FIELD | Specifies whether to include the source field contents in the node's output. |
EXPECTED | The input and output fields to this job. |
EXPRESSION | The body of the expression to execute. |
FACTORY | The JNDI name of the JMS connection factory. |
FIELD_MAP | The generic property that lets you rename fields that enter the step before they exit the step. For example, a field called "NAME" might be passed from the previous step, but you might want to rename it "FULL_NAME" before it is passed through to the next step. The TYPE_OVERRIDE and LENGTH_OVERRIDE columns also allow changing the data type or size (or both). |
FIELDNAME | The name for the output field that will contain auto-numbered values. |
FIELDS | The fields you can make available to the next step in your job flow. |
FILENAME | The name and location of your text file data source. |
FILENAME_FIELD | Where the record comes from. Default is _null. |
FILTER_DATA | This option represents either a field name for a parent node or a macro/constant value. |
FILTER_TYPE | This contains a field or value. |
FIXED_WIDTH | Specifies that data is treated as fixed width rather than variable width. A Boolean that, when set to true, indicates that the rows should be sorted as fixed-width rows rather than the default variable-width rows. When set to true, SORT_KEEPVAR is not utilized. |
FLATFILEDELIM | The type of separator (delimiter) to use for separating data fields. |
FLATFILENAME | The name of the file that will contain the output records from the duplicate elimination operation. |
FLATFILETQUAL | The character that marks the beginning and end of the data. |
FLD_CORRLID | The name of the field containing the message header's correlation ID value. |
FLD_MESSAGE | The name of the field (data nodes) or input/output (process nodes) containing the message body. |
FLD_PROPS | The space-separated list of the message's properties names; each will become a node's input/output field. An element in the list (message's property name) can contain only alphanumeric and underscore characters. |
FLD_TYPE | The name of the field containing the message header's type value. |
FLD_MSGID | The name of the field containing the unique message header's ID value (set by JMS Provider). |
FLD_MSGTRUNC | The name of the field containing a Boolean flag for whether the outputted message was truncated. |
FORCE_CHOICE | If set to true and the locale could not be guessed, then the last locale you passed in will be used. |
FORCE_COUNTRY | The ISO 3166-1 alpha-3 code (three characters, for example, "FRA") that should be used for all input records. |
FREQ_DATA | The fields you have selected for your chart. |
GENDER_SOURCE | |
GENDPARSED_DEF | |
GENDPARSED_TOKENS | |
GENERATE_ROWS | If true, generates rows even when no parent is specified. |
GENNEWSURVREC | |
GEO_PATH | |
GEOCODING | This enables geocoding. |
GROUP_FIELDS | This holds a list of fields. If one or more fields are listed here, the expression becomes a group by expression, and special actions are taken for each group. A group is defined as a set of data passing through the node that all have the same values for these fields. Note that the data should be sorted by these fields; otherwise, the results will not be as intended. |
GROUP_ID | Used to group sub-lists that might be created to split a job for multi-threaded processing or processing on separate machines. Job summaries are grouped by the Group ID when monthly NCOA statistics are calculated. |
GROUP_INIT_EXPRESSION | Whenever a new group of data is encountered, the expression entered here will be executed. It will be executed before the first row of data has executed the expression. It will also be executed before any rows are read, but after the INIT expression is executed. |
GROUP_NAME | This option specifies the monitor report group (dashboard) name. |
GROUP_TERM_EXPRESSION | After the last row of data in a group is encountered, this expression will be executed (after the row's expression is executed). It will also be executed after all rows are read, but before the term expression. |
GROUPBY | The set of retained columns that repeat as groups. If specified and XMLMAP_RETAIN is specified, this option takes precedence. GROUPBY specifies the name of the incoming field to group by. |
GROUPNULLS | |
GUESS_DEF | |
GUESS_COUNTRY | |
GUESS_FIELDNAME | |
HASHBUCKETS | Controls the number of hash buckets per table. The default hash table size is 1M (1024x1024) buckets. |
HEADERS | Specifies a header option for either the HTTP Request data job node or the Web Request data job node. This option consists of Name and Value columns. The Name column contains the name of the HTTP header to set. The Value column contains the value to associate with the named HTTP header. Note that if the "Content-type" header is set in this HEADER option and the WSCP_HTTP_CONTENT_TYPE property is set, the WSCP_HTTP_CONTENT_TYPE property value is used. |
HIGH_MATCH_RATE_DESC | Provides a description for a match rate that exceeds 20 percent. For the values available in the Address Update Lookup node, see the Address Update Lookup node. |
ID_BYTE_POSITION | This is the position of the first character. Default is 0. |
ID_PIC_STATEMENT | This is the ID PIC. This field has many options. |
IDENTITY_SOURCE | |
IDENT_DEFN | |
IGNORE_ERRORS | Specifies whether errors should be ignored during processing. |
IN_CLUST_ID_FLD | This is the Cluster ID output field for the Sub-Clustering node. |
IN_FIELD_CITY | The input field for city. This field is blank by default. |
IN_FIELD_STATE | The input field for state. This field is blank by default. |
IN_FIELD_ZIP | The input field for zip. This field is blank by default. |
IN_TO_EXT_MAP | The fields with data you want to pass to the external job. |
INCLUDE_HEADER | If true, creates a row of field names at the top of your text file to identify each output field. |
INDICATE | If true, then an aggregation indicator column is output. |
INDICATE_FIELD | The name of the indicator column, if one is requested. If (Null), a default name is used: __aggregated__. |
INIT_EXPRESSION | The pre-processing expression code. |
INPUT | The matrix in which the input source is defined in a Web Service Operation that accepts input. This is a table parameter that contains the following columns: |
INPUT_BYTE_ORDER_MARK | If true, the first few bytes of the read data are checked to determine if they are a byte order mark. If a byte order mark is present, the remaining data is assumed to be in the encoding identified by the byte order mark (Boolean). |
INPUT_COUNTRY | |
INPUT_DELIMITER | Specifies the type of field separator in use (string). |
INPUT_ENCODING | The encoding of the data to be read. The integer represents the selection from the encoding drop-down list (integer). |
INPUT_FIELDS | |
INPUT_FIPS | |
INPUT_FIRM | |
INPUT_LAST_LINE | |
INPUT_LINE1 | |
INPUT_LINE2 | |
INPUT_PHONE_NUMBER | |
INPUT_POSTAL_CODE | |
INPUTS | |
INPUT_TYPE | Describes the type of the input parameter value. Must be one of the following values: |
INPUT_TYPE | When referring to a domain, this can either be a "value" or "field". |
INPUT_VAL | If INPUT_TYPE is "value", this is the name of the domain; if it is "field", it is the name of the parent field that contains the domain name. If INPUT_TYPE is "value" and INPUT_VAL is NULL, all domains will be listed. |
INTERNAL_DB | Specifies the use of an internal database. Set to I when the list or database is owned by the licensee. |
INTERVAL | The integer used as the sequence interval. |
IS_FIXED_WIDTH | This Advanced Property is set to true if the records are fixed width. Default is false. |
ISFILE | |
JOB | The name of a referenced job file, URI, or resource identifier. Can also contain the actual XML of an Architect job |
JOB_FILE_NAME | Specifies the job flow that you are adding to the current job flow. |
JOB_ID | If a job type is a value, it uses the field as a number to pass in. If type is field, it takes a field name from the parent. |
JOB_NAME | The job name for the job to be run for the Realtime Service. |
JOB_TYPE | The input is either a value/macro or uses the parent input field. |
JOBCODE | A job code is an ID that uniquely identifies a job. Use this Advanced Property when importing tasks from one repository to another and managing what jobs are referenced. JOBCODE is different from JobID, which is just a numeric listing of jobs. The JOBID is not unique and represents a listing order of jobs. |
JOBID | |
JOIN_NULLS | This determines whether nulls are joined together in a join. |
JOINBYTES | The JOINBYTES advanced property defaults to 8 megabytes. This property is only in effect when doing a memory join. It determines how much memory to use for the memory join. |
JOINS | The relationships you have selected. |
JOINTYPE | The type of join. |
KEEP_FIELDS | Specifies the set of input fields that are to pass through as output fields. |
KEY_VALUES | The key_value pairs to be passed into the embedded job. |
LAYOUT_TYPE | This field should have COPYBOOK, LAYOUT, or JOB. |
LAYOUTS | This Advanced Property contains the path to the Data File. |
LINE_END | The line terminator. |
LINGUISTIC | The default for the LINGUISTIC property is NULL, which means that DataFlux Data Management Platform sorting is used. When the LINGUISTIC property is TRUE, SAS linguistic sorting will be used. When the LINGUISTIC property is TRUE, and no locale is specified in the LOCALE field, then the locale associated with the process is used. |
LIST_RECEIVED_DATE | Specifies the date on which the list was received. Uses the following format: YYYYMMDD. |
LIST_RETURNED_DATE | Specifies the date on which the processed list was returned. Uses the following format: YYYYMMDD. |
LOADALL | If true, all data from the data source(s) is placed on the local machine before the job continues. |
LOAD_FLAG | This Advanced Property can be set to US (to load US data), CAN (to load Canadian data), or USCAN (to load both). |
LOCALE | |
LOCALE_FIELD | |
LOCALE_LIST | |
LOCALES | |
LOOKUP_DSN | The DSN of the database that is searched for the matching piece of information. |
LOOKUP_KEYS | The node's field mapping expressions. |
LOOKUP_METHOD | Controls how the NCOA lookup treats names. For the values available in the Address Update Lookup node, see the Address Update Lookup node. |
LOOKUP_SCHEMA | The database schema information, where applicable. |
LOOKUP_TABLE | The input field where you select the table used when running the comparisons against the incoming data input fields from the previous step. |
MAIL_CLASS | Specifies the class of mail for the list that is being processed. For the values available in the Address Update Lookup node, see the Address Update Lookup node. |
MAILING_ZIP | Specifies the ZIP code from which the mailing is to occur. |
MANUAL_EXPR | |
MANUAL_FLAG | |
MAPPING | |
MATCH_ONLY | Set this option to true if you want the output to contain records that have a match. Default is false. |
MATCHCODE_SOURCE | |
MATCHDEF | |
MATCHPARSED_DEF | |
MATCHPARSED_SENS | |
MATCHPARSED_TOKENS | |
MAX_ERROR_COUNT | This Advanced Property allows you to set a threshold for maximum errors. If set to -1, infinite errors are allowed. |
MAX_FIELDS | This option specifies the maximum number of fields that can be returned. |
MAX_OUTPUT_ROWS | The maximum number of rows you want to be read from the database. When left _null, all rows are processed. |
MAX_PROPS | This option specifies the maximum number of properties to show. |
MAX_RAM | The amount of memory (in bytes) to use during frequency distribution. For optimal results, use a value that is a power of 2. If no value is entered (_null), the amount of memory used defaults to the amount of memory defined for sorting operations. |
MAXMSGLEN | The maximum number of characters in a message (additional characters are truncated). |
MAXPROPLEN | The maximum number of characters in any of the message properties (additional characters are truncated). |
MED_MONTHS_FILTER | Specifies the move effective date on which the address move is to go into effect in the Address Update lookup node. You can use this value to constrain the results to addresses that have changed within a specified number of months. For example, if MED_MONTHS_FILTER is set to 6, results will include only addresses that have changed in the last 6 months. The minimum value is 6. |
MEMLOAD | The option you are using to load one of the data sources in memory. |
MEMSIZE | |
MEMSIZE_MBYTES | |
MESSAGE | The message body. |
METADATA_ONLY | |
METHOD | The HTTP method to use when contacting the specified HTTP address. Currently supported values are as follows: |
MSG_TEXT | (AVAILABLE ON DATA NODE ONLY) This is a static text string to send as the message body. This option or FLD_MESSAGE must be specified, but not both. With this option set, the node can be used without a parent. (Not needed in the process node, as the existing FLD_MESSAGE input can be set to a static text string value.) |
MULTI_REC_CLUSTERS_ONLY | Only display multiple record clusters. |
NAME | The name of the event to listen for. If not supplied, events with any name will be caught, unless filtered by the other two variables. |
NAME_VALUES | The fields that you define to map input and output to the COM dll. |
NAMESPACES | The set of NAMESPACES required in the resulting XML. This property should not be necessary when executing an XQuery. The XQuery should already define the valid/expected set of NAMESPACES required to process the XQuery. |
NAMEVALUES | The name of the Java Plugin node. |
NEW_CLUST_ID_FLD | This is the output column name for the cluster number. |
NOCODEGEN | |
NODUPS | |
NON_CASS | |
NULLMATCHCODES | |
NUMERIC | |
OFFSET_FIELD_IN | OFFSET_FIELD_IN and OFFSET_FIELD_OUT are used together to maintain links between the data passing through the node and data produced by the node. This property is the name of the integer input field in the embedded job (optional). |
OFFSET_FIELD_OUT | OFFSET_FIELD_IN and OFFSET_FIELD_OUT are used together to maintain links between the data passing through the node and data produced by the node. The name of the same field as output from the embedded job (optional). |
OFFSET_TEMP_PATH | Overrides the default temporary directory. If this option is not specified, then the default temp directory is used. |
OPERATION | Designates an operation to perform in the node. The following operations are available: |
OPERATION | Enables you to select an operation to be performed in the Calculated Field node. The default operation is MEAN. |
OPERATION_MODE | This is the type of lookup performed. |
OPERATOR_ID | Specifies the operator responsible for processing the list. |
OUT_FIELD_CITY | This is the field name for your output. Default is Lookup City. |
OUT_FIELD_RESULT | This Boolean field shows whether the City/State/Zip combination is valid. If true, it is a valid combination; if false, it is not a valid combination. |
OUT_FIELD_STATE | This is the field name for your output. Default is Lookup State. |
OUT_FIELD_STATUS | When you select the Validation Status, you will see Valid, City/State/Zip combo is invalid, or Invalid Zip. The Validation Status field is optional. Default is _null. |
OUT_FIELD_ZIP | This is the field name for your output. Default is Lookup Zip. |
OUT_KEY_VALUES | Consists of a KEY and OUT_KEY column. The KEY column is the name of the variable in the embedded job that you want to extract, and the OUT_KEY is the name of the variable in the parent job that you want to set with that value. |
OUT_NUM_ITEMS | This is the number of fields that appear in the output. Default is 1. |
OUTCASEID | |
OUTCASETYPE | |
OUTCLUSTSIZE | The size of the output cluster. |
OUTCOLLAPSEID | |
OUTMATCHCOND | |
OUTMATCHCONDCT | The frequency count of match condition fields. |
OUTPUT_BYTE_ORDER_MARK | If true and writing the data in a Unicode-based encoding, the byte order mark will be written at the beginning of the output (Boolean). |
OUTPUT_CASING | The style of casing to use for output fields. Valid options are Upper, Title, or Lower. |
OUTPUT_COL | Specifies the column that contains the output of a calculated field. |
OUTPUT_DELIMITER | Specifies the type of field separator to use (string). |
OUTPUT_ENCODING | The encoding in which to write the data. The integer represents the selection from the encoding drop-down list. |
OUTPUT_FIELDS | The fields that will be made available to the next step in your job flow. |
OUTPUT_LATIN1 | If set to true, the output from dfIntelliServer is forced to Latin instead of the native language (for example, kanji). Data in non-European languages will be transliterated. Default is set to false. |
OUTPUT_LINE_END | Identifies the line terminator to write at the end of each row (string). The field can be empty. |
OUTPUT_LOCALE_FIELD | This Advanced Property stores the locale used to generate the match code (for example, ENUSA). Set to null by default. |
OUTPUT_MATCHCODE_VERSION | This Advanced Property stores the version of the QKB used to generate the match code (for example, 2007A). Set to null by default. |
OUTPUT_MAXLENGTH | Specifies the maximum length of the HTTP request output |
OUTPUT_NULL_ROWS | Specifies whether an output row should be generated if the input field value contained no terms. |
OUTPUT_RULESCORE | If this option is true, the node creates an output field for each rule's score. Default is false. |
OUTPUT_SCRIPT | The ISO 15924 code in which the output should be encoded, if possible. This is typically "Native" or "Latin"; specify Native to choose the correct country-specific script value. Other legal values include: Cyrl - Cyrillic (Russia), Grek - Greek (Greece), Hebr - Hebrew (Israel), Hani - Kanji (Japan), Hans - Simplified Chinese (China), Arab - Arabic (United Arab Emirates), Thai - Thai (Thailand), and Hang - Hangul (South Korea). |
OUTPUTNAME | The list of available input fields that can be used as output fields for the Entity Resolution File Output step. |
OUTPUT | The matrix in which the input source is defined in a Web Service Operation that accepts output. This is a table parameter with the following columns: |
OUTPUTS | |
OUTPUT_TYPE | Describes the type of the output parameter value. Must be one of the following values: |
OUTPUTTYPE | Where the output is sent. This can be one of SOURCE, FLATFILE, or AUDITONLY. |
OVERRIDE_PAF_EXPIRATION | When selected, the node ignores the PAF expiration date in the Address Update Lookup node. If not selected, PAF IDs that refer to PAFs with PAF_SIGNED_DATEs more than 1 year from the current date cause an error. This setting provides a grace period for users to run jobs in situations where a PAF is being renewed but the paperwork might not have been processed. |
OVERRIDE_SERVER | |
OVERRIDE_WIDTH | This is the record length. Default is _null. |
PACKET_SIZE | This is the packet size for the Realtime Service. Default is _null. |
PAF_ID | Specifies an 18-character identifier for the input to an Address Update Lookup node. For the values available in the Address Update Lookup node, see the Address Update Lookup node. |
PARAM_DSN | The data source name (database name, directory, database driver, User ID, password, and so on) that connects you to the database. |
PARAMETER | The value to assign to the variable or a value to increment/decrement it by (if OPERATION is ADD or SUB). If using ADD or SUB, this must be numeric, and the current value of the variable should be numeric. |
PARAM_FILE | Specifies the path to the file that contains the parameter XML. |
PARAM_SOURCE | The fields designated to be used as parameter inputs for this node's SQL query. |
PARAM_SQL | The SQL statement you constructed in the Query text area that will be executed against the data source. |
PARAM_XML | Specifies the parameter XML. |
PARSE_DEF | |
PARSE_FIELDNAME | |
PARSE_INPUT_LENGTH | This is the expected length of the field to be parsed. Possible values include _NULL, _SHORT, _AUTO, or _LONG. The default value is _NULL. Use _NULL or _SHORT if the parse length is small. Use _AUTO if you want the node to select the correct parsing method. Use _LONG if your parse length is three or more words. |
PARSE_TOKENS | |
PASS_SCORE | Use this setting to determine what constitutes a pass or fail case for the sum of all rules. |
PASSTHRU | |
PASSTHRU_FIELDS | |
PASSWD | The password for connecting to the JMS provider (encrypted via a BFCRYPT-based tool). |
PASSWORD | Use to specify a password. |
PATTERN_SOURCE | |
PERCENTILE_INTERVAL | |
PERSIST | |
PK_COL | The column containing the primary key. |
PK1_FIELD | The Primary key field is a unique number used to identify each row in a table. |
PK2_FIELD | The Primary key field is a unique number used to identify each row in a table. |
POLL_INTERVAL | The Web service operation poll interval. |
POOL_NAME | Specifies a particular connection that can be used for all ODBC nodes that have the same value. If the connection doesn't exist, it will be created and subsequent connections to that pool will use the created one. |
PORT | The server port number used with the Realtime Services node. |
POSTPROCESSED | Specifies whether the data was post-processed. For the values available in the Address Update Lookup node, see the Address Update Lookup node. |
POST_SQL | Code that is submitted after the SQL_STMT (post-code). |
PRELOAD_COUNTRY | |
PREPROCESSED | Specifies whether the data was pre-processed. For the values available in the Address Update Lookup node, see the Address Update Lookup node. |
PRESERVE_WHITESPACE | If true, keeps the white space that appears in field values. |
PRESERVENULL | |
PRE_SQL | Code that is submitted before the SQL_STMT (pre-code). Note that if the delete table is selected, then this parameter will contain the drop table code. |
PRICLID | |
PRIRECNUM | |
PRIM_KEY_FIELD | |
PRIMARY_KEY | The relationship (key) between the existing table and the results of your job flow. |
PRIMARYKEYS | The input field selected as primary key fields for the incoming data. |
PRIORITY | The message priority, 0 through 9, where a larger number indicates higher priority. |
PROCESS_CATEGORY | Specifies a reason for running a job that contains an Address Update Lookup node. For the values available in the Address Update Lookup node, see the Address Update Lookup node. |
PROGID | The name used to instantiate the COM object. |
PROGRAM_NAME | The external program is identified by a string containing the fully qualified path to the command and the command arguments. |
PROPERCASE | |
PROVIDERURL | The server host/port information for a JNDI connection. |
PURGEACTION | Determines what happens to single records. This can be either DELETE or FLAG. |
PUSHROW_FIELDNAME | If a field name is specified here, then that field will be created in the output. It will hold the value true if the row resulted from a pushrow() action, and false otherwise. |
QUALIFIER | The character that is inserted at the beginning and end of each field. |
QUOTE | The character expected at the beginning and end of each text field value. |
READMODE | Can be SINGLEROW if you want to read a single row (in this case, ROWINDEX should be specified). Can also be NEXTROW, which reads the row following the one read the last time it ran. |
READONLY | Set to true for Read Only. This Advanced Property is set to false by default. |
RECONNDELAY | |
RECORDWIDTH | The total number of bytes in a record (string). |
RECSPERTRANS | If selected, this Advanced Property sets the transaction size to the specified number of rows. This option is set to 10000 by default. |
REMOVE_DUPS_FLAG | |
REPLACE_NULLS | If set to true, this Advanced Property enables you to surface a null character. Default is false. |
REPOS_NAME | This is the name, file, or DSN, depending on type. |
REPOS_PREFIX | If REPOS_NAME is DSN then you can also have a prefix. |
REPOS_TYPE | This is the type of repository (name, file, or DSN). |
REPOSITORY | |
REQUEST | The Web Service request XML template contains the Web Service request and possibly defines the variables within the Web Service request. The request input parameters are initially populated using the variables defined in the request XML template. |
RESPONSE | The Web Service response XML template contains the Web Service response and possibly defines the variables within the Web Service response. The response output parameters are populated using the variables defined in the response XML template. |
RESPONSE_HEADERS | Enables you to expose the response headers in HTTP requests as output in order to extract information. The headers contain NAME, OUTPUT, and OUTPUT_MAX_LENGTH columns. |
RESOURCE_LEVEL | |
RESULT_FIELD | |
RETVAL_FIELDNAME | If a field name is specified here, then the field will be created in the output. The field will be set to "true" or "false" according to what the expression returned. Normally, returning "false" from an expression results in no row appearing in the output. When a field name is specified here, a row for which the expression returns false is still returned (that is, not filtered), with the field set to false. |
ROW_ESTIMATE | If you know approximately how many rows a query will return, set this to get a percentage complete status. |
ROW_FIELD | The name of the incoming column that contains the XML grouping indicator (sequence number). |
ROWINDEX | The index of the row in the table to read. The first row is row 0. |
ROW_TYPE | This Advanced Property can return headers, data, or both. |
ROWSET_ROWS | Indicates the maximum number of rows to place in each RowSet. RowSets are used to pass rows between the DMP process and the Java Proxy process. The default value is 100. |
ROWSET_SIZE | If the ROWSET_ROWS property is unspecified, the number of rows per RowSet is calculated by dividing the suggested RowSet size by the maximum-width input/output row. If neither the ROWSET_ROWS nor the ROWSET_SIZE advanced property is specified, the value of the BASE/ROWSET_SIZE configuration property is used. If none of the three properties is specified, the default value of 64m is used. |
ROWSETS | The maximum number of queued ROWSETS. The default value is 2. |
RULE_ID | The input can be a value, macro, or used as a parent input field. |
RULE_TYPE | If the RULE_TYPE is a value, it uses the field as a number to pass in. If the RULE_TYPE is a field, it takes a field name from the parent. |
RULES | The expressions that create column comparisons to match data (or remove data from a match cluster). |
SAS_CODE_FILE | A file residing on the local machine containing SAS language statements. These statements are sent to the SAS workspace server denoted with the inputs SAS_CODE_HOST and SAS_CODE_PORT for execution by the SAS system. |
SAS_CODE_STRING | A text string composed of SAS language statements to be executed on a SAS workspace server. |
SAS_DOMAIN | This experimental option specifies the authentication domain used to establish credentials. |
SAS_HOST | The machine running a SAS workspace server. SAS statements appearing in a file or as input are executed by this machine. |
SAS_LIST_STATUS | The SAS listing generated by submitted SAS code; appears in a text window as the job runs. |
SAS_LIST_WORKTABLE | The SAS listing generated by submitted SAS code; written into a work table. |
SAS_LOG_STATUS | The SAS log generated by submitted SAS code; appears in a text window as the job runs. |
SAS_LOG_WORKTABLE | The SAS log generated by submitted SAS code; written into a work table. |
SAS_MACROS | A table input parameter that has 2 columns: SAS_MACRO_NAME and SAS_MACRO_VALUE. These values are displayed in the log when the SAS code is submitted. |
SAS_PASSWORD | Your password. |
SAS_PORT | The port used for communication with the SAS workspace server. |
SAS_STEPEVENTS | This experimental property specifies input that determines whether the SAS code is executed in batch or interactively. If the input has no value, the SAS code is sent to the server, and control returns immediately to the caller. No events tracking the execution of the SAS code are delivered. If the input has a value equal to TRUE, the SAS code is executed in such a way that events generated by the executing code are delivered. In this way, progress of the executing SAS program can be monitored. |
SAS_USER | Your user name. Without SAS_CODE_DOMAIN, this user name and the password are used to gain access to the SAS workspace server. |
SAVEOLDSTATE | |
SCHEMA_NAME | The database schema name, where applicable. |
SCHEMANAME | The database schema name, where applicable. |
SCHEME_FIELD | The database scheme field. |
SCHEMETYPE | |
SCORE_ALG | Selects the scoring algorithm. Can be one of NONE, MAXIMUM, MEAN, MEANSCALE, MEDIAN, or MINIMUM. If NONE, no score is computed. |
SCORE_COL | The column containing the score to compute. If (null), no score is computed. |
SCORE_DEPTH | |
SCORE_MODE | This is the method for passing the defined rules. The options include: 0 = use the total sum of scores; 1 = pass if any one rule passes; 2 = fail if any one rule fails. The default is 0. |
SECCLID | |
SECRECNUM | |
SELCOUNTRY | This is the country the QAS node will use when processing address information. |
SELECTOR | The filter string in SQL-92 conditional expression syntax that restricts messages received by the node using message header or properties elements. Errors in the filter string are reported during the node's initialization. (See the JMS sketch following this table.) |
SENSITIVITY | The sensitivity setting the match definition uses in the job flow. |
SEPCHAR | |
SERP_CORRECT_LVR | If true, this action corrects large volume receivers. |
SERP_CORRECT_RURAL | If true, this action corrects rural addresses. |
SERP_PASS_VALID | If true, this action only passes valid addresses. |
SERVER_NAME | The server name used with the Realtime Services node. |
SHOW_FAILED_EXECS | Specifies whether to output the failed execution rows. |
SHOW_HEADER | Specifies whether to include the header row for the row log that lists the field names. |
SINGLE_MATCH_ONLY | If true, specifies that you want to stop after the first found match. |
SINGLE_REC_CLUSTERS_ONLY | Only display single record clusters. |
SINGLERECFLAG | If true, single records are flagged. |
SKIP | |
SKIP_NULL_CSZ_LINE | If true, specifies that addresses are not passed into the engine when the city/state/zip line is empty. |
SKIP_SAME | |
SKIP_SELFCOMPROWS | When set to true, the compare-to-self rows are skipped in the output if one or more of the real rows pass. Default is true. |
SKIPMULTIROWS | |
SKIPROWS | The number of introductory lines to skip (integer). |
SOA_CUST_ADDRESS1 | This Advanced Property is used when you create a Statement of Accuracy. This is the first line of the customer address. Refer to CREATE_SOA. |
SOA_CUST_ADDRESS2 | This Advanced Property is used when you create a Statement of Accuracy. This is the second line of the customer address. Refer to CREATE_SOA. |
SOA_CUST_LASTLINE | This Advanced Property is used when you create a Statement of Accuracy. This is the last line of the customer address, which includes City, Province, and Postal Code. Refer to CREATE_SOA. |
SOA_CUST_NAME | This Advanced Property is used when you create a Statement of Accuracy. This is the name of the customer. Refer to CREATE_SOA. |
SOA_CUST_NUM | This Advanced Property is used when you create a Statement of Accuracy. This is the CPC number assigned by Canada Post. Refer to CREATE_SOA. |
SOA_FILENAME | Maps to CREATE_SOA. |
SORT | If true, this option sorts each cluster by Cluster ID. Default is false. |
SORT_CHUNKSIZE | Specifies the amount of data available to each sort thread (integer). |
SORT_FIELDS | The fields you have selected for sorting. |
SORT_KEEPVAR | Specifies that temporary-file variable-width to fixed-width conversion be employed. A significantly advanced parameter that should rarely be manipulated. A Boolean that, when set to true, indicates that the default behavior of the sort node is to sort the data as variable-width records. If set to false, which is the internal default, the sort node treats records as fixed-width records when a row does not contain a string or when the lengths of the string fields in the row are within a reasonable limit. Set to true to mimic pre-2.4 behavior. |
SORTBYTES | |
SORTED | |
SORTMERGES | When set to true, the merge passes occur while data is read and sorted. Default is true. |
SORTTEMP | The SORTTEMP Advanced Property supports multiple paths separated by a semi-colon. Default is null. |
SORTTHREADS | The maximum number of threads to use during the sort operation. If not specified or zero is entered, the legacy (non-threaded) sort algorithm is used. |
SOURCE | The ID of the source node to listen for. If not supplied, events from any node will be caught, unless filtered by the other two variables. |
SOURCE_FIELD | |
SOURCE_ID | |
SOURCE_LANGUAGE | |
SOURCETYPE | If supplied, catches events where the source node's type matches this string. The source node's type is the type of node, for example, DATAFLOW. If not supplied, events from any node type will be caught, unless filtered by the other two variables. |
SQL_OVERRIDE | Enables you to override the binding parameters. This property can be set only from the advanced properties dialog for the Data Target (Insert) data job node. Right-click the Fields property, and then select Edit Default Value. If you set the SQL_OVERRIDE property and then use the standard properties dialog to update any property of the Data Target (Insert) node, the SQL_OVERRIDE setting will be lost. |
SQL_PARAMETERS | The parameters used to prepare a database and the SQL statement for execution. |
SQL_PROTOTYPE | Enter an equivalent SQL statement. This statement is not executed but allows you to see the field types produced so you can compare to the stored procedure. |
SQLREF_PATH | A path to an external query object. |
SQL_STMT | The SQL statements that will be executed when the job is run. |
SRIFIELDNAME | The field that contains the surviving record ID. |
SRIPKFLAG | Indicates if the SRI field contains a primary key. |
STABLE | This option can be set to ensure that rows with duplicate keys come out in the same order they went in. When duplicate keys are encountered, the duplicate rows are returned in the order in which they were entered. The stable implementation is accomplished by adding a row counter key to the end of the selected key(s). The row counter is hidden from the caller and is not returned. Note that the addition of a unique portion to the key adversely affects BY Group and No Duplicate processing. Therefore, when the Stable feature is requested with either BY Group or No Duplicate processing, the BY Group and No Duplicate processing are delayed until after the sort. |
STANDARDIZE_SOURCE | |
STANDARD_OUTPUT_RETURNED | Specifies one of the one-character standard output identifiers. For the values available in the Address Update Lookup node, see the Address Update Lookup node. |
START | The initial value for the FOR loop. |
START_DATE | This option specifies the starting date for the row's interval. |
START_ID | This option specifies the starting ID for a row log. This allows you to get a subset of the rows from the row log. |
START_NUMBER | The integer used as the starting number for the sequence. |
STATEFILE | |
STATEMENTS | The SQL statements that will be executed when you run the job. |
STATUS_FIELD | This field contains the status of the column selected by the user. When the status column is provided, the max error count is ignored since each row will have its status column. This option is not available when using bulk count. |
STDFIELD | The field containing the standardization to be applied to the data. |
STDZ_FLAG | Set this option if you want to see a true/false flag when a field is standardized. The default is false. |
STDZPARSED_DEF | |
STDZPARSED_TOKENS | |
STEP | The number to increment for each iteration of the FOR loop. |
STOPONFIRSTSUBSET | When a top level rule has a series of sub-rules, each operates on a subset of rows remaining after the previous sub-rule. |
STREAMING | This Advanced Property indicates whether the generated XQuery should disable (declare option ddtek:xml-streaming "no";) DataDirect XQuery streaming in the generated XQuery. Default is false, to disable streaming. |
STRENGTH | The value of strength is related to the collation level. There are five collation-level values: Primary, Secondary, Tertiary, Quaternary, and Identical. |
STRING_BYTESPERCHAR | This Advanced Property determines how many bytes to allocate per character. The default is 4. |
SUPPRESS_ADDR_FIELDS | Suppresses the address fields in the Locate node. |
SURV_REC_FIELD | The input field that contains the surviving record indicator. |
SURVREC_FIELDRULE_EXPRESSION | Expressions for substitution field values in the surviving record. |
SURVREC_FIELDRULE_FIELDS | Fields for substitution field values in the surviving record. |
SURVREC_ID_EXPRESSIONS | |
SURVRECID_FIELD | |
TABLE | The work table to read. It can be mapped from another node or supplied via default values. |
TABLE_NAME | The data table you are using. |
TABLENAME | The data table you are using. |
TERM_EXPRESSION | The post-processing code. |
THREADCOUNT | The number of worker threads used for processing. The default is 1 for single-threaded operation. If this is 2 or more (up to 16), the node will use THREADCOUNT worker threads to process the addresses in parallel. This can lead to significant performance increases, although memory usage is significantly increased as well. |
TIMEOUT | This determines how long to wait for a message to arrive: -1 = wait forever; 0 = return EOF if no message is immediately available; a value greater than 0 = the number of milliseconds to wait for a message (after which EOF is returned). |
TIMEOUT_SECONDS | The number of seconds until the connection will time out. |
TIMETOLIVE | The default length of time, in milliseconds, that a message is retained by the message provider. Defaults to 0 (as per JMS - no limit). Note that it is a good practice to set some expiration value on all messages. |
TITLE | The title for the report. |
TOT_DISPLAY_VALUES | |
TOT_MAX | |
TOT_MIN | |
TOTSCORE_FIELD | This is the field name for the Total score field. This field is optional. |
TRANSFORM_FIELD | |
TRIM_PATTERN | |
TYPE | This option represents the type of data pulled from the repository. |
TYPE_OVERRIDE | Displays a list of values in which each value corresponds to a field in the SQL statement. This list enables you to override the type of each field. Normally when we select data with SQL, we rely on the database driver to tell us the type of each field. Type Override enables you to specify a different type. For example, if you have a field that is numeric, you can override it as string. Valid values are string, integer, real, Boolean, and date. |
UNESCAPE_BACKSLASH | If set to true, when a backslash character is encountered on the line, the backslash is removed and the character that follows it is taken literally. This is significant when the character following the backslash is a text qualifier or field delimiter; in that case, the character is not treated as a text qualifier or field delimiter when the string is broken into fields. This can be useful when strings contain embedded double quotation marks. The default is false. For example, in "AB"CD","xyz" the second double quotation mark is considered the end of the first field, so the string is not parsed correctly. With "AB\"CD","xyz" and the setting true, the backslash is removed, the double quotation mark that follows it is taken literally, and AB"CD is treated as the first field and xyz as the second. See the parsing sketch following this table. |
URI_PARAMETERS | The URI can contain parameters. The values defined in this matrix are used to fill in the URI parameter values. This is a table parameter. See the substitution sketch following this table. |
USE_DPV | Use this setting to enable DPV processing. Default is null. |
USE_ELOT | Use this setting to presort mail in the delivery order for the specified carrier route. Default is null. |
USE_LACS | Use this setting to enable LACS processing. Default is null. |
USE_RDI | Use this setting to enable RDI processing. Default is null. |
USE_SASSORT | Use this setting to enable SAS Sorting. |
USE_SLK | Use this setting to enable SLK processing. Default is null. |
USER | The User ID for connecting to the JMS provider. |
USER_DOMAIN | Specifies an authentication domain that can be used to retrieve credentials for the current node. This is an alternative to specifying credentials in the USERNAME and PASSWORD options. If credentials are specified in the USERNAME and PASSWORD options, the USER_DOMAIN option is ignored. |
USERNAME | Use to specify a user name. |
USPS_PATH | |
VER_TYPE | The numerical domain version input type. This can be either "value" or "field". |
VER_VAL | If VER_TYPE is "value", this is the numerical domain version; if it is "field", it is the name of the parent field that contains the domain version. If VER_TYPE is "value" and VER_VAL is NULL, all domain versions and their items are listed. If VER_TYPE is "value" and VER_VAL is -1, only the current domain version is displayed. |
VERIFY | This setting enables address verification. The default is true. |
WATCHFIELDS | This option is used in the Cluster and Cluster Update nodes to enter field names and values and configure them as fields to watch. When the node encounters these fields and values, the corresponding row number is written to the dfwfproc log file at the INFO level under the logger name "DF.StepEngine.Plugin.Cluster[Update]". Note that this option can significantly impact performance for jobs with a large number of input rows or configured field and value pairs. Use this option only to answer a specific clustering question; leave it unset otherwise. |
WHITESPACE_ASIS | When selected, specifies that the white space in the XML input element value is left as-is. When deselected, the element value white space is normalized. Normalization consists of (1) changing all tabs, carriage returns, and line feeds to a blank, (2) collapsing all consecutive spaces into a single space, and (3) removing all leading and trailing spaces. The default value is false. See the normalization sketch following this table. |
WORKTABLE_URI | Specifies a URI for a work table. |
WRITE_ROWS_BEFORE_READ | This option indicates the number of rows that should be written before reading from the external program's STDOUT. The default is 100. If set to 0, all of the input rows are written before a read is attempted. When the value is not zero, a read is attempted at every multiple of the value. |
WSCP_ACTION | Identifies the SOAP Action element when invoking the Web Service. |
WSCP_ADDRESS | The address of the Web service to invoke. |
WSCP_BINDING | When the specified Web Service Operation is available through multiple Bindings, the Binding to use should be specified. If the Operation is available through multiple Bindings and this option is not specified, the first supported (SOAP or REST) Binding will be used. |
WSCP_CONFG | Specifies the path to a user-defined configuration file for the Web Service node and the HTTP Request node. This file can be used to increase the time-out value, for example. See also What Can I Do About Time-Out Errors in Data Jobs with the Web Service Node or the HTTP Request Node? |
WSCP_HTTP_CONTENT_TYPE | When specified, this content type is included in the HTTP request headers. |
WSCP_DOMAIN | The domain to use for retrieving credentials from an authentication server when connecting to the web service. If this property is set, then the user name and password properties should not be set. |
WSCP_OPERATION | The Web Service Operation to invoke. The specified value must be a fully qualified name either in serialized qualified name format or qualified by a namespace prefix. |
WSCP_PASS | The password to use when connecting to the web service. |
WSCP_PREEMPTIVE_AUTHENTICATION | When set to TRUE along with WSCP_CLIENT_USER and WSCP_CLIENT_PASS, specifies that the HTTP client sends the basic authentication response before the server returns an unauthorized response. Default is FALSE. See the authentication sketch following this table. |
WSCP_PROXY_DOMAIN | The domain to use for retrieving credentials from an authentication server when connecting to a web service proxy server. |
WSCP_PROXY_HOST | The Web service proxy server host name. |
WSCP_PROXY_PASS | The password to use when connecting to a web service proxy server. |
WSCP_PROXY_PORT | The Web service proxy server port number. |
WSCP_PROXY_USER | The user name to use when connecting to a web service proxy server. |
WSCP_USER | The user name to use when connecting to the web service. |
WSCP_WSDL_ADDRESS | The WSDL location identifies where the WSDL is to be read from. The WSDL location can be a file path or a URI. If the location is a URI, additional properties might be required to supply proxy and credential information. |
WSCP_WS-SECURITY_DOMAIN | The domain to use for retrieving credentials from an authentication server when connecting to the web service using WS Security. |
WSCP_WS-SECURITY_MUST | The Web service call WS-Security mustUnderstand attribute. |
WSCP_WS-SECURITY_PASS | The Web service call WS-Security password. |
WSCP_WS-SECURITY_USER | The Web service call WS-Security user name. |
XML | The URI/full path to the XML to process. |
XML_COLUMN | When specified in an XML Column Input Node, this is the name of the incoming column that contains the XML to process. When specified in an XML Column Output Node, this is the name of the column to receive the constructed XML. |
XML_COLUMN_MAXLENGTH | The maximum length of the XML_COLUMN. |
XML_STAGED | The full path where the converted XML should be written. When specified and the XML property contains an XML Converter URI, the converted XML is written to this location and the remaining processing occurs against the file containing the converted XML. Default is null. |
XMLDECL | This option indicates whether the XML declaration should be written in the target XML. Default is true. |
XMLMAP | The full path to the SAS XMLMap that is to be used to generate the XQuery to convert the XML being processed. |
XMLMAP_OUTPUT | When set to true, specifies that the OUTPUT section of the SAS XMLMap will be used by all tables in the XMLMap. |
XMLMAP_RETAIN | If the SAS XMLMap defines a column that has the attribute retain="yes" specified and the referenced element repeats in the XML, this property must be set to true. If the described scenario exists and this property is not set to true, the step will fail. |
XMLMAP_TABLE | A SAS XMLMap can define multiple tables. This property specifies the name of the table to process. If omitted, the first table in the SAS XMLMap is processed. |
XMLMAP_XQUERY | The full path to where the generated XQuery should be written. |
XQUERY | The full path to the XQuery that is to be used to convert the table compatible XML into the desired XML. This is a required step. |
The value of a property can be, for example, NULL, true, or false. In some cases, you have the option to add a value; click Add Value to add one. You might also have the option to delete a value; to delete, select the value and click Delete Value.
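The following minimal Python sketch illustrates the three TIMEOUT modes described in the table. The queue.Queue message source and the receive helper are assumptions for illustration only; they are not part of the product.

import queue

def receive(messages, timeout_ms):
    # Minimal sketch of the TIMEOUT semantics; queue.Queue stands in for the
    # message source (an assumption made only for this illustration).
    try:
        if timeout_ms == -1:
            return messages.get()                          # wait forever
        if timeout_ms == 0:
            return messages.get_nowait()                   # no wait; Empty -> EOF
        return messages.get(timeout=timeout_ms / 1000.0)   # wait, then EOF
    except queue.Empty:
        return None                                        # caller treats None as EOF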
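The next sketch illustrates the idea behind TYPE_OVERRIDE: the driver reports one type per selected field, and an override list coerces chosen fields to a different type. The table, column names, and override list below are hypothetical.

import sqlite3

TYPE_OVERRIDE = [None, "string"]   # hypothetical: keep field 1 as-is, read field 2 as string
CASTS = {"string": str, "integer": int, "real": float, "boolean": bool}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, amount REAL)")
conn.execute("INSERT INTO t VALUES (1, 19.99)")

for row in conn.execute("SELECT id, amount FROM t"):
    # apply the override where one is given, otherwise keep the driver-reported value
    print([CASTS[o](v) if o else v for o, v in zip(TYPE_OVERRIDE, row)])   # [1, '19.99']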
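This sketch approximates the UNESCAPE_BACKSLASH example using Python's csv module; the escapechar option is only a stand-in for the node's behavior, not its actual implementation.

import csv, io

line = '"AB\\"CD","xyz"\n'   # the raw input is "AB\"CD","xyz"

# Roughly the false case: the backslash is not special, so the embedded
# quotation mark confuses the split into fields.
print(next(csv.reader(io.StringIO(line))))

# Roughly the true case: the backslash is consumed and the following
# quotation mark is taken literally, giving two clean fields.
print(next(csv.reader(io.StringIO(line), escapechar='\\', doublequote=False)))   # ['AB"CD', 'xyz']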
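The following sketch shows the kind of substitution URI_PARAMETERS performs: named placeholders in a URI are filled in from a name/value table. The template, placeholder syntax, and parameter names are hypothetical.

from urllib.parse import quote

uri_template = "http://example.com/customers/{region}/{id}"   # hypothetical template
uri_parameters = {"region": "west", "id": "42"}                # hypothetical matrix rows

uri = uri_template
for name, value in uri_parameters.items():
    uri = uri.replace("{" + name + "}", quote(value, safe=""))

print(uri)   # http://example.com/customers/west/42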
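This sketch approximates the normalization applied when WHITESPACE_ASIS is deselected, following the three steps listed in the table.

import re

def normalize_whitespace(value):
    value = re.sub(r"[\t\r\n]", " ", value)   # 1) tabs, carriage returns, line feeds -> blank
    value = re.sub(r" {2,}", " ", value)      # 2) collapse consecutive spaces
    return value.strip(" ")                   # 3) drop leading and trailing spaces

print(repr(normalize_whitespace("  line one\r\n\tline two  ")))   # 'line one line two'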
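Finally, this sketch illustrates what preemptive basic authentication means in general HTTP terms: the Authorization header is attached to the first request rather than after a 401 challenge. The URL and credentials are hypothetical, and the snippet does not show the node's internals.

import base64
import urllib.request

user, password = "svc_account", "secret"                        # hypothetical credentials
token = base64.b64encode(f"{user}:{password}".encode()).decode()

request = urllib.request.Request(
    "http://example.com/service",                                # hypothetical endpoint
    headers={"Authorization": f"Basic {token}"},                 # sent up front (preemptive)
)
# Without preemptive authentication, the first request carries no credentials;
# the server replies 401 Unauthorized and the client retries with the header.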