Configuration Options

Except as noted, the following configuration options can be set for both DataFlux Data Management Studio and DataFlux Data Management Server. These options are typically set in the app.cfg file.
Option
Purpose
Source
Notes
Base or General Application
DEFAULT_UMASK
On UNIX server hosts, specifies the default umask to use when creating output files, such as the output files from jobs. The umask value must be numeric. If not set, the shell's umask value is used. Example: default_umask = 002.
Optional
DataFlux Data Management Servers on UNIX only.
ODBC_INI
Overrides the location of the odbc.ini file
Optional
For DataFlux Data Management Servers on UNIX only.
BASE/APP_CONTAINER_DOMAIN
Application container authentication domain
Optional
Identifies the authentication domain expected by application container services. If not specified, “DefaultAuth” is used.
BASE/APP_CONTAINER_LOC
Application container location
Optional
Identifies where to locate the application container services. In most cases this is not required. If it is required, the value is typically an HTTP URI.
In addition, app.cfg should always set this option to point to the metadata server (iom://<host>:<port>).
BASE/APP_VER
Application version number
Optional
Defaults to 2.6.
BASE/AUTH_DEFAULT_DOMAIN
Default resolved identity domain
Optional
In a metadata configuration, it is possible for the authenticated credentials to resolve to a person that contains multiple logins with the same user ID.
When more than one login matches the resolved user ID, the authentication domain for the presented credentials is determined by the following:
  1. The value of the BASE/AUTH_DEFAULT_DOMAIN option if specified and the specified value matches the authentication domain of one of the logins. If not specified, or no match is found, continue to 2.
  2. Use DefaultAuth. If DefaultAuth matches the authentication domain of one of the logins, it is used as the presented credential authentication domain. If no match is found, continue to 3.
  3. Use the first matching login.
BASE/AUTH_SERVER_LOC
Location of the authenticating server
Optional
Typically, BASE/AUTH_SERVER_LOC is specified at installation time in the app.cfg file.
If specified, contains the IOM URI to an authentication server.
The basic format of the authentication server IOM URI is iom://<host>:<port>, where <host> is the name of the computer executing the authentication server and <port> is the port to contact the authentication server. If the authenticating server is a DataFlux Authentication Server, then the port should be specified as 21030 unless the default server has been changed. If the authenticating server is a SAS Metadata Server, then the port should be 8561 unless the default server has been changed. For information about valid encodings, see the SAS National Language Support (NLS): Reference Guide.
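For example, the following app.cfg line (with a hypothetical host name) points at a SAS Metadata Server on its default port:

```
BASE/AUTH_SERVER_LOC = iom://metadata.example.com:8561
```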
BASE/AUTH_SERVER_PASS
Identifies the password to present when connecting to the authenticating server defined by BASE/AUTH_SERVER_LOC
Optional
Typically, BASE/AUTH_SERVER_LOC is specified at installation time in the app.cfg file. Separating the user name (BASE/AUTH_SERVER_USER) and password (BASE/AUTH_SERVER_PASS) from the authenticating server location (BASE/AUTH_SERVER_LOC) enables you to run the batch command (dmpexec) with the authenticate option (-a) with individual credentials. For more information, see the “Running Jobs from the Command Line” topic in the DataFlux Data Management Studio: User's Guide.
BASE/AUTH_SERVER_SSPI
Identifies support for SSPI
Optional
Enables support for single sign-on (SSO) to SAS servers that use the Integrated Object Model (IOM) interface. The SAS Metadata Server uses the IOM interface, for example. The default is NO (no support for single sign-on). Specify YES to enable single sign-on connections to SAS servers from DataFlux Data Management Studio. Add this option to a user's ui.cfg file or the default ui.cfg file for DataFlux Data Management Studio. There is no reason to set this option in the configuration files for DataFlux Data Management Servers.
BASE/AUTH_SERVER_USER
Identifies the user name to present when connecting to the authenticating server defined by BASE/AUTH_SERVER_LOC
Optional
Typically, BASE/AUTH_SERVER_LOC is specified at installation time in the app.cfg file. Separating the user name (BASE/AUTH_SERVER_USER) and password (BASE/AUTH_SERVER_PASS) from the authenticating server location (BASE/AUTH_SERVER_LOC) enables you to run the batch command (dmpexec) with the authenticate option (-a) with individual credentials. For more information see the “Running Jobs from the Command Line” topic in the DataFlux Data Management Studio: User's Guide.
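As a sketch, the three authentication-server options might appear together in app.cfg as follows (the host name and credentials shown are hypothetical):

```
BASE/AUTH_SERVER_LOC  = iom://authserver.example.com:21030
BASE/AUTH_SERVER_USER = dfuser
BASE/AUTH_SERVER_PASS = secret
```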
BASE/DMSTUDIO
Studio indicator
Optional
If this option is set to true (value of 1), it indicates that the current process is the dmstudio process (as opposed to processes started by the application, such as dfwfproc). Users should not set or override this value.
BASE/DATE_FORMAT
Specific date formats
Optional
If specified, the value must be iso8601.
BASE/EMAILCMD
Specifies the command used to send e-mail
Required
Can include %T and %B, where %T is replaced with the recipient and %B with a file containing the body of the message. Also used by monitor events and Architect nodes.
BASE/EXE_PATH
Path containing executables
Optional
Calculated.
BASE/FTPGETCMD
Specifies the command used for FTP get functionality
Required
A default is set at installation. The command can include the following substitution variables:
  • %U: Replace with user name.
  • %P: Replace with password.
  • %S: Replace with server.
  • %T: Replace with local directory.
  • %F: Replace with Files to download, multiple separated by spaces.
  • %L: Replace with the log file to pipe the output.
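As a purely illustrative sketch, a value using the substitution variables might look like the following. The command name ftpcmd and its flags are hypothetical; the actual command depends on the FTP client installed on your system:

```
BASE/FTPGETCMD = ftpcmd -u %U -p %P -h %S -d %T -f %F > %L
```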
BASE/FTPPUTCMD
Specifies the command used for FTP put functionality
Required
BASE/JOB_LOG_ENCODING
Encoding for the job log on a DataFlux Data Management Server
Optional
Note: This option must be set on the DataFlux Data Management Server where jobs are executed. It has no effect on DataFlux Data Management Studio job logs.
By default, the log is written in the encoding associated with the locale of the process for the executed job. For English-speaking organizations, this might be LATIN-1 or UTF-8. If a log line contains characters that cannot be represented in the encoding, the log line is not written to the log file. This option enables you to assign the encoding of the job log.
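For example, to force job logs on the server to be written in UTF-8:

```
BASE/JOB_LOG_ENCODING = UTF-8
```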
BASE/LIBRARY_PATH
Path for Java JAR dependencies
Optional
Determined by start-up code (DFEXEC_HOME/lib).
BASE/LOGCONFIG_PATH
Full path to the log configuration file
Optional
If not set in the configuration file, it defaults to logging.xml in the etc directory.
BASE/LOGEXCEPTIONS
Exception logging
Optional
Exception logging defaults to off.
Set this option to either 1 or a combination of letters. A setting of 1 lists the modules loaded when the exception occurred, some information about those modules, and the call stack that caused the error. A setting with letters can include the following:
  • m: do not show module information.
  • V: turn verbose on.
  • U: install the unhandled exception filter.
  • C: install the continue exception filter.
  • f: do not install the first-chance exception filter.
This option must be set before starting the application of interest, because the setting is read only at start-up.
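For example, a hypothetical setting that turns verbose mode on and installs the unhandled exception filter:

```
BASE/LOGEXCEPTIONS = VU
```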
BASE/MACROS_PATH
Path for system macros.cfg file
Optional
If not specified, this file is located in the etc subfolder of the installation folder.
BASE/MESSAGE_LOCALE
Error message locale
Optional
If not specified, it is determined from the system locale.
BASE/MESSAGE_LEVEL
Error level of messages
Optional
0 (or not specified) - normal messages; 1 - includes source file and line number in messages.
BASE/MESSAGE_PATH
Path to the message directory
Optional
Determined by start-up code.
BASE/MONITOR_FREQUENCY
Enables the logging of job node statistics while a job is running.
Disabled by default
If this option is disabled (or its value is -1), then node statistics are logged only when the job has finished executing.
If you want to log node statistics while the job is running, specify the number of milliseconds that the software should wait between logging statistics. The higher the frequency, the more run-time details about node execution are logged in to the job’s log file. However, the additional collection and logging of information affects the job’s performance.
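For example, to log node statistics every 10 seconds while a job runs:

```
BASE/MONITOR_FREQUENCY = 10000
```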
BASE/PLUGIN_PATH
Path used by all subsystems to find plug-ins
Optional
Determined by start-up code.
BASE/PRIMARY_LICENSE
Primary licensing method
Required by base
Must be set in the configuration file as DATAFLUX or SAS.
BASE/PRIMARY_LICENSE_LOC
Location of the primary license file or server
Required by base
Must be set in the configuration file.
BASE/REPOS_DDL_LINE_PREFIX
Format the output of the DDL file that is generated for a repository from the Repository Definition dialog box
Must be set in the configuration file.
For this macro and BASE/REPOS_DDL_LINE_SUFFIX only. Specifying ^p as a value causes a line break.
BASE/REPOS_DDL_LINE_SUFFIX
Format the output of the DDL file that is generated for a repository from the Repository Definition dialog box.
Must be set in the configuration file.
For this macro and BASE/REPOS_DDL_LINE_PREFIX only. Specifying ^p as a value causes a line break.
BASE/REPOS_FREQ_CACHE_MAX
Specifies the number of frequency distributions to cache
Optional. Use only if necessary.
This option prevents memory from overflowing when trying to read a large number of values into memory for frequency distributions.
BASE/REPOS_SYS_PATH
System path for repository configuration files
Optional
Automatically determined.
BASE/REPOS_USER_PATH
User directory for repository configuration files
Optional
Automatically determined by dfcurver.
BASE/REPOS_FILE_ROOT
Overrides the root of the repository for URI lookups
Optional
If specified, this is used as the root for the repository when resolving the URI. The path in the URI is concatenated to this path to give the actual filename of a URI.
BASE/REPOS_EVENT_WAIT_QUERYMS
Specifies the wait time between repository event-processing queries.
Optional
Specifies how frequently in milliseconds to query the repository for changes in the event table. This might need to be changed due to slow servers or IT issues. This is an overriding value and the default is used if no value is set by the user. A setting of -1 disables events from client.
BASE/REPOS_EVENT_CLEAN_TIMEMIN
Specifies that, before start-up, the repository event processor removes all events older than the given number of minutes.
Optional
 
BASE/ROWSET_SIZE
Suggested RowSet Size.
Optional
If specified, the value determines the maximum number of rows that each rowset collection should contain.
BASE/SECONDARY_LICENSE
Secondary licensing method
Required by base
Must be set in the configuration file as DATAFLUX or SAS.
BASE/SECONDARY_LICENSE_LOC
Specifies the location of the secondary license file or server.
Required by base
Must be set in the configuration file.
BASE/SORTBYTES
Specifies the bytes to use when sorting
Optional
 
BASE/SORTMERGES
Enables merge during sort
Optional
 
BASE/SORTTEMP
Specifies the temporary path for sorts
Optional
 
BASE/SORTTHREADS
Specifies the number of sort threads
Optional
 
BASE/SORT_KEEPVAR
Controls the conversion of variable-width temporary data files to fixed width during sorts.
An advanced parameter that should rarely be changed.
A Boolean. When set to true, the temporary data file honors the variable-width record indicator at creation time. When set to false (the default), the temporary data file sort support converts a variable-width file to a fixed-width file if the record contains no string fields, or if the lengths of the string fields in a record are within a threshold relative to the overhead necessary to sort variable-width records. Set to true to mimic pre-2.4 behavior.
BASE/TEMP
Temporary directory
Optional
If not specified, it inherits the value of the TEMP environment variable.
BASE/TEXTMINE_LITI_LANG_LOCATION
Doc extraction node option
Optional
This is the installation location of the Teragram liti files. This option allows the files to reside in the Teragram-provided language directories instead of in the DataFlux installation.
BASE/TIME_BASE
Whether to use GMT time
Optional
If this is set to GMT (not the default), the current date returns in GMT. This affects anything that uses the current date timestamp.
BASE/UPDATE_LEVEL
Application update level
Optional
Defaults to 0. Could be used as a minor revision number.
BASE/USER_PATH
Path for user configuration files
Optional
Automatically determined by dfcurver.
Data Access Component Logging
DAC/DFTKLOGFILE
DFTK logging
Optional
Filename.
DAC/DISABLESYSCATENUM
Enumeration of SYSCAT DSNs
Optional
When set to "yes", 1, or "true", this setting disables enumerating SYSCAT-type DSNs into the individual DSNs that reside on that server.
DAC/DFTKDISABLECEDA
Disables CEDA support
Optional
"Yes" turns the option on (CEDA support is disabled).
DAC/DFTK_PROCESS
Run DFTK out of process
Optional
"Yes" turns it on; off by default.
DAC/DFTK_PROCESS_TKPATH
TKTS path for DFTK out of process
Optional
Path that defaults to a core/sasext directory off the executable directory.
DAC/DSN
DSN directory for TKTS DSNs
Optional
Path that defaults to DFEXEC_HOME/etc/dftkdsn.
DAC/SAVEDCONNSYSTEM
Location of system-saved connections
Optional
Defaults to DFEXEC_HOME/etc/dsn.
DAC/SAVEDCONNUSER
Location of user-saved connections
Optional
Defaults to the user settings folder, the folder where all of the application-specific settings supplied by a user are stored, such as the following path under Windows 7: C:\Users\[username]\AppData\Roaming\DataFlux\dac\9.x
DAC/TKTSLOGFILE
TKTS logging
Optional
Filename.
Address Update (NCOA) (in dfncoa_appcfg.h)
NCOA/AUDIT_OPERID
Specifies the operator ID for an NCOA audit run
Optional
Add this option to either ncoa.cfg or app.cfg. If you do not specify an operator ID, AuditOper is specified automatically.
NCOA/DVDPATH
Path to the unpacked and unzipped NCOA data
Required
Resides in macros/ncoa.cfg.
NCOA/QKBPARSEDEFN
Path to the QKB parse definition used for Address Update
Optional
Default is "Name (Address Update)". Resides in macros/ncoa.cfg.
NCOA/QKBPATH
Path to the QKB used for Address Update name parsing
Required
Resides in macros/ncoa.cfg.
NCOA/USPSPATH
Path to the USPS CASS/DPV/etc data
Required
Resides in macros/ncoa.cfg.
NCOA/REPOSCONNECTION
Specifies the connection string used to connect to the Address Update repository
Required
Overrides NCOA/REPOSDSN. One or the other is required. This is typically set by the Address Update Admin utility. Resides in app.cfg.
NCOA/REPOSDSN
Specifies DSN used to connect to the Address Update repository
Required
Is overridden by NCOA/REPOSCONNECTION. One or the other is required. This is typically set by the Address Update Admin utility. Resides in app.cfg.
NCOA/REPOSPREFIX
Table prefix used on the Address Update tables.
Required
This is typically set by the Address Update Admin utility. Resides in app.cfg.
NCOA/REPOSTYPE
Specifies the repository type
Required
Valid values are: 0 (Guess), 1 (ODBC), 2 (DFTK). If the value is 0, the node attempts to determine the type from the connection string. This is typically set by the Address Update Admin utility. Resides in app.cfg.
NCOA/DFAV_CACHE_SIZE
Set verify cache percentage for the USPS data.
Optional
The higher the value, the more data is cached. The faster the processing, the more memory is used. The default is 0. Resides in macros/ncoa.cfg.
NCOA/DFAV_PRELOAD
Set verify preload options for the USPS data.
Optional
Valid values are "ALL" or an empty string. Using "ALL" requires a large amount of memory. Resides in macros/ncoa.cfg.
Pooling
Note: For puddle options, the name of the puddle is placed after 'POOLING/' (for example, POOLING/WFEJOB/MAXIMUM_PROCESSES). If no puddle name is specified, the option applies globally to all puddles. Here are a few puddles:
  • WFEJOB: batch jobs on DataFlux Data Management Server.
  • WFESVC: process services on DataFlux Data Management Server.
  • APISVC: DFAPI services (in the works).
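To illustrate the puddle-name convention, the first line below applies to all puddles, while the second overrides it for batch jobs only (the values shown are hypothetical):

```
POOLING/MAXIMUM_PROCESSES = 10
POOLING/WFEJOB/MAXIMUM_PROCESSES = 4
```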
POOLING/CHILD_MAXIMUM_LAUNCHES
Throttling for launches
Optional
When specified, the number of concurrent child process launches is limited by this value. If the current child launch request exceeds the specified value, the launch waits until the number of launching processes is below the specified value. If zero or not specified, there is no limit of concurrent child launches.
POOLING/GET_PROCESS_TIMEOUT
Acquire process time-out
Optional
Default is no time-out. Specifies the length of time, in seconds, the process requester should wait for a process to become available. If zero, the requester waits indefinitely. The acquire process time-out applies to the acquisition of a process and the process-pooling handshaking. It does not include the time required by the requester to complete application-level initialization. This is a puddle option.
POOLING/IDLE_TIMEOUT
Idle process time-out
Optional
Default is 0. Specifies the length of time, in seconds, a process remains idle before it is terminated. If zero, idle processes are not terminated. This is a puddle option.
POOLING/MAXIMUM_ERRORS
Maximum number of pooled process errors before process is terminated
Optional
Default is 0 (never terminate it). This controls how many times a process can fail (when it is reused for something else) before it is terminated. This is a puddle option.
POOLING/MAXIMUM_PROCESSES
Maximum number of concurrent pooled processes
Optional
If 0, the number of concurrent pooled processes is unlimited. Default is unlimited. If POOLING/GET_PROCESS_TIMEOUT is set, it waits for that amount of time to get a new process if needed. This is a puddle option.
POOLING/MAXIMUM_USE
Maximum number of pooled process uses before process is terminated.
Optional
Default is 0 (unlimited). The maximum number of times a pooled process can be used. After the pooled process has been used the specified number of times, it is terminated. This is a puddle option.
Process Flow
WFE/CANCEL_TIMEOUT
Amount of time to give remote processes to cancel in milliseconds
Optional
When a user cancels a job, this is the amount of time to wait for remote nodes to exit gracefully before terminating them.
WFE/ENGINE_THREAD_LIMIT
Specifies the thread pool limits for the workflow engine.
Optional
Use this setting to limit the number of engine threads. The default is 0, meaning unbounded, which defers to the system for the thread pool limits. The optimal setting is the number of processors + 1.
WFE/STATUS_FREQUENCY
How frequently to update status
Optional
The default is 250 milliseconds. This is how long to wait before obtaining status from a remote node. Setting to -1 disables polling for status (which might yield better performance).
Profile
PROF/DEBUG_MODE
Frequency distribution engine debug mode
Optional
Possible values are 0 (not debug mode) and 1 (debug mode). The default is 0. The log is located at C:\Documents and Settings\<USER ID>\Local Settings\Temp.
PROF/LOCK_RETRIES
SQLite repository connection attempts
Optional
Specifies the number of times to retry a SQLite repository connection when a connect attempt times out, or -1 to retry until a connection is established.
PROF/PER_TABLE_BYTES
Frequency distribution engine per table bytes
Optional
Any numeric value. Default is -1 (frequency distribution engine default).
QKB
CUSTOMIZE/DISABLE_FILE_NOTIFICATIONS
Temporarily disables notifications
Optional
Read by ui.cfg when Customize starts. When QKB developers make numerous small changes to files in an editor while Customize is open, Customize sends a notification that warns that the file is being changed and provides a list of all the definitions that are affected. To temporarily disable these notifications, edit ui.cfg by adding CUSTOMIZE/DISABLE_FILE_NOTIFICATIONS=1.
QKB/ALLOW_INCOMPAT
Allows data jobs to run even when the software detects that a QKB definition invoked by the job was saved by a later version of the software than the current version.
Optional
Default is NO. The default behavior is for these definitions to fail to load. Results obtained when this option is turned on are undefined.
QKB/COMPATVER
Tells DataFlux Data Management Studio which version of Blue Fusion to use when running a data job.
Optional
Possible values: dfpower82, dmp21, and dmp22. Default: dmp22. This is for customers who want to use the latest version of DataFlux Data Management Studio but who want the outputs of their QKB-related Data Job nodes (for example, matchcodes) to be exactly the same as the outputs from earlier versions.
QKB/ON_DEMAND
Loads QKB definitions on demand
Optional
Default is YES. The application start-up creates a Blue Fusion pool that sets the option for all consumers (Profile, Explorer, and Nodes) with the exception of the Expression Engine, which has its own initialization.
Set this option to No to find errors within definitions and to see error messages specific to Pattern Logic nodes.
If used for a data job, this option allows QKBs to load definitions as needed instead of loading all of them at the beginning. These definitions are kept in memory for future runs that reuse that process.
If used for a service, each loaded service loads its own QKB. Once a service is loaded into memory and runs at least once, it keeps the QKB loaded from the previous run and does not have to be loaded again. Note that a QKB in memory is not shared across different services or threads, so each initiation of either a new service or a new thread for an existing service will cause the QKB to be loaded. This could have an implication on memory and performance.
QKB/PATH
Path to QKB
Required by QKB products
Path is set to the default QKB defined in application.
QKB/SURFACEALL
Surfaces all definitions in the Data Job interface, even definitions for which the "Surface" flag is unchecked in Customize.
Optional
Default is NO. Note that the application start-up creates a Blue Fusion pool that sets the option for all consumers (Profile, Explorer, and Nodes) with the exception of the Expression Engine, which continues to have its own initialization.
Architect Client (UI) settings
ARCHITECT/AutoPassThru
Client option to set mappings
Optional
Maintained by client; choices are 0 (target), 1 (Source and Target), and 2 (All).
Architect nodes, and so on (Defined in ids.h)
CLUSTER/BYTES
Specifies the bytes to use when clustering
Optional
 
CLUSTER/LOG
Specifies whether a clustering log is needed
Optional
 
CLUSTER/TEMP
Specifies the cluster temporary path
Optional
 
FRED/LOG
Specifies whether a FRED log is needed
Optional
 
JAVA/CLASSPATH
Specifies the Java classpath
Optional
 
JAVA/DEBUG
Specifies the Java debug options.
Optional
 
JAVA/DEBUGPORT
Specifies the port to remotely debug Java.
Optional
 
VERIFY/PRELOAD
Preloads defined state data into memory for address verification.
Optional
Set in the app.cfg file. Values can be ALL, a two-letter state abbreviation, or multiple state abbreviations separated by spaces.
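For example, to preload data for two states (the abbreviations are chosen arbitrarily):

```
VERIFY/PRELOAD = NC SC
```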
VERIFY/USELACS
Enables or disables the LACSLink processing
Optional
Locatable Address Conversion System (LACS).
VERIFY/USEELOT
Enables or disables the eLOT processing
Optional
 
VERIFY/USPS
Specifies the USPS data path
Required by USPS
Maintained by USPS installation.
VERIFY/UPSPINST
Determines whether the USPS data is installed or if sample data is being used
Required
Maintained by USPS installation.
VERIFYINTL/CFG
Verifies the international addresses
Required by international verification
Path maintained by component installation.
VERIFYWORLD/CONFIGFILE
Specifies the path to the SetConfig.xml file that is used by the Address Verification (World 2) data job node. Use this option to change the default location of this file.
Required by Address Verification (World 2) data job node
For more information about this file, see the Address Verification (World 2) node in the DataFlux Data Management Studio: User's Guide.
VERIFYWORLD/DB
Specifies the Platon data path
Required for Platon
Path maintained by component installation.
VERIFYWORLD/UNLK
Specifies the Platon library universal unlock code
Required for Platon
Path maintained by component installation.
WEBSERVICE/CONFIG_FILE
Specifies a user-defined configuration file for the Web Service node and the HTTP Request node. This file can be used to increase the time-out value, for example.
Optional
For more information about the user-defined configuration file, see the FAQ topic: "What Can I Do About Time-Out Errors in Data Jobs with the Web Service Node or the HTTP Request Node?" in the DataFlux Data Management Studio: User's Guide.
dfIntelliServer
DFCLIENT/CFG
Used for dfIntelliServer
Required
Maintained by the dfIntelliServer installation. The typical location is C:\Program Files\DataFlux\dfIntelliServer\etc\dfclient.cfg. Modify the dfclient.cfg file to point to the server and port.
Repository
REPOS/CREATE_SPEC_PATH
Specifies how to create the repository table or index
Optional
This specification provides a means of configuring the commands to create tables and indexes in the repository.
REPOS/FORCE_FILE_BASED
Repository SQLite usage
Optional
If set to true, all SQLite access goes through dfsqlite instead of DAC.
REPOS/LOCK_RETRIES
Specifies the number of attempts to connect to a SQLite repository
Optional
Specifies the number of times to retry a SQLite repository connection when a connect attempt times out, or -1 to retry until a connection is established.
REPOS/TABLE_LIST_PATH
Repository XML table definition
Optional
The directory that should contain XML files for any tables the repository library should add on creation or update. If set, the repository library looks here for XML files that contain repository table definitions; if not set, it looks in DFEXEC_HOME/etc/reposcreate.
Other
BY_GROUP
Specifies to sort the job by the selected group
Optional
When a key differs, an indicator is returned informing the caller that the retrieved row begins a new group with the same key. This is useful when clustering by a single key or similar processing.
COLLATION
Specifies how things are collated
Optional
A value of UPPER sorts uppercase letters first, then the lowercase letters. A value of LOWER sorts lowercase letters first, then the uppercase letters. If you do not select a collation value, then the user's locale-default collation is selected. Linguistic collation allows for a locale-appropriate collating sequence.
DMSERVER/NAME
Specifies the name of the metadata definition of the DataFlux Data Management Server that is stored on the SAS Metadata Server. When the DataFlux Data Management Server is started, it uses the name to query the SAS Metadata Server for configuration information.
This option is ignored when BASE/AUTH_SERVER_LOC identifies a DataFlux Authentication Server rather than a SAS Metadata Server.
For DataFlux Data Management Servers only.
This option is specified by default when the DataFlux Data Management Server is installed as part of SAS Visual Process Orchestration.
When the option BASE/AUTH_SERVER_LOC in app.cfg identifies a SAS Metadata Server, the DataFlux Data Management Server retrieves and sets the following values:
  • DMSERVER/SOAP/LISTEN_HOST
  • DMSERVER/SOAP/LISTEN_PORT
  • DMSERVER/SOAP/SSL
  • DMSERVER/SECURE
If the SAS Metadata Server cannot locate a metadata definition based on the name, then the DataFlux Data Management Server does not start.
If any of the preceding options have values in the DataFlux Data Management Server’s dmserver.cfg file, then the local values override the values that are supplied in metadata. For this reason, it is recommended that you comment out these options in dmserver.cfg.
To access the named metadata definition on the SAS Metadata Server, one of two conditions must be met: either the process owner of the DataFlux Data Management Server has a user definition on the SAS Metadata Server, or the named metadata definition is available to the PUBLIC group.
EXPRESS_MAX_STRING_LENGTH
Specifies the maximum size of strings declared in expression nodes
Optional
The default maximum length of any string in this node is 5,242,880 bytes (5 MB). This option enables you to specify a different value in bytes. If performance issues arise, the suggested setting is 65536 bytes.
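For example, to apply the suggested lower limit when performance issues arise:

```
EXPRESS_MAX_STRING_LENGTH = 65536
```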
EXPRESSION/UDFDIR
Specifies where to look for UDF files
Optional
If not specified, look for UDF files in installationdir/etc/udf.
JAVA/COMMAND
Command used to launch Java
Optional
Default is Java. This is the command used to launch the Java proxy process. The Java command must be compatible with launching from the command line.
Here are some examples:
  JAVA/COMMAND = java
  JAVA/COMMAND = java -Djavax.net.ssl.trustStore=C:\Store\jssecacerts
  JAVA/COMMAND = java -Djavax.net.ssl.trustStore="C:\Cert Store\jssecacerts"
  JAVA/COMMAND = "C:\Program Files\Java\jre6\bin\java"
MDM/REPOSITORY_ROOT_FOLDER
Allows the end user to override foundations/master_data when putting the contents of [INSTALL_ROOT]/share/mdm into a repository.
Optional
Name and location of the root folder for Master Data Management within a repository.
MONITOR/BULK_ROW_SIZE
Specifies the number of rows in a bulk load.
The default value for a bulk load is 1000 rows. Change this value to enhance the performance of jobs that monitor business rules when those jobs include row-logging events.
MONITOR/DUMP_JOB_DIR
Specifies a directory to store temporary jobs created by the business rule monitor.
Optional
By default, this option is not set and the Monitor does not store temporary jobs on disk.
NODUPS
Specifies no duplicates.
When duplicate keys are encountered, only one of the rows containing the duplicate key is returned to the caller.
SAP_LIBPATH
Specifies the location of SAP RFC libraries on UNIX only.
Optional
These shared libraries are installed to support the SAP Remote Function Call node, a data job node in DataFlux Data Management Studio. For more information, see Installing Support for the SAP RFC Node .
STABLE
Ensures that when duplicate keys are encountered, the duplicate rows are returned in the order in which they were entered.
The stable implementation is accomplished by adding a row counter key to the end of the selected key(s). The row counter is hidden from the caller and is not returned. The addition of a unique portion to the key adversely affects BY Group and No Duplicate processing. Therefore, when the Stable feature is requested with either BY Group or No Duplicate processing, the BY Group and No Duplicate processing are delayed until post sort.
STEPENG/PROFILEBYNODE
Specifies the performance profiler by node instance.
Use only for design and testing. Do not use in a production environment
When set to Yes, this setting reports, for each node instance, how many milliseconds were spent on each operation (prepare, pre-execute, execute) and how many times each was entered. The ID corresponds to the iid field in the job’s XML file and includes the job name so that you can identify embedded jobs.
To turn on this functionality, edit your configuration files: to profile real-time services, update dfwsvc.cfg; to profile batch jobs, update dfwfproc.cfg; to profile from Studio, update ui.cfg; to profile all three, update app.cfg. The results are written to the log under the DF.RTProfiler heading at trace level. An example of the output is NX,inner2.ddf,1,0,5, where the values represent the action type or operation (NX: cumulative time spent processing rows; PR: time preparing; PX: time pre-executing), job name, instance (the iid field in the XML file), milliseconds, and entries (the number of times that code was entered).
STEPENG/PROFILEBYTYPE
Specifies the performance profiler by node type.
Use only for design and testing. Do not use in a production environment
When set to Yes, this setting reports, for each node type, how many milliseconds were spent on each of three operations (prepare, pre-execute, execute) and how many times each was entered.
To turn on this functionality, edit your configuration files: to profile real-time services, update dfwsvc.cfg; to profile batch jobs, update dfwfproc.cfg; to profile from Studio, update ui.cfg; to profile all three, update app.cfg. The results are written to the log under the DF.RTProfiler heading at trace level. An example of the output is NX,ARCHITECT_EMBEDDED_JOB,0,5, where the values represent the action type or operation (NX: cumulative time spent processing rows; PR: time preparing; PX: time pre-executing), node type, milliseconds, and entries (the number of times that code was entered).