When more than one login
matches the resolved user ID, the authentication domain for the presented
credentials is determined by the following:
|
|||
The basic format of
the authentication server IOM URI is
iom://<host>:<port>,
where <host> is the name of the computer that runs the authentication
server and <port> is the port used to contact the authentication
server. If the authenticating server is a DataFlux Authentication
Server, specify port 21030 unless the default port has been changed.
If the authenticating server is a SAS Metadata Server, specify port
8561 unless the default port has been changed. For information about
valid encodings, see the SAS
National Language Support (NLS): Reference Guide.
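For example, the following app.cfg line (the host name is an illustrative
placeholder) points BASE/AUTH_SERVER_LOC at a DataFlux Authentication
Server on its default port:
BASE/AUTH_SERVER_LOC = iom://authserver.example.com:21030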
|
|||
Typically, BASE/AUTH_SERVER_LOC
is specified at installation time in the app.cfg file. Separating
the user name (BASE/AUTH_SERVER_USER) and password (BASE/AUTH_SERVER_PASS)
from the authenticating server location (BASE/AUTH_SERVER_LOC) enables
you to run the batch command (dmpexec) with the authenticate option
(-a) with individual credentials. For more information see the “Running
Jobs from the Command Line” topic in the DataFlux
Data Management Studio: User's Guide.
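For example, an app.cfg might carry the server location while individual
credentials are supplied in configuration (all values shown are
illustrative placeholders):
BASE/AUTH_SERVER_LOC = iom://authserver.example.com:21030
BASE/AUTH_SERVER_USER = dfuser
BASE/AUTH_SERVER_PASS = secret
A batch run can then authenticate with those credentials by adding
the -a option to the dmpexec command, as described in the referenced topic.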
|
|||
Enables support for
single sign-on (SSO) to SAS servers that use the Integrated Object
Model (IOM) interface. The SAS Metadata Server uses the IOM interface,
for example. The default is NO (no support for single sign-on). Specify
YES to enable single sign-on connections to SAS servers from DataFlux
Data Management Studio. Add this option to a user's ui.cfg file
or the default ui.cfg file for DataFlux Data Management Studio. There
is no reason to set this option in the configuration files for DataFlux
Data Management Servers.
|
|||
Typically, BASE/AUTH_SERVER_LOC
is specified at installation time in the app.cfg file. Separating
the user name (BASE/AUTH_SERVER_USER) and password (BASE/AUTH_SERVER_PASS)
from the authenticating server location (BASE/AUTH_SERVER_LOC) enables
you to run the batch command (dmpexec) with the authenticate option
(-a) with individual credentials. For more information, see the “Running
Jobs from the Command Line” topic in the DataFlux
Data Management Studio: User's Guide.
|
|||
By default, the log
is written in the encoding associated with the locale of the process
that executes the job. For English-speaking organizations, this might
be LATIN-1 or UTF-8. If a log line contains characters that cannot
be represented in the encoding, the log line is not written to the
log file. This option enables you to assign the encoding of the job
log.
|
|||
Set this option to either
1 or a combination of letters. A setting of 1 lists the modules that
were loaded when the exception occurred, some information about those
modules, and the call stack that caused the error. A setting with
letters can include the following: m = do not show module information,
V = turn on verbose output, U = install the unhandled exception filter,
C = install the continue exception filter, f = do not install the
first-chance exception filter. This option must be set before starting
the application of interest, because the setting is read only at start-up.
|
|||
If you want to log
node statistics while the job is running, specify the number of milliseconds
that the software should wait between logging statistics. The shorter
the interval (that is, the higher the logging frequency), the more
run-time details about node execution are logged to the job’s log file.
However, the additional collection and logging of information affects
the job’s performance.
|
|||
A Boolean option. When
set to true, the temporary data file honors the variable-width record
indicator at the time the temporary data file is created. When set
to false (the default), the temporary data file sort support converts
a variable-width file to a fixed-width file if the records contain
no string fields, or if the lengths of the string fields in a record
are within a threshold relative to the overhead necessary to sort
variable-width records. Set this option to true to mimic pre-2.4 behavior.
|
|||
Note: For puddle options, the name
of the puddle is placed after 'POOLING/' (for example, POOLING/WFEJOB/MAXIMUM_PROCESSES).
If no puddle name is specified, the option applies globally to all puddles.
The puddles include WFEJOB (batch jobs on DataFlux Data Management Server),
WFESVC (process services on DataFlux Data Management Server), and
APISVC (DFAPI services, still in development).
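For example, assuming MAXIMUM_PROCESSES as the option (as in the example
above) and illustrative values, a global setting and a puddle-specific
override might look like this:
POOLING/MAXIMUM_PROCESSES = 10
POOLING/WFEJOB/MAXIMUM_PROCESSES = 4
The first line applies to all puddles; the second applies only to the
WFEJOB puddle (batch jobs).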
|
|||
When specified, this
value limits the number of concurrent child process launches. If a
child launch request would exceed the specified value, the launch
waits until the number of launching processes falls below that value.
If this option is zero or not specified, there is no limit on concurrent
child launches.
|
|||
Specifies the length of time, in seconds, that the process requester
should wait for a process to become available. The default is no
time-out. If set to zero, the requester waits indefinitely. The
acquire-process time-out covers the acquisition of a process and the
process pooling handshaking. It does not include the time required
by the requester to complete application-level initialization.
This is a puddle option.
|
|||
Read from ui.cfg when
Customize starts. When QKB developers make numerous small changes
to files in an editor while Customize is open, Customize sends a notification
that warns that the file is being changed and provides a list of all
the definitions that are affected. To temporarily disable these notifications,
edit ui.cfg by adding CUSTOMIZE/DISABLE_FILE_NOTIFICATIONS=1.
|
|||
Possible values: dfpower82,
dmp21, and dmp22. Default: dmp22. This is for customers who want to
use the latest version of DataFlux Data Management Studio but who
want the outputs of their QKB-related Data Job nodes (for example,
matchcodes) to be exactly the same as the outputs from earlier versions.
|
|||
If used for a data job,
this option allows QKBs to load definitions as needed instead of loading
all of them at the beginning. These definitions are kept in memory
for future runs that reuse that process.
If used for a service,
each loaded service loads its own QKB. Once a service is loaded into
memory and runs at least once, it retains the QKB from the previous
run, so the QKB does not have to be loaded again. Note that a QKB in
memory is not shared across different services or threads, so each
initiation of either a new service or a new thread for an existing
service causes the QKB to be loaded. This can affect memory use
and performance.
|
|||
A value of UPPER sorts
uppercase letters first, then the lowercase letters. A value of LOWER
sorts lowercase letters first, then the uppercase letters. If you
do not select a collation value, then the user's locale-default
collation is selected. Linguistic collation allows for a locale-appropriate
collating sequence.
|
|||
When the option BASE/AUTH_SERVER_LOC
in app.cfg identifies a SAS Metadata Server, the DataFlux Data Management
Server retrieves and sets the following values:
If the SAS Metadata Server cannot locate a metadata
definition based on the name, then the DataFlux Data Management Server
does not start.
If any of the preceding
options have values in the DataFlux Data Management Server’s
dmserver.cfg file, then the local values override the values that
are supplied in metadata. For this reason, it is recommended that
you comment out these options in dmserver.cfg.
To access the named
metadata definition on the SAS Metadata Server, one of two conditions
must be met: either the process owner of the DataFlux Data Management
Server has a user definition on the SAS Metadata Server, or the named
metadata definition is available to the PUBLIC group.
|
|||
These shared libraries
are installed to support the SAP Remote Function Call node,
a data job node in DataFlux Data Management Studio. For more information,
see Installing Support for the SAP RFC Node.
|
|||
The stable implementation
is accomplished by adding a row counter key to the end of the selected
key(s). The row counter is hidden from the caller and is not returned.
The addition of a unique portion to the key adversely affects BY Group
and No Duplicate processing. Therefore, when the Stable feature is
requested with either BY Group or No Duplicate processing, the BY
Group and No Duplicate processing are delayed until after the sort.
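A minimal sketch of this technique in Python (illustrative only, not
the product's internal code): append a hidden row counter to the key
so that records with equal keys keep their input order.
rows = ["smith,3", "jones,1", "smith,2"]
keyed = [(row.split(",")[0], counter, row) for counter, row in enumerate(rows)]
keyed.sort()                           # the appended counter breaks ties in input order
stable = [row for _, _, row in keyed]  # the counter is hidden from the caller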
|
|||
To turn on this functionality,
edit your configuration files. To profile real-time services, update
dfwsvc.cfg. To profile batch jobs, update dfwfproc.cfg. To profile
from DataFlux Data Management Studio, update ui.cfg. To profile all
three, update app.cfg.
The results are written to the log under the DF.RTProfiler heading
at trace level. An example of the output is:
NX,inner2.ddf,1,0,5, where
the values represent the action type or operation (NX - cumulative
time spent processing rows, PR - time spent preparing, or PX - time
spent pre-executing), the job name, the instance (the iid field in
the XML file), the milliseconds spent, and the entries (the number
of times that code was entered).
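For example, a minimal Python sketch that splits the example line above
into the fields named in this description (field names come from this
text, not from the product source):
op, job_name, instance, milliseconds, entries = "NX,inner2.ddf,1,0,5".split(",")
# op is NX (processing rows), PR (preparing), or PX (pre-executing)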
|
|||
To turn on this functionality,
edit your configuration files. To profile real-time services, update
dfwsvc.cfg. To profile batch jobs, update dfwfproc.cfg. To profile
from DataFlux Data Management Studio, update ui.cfg. To profile all
three, update app.cfg.
The results are written to the log under the DF.RTProfiler heading
at trace level. An example of the output is:
NX,ARCHITECT_EMBEDDED_JOB,0,5, where
the values represent the action type or operation (NX - cumulative
time spent processing rows, PR - time spent preparing, or PX - time
spent pre-executing), the node type, the milliseconds spent, and the
entries (the number of times that code was entered).
|