Concepts
Jobs and services that run on a DataFlux Integration Server fulfill separate needs. Use jobs to access larger data sets in batch mode, when a client application is not waiting for a response. Use services to access smaller data sets in real time, when a client application awaits a response from the server.
To create jobs and services for your DataFlux Integration Server, use the DataFlux dfPower Profile and DataFlux dfPower Architect applications. Create jobs that analyze the quality of your data with DataFlux dfPower Profile software. Profile jobs and their output can be stored either as files or in the DataFlux unified repository. The DQSRVPROFJOBFILE function executes a profile job that has been stored as a file. The DQSRVPROFJOBREP function executes a profile job that has been stored in the repository.
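For example, the following DATA step submits one profile job of each type. This is a minimal sketch: the job names, server name, and port (21036 is assumed here as the server's default port) are hypothetical, and the argument order shown for the two functions (job name, host, port) is illustrative, so check the function reference for the exact signatures.

/* Submit a profile job that is stored as a file on the server.      */
/* Job, host, and port values are hypothetical.                      */
data _null_;
   jobid1 = dqsrvprofjobfile('profileCustomers.pfi', 'dqserver1', 21036);
   put jobid1=;

   /* Submit a profile job that is stored in the unified repository. */
   jobid2 = dqsrvprofjobrep('profileCustomers', 'dqserver1', 21036);
   put jobid2=;
run;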
Use DataFlux dfPower Architect software to create jobs and services in a drag-and-drop interface. You can trigger DataFlux dfPower Architect jobs with the DQSRVARCHJOB function. You can run DataFlux dfPower Architect services with the DQSRVSVC procedure.
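The following sketch triggers an Architect job from a DATA step. The job name, host, and port are hypothetical, and the DQSRVARCHJOB argument order (job name, host, port) is illustrative rather than definitive; a service example appears later in this chapter.

/* Trigger an Architect job that was uploaded to the server.  */
/* The returned job identifier can be used later to check on  */
/* or terminate the job.                                       */
data _null_;
   jobid = dqsrvarchjob('standardizeAddresses.dmc', 'dqserver1', 21036);
   put jobid=;
run;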
Jobs and services that run on a DataFlux Integration Server generate information that is written to the server's log. You can read the server log with the DQSRVADM procedure. Based on the status information returned by the DQSRVJOBSTATUS function, you can terminate jobs with the DQSRVKILLJOB function.
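The following sketch reads the server log and then checks and terminates a job. The host, port, and the option and argument lists shown for the DQSRVADM procedure and the DQSRVJOBSTATUS and DQSRVKILLJOB functions are assumptions for illustration; consult the procedure and function reference for the exact syntax.

/* Read the server log into a data set (option names are assumed). */
proc dqsrvadm host='dqserver1' port=21036 out=work.serverlog;
run;

/* Check a job's status and terminate it (argument lists are assumed). */
data _null_;
   jobid  = dqsrvarchjob('standardizeAddresses.dmc', 'dqserver1', 21036);
   status = dqsrvjobstatus(jobid, 'dqserver1', 21036);
   put status=;
   /* In practice, test the returned status before terminating the job. */
   rc = dqsrvkilljob(jobid, 'dqserver1', 21036);
   put rc=;
run;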
Running Jobs and Services on a DataFlux Integration Server
Follow these steps to run jobs and services on a DataFlux Integration Server.
1. In the z/OS operating environment, for DataFlux Integration Servers version 8.1.1 or newer, configure your DataFlux Integration Server to use the Wireline protocol, as described in the DataFlux Integration Server: User's Guide. The Wireline protocol improves data transfer performance.
2. In the z/OS operating environment, reconfigure your SAS session to use the Wireline protocol, as described in the Configure Your SAS Session for Data Quality section earlier in this chapter.
3. Create jobs and services with the DataFlux dfPower Profile and DataFlux dfPower Architect software.
4. Upload the jobs to the DataFlux Integration Server with the DataFlux Integration Server Manager.
5. Create and run the SAS programs that execute or trigger the jobs and services on the DataFlux Integration Server.
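For example, a minimal program for step 5 might run a service that was uploaded to the server. The service, host, and data set names below are hypothetical, and the program is a sketch only; a complete example that includes a user ID and password appears in the DataFlux Integration Server Passwords section that follows.

/* Run an uploaded service against a local data set.     */
/* Service, host, and data set names are hypothetical.   */
proc dqsrvsvc service='verifyAddress' host='dqserver1'
   data=work.addresses out=work.addresses_clean;
run;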
To run jobs and services, you do not need to load a Quality Knowledge Base onto your local host. The DataFlux Integration Server handles all interactions with your Quality Knowledge Bases.
See: The DQSRVADM Procedure and The DQSRVSVC Procedure.
DataFlux Integration Server Passwords
If security has been implemented on your DataFlux Integration Server, include user names and passwords in the procedures and function calls that access that server. You can specify passwords either in plain text or in encoded form. SAS recognizes encoded passwords and decodes them before sending them to the DataFlux Integration Server.
The following example shows how to encode a password and then use that encoded password when running the DQSRVSVC procedure:
/* Encode password in file. */
filename pwfile 'c:\dataEntry01Pwfile';

proc pwencode in='Oe3s2m5' out=pwfile;
run;

/* Load encoded password into macro variable. */
data _null_;
   infile pwfile obs=1 length=l;
   input @;
   input @1 line $varying1024. l;
   call symput('dbpass', substr(line,1,l));
run;

/* Run service on secure DataFlux Integration Server. */
proc dqsrvsvc service='cleanseCorpName' host='entryServer1'
   userid='DataEntry1' password="&dbpass"
   data=corpName out=corpNameClean;
run;
PROC PWENCODE concepts, syntax, and examples are documented in the Base SAS Procedures Guide.