Guidelines for Processing Data with User-Written Staging Transformations

About Processing Data with User-Written Staging Transformations

The user-written staging transformation enables you to create your own staging code. You can process data from any data source that SAS can read and you can generate user-defined staged tables. These user-defined staged tables can then be used as the source tables for additional transformations, including the Aggregation and Information Map transformations, in preparation for reporting on the data in those tables.
These topics explain the process of developing a user-written adapter to be used with the SAS IT Resource Management 3.2 client. The method described in the following topics is not the only way to create an adapter. You can write an adapter in any way that you want. However, this method can help you follow the same standards that are used for the supplied SAS IT Resource Management 3.2 adapters. Following these standards keeps your adapter consistent with other adapters. You can use some or all of the following processes as needed.

Staging Data with User-Written Staging Transformations

Overview of Staging Data with User-Written Staging Transformations

The staging step reads data from a raw data source and makes SAS staged tables. These staged tables are then used as the source to the Aggregation transformation. The User-Written Staging transformation is provided in the SAS IT Resource Management client to help in the development of the staging process. This transformation enables staged table developers to point to their own SAS code that can read the raw data and create the SAS tables. In addition, the User-Written Staging transformation supplies a set of macros and macro variables that provide information from the metadata that can be useful when creating the target tables.
The following topics discuss best practices or guidelines for staging the data with user-written staging transformations, aggregating the data processed by those transformations, creating information maps for the aggregated tables of data, and reporting on that data.
For more information about staged tables, see About Staging the Data.

How to Stage Data with User-Written Staging Transformations

Use the following steps as a guide to staging the data with the User-Written Staging transformation.
  1. Design the adapter.
    To accomplish this step, you need to thoroughly understand the raw data. You also need to determine what your staged tables should look like. For example, you should determine the names of the staged tables, the column names for each table, the attributes of the columns, and so on.
  2. Create an IT data mart.
    On the IT Data Marts tab create an IT data mart for your adapter. For information, see Create an IT Data Mart.
  3. Set up the metadata for your staged tables.
    Create the folder for your staged tables and libraries.
    1. Right-click the IT data mart that you just created. Select New, then select New Folder. Create the folder. The name of the folder should be the name of your adapter.
    2. Right-click the new folder that you just created and select New, then select New Folder. Create a folder called Staging.
    Create a library and its metadata for your staged tables.
    1. Right-click the Staging folder and select New, then select Library. The New Library Wizard opens.
    2. Select SAS Base Library as the type of library that you want to define. On the next page of the wizard, specify a name such as Staging for the new library.
    3. On the next page of the wizard, choose SASITRM.
    4. On the next page of the wizard, specify the libref that is associated with the staged table library.
      For best results, specify the name of the libref as stgxxxx, where xxxx is a random number to ensure that the libref is not used elsewhere.
      In the Path Specification box, click New to add a new path.
      Tip
      Use the root path that you specified for the IT data mart that you created and add an identifier to the end of that path. The identifier could be the folder, directory, or name of the libref, according to the standards of your operating environment. For best results, use the same name as the libref you just specified. For example, on Windows, if your root path is C:\DataMarts\MyAdapter, and your libref is stg1357, then specify this library path as C:\DataMarts\MyAdapter\stg1357.
    5. Continue to follow the prompts of the wizard to create the appropriate metadata for the library.
  4. On your operating system, create the physical folder for the library. (The previous step created only the metadata for the library.)
  5. Write the SAS code to read the raw data and create SAS tables as output. The complexity of this step varies depending on the raw data that you are reading and the tables that you are creating.
    Tip
    This code can be any SAS program that creates one or more permanent SAS data tables as output. These tables should be put in the location that you created in step 4. It will simplify your work later if you use the same libref that you specified in step 3.
    For best results, consider adding the following columns to your staged tables.
    • DATETIME
      In the supplied ITRM adapters, the staged tables are date and time based. All the tables contain a column called DATETIME to hold this information. Using this column name enables you to use the supplied formulas and the duplicate and future checking processes without modification. Its attributes are as follows:
          attrib DATETIME length = 8
                          format = NLDATM.
                          label  = 'Datetime';
      Note: DATETIME is the starting date and time of interval data. If the raw data provides the ending date and time, subtract the interval duration from that date and time in order to calculate the starting date and time. (The sketch at the end of this step shows this calculation.)
    • LSTPDATE
      In order to take advantage of the duplicate checking process that is available in SAS IT Resource Management aggregations, include a column called LSTPDATE in your staging tables. This column should be set to the current date and time: LSTPDATE = datetime();
      Its attributes are as follows:
          attrib LSTPDATE length = 8
                          format = NLDATM.
                          label  = 'LastProcessDate';
    • DURATION
      Interval-type data is data that is captured, or snapshotted, at given intervals. For interval-type data, supplied adapters include a field called DURATION. This field is set to the duration of each interval in seconds. It is useful as the weight variable in an aggregation when creating weighted statistics. Its attributes are as follows:
          attrib DURATION length = 8
                          format = TIME8.
                          label  = 'Duration';
    Assign an appropriate format and label to your other columns. (You can also do this at a later time in the SAS IT Resource Management client in steps 9.k through 9.m.)
    Note: It is not necessary to create aggregation variables such as Hour, Shift, WeekDate, and so on. You can add these columns in step 9.l.
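    The following sketch shows how these three columns might be derived in one DATA step. It assumes five-minute interval data in which each raw record carries the interval-end timestamp in a hypothetical column called ENDDT; adjust the names and the interval length to match your raw data.
        data staging.example;
           attrib DATETIME length = 8 format = NLDATM. label = 'Datetime'
                  LSTPDATE length = 8 format = NLDATM. label = 'LastProcessDate'
                  DURATION length = 8 format = TIME8.  label = 'Duration';
           set work.rawdata;             /* assumed input with the column ENDDT */
           DURATION = 300;               /* five-minute interval, in seconds */
           DATETIME = enddt - DURATION;  /* convert the ending time to the starting time */
           LSTPDATE = datetime();        /* current date and time, for duplicate checking */
        run;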
  6. Run the program to create the staged tables.
  7. On the SAS IT Resource Management client, register your staged tables.
    How to register your staged tables.
    1. On the SAS IT Resource Management client, right-click the staging library that you created in steps 3c through 3g, and select Register Tables. The Register Tables wizard opens.
    2. On the first page of the wizard, you do not need to make any changes. Your library is listed on that page.
    3. The tables that you created in step 6 are listed. Click Select All Tables to register all the tables in metadata.
    4. Continue to follow the prompts of the wizard to complete the registration.
  8. If the input to the User-Written Staging transformation is a SAS data set or another type of database table, then create the metadata for it.
    Tip
    One method that you can use to create the necessary metadata for your input is to create a library in metadata that points to your source data. Then select File, then select Register Tables to register the tables in metadata.
  9. Create a staging job to populate the staging tables.
    How to create a staging job.
    1. Right-click the Staging folder in your IT data mart and select New, then select Job.
    2. Give the job a name such as Staging.
    3. From the Transformations tree, expand ITRM Transformations. Then drag the User-Written Staging transformation onto the job that you just created in the Diagram tab of the Job Editor window.
    4. Drag the staging tables from the Staging folder in your IT data mart onto the job.
    5. Connect the User-Written Staging transformation to each of the staging tables so that the tables are outputs of the transformation.
    6. If you created input metadata in step 8, then drag the input table to the job. Connect the table as input to the User-Written Staging transformation.
    7. Right-click the User-Written Staging transformation and select Properties. From the drop-down list, select the Staging Parameters tab. The Staging page appears.
      1. On the Staging page, specify the path to the raw data that you want to process. A macro variable called Rawdata will be created in the generated code with this value.
        On this page, you can also specify duplicate checking and future data options.
      2. Select the Duplicate Checking option in the left panel. The Duplicate Checking page appears. It provides a set of parameters that will be used if you decide to implement the duplicate checking process. For information about how to implement duplicate checking of your data, see Duplicate Checking.
      3. Select the User-Written option in the left panel. The User-Written page appears. The first parameter on this page enables you to specify the path to your SAS staging program. A value for this parameter is required.
        The other parameters enable you to specify what macros and macro variables you want the transformation to generate for your SAS program.
      Note: For information about these properties, see Step 8 of the “Create a User-Written Staging Transformation” topic.
      Tip
      You can click the Reset to defaults option at the top of the page to reset all the parameters in that subgroup to the values that were established when SAS IT Resource Management was installed. Click the Reset option next to any one of these parameters to reset it to the value that was established when SAS IT Resource Management was installed. Also note that Reset to defaults does not reset a parameter that is dependent on another parameter if the parent parameter does not change.
    8. Standard SAS IT Resource Management formulas are provided to compute columns such as WEEKDATE and SHIFT. If you want to use any of these formulas, double-click the tables in the job that is opened in the Diagram tab of the Job Editor window. Then select the Columns tab.
    9. On the Columns tab, you can add the standard SAS IT Resource Management date columns and their formulas.
      The following table lists the standard columns.
      Table of Standard SAS IT Resource Management Columns
      Column Name          Length   Format      ITRM Formula            Description
      DATETIMEFIFTEENMIN   8        NLDATM.     DatetimeFifteenMinute   DatetimeFifteenMinute
      DATETIMEFIVMIN       8        NLDATM.     DatetimeFiveMinute      DatetimeFiveMinute
      DATETIMEONEMIN       8        NLDATM.     DatetimeOneMinute       DatetimeOneMinute
      DATETIMETENMIN       8        NLDATM.     DatetimeTenMinute       DatetimeTenMinute
      DATETIMETHIRTYMIN    8        NLDATM.     DatetimeThirtyMinute    DatetimeThirtyMinute
      DATETIMETWENTYMIN    8        NLDATM.     DatetimeTwentyMinute    DatetimeTwentyMinute
      DAYDATE              8        NLDATE10.   DayDate                 DayDate
      DAYOFMONTH           8        2.          DayOfMonth              DayOfMonth
      DAYOFWEEK            8        WKD.        DayOfWeek               DayOfWeek
      HOUR                 8        2.          Hour                    Hour
      MONTHDATE            8        NLDATE10.   MonthDate               MonthDate
      SHIFT                $1       $CHAR1.     Shift                   Shift
      TIME                 8        TOD5.       Time                    Time
      WEEKDATE             8        NLDATE10.   WeekDate                WeekDate
      YEARDATE             8        NLDATE10.   YearDate                YearDate
      Tip
      It is easier to add these computed columns to the metadata by using the SAS IT Resource Management client than to include them in the physical table that you register. If you include them when you register the table, then they are all created as “Data Columns” instead of “Computed Columns.” In that case, you must delete the columns and then recreate them as “Computed Columns.”
      From the Columns tab, you can import columns from another source or add new columns.
      Import columns to your staged table.
      1. Click the Import Columns icon.
        Tip
        The Import Columns icon is next to the New button.
      2. In the Import Columns dialog box, expand Adapters. Then drill down to any standard adapter template table.
        Tip
        One of the SAR adapter template tables is a good choice because these tables are small.
      3. Choose the columns that you want to include in your staging tables. For information about available columns, see the preceding list of columns.
      4. Click OK. The columns are added to your table with the appropriate attributes and formulas.
      Add new columns to your staged table.
      1. Click the New button.
      2. You can add data and computed columns to your staged table. For information, see Columns in Staged Tables.
      3. If you add computed columns, the automatically generated macro called _ITMS_computedColumns can be used in your SAS program to calculate the values of these columns.
        CAUTION:
        When creating a computed column, do not base it on another computed column, because the code might not calculate the columns in the correct order.
        For example, if you are computing column A as 1+1, and you want another computed column called B that is A+5, then the expression for B should be (1+1)+5, not A+5.
        Expressions can be simple or complex.
        • A simple expression consists only of the content on the right side of an assignment, without a semicolon.
        • A complex expression contains multiple statements including the necessary semicolons. The following code is an example of a complex expression.
          if A ne 0 then B = 1/A; else B = 0;
          This code prevents division by zero, which causes errors when the code runs.
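        For example, if METRIC3 is defined in metadata as a computed column with the simple expression metric2 * 100, calling the generated macro inside your DATA step produces that assignment for you. A minimal sketch (the table and column names are illustrative):
            data staging.metric1;
               infile "&rawdata" missover;
               input datetime datetime15. machine $ metric1 metric2;
               %_ITMS_computedColumns;   /* expands to: metric3 = metric2 * 100; */
            run;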
  10. Save the job and deploy the SAS code. To do so, right-click the job and select Scheduling, then select Deploy. For more information, see Deploy a Job for Scheduling.
  11. To make your SAS program more robust, you might want to use some of the macro variables and macros that are provided by SAS IT Resource Management. These macros and macro variables enable your program to handle changes in metadata without changing the program. The following topics discuss some of the changes that you can make to your program.
For examples of how macros and macro variables can be used, see Example of Modifying the Code.

Duplicate Checking

A set of macros is provided to check for duplicate data. These macros are used in all the supplied adapters to handle reading and saving duplicate data. If your SAS program finds data in the raw data that has already been read, then the data is handled based on the setting of the Duplicate checking option (on the Duplicate Checking section of the Staging Parameters tab):
  • Discard—delete the duplicate data from the input stream
  • Terminate—stop processing
  • Force—ignore any duplicate data and continue processing
  • Inactive—do not perform any duplicate data checking. This option permits the duplicate data to be processed.
You can use some of the helper macros from the User-Written Staging transformation as well as the standard duplicate-data checking macros to handle duplicate-data checking. For more information about duplicate-data checking macros, see Duplicate-Data-Checking Macros. To implement duplicate checking in your code, perform the following steps.
  1. In the DATA statement of the DATA step that reads the raw data, add this macro call:
    %RMDUPDSN(SOURCE=USR)
    This creates a work data set with duplicate information. For example, suppose that the following is the current data statement:
    data stage.stageTable1
         stage.stageTable2;
    You need to change it to this:
    data stage.stageTable1
         stage.stageTable2
         %RMDUPDSN(SOURCE=USR);
  2. At the beginning of the DATA step, add this macro call:
    %_ITMS_dupInit;
    This initializes the duplicate-data checking process.
  3. On the SET or INFILE statement, be sure to include the END= option. This option should point to the ENDFILE variable specified on the Duplicate Checking section of the Staging Parameters tab of the User-Written Staging Properties dialog box.
  4. In the DATA step, you also need to add this macro call:
    %_ITMS_dupCheck; 
    This macro runs the %RMDUPCHK macro, which performs the duplicate-data checking. The parameters for the %RMDUPCHK macro are specified on the Duplicate Checking section of the Staging Parameters tab of the User-Written Staging Properties dialog box. You can call this macro multiple times in your code if the TIMESTMP column is set in different places.
    Tip
    Put this macro call in the DATA step at any point after the TIMESTMP column has a value. TIMESTMP is a parameter in the %RMDUPCHK macro that specifies the date and time column name. (Typically, this column is DATETIME, but you can call it anything you want.) Specify the column to use on the Staging Parameters tab of the User-Written Staging Properties dialog box.
  5. Following the DATA step that ran the %_ITMS_dupCheck macro, add this macro call:
    %RMDUPUPD;
    This macro completes the duplicate checking process by adding the new date and time stamps from the data that was just read to the duplicate checking tables that are kept in the staging library.
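The following skeleton shows where each of these calls fits. It is a minimal sketch that assumes one staged table, an end-of-file flag named _eof, and DATETIME as the TIMESTMP column; the table, input, and column names are illustrative.
    data stage.stageTable1
         %RMDUPDSN(SOURCE=USR);                 /* step 1: duplicate-data work data set */
       %_ITMS_dupInit;                          /* step 2: initialize duplicate checking */
       infile "&rawdata" end = _eof missover;   /* step 3: END= option */
       input DATETIME datetime15. machine $ metric1;
       %_ITMS_dupCheck;                         /* step 4: run after DATETIME has a value */
       output stage.stageTable1;
    run;
    %RMDUPUPD;                                  /* step 5: update the duplicate-checking tables */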

Future Checking

A set of macros is provided to check for data that has a date and time stamp that is in the future. (This situation probably means that the data is invalid.) These macros assume that your TIMESTMP variable (as described in the previous duplicate checking section) is called DATETIME.
To implement future checking, you can use a User-Written Staging transformation helper macro as well as the standard future processing macros:
  • In the DATA step that you use to read the raw data, add this macro call after the DATETIME column has been set:
    %RMSETFTR;
    Note: You can call this macro multiple times if the DATETIME column is set in multiple places.
  • After the DATA step that ran the %RMSETFTR macro, add this macro call:
    %_ITMS_futureData;
    This macro invokes the %RMFUTURE macro for all your staging tables.
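Together, the two calls look like this in a minimal sketch (the table, input, and column names are illustrative; DATETIME is assumed to be the TIMESTMP column):
    data staging.metric1;
       infile "&rawdata" missover;
       input DATETIME datetime15. machine $ metric1;
       %RMSETFTR;           /* flag records whose DATETIME is in the future */
    run;
    %_ITMS_futureData;      /* invoke %RMFUTURE for all the staging tables */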

Error Checking

If you do error checking in your program, you can take advantage of the abort routine that is provided at the end of the generated code. If you set the macro variable &trans_rc to a value greater than 4, then the program is canceled and its return code is set to 8 (abort cancel 8;).
To set the &trans_rc and the &job_rc error flags outside of a PROC step or DATA step, you can use the %RCSET(ERROR) macro. To set those flags inside a DATA step, you can use the %RCSETDS(ERROR) macro.
The error parameter for these macros is the error code that you want to specify. For example, you can use the following code to set a return code of 8.
%rcSet(8);
You can use the following code to set the return code to the error code from the preceding DATA step or PROC step.
%rcSet(&syserr);
The macro definitions for %RCSET and %RCSETDS can be found at the beginning of the generated code.
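For example, the following sketch shows both forms. (The sort step and the missing-value test are illustrative.)
   proc sort data = staging.metric1;
      by machine DATETIME;
   run;
   %rcSet(&syserr);      /* capture the return code of the preceding step */

   data staging.metric1;
      set staging.metric1;
      if DATETIME = . then do;
         %rcSetDS(8);    /* flag an error from inside the DATA step */
      end;
   run;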

LIBNAME Statements

All the LIBNAME statements for the source and target tables are generated for you. Therefore, you do not need to include LIBNAME code in the user-written code. However, if you are using tables that are not in the metadata as source or target tables, then you should issue your own LIBNAME statements.
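For example, if your program reads a lookup table that is not registered in metadata, assign the library yourself. (The libref, path, and table name here are illustrative.)
   libname mylkup 'c:\UserWritten\Lookups';

   proc sort data = mylkup.machineinfo;
      by machine;
   run;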

Formats

To access standard SAS IT Resource Management formats, set the FMTSEARCH option as follows:
options fmtsearch=(admin.formats sashelp.itms_formats);
Your own formats can be put in the ADMIN.FORMATS catalog or in your own catalog. If you use a different catalog, you need a LIBNAME statement for that library. You then need to set the FMTSEARCH option to point to your formats. For example:
options fmtsearch=(admin.formats mylib.formats sashelp.itms_formats);
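A complete sketch pairs that option with the LIBNAME statement for your catalog library. (The libref and path are illustrative.)
   libname mylib 'c:\UserWritten\Formats';
   options fmtsearch=(admin.formats mylib.formats sashelp.itms_formats);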
The NOFMTERR option will be set for you so that processing does not fail if there is a missing format.

Helper Macros and Macro Variables

A set of macros and macro variables can be created for you in the generated code. On the Staging Parameters tab of the User-Written Staging Properties dialog box, select User-Written to access these options. For information about the macro variables and macros that are available, see Macro Variables and Macros.
Tip
Set all the options on the macro variables to Yes, and then generate the code. You can then see what macros and macro variables are available and determine whether you want to use them. You can then turn off the macros and macro variables that you do not need.
On the User-Written section of the Staging Parameters tab of the User-Written Staging Properties dialog box, you can also specify the creation of the Tabinfo and Varinfo tables. For information about these tables, see Tabinfo and Varinfo.

Example of Modifying the Code

The following two examples show a simple user-written staging program before it is modified, and then after it is modified to include the helper macros.
Original Program
/* ********************************************************** */
/* Name: UserWrittenStaging                                   */
/* ---------------------------------------------------------- */
/* Description:                                               */
/* Reads the rawdata from the rawdata.txt file that is in     */
/* this format:                                               */
/* ddMONyyyy:hh:mm machineName metric1 metric2                */
/* ddMONyyyy:hh:mm machineName metric1 metric2                */
/* <etc.>                                                      */
/* ********************************************************** */
   libname staging 'c:\UserWritten\Staging';
   data staging.metric1 (keep = datetime machine 
                                metric1 lstpdate)
        staging.metric2 (keep = datetime machine
                                metric3 lstpdate); 
        attrib datetime label = "DateTime" format = datetime18.;
        attrib machine  label = "Machine"  length = $20;
        attrib metric1  label = "Metric1"  format = NLNUM16.;
        attrib metric3  label = "Metric2*100"  format = NLNUM16.;
        attrib lstpdate label = "LastProcessDate" format = NLDATM.;
        attrib metric2  format = NLNUM16.;
        lstpdate = datetime();   /* needed for aggregation */
        infile "c:\UserWritten\rawdata.txt" missover;
        input datetime datetime15. machine metric1 metric2;
        metric3 = metric2 * 100;
        output staging.metric1 
               staging.metric2;

   run;
 
Here is what the program might look like after adding the helper macros.
Modified Program
/* ************************************************************** */
/* Name: UserWrittenStaging                                       */
/* -------------------------------------------------------------- */
/* Description:                                                   */
/* Reads the rawdata from the rawdata.txt file that is in         */
/* this format:                                                   */
/* ddMONyyyy:hh:mm machineName metric1 metric2                    */
/* ddMONyyyy:hh:mm machineName metric1 metric2                    */
/* <etc.>                                                         */
/* ************************************************************** */
/* ***PUT CODE INTO A MACRO IN ORDER TO USE %DO STATEMENTS***     */
  %macro readRawdata;                  
/*  libname staging 'c:\UserWritten\Staging';                     */
      data    
/* ***GENERATE TABLE NAMES WITH OPTIONS AND KEEP LISTS***         */
          %do i = 1 %to &numTargets; 
             &&target&i (%_ITMS_tableOptions(targetTableNum=&i,
                                             generateTableName=NO)
                         keep = %_ITMS_columnList(targetTableNum=&i))
          %end;  
/* ***DUPLICATE CHECKING OUTPUT TABLE***                         */
          %RMDUPDSN(SOURCE=USR)  
/*      staging.metric1 (keep = datetime machine                 */
/*                              metric1 lstpdate)                */
/*      staging.metric2 (keep = datetime machine                 */
/*                              metric2 metric3 lstpdate)        */
          ;
/* ***INITIALIZE DUPLICATE CHECKING***                           */
  %_ITMS_dupInit;        
/* ***GENERATE ATTRIB STATEMENTS FOR COLUMNS IN OUTPUT TABLES*** */
  %_ITMS_attrib();        
/*  attrib datetime label = "DateTime" format = datetime18.;     */
/*  attrib machine  label = "Machine"  length = $20;             */ 
/*  attrib metric1  label = "Metric1"  format = NLNUM16.;        */ 
/*  attrib metric3  label = "Metric2*100" format = NLNUM16.;     */ 
/*  attrib lstpdate label = "LastProcessDate" format = NLDATM.;  */
    attrib metric2  format = NLNUM16.;     
     lstpdate = datetime();   /* needed for aggregation */  
/*    infile "c:\UserWritten\rawdata.txt" missover;              */
/* ***USE &RAWDATA AND INCLUDE AN END-OF-FILE FLAG FOR***        */
/* ***DUPLICATE CHECKING***                                      */
     infile "&rawdata" end = _eof missover;
     input datetime datetime15. machine metric1 metric2;
/* ***RUN DUPLICATE CHECKING***                                  */
   %_ITMS_dupCheck;          
/* ***RUN FUTURE DATA CHECKING***                                */
   %RMSETFTR;        
/* ***CREATE COMPUTED COLUMNS***                                 */
   %_ITMS_computedColumns;  
/*    metric3 = metric2 * 100;                                   */
/* ***OUTPUT DATA TO TARGET TABLES***                            */
   output     
     %do i = 1 %to &numTargets;
         &&target&i
     %end;         
     ;    
/*    output staging.metric1                                     */
/*           staging.metric2;                                    */
   run;     

/* ***UPDATE DUPLICATE CHECKING FILES***                         */
   %RMDUPUPD;     
/* ***HANDLE FUTURE DATA***                                      */
   %_ITMS_futureData;  
%mend readRawdata; 
%readRawdata; 

Aggregating Data Created by User-Written Staging Transformations

After the data is staged, it is ready to be aggregated. The data in the staged tables that you created can be rolled up into aggregation tables. These tables are then available for analysis.
Creating and working with aggregations is described in detail in chapter 8, “Aggregating the Data.” For information, see About Aggregation Tables.
Use the following tips to keep your user-written adapter consistent with the supplied adapters.
  • In the IT data mart that you created for your user-written adapter, create a folder called Domain Categories. Within that folder, create one or more folders for your domain categories. (A domain category is a way of grouping different types of data such as System, Disk, or Network data.) Specify a name for these categories that makes sense for your application.
    Many of the supplied adapters have a domain category for each of the staged tables that were created. If you do not have domain categories, create a single folder that has a descriptive name.
  • Create an aggregation job for each domain category. Specify a job name using this format: <domain category> Aggregation.
  • In the supplied adapters, the data in each staged table is used as input to multiple aggregations. The data is typically summarized into different time periods. Most supplied adapters include the following time period aggregations.
    Supplied Time Period Aggregations
    Aggregation Table Prefix   Date and Time Class Columns
    Day                        DAYDATE
    DayHour                    DAYDATE HOUR
    DayShift                   DAYDATE SHIFT
    DayShiftHour               DAYDATE SHIFT HOUR
    Week                       WEEKDATE
    WeekHour                   WEEKDATE HOUR
    WeekShift                  WEEKDATE SHIFT
    WeekShiftHour              WEEKDATE SHIFT HOUR
    Month                      MONTHDATE
    MonthHour                  MONTHDATE HOUR
    MonthShift                 MONTHDATE SHIFT
    MonthShiftHour             MONTHDATE SHIFT HOUR
    In addition, many adapters supply the following aggregations:
    Other Supplied Aggregations
    Aggregation Table Prefix   Description
    Detail                     This data is not summarized. It is left at the same
                               level as the staging data.
    KeyMetrics                 Generally, this data is summarized at the DAYDATE and
                               HOUR level. It also includes a limited number of
                               metrics. This table usually keeps data for a longer
                               period than the other aggregations, so that it can be
                               used in trending and forecasting.
    Tip
    For the best results, use these standard prefixes on your aggregation tables. The tables will then conform to the standard that is used for the supplied adapters.
  • Consider using rank columns if needed.
    Rank columns can be used to create “Top N” reports and ranked listings. They also provide a way to filter the data so that only data from a certain time period (relative to the data that has been collected) is processed in the reports. For example, if you want the data for yesterday, you can select all rows that have a value of 1 for the rank column called DayDateRank01. (A sketch of this type of filter follows this list.) Some standard date and time rank columns are listed in the following table.
    Standard Date and Time Rank Columns
    Summarization Level of Aggregation Tables   Staging Column   Rank Column       Rank Column Label
    Day level and more granular                 DayDate          DayDateRank01     DayDateDescRank
    Week level and more granular                WeekDate         WeekDateRank01    WeekDateDescRank
    Month level and more granular               MonthDate        MonthDateRank01   MonthDateDescRank
  • Consider using join columns if needed.
    Join columns are usually rank columns that are copied from another aggregation table. These columns can be used to filter the data for Top 10 reports. For example, a rank column in the DayHour aggregation table is the ranking of that column for each day and hour.
    You might want to filter the data by the top 10 for the day, even if you are showing the data by hour. If so, you need to create a join column in the DayHour table in order to get the ranking column from the Day table. By default, the supplied aggregations have join columns for all the non-date and time ranking columns. However, in the user-written adapter, you can create only the ones that you will need.
    The naming convention for join columns is the following: X_rankColumn, where rankColumn is the name of the rank column from the original table and X is one of the following values:
    • D if the rank column is from the Day table
    • W if the rank column is from the Week table
    • M if the rank column is from the Month table

Creating Information Maps for User-Written Adapters

The supplied adapters contain jobs that, when run, create information maps for each of the aggregation tables that were generated. You can also create information map jobs for your user-written aggregations. An information map enables you to access your data with SAS Web Report Studio, SAS Enterprise Guide, and other SAS reporting tools.
Note: In SAS Enterprise Guide, you can also access the data directly from the SAS data tables.
Creating and working with information maps is described in detail in chapter 10, “Information Maps.” For more information, see About Information Maps.
The following tips can be used to keep your user-written adapter consistent with the supplied adapters.
  • Create an information map job for each domain category. Specify a name for the job such as <domain category> Information Map.
  • Each job should have an Information Map transformation for each aggregation in the domain category. The following diagram shows the resulting Information Map transformations for the job.
    [Figure: Information Map Transformations]
  • Use the name of the aggregation table as the name of your information map.
  • Information map filters are helpful when using information maps for reporting. Some standard filters that are used in the supplied adapters are shown in the following tables. The filters are listed by the columns on which the aggregation table is summarized: DayDate, WeekDate, and MonthDate.
    Standard Filters Used for Supplied Adapters Based on the DayDate Column
    Filter Name    Filter Expression                                  Description
    LastDay        <<aggTableName.DAYDATERANK01>> = 1                 DayDateDescRank 1
    Last3Days      <<aggTableName.DAYDATERANK01>> between 1 and 3     DayDateDescRank 1 - 3
    RollingWeek    <<aggTableName.DAYDATERANK01>> between 1 and 7     DayDateDescRank 1 - 7
    RollingMonth   <<aggTableName.DAYDATERANK01>> between 1 and 31    DayDateDescRank 1 - 31
    Last14Days     <<aggTableName.DAYDATERANK01>> between 1 and 14    DayDateDescRank 1 - 14
    Last90Days     <<aggTableName.DAYDATERANK01>> between 1 and 90    DayDateDescRank 1 - 90
    Standard Filters Used for Supplied Adapters Based on the WeekDate Column
    Filter Name    Filter Expression                                  Description
    ThisWeek       <<aggTableName.WEEKDATERANK01>> = 1                WeekDateDescRank 1
    LastWeek       <<aggTableName.WEEKDATERANK01>> = 2                WeekDateDescRank 2
    Last2Weeks     <<aggTableName.WEEKDATERANK01>> between 2 and 3    WeekDateDescRank 2 - 3
    Last4Weeks     <<aggTableName.WEEKDATERANK01>> between 2 and 5    WeekDateDescRank 2 - 5
    Last12Weeks    <<aggTableName.WEEKDATERANK01>> between 2 and 13   WeekDateDescRank 2 - 13
    Standard Filters Used for Supplied Adapters Based on the MonthDate Column
    Filter Name    Filter Expression                                   Description
    ThisMonth      <<aggTableName.MONTHDATERANK01>> = 1                MonthDateDescRank 1
    LastMonth      <<aggTableName.MONTHDATERANK01>> = 2                MonthDateDescRank 2
    Last2Months    <<aggTableName.MONTHDATERANK01>> between 2 and 3    MonthDateDescRank 2 - 3
    Last4Months    <<aggTableName.MONTHDATERANK01>> between 2 and 5    MonthDateDescRank 2 - 5
    Last13Months   <<aggTableName.MONTHDATERANK01>> between 2 and 14   MonthDateDescRank 2 - 14
    You can also make filters for the other ranked and joined columns in your data. For the supplied adapters, there are filters for all the rank and join columns in the aggregation. These filters enable you to create Top 10 reports. To generate this type of report, specify “between 1 and 10” in the filter expression.

Reporting on Data Created with User-Written Staging Transformations

To create reports that can be displayed in the Gallery Manager, you must use SAS Enterprise Guide. For information about creating and using reports within the SAS IT Resource Management 3.2 framework, see the SAS IT Resource Management 3.2: Reporting Guide. This document can be accessed from the http://support.sas.com/itrm Web site.