IT Service Vision 2.5 Collector Updates


Contents:

Enhanced UNIX Accounting Support

Updates to HP OpenView Performance Agent (formerly HP MeasureWare)

C2RATE Interpretation Type Update

NTSMF Dictionary Updates

Creating and Installing a Collector Package

Weblog enhancements

SMF Data Processing on UNIX and Windows

Updated SAP R/3 Collector documentation

IT Service Vision 2.4 Updates

IT Service Vision 2.3 Updates

IT Service Vision 2.2.1 Updates

IT Service Vision 2.2 Updates


Enhanced UNIX Accounting Support

IMPORTANT, PLEASE READ: The Enhanced UNIX Accounting Support was written primarily for use as a foundation for UNIX accounting. Using this data for performance analysis or reporting requires a careful understanding of how the data is summarized and stored in the various levels of the PDB. See Discussion of Summarization for more information.

Introduction

The original UNIX accounting support (ACCTON) implemented by IT Service Vision had several restrictions; these have been removed by the new Enhanced UNIX Accounting support (ACCUNX).

Tested/Supported UNIX Accounting Formats

Although the metrics recorded by UNIX Accounting are similar across all flavors of UNIX, the layout of the binary file to which the data is written can vary across UNIX platforms and across releases of the same platform. For this reason, the format of the UNIX Accounting file is determined using the Operating System/Operating System Release pair obtained from the uname command.

The following table lists the Operating System/Operating System Release pairs for which the UNIX Accounting binary files have been tested and are supported by ITSV. If your OS/OS Release pair is not listed here, refer to the section Adding Support for OS / OS Release pairs not listed.

Operating System    Operating System Release
HP-UX               B.10.20
HP-UX               B.11.00
SunOS               5.7
OSF1                V4.0
AIX                 3

How To Process UNIX Accounting data

This section is divided into three parts: Overview, Quick Reference, and Details. I recommend reading the Overview and then the Quick Reference, as these will direct you to the appropriate parts of the Details section.

Overview

The approach you use to process your UNIX accounting data will depend on several factors, such as data volumes, the number of data files from unique machines, and the number of unique passwd and group files.

Passwd and Group File Formats

Your first decision will be regarding how to build the passwd and group formats that the IT Service Vision UNIX Accounting processing uses to map user and group numbers to user and group names.

The simplest approach is to include the passwd and group information with each data file, which can be done using the supplied itsvacct shell script. This is the approach to use when you have either a small number of files to process or each machine has its own unique passwd and group file. When the data file is processed, the passwd and group formats are built dynamically. The drawbacks to this approach are that processing takes longer due to having to build the formats, and you have to transfer larger data files.

The second approach is to pre-build the formats on a per-DOMAIN basis. For example, if you have two domains that each have unique passwd and group files common to all the machines in their respective domain, you can use the itsvfmt shell script and the %CPACCFMT macro to pre-build the format information. Once built, %CPPROCES will detect which domain a UNIX accounting file comes from and will apply the appropriate format. The advantages of this approach are that the format building is not done as part of the processing and the passwd and group data is not included in each of the data files. The drawback is that you will have to ensure that the formats are kept up to date.

The final approach is really only an option if the machine running the ITSV Server also has the passwd and group files stored locally, and it is these passwd and group files that should be used for all the files being processed. If no passwd or group information is pre-built and the passwd and group data is not included in the data file, then %CPPROCES looks in the default location locally for the information.

You should ensure that the OS / OS Release value pair is supported. To do this, use the %CPACCUTL macro. If your OS / OS Release pair is not listed as supported, it will have to be added. See "Adding Support for OS / OS Release pairs not listed" for further information, or contact SAS Technical Support with the information listed in that section.
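A minimal sketch of such a check, assuming the LIST function accepts LIBREF and _RC parameters in the same style as the ADD example shown later in this section:

   %CPACCUTL(LIBREF=PGMLIB
            ,FUNCTION=LIST
            ,_RC=retcode);

    %put Return code is &retcode;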

Preparing Your UNIX Accounting Binary file

Now that you have decided how the passwd and group information is going to be handled, you must run the itsvacct shell script to prepare the UNIX accounting data for processing. This shell script should be run on the machine on which the UNIX accounting file was created, as it uses the uname system command to create a header that contains the OS ID, OS Release information, and Domain name (if necessary). The passwd and group information will also be added if necessary.

Once this file has been created for each UNIX accounting file, the files should all be placed into a common directory (a PDS or sequential file on MVS; UNIX System Services is also supported), which will be pointed to in the %CPPROCES macro.

Processing UNIX Accounting Data

By default, running %CPPROCES will process all the files in the directory referenced by the RAWDATA= parameter. If any file is determined to be invalid (e.g. it does not have valid header records), it is skipped, a message is written to the SAS log, and the remaining files are processed. The data is summarized at the hour level (this can be altered) prior to being placed into the DETAIL level of the PDB, and processing uses SAS data set views internally, which reduces I/O and space requirements (it is also possible to NOT use views, which can have advantages in terms of handling certain errors).

Quick Reference To UNIX Accounting Support

  1. (Optional) Run the itsvfmt shell script.
  2. (Required) Run the itsvacct script.
  3. (Optional) Determine whether your OS / OS Release pairs are supported.
  4. (Optional) Binary FTP the files to the processing machine.
  5. (Optional) Pre-build the passwd and group formats.
  6. (Required) Run %CPPROCES.

Detailed Reference of UNIX Accounting Support

Passwd and Group File Formats

The original UNIX accounting support only allowed one data file to be processed at a time, which made associating it with its appropriate passwd and group files relatively straightforward.

The new UNIX accounting support allows you to process multiple data files, some of which could use the same passwd and group files while others may use different ones. The challenge here is to provide the necessary functionality to allow the passwd/group file to accounting file mapping to occur automatically.

The approach you take will depend on the characteristics of your environment. Your options are:

Include passwd and group information with data file

If you supply the itsvacct shell script with the UNIX accounting binary file and the location of the passwd and group files, then all this information is packaged into one file. This file should then be transferred (in Binary mode for ftp) to a common directory (PDS or sequential file for MVS) for processing.

MVS ONLY: USS (UNIX System Services) is supported for this collector, and the files should be treated the same as UNIX files. If you are processing a single file, you can ftp (binary mode) the data to MVS into a sequential file; if you are processing multiple files, place each file in a unique member within a PDS. The following tables detail the data set attributes required:

Sequential File:
  DSORG     PS
  RECFM     F
  LRECL     32760
  BLKSIZE   32760

PDS:
  DSORG     PO
  RECFM     F
  LRECL     32760
  BLKSIZE   32760

If the passwd and group data is contained in the file then %CPPROCES will use this information to dynamically construct the formats and use them for processing this file only.

The advantages of this approach are that each data file is self contained and the passwd and group formats are refreshed for each process.

The disadvantages of this approach are that extra processing time is required to build the formats, and the raw data files will be larger due to the format information being passed. If the same passwd and group files are used across several machines then that information is duplicated and is a waste of resources (both time and space).

Pre-build passwd and group formats

If you have a large number of machines that use common passwd and group files, this approach may be the most efficient. The only drawback is that you will have to refresh the formats to ensure that they are up to date.

This approach assumes that a passwd and group file are common for all UNIX accounting files created on machines in that domain. When a UNIX accounting data file is processed the domain value is obtained from the header record and a format is applied. The result of this format is a value that represents the suffix for the passwd and group formats that were pre-built for this domain. If these formats exist they are used in processing.

If you have a few machines in that domain that do NOT use the same passwd and group files then make sure that the passwd and group information for those machines is included in the UNIX accounting file as this will override the pre-built format.

To pre-build the format use the itsvfmt shell script. Internally the shell script uses the domainname command to set the domain value in the header. This value can be overridden using the -d switch if the domainname command does not return a valid value.

The itsvfmt shell script will output an ASCII file to standard out and this file should be transferred (ASCII mode for ftp) to the IT Service Vision Server machine. Start up ITSV with the appropriate UNIX accounting PDB active and use the %CPACCFMT macro to store the formats in an appropriate location.

The passwd and group formats for any UNIX accounting data file coming from this domain will now be ready for use in the %CPPROCES macro. The only maintenance required is to refresh these formats whenever the passwd and/or group files are updated.

Default to passwd and Group Files on local machine

Finally, if you do not pre-build the formats and you do not include the passwd and group information in the data file, %CPPROCES will use the local copies of the passwd and group files located in /etc/passwd and /etc/group respectively. If your local files do not reside in these locations, you can specify the locations of the files in the CPACCPWD and/or CPACCGRP macro variables prior to running %CPPROCES.

Note: If processing on PC or MVS, there is no default location to look for the passwd and group files; therefore you will have to provide the locations via the CPACCPWD and CPACCGRP macro variables.
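For example, a hypothetical PC invocation might look like this (the paths are illustrative only):

       %let cpaccpwd=C:\acct\passwd;   /* location of the passwd file */
       %let cpaccgrp=C:\acct\group;    /* location of the group file  */
       %CPPROCES(,COLLECTR=ACCUNX
                 ,RAWDATA=C:\acct\data\
                 ,TOOLNM=SASDS
                 ,_RC=retcode);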

Using a Combination of the above options

If a data file contains the passwd and group information, it will be used to build the formats for that file. If there is no passwd and group information in the data file then the domain value will be used to determine if the format has been pre-built. If it has, then it will be used. Finally, if both these methods fail, the local passwd and group files will be used to build the formats.

Bearing this in mind you can mix the methods by which formats are built.

Preparing the UNIX accounting data

Typically, the binary UNIX accounting file is located in /var/adm/pacct (or pacct#, where # is a number). In order for the ITSV process to obtain the necessary information, such as DOMAIN and MACHINE NAME, you must use the itsvacct shell script to prepare the file.

The itsvacct shell script places a header record on a copy of the pacct file and, if necessary, includes the passwd and group information too.

The standard output of the itsvacct shell script should be redirected to a file and transferred to the platform where it is going to be processed (in binary mode if using ftp).

On UNIX and NT, all the files should be placed in one directory, and %CPPROCES will process all the files in that directory. On MVS you should place each file in its own PDS member or, if there is only one file, it can be placed in a sequential file. The MVS data set attributes are as follows:

Sequential File:
  DSORG     PS
  RECFM     F
  LRECL     32760
  BLKSIZE   32760

PDS:
  DSORG     PO
  RECFM     F
  LRECL     32760
  BLKSIZE   32760

Processing the UNIX accounting data

The following code is an example of the %CPPROCES macro invocation for processing UNIX accounting data.

       %let cpsumdur=15;                      (1)
       %let cpusevew=Y;                       (2)
       %let cpaccpwd=/etc/mypasswd;           (3)
       %let cpaccgrp=/etc/mygroup;            (4)
       %CPPROCES(,                            (5)
                 COLLECTR=ACCUNX,             (6)
                 RAWDATA=/dataLocation/,      (7)
                 TOOLNM=SASDS,                (8)
                 EXITSRC='ADMIN.EXITS',       (9)
                 DUPMODE=DISCARD,             (10)
                 _RC=retcode);                (11)
  1. CPSUMDUR macro variable (optional, defaults to 3600 seconds) - Typically, you will not need to specify this macro variable. It specifies the interval at which the incoming data will be summarized. The default value of 3600 means one-hour intervals; other valid values are 1800, 900, 600, and 300 (in seconds), or 30, 15, 10, and 5 (in minutes), and finally 0 and . (missing value). If CPSUMDUR is set to 0, the summarization code is still used and any observations that have the same BY values are summarized into one observation. If CPSUMDUR is set to missing (i.e. %let cpsumdur=.;), no summarization is performed at all; this will increase the volume of data dramatically. Any value other than those listed will cause summarization to default to 3600.
  2. CPUSEVEW macro variable (optional, defaults to Y) - Typically, you will not need to specify this macro variable. By default, it is set to Y(es), which means %CPPROCES will use SAS data set views, making processing much more efficient. If you set this value to N(o), interim SAS data sets will be created; although useful for debugging purposes, this is not recommended.
  3. CPACCPWD macro variable (optional, defaults to /etc/passwd) - If the passwd and group information is not supplied via the data file or through pre-built formats, %CPPROCES will use the local copy of /etc/passwd and /etc/group if possible. If your local passwd and group files are not in the expected location, you can override this by specifying the locations with these macro variables.
  4. CPACCGRP macro variable (optional, defaults to /etc/group) - See CPACCPWD.
  5. %CPPROCES macro invocation - For more information on this macro, please refer to the macro reference. In the example above, the first comma is a placeholder for the first parameter, which is the list of tables to process. Because I have left this blank, %CPPROCES will process all tables with a matching Collector value, adding them to the PDB if they are not already present.
  6. COLLECTR= - must be set to ACCUNX.
  7. RAWDATA - If the data location is a directory (NT/UNIX) or a PDS (MVS) then all the files or PDS members in the directory/PDS will be processed. Alternatively, if you specify a single file then only that single file will be processed.
  8. TOOLNM - must be set to SASDS.
  9. EXITSRC (optional) - Please refer to the Special Features section later for more information on the UNIX accounting exit points.
  10. DUPMODE (optional) - Please refer to the ITSV documentation for more information on this parameter.
  11. _RC (optional) - Please refer to the ITSV documentation for more information on this parameter.

Once processing has completed, you should review the SAS log. By design, if an invalid file is encountered, it is skipped and processing continues. You may want to fix such a file and re-process just that file at a later date.
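Because RAWDATA= also accepts a single file (see the parameter notes above), a sketch for re-processing one corrected file might look like this (the file name is illustrative):

       %CPPROCES(,COLLECTR=ACCUNX
                 ,RAWDATA=/dataLocation/itsv.pacct3
                 ,TOOLNM=SASDS
                 ,_RC=retcode);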

Special Features

UNIX Accounting User Exits

IT Service Vision already has 'Process Exits' that can be used to enhance the processing for all data sources that use the %CPPROCES macro. The UNIX Accounting User Exits have been added to allow access to the UNIX accounting specific code. One example that is given below is the use of exits to output a SAS data set of the data prior to it being summarized.

Warnings and Disclaimers

In any situation where you have the ability to affect and alter the intended flow and execution of code, there exists the possibility of error. As such, SAS Institute makes the following warnings and disclaimers with regard to the use of exits with IT Service Vision.

  1. The example code shown in these "exit" files, while accurate, is subject to change. Any changes to this code that affect user exits will be reported to you via the normal usage notes packaged with maintenance.

  2. There is simply no way that IT Service Vision can anticipate or be responsible for the processing that occurs within an exit. As such, exit code that you provide, if any, is simply included at the documented points. Any generation of reports during the actual execution of the exits is left up to you.

  3. Additionally, it is possible to put code in exits that causes the data that will be stored in the IT Service Vision PDB to be invalid. SAS Institute cannot be responsible for invalid PDB data that is caused by user-written exits.

The bottom line is this: use exits with great caution.

Location of UNIX Accounting Exit points

    %macro endian;
    %global _pre;
    %global _fmt;
    %let _pre=%str();
    %let _fmt=%str();
    %if &sysscp eq os %then
      %do;
         %let _pre=%str(put%();
         %let _fmt=%str(,$ascii2.%));
      %end;
    %mend;
    %endian;

    #1

    DATA
      WORK.A1 ( KEEP=
    ACCCOMM
    ACCETM
    ACCGRP
    ACCIO
    ACCKMIN
    ACCMEMA
    ACCRW
    ACCSTM
    ACCUSR
    ACCUTM
    DATETIME
    DOMAIN
    HOUR
    LSTPDATE
    MACHINE
    OBSCNT
    SHIFT
    RNDDATM
       #2 ) #3
      / VIEW=WORK.A1
    ;
       ATTRIB ACCBTM LENGTH=6 FORMAT=DATETIME21.2
              LABEL='Begin time' ;
       ATTRIB ACCCOMM LENGTH=$8
              LABEL='Cmd name' ;
       ATTRIB ACCETM LENGTH=6 FORMAT=TIME11.2
              LABEL='Elapsed time' ;
       ATTRIB ACCFLAG LENGTH=$1 FORMAT=$HEX2.
              LABEL='Acct flag' ;
       ATTRIB ACCGID LENGTH=6
              LABEL='GroupID' ;
       ATTRIB ACCGRP LENGTH=$16
              LABEL='Group' ;
       ATTRIB ACCIO LENGTH=6
              LABEL='Chars transferred' ;
       ATTRIB ACCKMIN LENGTH=6
              LABEL='Memory Kcore mins' ;
       ATTRIB ACCMEM LENGTH=6
              LABEL='Memory clicks' ;
       ATTRIB ACCMEMA LENGTH=6
              LABEL='Avg memory' ;
       ATTRIB ACCRW LENGTH=6
              LABEL='Blks read/written' ;
       ATTRIB ACCSTAT LENGTH=$1 FORMAT=$HEX2.
              LABEL='Status' ;
       ATTRIB ACCSTM LENGTH=6 FORMAT=TIME11.2
              LABEL='System time' ;
       ATTRIB ACCTTY LENGTH=6 FORMAT=HEX4.
              LABEL='Cntrl tty' ;
       ATTRIB ACCUID LENGTH=6
              LABEL='UserID' ;
       ATTRIB ACCUSR LENGTH=$16
              LABEL='User' ;
       ATTRIB ACCUTM LENGTH=6 FORMAT=TIME11.2
              LABEL='User time' ;
       ATTRIB DATETIME LENGTH=6 FORMAT=DATETIME21.2
              LABEL='Datetime' ;
       ATTRIB DOMAIN LENGTH=$16
              LABEL='Domain name' ;
       ATTRIB HOUR LENGTH=3
              LABEL='Hour' ;
       ATTRIB LSTPDATE LENGTH=8 FORMAT=DATETIME21.2
              LABEL='Last process date' ;
       ATTRIB MACHINE LENGTH=$32
              LABEL='Machine' ;
       ATTRIB OBSCNT LENGTH=8 FORMAT=BEST12.
              LABEL='Count of observations in Summary' ;
       ATTRIB SHIFT LENGTH=$1
              LABEL='Shift' ;
      ATTRIB RNDDATM LENGTH=6 FORMAT=DATETIME21.2
             LABEL='Datetime';
      RETAIN _DOABORT _EOFONLY 0;
      RETAIN DATETIME RNDDATM 0;
      RETAIN FLAG INDEX P 0 LOOP 1;
      RETAIN SEP 'ITSVHEADEND';
      LENGTH LINE $1;
      RETAIN MACHINE "node01";
      RETAIN DOMAIN "DOMAINXX";
      RETAIN GMTDEV -18000;

      #4

       INFILE a1 RECFM=N END=_LAST EOF=EOFLABEL;
       ACCMEMA=0; /* Initialize */
       DO WHILE (LOOP);
         P=P+1;
         INPUT @P LINE $ascii1.;
         IF FLAG OR LINE='I' THEN
           DO;
             IF LINE = 'I' THEN
               DO;
                 FLAG=1;
                 INDEX=1;
               END;
             ELSE
               DO;
                 INDEX=INDEX+1;
                 IF SUBSTR(SEP,INDEX,1) NE LINE THEN
                   DO;
                     FLAG=0;
                     INDEX=0;
                   END;
               END;
             IF INDEX = 11 THEN
               DO;
                 LOOP=0;
                 INPUT @P+2;
               END;
           END;
       END; /* DO WHILE */

input accflag $ascii2.
      accstat $ascii2.
      accuid s370fpib4.
      accgid s370fpib4.
      accprm s370fpib4.
      acctty s370fpib4.
      datetime s370fpib4.
      autm $ascii2.
      astm $ascii2.
      aetm $ascii2.
      amem $ascii2.
      aio  $ascii2.
      arw  $ascii2.
      acccomm  $ascii8.;

      #5

      IF ACCCOMM EQ '0000000000000000'X THEN          --*
        DO;                                             |
          PUT _ALL_;                                    |
          IF ACCUTM  EQ . AND                        MVS ONLY
             ACCSTM  EQ . AND                           |
             ACCETM  EQ . THEN STOP;                    |
          ELSE DELETE;                                  |
       END;                                           --*

       ACCSTAT= &_PRE.ACCSTAT&_FMT;
       ACCUSR = PUT(ACCUID,P&FMTSUFF..);
       IF ACCUSR = "UNKNOWN" THEN
          ACCUSR = LEFT(PUT(ACCUID,10.));
       ACCGRP = PUT(ACCGID,G&FMTSUFF..);
       IF ACCGRP = "UNKNOWN" THEN
          ACCGRP = LEFT(PUT(ACCGID,10.));
       DATETIME = DATETIME + &_DATCON + GMTDEV;
       ACCUTM = INPUT(&_PRE.AUTM&_FMT.,BITS13.3) * 8**INPUT(&_PRE.AUTM&_FMT.,BITS3.0);
       ACCMEMA=ACCMEMA + ACCUTM;
       ACCUTM = ACCUTM/&ctick;
       ACCSTM = (INPUT(&_PRE.ASTM&_FMT.,BITS13.3) * 8**INPUT(&_PRE.ASTM&_FMT.,BITS3.0));
       ACCMEMA=ACCMEMA + ACCSTM;
       ACCSTM = ACCSTM/&ctick;
       ACCETM = (INPUT(&_PRE.AETM&_FMT.,BITS13.3) * 8**INPUT(&_PRE.AETM&_FMT.,BITS3.0))/&ctick;
       ACCMEM = (INPUT(&_PRE.AMEM&_FMT.,BITS13.3) * 8**INPUT(&_PRE.AMEM&_FMT.,BITS3.0));
       ACCMEMA = ACCMEM*&psize/ MAX(ACCMEMA,1);
       ACCKMIN = (ACCMEM*&psize) / (60 * &ctick);
       ACCIO = (INPUT(&_PRE.AIO&_FMT.,BITS13.3) * 8**INPUT(&_PRE.AIO&_FMT.,BITS3.0));
       ACCRW = (INPUT(&_PRE.ARW&_FMT.,BITS13.3) * 8**INPUT(&_PRE.ARW&_FMT.,BITS3.0));
       ACCCOMM= SCAN(ACCCOMM,1,'00'X);

       intrvl = int((datetime - intnx('hour',datetime,0))/3600);
       rnddatm =  intnx('hour',datetime,0)+3600*intrvl;

       #6

       OUTPUT WORK.A1 #7
  ;
return;
EOFLABEL:

       #8

RUN;


       #9


        DATA COLLECT.ACCMSTR #10                       --*
                             / VIEW=COLLECT.ACCMSTR;     |
          SET                                            |
          WORK.A1 END=_LAST                              |
        ;                                              CPUSEVEW='Y' ONLY
                                                         |
        #11                                              |
        OUTPUT COLLECT.ACCMSTR #12 ;                     |
        RUN;                                           --*


        PROC APPEND BASE=COLLECT.ACCMSTR               --*
                    NEW=WORK.A1 FORCE;                 CPUSEVEW='N' ONLY
        RUN;                                           --*

        #13

Exit Point Descriptions

Each entry below gives the exit number used in the code example, the exit point name, its placement, its purpose, and its frequency of execution.

#1 acct010 - Placement: Directly before the SAS data step that processes the raw data file. Purpose: Enables open code processing. Frequency: Once per input file.
#2 acct020 - Placement: At the end of the KEEP list, before the closing parenthesis. Purpose: Allows the KEEP list to be modified. Frequency: Once per input file.
#3 acct030 - Placement: Directly after the KEEP list closing parenthesis and before the closing semicolon (also before the / VIEW= statement if views are being created). Purpose: Allows the setting of data set options as well as including extra output data sets. Frequency: Once per input file.
#4 acct040 - Placement: Directly after the ATTRIB and RETAIN statements. Purpose: Allows open data step code to be inserted. Frequency: Once per input record.
#5 acct050 - Placement: Directly after the INPUT statement. Purpose: Allows the variables read in to be examined. Frequency: Once per input record.
#6 acct060 - Placement: Directly before the OUTPUT statement. Purpose: Allows open data step code to be inserted. Frequency: Once per input record.
#7 acct070 - Placement: Directly before the closing semicolon on the OUTPUT statement. Purpose: Enables user OUTPUT statements. Frequency: Once per input file.
#8 acct080 - Placement: Directly after the EOFLABEL label. Purpose: Allows open data step code when end of file is encountered. Frequency: Once per input file.
#9 acct090 - Placement: Directly after the RUN statement. Purpose: Enables open code processing. Frequency: Once per input file.
#10 acct100 - Placement: Before the "/ VIEW=" on the DATA statement. Purpose: Allows the setting of data set options as well as including extra output data sets. Frequency: Once per process (only used if the CPUSEVEW macro variable is set to "Y", the default).
#11 acct110 - Placement: Directly after the SET statement. Purpose: Allows open data step code to be inserted. Frequency: Once per process (only used if the CPUSEVEW macro variable is set to "Y", the default).
#12 acct120 - Placement: Part of the OUTPUT statement. Purpose: Allows additional output data sets to be specified. Frequency: Once per process.
#13 acct130 - Placement: After the COLLECT data set has been created. Purpose: Enables open code processing. Frequency: Once per process.

How to Use UNIX Accounting Exit Points

Please familiarize yourself with the information provided in "Shared Appendix 8: Exits for the Process Task -- General Information" as this information also applies to the UNIX Accounting exit points. The only differences are as follows:

Exit Point Example: Output a Detail SAS Data Set

The enhanced UNIX Accounting support automatically summarizes the input data in order to reduce the amount of data that is stored at the DETAIL level of the PDB. Sometimes it may be necessary to retain a copy of the actual data prior to it being summarized.

How this is done will depend on your goals and on the value of the CPUSEVEW macro variable. For this example, we assume the default value of CPUSEVEW='Y' and will therefore use exit point acct100.

In sasuser.exits.acct100.source I saved the following code, to place an extra output data set on the DATA statement where the view COLLECT.ACCMSTR is built:

  %macro acct100;
    work.alldata
  %mend;
  %acct100
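
Note that in the generated code shown earlier, the OUTPUT statement names COLLECT.ACCMSTR explicitly, so WORK.ALLDATA may remain empty unless it is also named on the OUTPUT statement. If that applies in your environment, a companion entry in sasuser.exits.acct120.source (exit point acct120 allows additional output data sets to be specified on the OUTPUT statement) would be:

  %macro acct120;
    work.alldata
  %mend;
  %acct120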

Finally, the %CPPROCES macro invocation has to be updated to include the exit source location. For example:

      %CPPROCES(
         ,COLLECTR=ACCUNX
         ,RAWDATA=/accton/tmp/itsv.pacct
         ,EXITSRC=SASUSER.EXITS
         ,TOOLNM=SASDS
      );

Adding Support for OS / OS Release pairs not listed

UNIX accounting files can vary in structure across the different UNIX platforms and even between releases of the operating systems. One of the objectives of this enhanced UNIX accounting support was to enable new formats to be added easily by either the customer or SAS Institute.

This section discusses how ITSV identifies the requirements for processing each input file, and how to add support for a new format. If you do not wish to attempt to add support yourself, you should contact SAS Technical Support with the information listed in Information Requirements.

How ITSV Determines the structure of the Binary UNIX Accounting File

Each UNIX Accounting file has to be processed by the itsvacct shell script prior to being sent to the server. This is because header information is added to the file that enables the server to determine the format of the binary file. The following is an example of the header placed on a binary file:

ITSVACCTON,HP-UX,B.10.20,node01,domain01,-5:00,9000/785,100,4,,,,,,,,,,
Field 1 (ITSVACCTON): A keyword; always set to ITSVACCTON.
Field 2 (Operating System): The operating system value as returned by the UNIX command uname -s.
Field 3 (Operating System Release): The operating system release value as returned by the UNIX command uname -r.
Field 4 (Node name / Machine name): The node name value as returned by the UNIX command uname -n.
Field 5 (Domain name): The domain name value as returned by the UNIX command domainname.
Field 6 (GMT Offset): The GMT offset, based on information from the UNIX command date.
Field 7 (Hardware): The hardware value as returned by the UNIX command uname -m.
Field 8 (Clock Ticks): The number of clock ticks per second for this machine.
Field 9 (Page Size, in Kbytes): The page size for this machine.
Fields 10 - 19 (Optional Fields): The user can use these fields to pass additional information into IT Service Vision.

Where possible, the above header record is constructed automatically by the itsvacct shell script; however, the script is supplied with switches that allow the values to be overridden if necessary.

Each unique OS/OS Release pair has a format applied to it to determine the file structure of the input file.

PGMLIB.ACCUNX.OSFMT.SOURCE contains the information for the OS/OS Release pairs that have been tested. For example:

HP-UXB.10.20                             ACC001
HP-UXB.11.00                             ACC002
SunOS5.7                                 ACC003
OSF1V4.0                                 ACC004

An OS/OS Release pair of HP-UXB.10.20 will cause the input statements contained in PGMLIB.ACCUNX.CPACC001.SOURCE to be used to read the data.

What happens when a Binary UNIX Accounting file's format cannot be determined

If an unsupported OS/OS Release pair is detected during processing, the following WARNING message is issued in the SAS log and the file is not processed; however, processing will continue for any other valid files.

WARNING: Format unknown for OS, OSREL pair in file: accton/tmp/itsv.pacct3/

In order to process this file's data, we need to add support for this OS/OS Release pair and also validate that the data is processed correctly.

Adding Support for a New Binary UNIX Accounting File Format

Let us assume that we have a UNIX Accounting file that has been rejected by ITSV because the OS/OS Release pair was not recognized. You have two options: 1) add support yourself using the procedures described below, or 2) contact SAS Institute Technical Support with the following information, and we will add and validate the support and provide you with the necessary updates.

Information Requirements

The following information is required to add and validate support for a new OS/OS Release pair.

  1. A copy of your acct.h file, normally found in /usr/include/sys/acct.h
  2. A copy of the original UNIX Accounting binary file (/var/adm/pacct). A small sample is sufficient for data verification purposes.
  3. The output of the acctcom -fiktrmh file command where file is the file mentioned in point 2. The output of this command should be redirected to a file as it will be used later to validate the values that are processed into the PDB.
  4. The output of the uname -srnm command from the machine that created the UNIX Accounting file. The results of this command will assist in building the format information for this OS/OS Release pair.
  5. The number of clock ticks per second for the machine on which the pacct file was created.
  6. The page size for the machine on which the pacct file was created.

Create Input Statements For Data

The supplied CPACCnnn source entries are stored in PGMLIB.ACCUNX.CPACCnnn.SOURCE and should NOT be modified, as any modifications would be overwritten when the next IT Service Vision release is installed. The nnn for supplied CPACCnnn entries can range from 001 to 899; non-supplied entries are limited to CPACC9nn to avoid name space collisions.

The UNIX Accounting process will automatically search the following SAS catalogs for new/modified CPACC9nn entries, which, if found, take priority over the PGMLIB versions. The catalog search/preference order is ADMIN.ACCUNX, SITELIB.ACCUNX, PGMLIB.ACCUNX.
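To verify which CPACC entries a given catalog currently contains, you can list them with PROC CATALOG; for example:

    PROC CATALOG CAT=ADMIN.ACCUNX;
      CONTENTS;
    QUIT;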

Copy a supplied CPACCnnn source entry from PGMLIB to the destination catalog listed previously. You should note that the ADMIN.ACCUNX catalog entries will be associated with a single PDB and the SITELIB.ACCUNX entries will be associated with any PDB using that SITELIB.

The following SAS code should be submitted from the PROGRAM EDITOR of an ITSV SAS session that has the appropriate UNIX Accounting PDB active. You will need to modify the COPY OUT= statement and the CPACCnnn part of the SELECT statement as necessary.

    PROC CATALOG CAT=PGMLIB.ACCUNX;
      COPY OUT=ADMIN.ACCUNX;
      SELECT CPACCnnn.SOURCE;
    QUIT;

    PROC CATALOG CAT=ADMIN.ACCUNX;
      CHANGE CPACCnnn.SOURCE=CPACC9nn.SOURCE;
    QUIT;

On opening your libref.ACCUNX.CPACC9nn.SOURCE entry you will see something similar to the following:

input accflag $ascii2.
      accstat $ascii2.
      accuid s370fpib4.
      accgid s370fpib4.
      accprm s370fpib4.
      acctty s370fpib4.
      datetime s370fpib4.
      autm $ascii2.
      astm $ascii2.
      aetm $ascii2.
      amem $ascii2.
      aio  $ascii2.
      arw  $ascii2.
      acccomm  $ascii8.;

The code contained in the CPACC9nn source entries is the actual set of INPUT statements used to process the UNIX Accounting files. The informats used have been chosen to ensure that the data is processed correctly on UNIX, NT, and MVS systems.

Typically, all that is necessary is to modify the order of the variables or perhaps the width of the informats; for example, $ascii2. may become $ascii4. The order and width information can be obtained from the acct.h file, although it may be necessary to examine the binary pacct file directly.
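For example, a hypothetical CPACC9nn entry for a platform whose packed time and I/O fields occupy four bytes instead of two might look like the following (the widened informats are illustrative; derive the real order and widths from your acct.h):

input accflag $ascii2.
      accstat $ascii2.
      accuid s370fpib4.
      accgid s370fpib4.
      accprm s370fpib4.
      acctty s370fpib4.
      datetime s370fpib4.
      autm $ascii4.
      astm $ascii4.
      aetm $ascii4.
      amem $ascii4.
      aio  $ascii4.
      arw  $ascii4.
      acccomm  $ascii8.;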

DO NOT modify the supplied PGMLIB.ACCUNX entries; make your changes only in your CPACC9nn copies.

Create/Modify an OSFMT.SOURCE entry

IT Service Vision needs a way to map the new libref.ACCUNX.CPACC9nn.SOURCE entry to the OS/OS Release values so that it can use the new input statements to process the data.

For this reason, you should create your own libref.ACCUNX.OSFMT.SOURCE entry that just contains the information for your OS/OS Release pair. The PGMLIB.ACCUNX.OSFMT.SOURCE entry should not be modified as any updates would be lost when IT Service Vision was updated.

The %CPACCUTL macro is used to LIST, ADD, and DELETE these OSFMT.SOURCE entries; it is fully documented in the Macro Reference guide. The LIST function lists the contents of the libref.ACCUNX.OSFMT.SOURCE entries; the ADD and DELETE functions work against the ADMIN and SITELIB librefs. If the OS/OS Release pair being added needs to be accessed across several PDBs, it should be added to SITELIB; otherwise, if it is specific to one PDB, it can be added to that PDB's ADMIN library.

Using the list function of the %CPACCUTL macro you are able to list the contents of existing entries in all three libraries (ADMIN, SITELIB and PGMLIB).

The %CPACCUTL macro interface has been provided to ensure that the OSFMT.SOURCE entries are created correctly. The following code gives an example of how this macro works:

   %CPACCUTL(LIBREF=ADMIN
            ,FUNCTION=ADD
            ,OS=HP-UX
            ,OSREL=B.10.20
            ,INSRC=1
            ,_RC=RETCODE);

    %put Return code is &retcode;

The result of this macro would be the following line added to ADMIN.ACCUNX.OSFMT.SOURCE.

HP-UXB.10.20                             ACC901

Validating Your Updates and Data

Now, when you pass your data file to the IT Service Vision %CPPROCES macro, it extracts the OS/OS Release information that was prefixed on the file and uses that information to locate the correct input statements (e.g. ADMIN.ACCUNX.CPACC901.SOURCE). The final requirement is to validate the values being stored in the PDB.

The default processing is for IT Service Vision to pre-summarize the UNIX Accounting data to the hour interval prior to it being added to DETAIL. In order to be able to compare the values in the PDB to the output created by the acctcom command, we have to switch off this pre-summarization.

Create a PDB that is going to be used for validation purposes only, and add the ACCTON table (the others are not required for validation). Process the UNIX accounting file as usual, except set the CPSUMDUR macro variable to a missing value (i.e. %let cpsumdur=.;) before the %CPPROCES macro.

    %let cpsumdur=.;
    %cpproces(......);

The result is to effectively switch off the pre-summarization. The data in DETAIL.ACCTON can now be compared with the output of the acctcom command, although it is in a different sort order. To make this slightly easier, you can use COLLECT.ACCTON for comparison, as it will be in the same sort order as the incoming data.
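For example, to list COLLECT.ACCTON in incoming order for a side-by-side check against the acctcom output, something along these lines can be used (the variable list is illustrative):

    proc print data=collect.accton;
      var acccomm accusr accutm accstm accetm;
    run;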

Once validated, this PDB can be deleted and the UNIX Accounting file can be included as part of your normal batch run.

Using the Optional Parameters on the itsvacct Script to Add Information to the PDB

There are 10 optional parameters on the itsvacct shell script that can be used to populate data fields in the PDB. These fields make it easier to include machine or domain information that may be useful as accounting data but is not otherwise captured by the script.

You can use the optional fields either by specifying the value you want to pass to the PDB via a switch, or by modifying the script to use a system command that provides the information. Next, mark the ACCOPTn variable that you are using (where 'n' is a number) as KEPT=YES in the appropriate table. In addition, you will need to make this variable an ID variable. All the ACCOPTn variables are treated as character by the PDB; if you want to convert them to numeric, you will have to create either formula or derived variables that perform the conversion.

How to Use the Optional Parameters

For this example, we are going to assume that the user wishes to add a 'processor type' field to the ACCTON table only (although the other ACC* tables will pick up the value if set up correctly). This value can be obtained on some operating systems by using the 'uname -p' command.

Modifying the itsvacct Shell Script

You should only modify the shell script according to the documentation; any other modifications could cause a failure or produce a file with an invalid format.

Below is an extract from the itsvacct shell script; the comments describe how to update the DEFOPT0 value to hold the processor value.

# User may modify the following variables which represent the defaults
# to be supplied to the optional values -o0-9.
#
# e.g.   DEFOPT0="`uname -p`" stores the output of this command in
#                             the $DEFOPT0 variable.

DEFOPT0=""
DEFOPT1=""
DEFOPT2=""
DEFOPT3=""
DEFOPT4=""
DEFOPT5=""
DEFOPT6=""
DEFOPT7=""
DEFOPT8=""
DEFOPT9=""

The DEFOPT0 line would look like:

DEFOPT0="`uname -p`"

This change means that the result of this command will be stored in optional field 0 whenever this script is run.

Required Updates to the ACCTON Table

The ACCOPTn variables by default are marked as KEPT=NO and treated as character variables. If you want to convert them to numeric values you will have to use formula or derived variables.

  1. Mark the ACCOPT0 variable in the ACCTON table as KEPT=YES, either using the GUI or the %CPDDUTL macro.
  2. Make the ACCOPT0 variable an ID variable, either using the GUI or the %CPDDUTL macro (a sketch of the %CPDDUTL approach follows this list).
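
A sketch of the %CPDDUTL approach is shown below. The control-statement and invocation syntax here is an assumption based on the general shape of the %CPDDUTL interface; verify it against the %CPDDUTL documentation before use:

       /* Hypothetical %CPDDUTL control statement, saved in a source entry
          such as ADMIN.ACCUNX.ACCOPT.SOURCE (the entry name is illustrative;
          the KEPT= and ID= keywords are assumed): */
       UPDATE VARIABLE NAME=ACCOPT0 TABLE=ACCTON KEPT=YES ID=YES;

       /* Hypothetical invocation pointing at that entry: */
       %CPDDUTL(ENTRYNAM=ADMIN.ACCUNX.ACCOPT.SOURCE, _RC=retcode);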

Once the above changes have been made the result of the 'uname -p' command will be stored in the ACCOPT0 variable of the ACCTON table. If the ACCOPT0 variable was marked as KEPT=YES and made an ID variable in the other UNIX Accounting tables it would be populated in them too.

Discussion of Summarization

The ACCTON data that ITSV processes is typically EVENT-type data; that is, a process started, consumed some resources over time, and then ended, completing the event. The supplied ACCUNX tables (ACCCOM, ACCUSR, ACCGRP, ACCTON) are all defined as EVENT tables.

Due to the potentially large volumes of ACCTON data, by default we summarize the data by HOUR prior to storing it in the DETAIL level of the PDB, keeping sums of the numeric variables. For example, if five 'ls' command events started and completed within the same hour, then one observation would be stored at DETAIL, with the value of each numeric variable summed and an OBSCNT variable containing the value 5 for the number of events summarized into that one observation. The INTRVL variable indicates the summary duration (by default 3600 seconds) at which the records are summarized, yet this does not make these INTERVAL-type tables.

Interval tables in ITSV require a DURATION variable, as many of the numeric variables are weighted by the duration when summarizing them into the reduction levels. Weighting any numeric variables in our ACCUNX tables by DURATION would be invalid; instead, they are weighted by OBSCNT (or the ELAPSED TIME of the event could be used). Weighting by OBSCNT results in a mean value per event, whereas weighting by elapsed time results in a rate.
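As a concrete illustration of OBSCNT weighting, a mean user-CPU value per event can be derived from the summarized DETAIL data along these lines (the AVGUTM variable name is hypothetical):

       data work.acct_means;
         set detail.accton;
         /* OBSCNT-weighted mean: summed user CPU time divided by
            the number of events summarized into this observation */
         avgutm = accutm / max(obscnt,1);
       run;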

Summed values for the variables are typically sufficient for UNIX Accounting purposes. However, sites may wish to use this data for performance analysis, where averages and rates may be more useful. Before using the event information in these ACCUNX tables for performance analysis, you should carefully examine and understand the data.

You may find it more useful to output the detail data into a separate SAS data set and process it separately for performance analysis purposes; in that case, refer to Exit Point Example: Output a Detail SAS Data Set for further information.

Discussion of ACCKMIN Variable

The metric ACCKMIN (K core minutes) is marked as KEPT=NO by default in all the ACC* tables. Typically, this variable will not be useful for chargeback purposes; however, if you decide to keep the data for this metric, you will need to consider how it is calculated on different platforms.

For example, AIX, DEC OSF and HP-UX all use different formulas when calculating the K Core Minutes value, and IT Service Vision maintains consistency by using the appropriate formula for the appropriate platform. As a result, if you have ACCTON data from a mixture of platforms, you should consult the documentation for those systems to understand how this value is calculated.


Updates to HP OpenView Performance Agent

IMPORTANT: HP MeasureWare is now part of the HP OpenView product umbrella and will therefore be referred to as HP OpenView Performance Agent (HP-OVPA). During development of this release, HP temporarily used the product name HP VantagePoint Performance Agent (HP-VPPA). Collector values of HP-OVPA, HP-VPPA, and HP-MWA are all now valid when processing this type of data.

This section details the IT Service Vision dictionary changes for the HP OpenView Performance Agent.

Automatic setting of TOOLNM parameter

In past releases, the customer was responsible for setting the TOOLNM parameter to ensure that the HP OVPA data was processed correctly. A toolnm value of MWA-UX indicated that the data being processed was from a UNIX platform and was to be treated as big endian; a value of MWA-NT indicated that the data was little endian.

The assumption that data from UNIX platforms is big endian is not always true, and if data came from a little-endian UNIX platform, %CSPROCES or %CWPROCES would fail.

To resolve this problem, this release of IT Service Vision attempts to automatically detect the endian type of the incoming data. The result is that there is no longer any need to specify a TOOLNM value, although that option is still available. In addition, the MWA-UX and MWA-NT tool names have been superseded by MWA-BE and MWA-LE respectively, as these more accurately represent the endian type of the incoming data (for compatibility purposes, MWA-UX and MWA-NT are still treated as valid tool name values to ensure your existing programs run successfully).

For more information please refer to the 'IT Service Vision Help' online documentation for the %CSPROCES or %CWPROCES macro parameters.

If for any reason the IT Service Vision %CSPROCES or %CWPROCES macros are unable to detect the endian type of the incoming data, you can set the TOOLNM parameter accordingly, which switches off the auto-detection.
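For example, assuming %CSPROCES accepts the same parameter style as the %CPPROCES examples earlier in this document, forcing little-endian processing might look like this (the path is illustrative):

       %CSPROCES(,COLLECTR=HP-OVPA
                 ,RAWDATA=/ovpa/extract.dat
                 ,TOOLNM=MWA-LE
                 ,_RC=retcode);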

Dictionary Modifications

The following table details the changes to the supplied dictionary. If you are using these tables/variables, you may want to apply these changes to your dictionary (DICTLIB).

Variable Name   Variable Label                  Dictionary Modification

PCSTRN
TRNBINS         Tt Num Bins                     Interpretation changed from COUNT to INT.
TRNGE1 - 10     Ttbin Upper Range 1 - 10        Interpretation changed from COUNT to INT.

PCSDSK
DSKQLEN         BYDSK_CURR_QUEUE_LENGTH         Variable ID number changed from 222 to 4222.

PCSNET
NETC1MR         BYNETIF_COLLISION_1_MIN_RATE    Variable ID number changed from 489 to 6489.
NETE1MR         BYNETIF_ERROR_1_MIN_RATE        Variable ID number changed from 490 to 6490.
NETPRT          BYNETIF_PACKET_RATE             Variable ID number changed from 480 to 6480.

PCSLVL
LVLDIRA         LV_DIRNAME_ALIAS                Variable ID number changed from 30 to 5030.
LVLDEVA         LV_DEVNAME_ALIAS                Variable ID number changed from 31 to 5031.

New Metrics

The following table details new metrics added to the supplied dictionary. If you wish to use these metrics, you should check your HP documentation to ensure that they are produced by your environment.

Variable Name Variable Label Variable Description
PCSAPP
PRMCPUC APP_PRM_CPUCAP_MODE The PRM CPU Cap Mode state on this system: 0 = PRM is not installed. 1 = PRM CPU cap mode not enabled. 2 = PRM CPU cap mode enabled.
PRMCPUE APP_PRM_CPU_ENTITLEMENT The PRM CPU entitlement for this PRM Group ID entry as defined in the PRM configuration file.
PRMDSKS APP_PRM_DISK_STATE APP_PRM_DISK_STATE
PRMLGMO APP_PRM_LOGGING_MODE The PRM logging mode will be 1 when PRM group data is being logged by the MeasureWare Agent in place of parm file defined application data.
PRMMEMA APP_PRM_MEM_AVAIL PRM available memory is the amount of physical memory less the amount of memory reserved for the kernel and system processes running in the PRM_SYS group 0.
PRMMEME APP_PRM_MEM_ENTITLEMENT The PRM MEM entitlement for this PRM Group ID entry as defined in the PRM configuration file.
PRMMEMS APP_PRM_MEM_STATE The PRM MEM state on this system: 0=PRM not installed, 1=reset, 2=configured/disabled, 3=enabled.
PRMMEMB APP_PRM_MEM_UPPERBOUND The PRM MEM upperbound for this PRM Group ID entry as defined in the PRM configuration file.
PRMMEMU APP_PRM_MEM_UTIL The percent of PRM memory used by processes (process private space plus a process portion of shared memory) within the PRM groups during the interval.
PRMCPUS APP_PRM_STATE The PRM state on this system: 0=PRM not installed, 1=reset, 2=configured/disabled, 3=enabled.
PCSGLB
GBLMPIB GBL_MEM_PAGEIN_BYTE The number of KBs (or MBs if specified) of page ins during the interval.
GBLMPIR GBL_MEM_PAGEIN_BYTE_RATE The number of KBs per second of page ins during the interval.
GBLMPOB GBL_MEM_PAGEOUT_BYTE The number of KBs (or MBs if specified) of page outs during the interval.
GBLMPOR GBL_MEM_PAGEOUT_BYTE_RATE The number of KBs (or MBs if specified) per second of page outs during the interval.
GLBSWUU GBL_SWAP_SPACE_USED_UTIL The percentage of swap space currently in use (has memory belonging to processes paged or swapped out onto it).
PCSLVL
LVLKBRD LV_READ_BYTE_RATE The number of physical KBs per second read from this logical volume during the interval.
LVLKBWT LV_WRITE_BYTE_RATE The number of KBs per second written to this logical volume during the interval.

The following new metrics have been added with KEPT=NO as default. Please ensure that your platform actually collects data for these metrics before making them KEPT=YES.

Variable Name Variable Label Variable Description
PCSGLB
glbcu00 GBL CPU NO00 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of 00 was busy.
glbcu01 GBL CPU NO01 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (01) was busy.
glbcu02 GBL CPU NO02 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (02) was busy.
glbcu03 GBL CPU NO03 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (03) was busy.
glbcu04 GBL CPU NO04 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (04) was busy.
glbcu05 GBL CPU NO05 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (05) was busy.
glbcu06 GBL CPU NO06 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (06) was busy.
glbcu07 GBL CPU NO07 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (07) was busy.
glbcu08 GBL CPU NO08 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (08) was busy.
glbcu09 GBL CPU NO09 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (09) was busy.
glbcu10 GBL CPU NO10 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (10) was busy.
glbcu11 GBL CPU NO11 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (11) was busy.
glbcu12 GBL CPU NO12 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (12) was busy.
glbcu13 GBL CPU NO13 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (13) was busy.
glbcu14 GBL CPU NO14 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (14) was busy.
glbcu15 GBL CPU NO15 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (15) was busy.
glbcu16 GBL CPU NO16 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (16) was busy.
glbcu17 GBL CPU NO17 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (17) was busy.
glbcu18 GBL CPU NO18 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (18) was busy.
glbcu19 GBL CPU NO19 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (19) was busy.
glbcu20 GBL CPU NO20 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (20) was busy.
glbcu21 GBL CPU NO21 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (21) was busy.
glbcu22 GBL CPU NO22 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (22) was busy.
glbcu23 GBL CPU NO23 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (23) was busy.
glbcu24 GBL CPU NO24 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (24) was busy.
glbcu25 GBL CPU NO25 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (25) was busy.
glbcu26 GBL CPU NO26 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (26) was busy.
glbcu27 GBL CPU NO27 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (27) was busy.
glbcu28 GBL CPU NO28 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (28) was busy.
glbcu29 GBL CPU NO29 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (29) was busy.
glbcu30 GBL CPU NO30 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (30) was busy.
glbcu31 GBL CPU NO31 UTIL On a multi-processor system, this metric shows the percentage of time the processor with an index value of (31) was busy.
glblanc GBL LAN COLLISIONS The number of physical collisions that occurred on the network interface during the interval.
glblane GBL LAN ERRORS The number of physical errors that occurred on the network interface during the interval.
glblani GBL LAN PACKETS IN The number of successful physical packets received through the network interface during the interval.
glblano GBL LAN PACKETS OUT The number of successful physical packets sent through the network interface during the interval.
PCSDSK
dskcity GBL DISK CAPACITY The total disk capacity, in sectors, of all system disks and mounted logical volumes.
dskpace GBL DISK FILE SPACE The total disk space, in sectors, that was in use by permanent disk files at the time of the daily collection.
dskf100 GBL DISK FRAGMENT100K The total disk space comprising contiguous free spaces ranging from 10,000 to 99,999 sectors in size.
dskf10 GBL DISK FRAGMENT10K The total disk space comprising contiguous free spaces ranging from 1,000 to 9,999 sectors in size.
dsknt1k GBL DISK FRAGMENT1K The total disk space comprising contiguous free spaces ranging from 100 to 999 sectors in size.
dsker1k GBL DISK FRAGMENT OVER100K The total disk space comprising contiguous free spaces ranging from 100,000 or more sectors in size.
dskfree GBL DISK FREE The total amount of unused disk space in sectors.
dsknent GBL DISK FREE PERMANENT MPE iX The total amount of disk space, in sectors, that was free for use by permanent files at the time of the daily collection.
dskient GBL DISK FREE TRANSIENT The total amount of disk space, in sectors, that was free for use by transient objects at the time of the daily collection.
dsktual GBL DISK FREE VIRTUAL The total amount of disk space, in sectors, that was free for use by virtual memory at the time of the daily collection.
dsknm1 GBL DISK GROUP NAME 1 Indicates the account or user-defined group (can be from 1-20).
dsknm10 GBL DISK GROUP NAME 10 Indicates the account or user-defined group (can be from 1-20).
dsknm11 GBL DISK GROUP NAME 11 Indicates the account or user-defined group (can be from 1-20).
dsknm12 GBL DISK GROUP NAME 12 Indicates the account or user-defined group (can be from 1-20).
dsknm13 GBL DISK GROUP NAME 13 Indicates the account or user-defined group (can be from 1-20).
dsknm14 GBL DISK GROUP NAME 14 Indicates the account or user-defined group (can be from 1-20).
dsknm15 GBL DISK GROUP NAME 15 Indicates the account or user-defined group (can be from 1-20).
dskmnm6 GBL DISK GROUP NAME 16 Indicates the account or user-defined group (can be from 1-20).
dsknm17 GBL DISK GROUP NAME 17 Indicates the account or user-defined group (can be from 1-20).
dsknm18 GBL DISK GROUP NAME 18 Indicates the account or user-defined group (can be from 1-20).
dsknm19 GBL DISK GROUP NAME 19 Indicates the account or user-defined group (can be from 1-20).
dsknm2 GBL DISK GROUP NAME 2 Indicates the account or user-defined group (can be from 1-20).
dsknm20 GBL DISK GROUP NAME 20 Indicates the account or user-defined group (can be from 1-20).
dsknm3 GBL DISK GROUP NAME 3 Indicates the account or user-defined group (can be from 1-20).
dsknm4 GBL DISK GROUP NAME 4 Indicates the account or user-defined group (can be from 1-20).
dsknm5 GBL DISK GROUP NAME 5 Indicates the account or user-defined group (can be from 1-20).
dsknm6 GBL DISK GROUP NAME 6 Indicates the account or user-defined group (can be from 1-20).
dsknm7 GBL DISK GROUP NAME 7 Indicates the account or user-defined group (can be from 1-20).
dsknm8 GBL DISK GROUP NAME 8 Indicates the account or user-defined group (can be from 1-20).
dsknm9 GBL DISK GROUP NAME 9 Indicates the account or user-defined group (can be from 1-20).
dskgs1 GBL DISK GROUP SECTORS 1 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs10 GBL DISK GROUP SECTORS 10 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs11 GBL DISK GROUP SECTORS 11 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs12 GBL DISK GROUP SECTORS 12 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs13 GBL DISK GROUP SECTORS 13 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs14 GBL DISK GROUP SECTORS 14 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs15 GBL DISK GROUP SECTORS 15 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs16 GBL DISK GROUP SECTORS 16 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs17 GBL DISK GROUP SECTORS 17 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs18 GBL DISK GROUP SECTORS 18 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs19 GBL DISK GROUP SECTORS 19 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs2 GBL DISK GROUP SECTORS 2 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs20 GBL DISK GROUP SECTORS 20 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs3 GBL DISK GROUP SECTORS 3 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs4 GBL DISK GROUP SECTORS 4 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs5 GBL DISK GROUP SECTORS 5 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs6 GBL DISK GROUP SECTORS 6 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs7 GBL DISK GROUP SECTORS 7 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs8 GBL DISK GROUP SECTORS 8 The disc space (in sectors) being used by this user-defined group for permanent files.
dskgs9 GBL DISK GROUP SECTORS 9 The disc space (in sectors) being used by this user-defined group for permanent files.
dsklgfr GBL DISK LARGEST FREE The size of the largest contiguous unused disk space in sectors.
dskoups GBL DISK NUMBER OF GROUPS The number of logged disk groups in the DISK_SPACE record.
dskptnt GBL DISK PEAK TRANSIENT The maximum size of transient disk space during the day in sectors.
dskpvir GBL DISK PEAK VIRTUAL The maximum size of virtual disk space during the day in sectors.
dsktotf GBL DISK TOTAL FREE The amount of disk space that is available for permanent files.
dsktrnt GBL DISK TRANSIENT The disk space that was in use by transient objects (data stacks, heaps, etc.).
dsktrcp GBL DISK TRANSIENT CAPACITY The total amount of disk space that is reserved for use by transient memory.
dskvirt GBL DISK VIRTUAL The disk space that was in use by virtual memory (data stacks and extra data segment swapping area).
dskvrcp GBL DISK VIRTUAL CAPACITY The total amount of disk space that is reserved for use by virtual memory.

Performance Update

Traditionally, when processing an extracted file that contains data for multiple data types (such as Application, Global, and Process), IT Service Vision created a thread for each unique data type before processing the data. It did this even for data types whose respective PCS* table was not requested.

With this update, a thread is created for a data type only if that data type's table is being processed. This should reduce both the elapsed time and the amount of work space required by the process.


C2RATE Interpretation Type Update

IT Service Vision 2.5 now provides a way, through the use of macro variables, to alter the thresholds used when converting counters into rates. Read the following text for more information, or see the C2RATE documentation located at Online Help ==> IT Service Vision Help ==> Work with PDBs, Tables, and Variables ==> Variable Interpretation Types.

Although IT Service Vision converts counters to rates automatically, there are three macro variables that you can now set to affect how this is done. A counter increases across intervals until it reaches its maximum value, at which point it typically resets to zero and starts again. Using the previous value, and knowing the maximum limit for that particular counter, it is possible to calculate the rate for each interval. The counter maximums that IT Service Vision can handle are 65,536 (16 bit), 4,294,967,296 (32 bit) and 18,446,744,073,709,551,616 (64 bit). Although this is less of an issue with 32-bit and 64-bit counters, a problem that has to be dealt with is a counter that resets more than once during an interval. Typically, there is very little that can be done to resolve this other than sampling at shorter intervals. IT Service Vision handles this by setting a threshold that the previous counter value must meet for a valid rate to be calculated when the current counter value is less than the previous one. If the previous counter value does not meet the threshold, a missing value is stored. The following example illustrates the situation for a 16-bit counter:

Counter value Calculated Rate (assuming 5 min interval)
20000 .
40000 66.67
62260 74.20
300 11.92
40000 132.33
62259 74.19
2000 .

The first observation is always missing because there is no previous value from which to calculate a difference. The first time the counter resets, the value preceding the reset (62260) meets the default threshold of 95% for 16-bit counters, so the rate is calculated. The second time the counter resets, the value preceding the reset (62259) does not meet that threshold, so the rate is set to missing.
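
To make the logic concrete, the following DATA step is a minimal sketch of the calculation described above. It is not the C2RATE implementation itself, and the input data set COUNTERS (with a counter variable SVAL) and the fixed 300-second interval are assumptions for illustration:

  data rates;
    retain prev .;
    set counters;                       /* assumed input with variable SVAL */
    interval = 300;                     /* 5-minute sample interval (secs)  */
    max16 = 65536;                      /* 16-bit counter maximum           */
    if prev = . then rate = .;          /* first obs: no previous value     */
    else if sval >= prev then
      rate = (sval - prev) / interval;  /* counter did not reset            */
    else if prev >= 0.95 * max16 then   /* reset; previous met threshold    */
      rate = (max16 - prev + sval) / interval;
    else rate = .;                      /* possible multiple resets         */
    prev = sval;                        /* carry current value forward      */
  run;

Applied to the sample values above, this reproduces the table: for example, the rate after the first reset is (65536 - 62260 + 300) / 300 = 11.92.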

The default threshold for 16-bit counters is 95% of the counter maximum. The thresholds for all three counter sizes can be adjusted with the macro variables shown below.

When the C2RATE conversion is performed, messages appear in your SAS log. The following message informs you that the counter reset but the rate was still calculated, because the previous value met the threshold:

(CLEANUP) Obs 4 16 bit Overflow    Start 01JAN01:00:10:00  SVAL = 62260
          Corrected                End   01JAN2001:00:15:00.00  SVAL = 300

The following message also indicates that the counter overflowed; however, the previous value did not meet the threshold, so the current value was set to missing:

(CLEANUP) Obs 7 Inconsistent      Start 01JAN01:00:25:00  SVAL = 62259
          Set to missing          End   01JAN2001:00:30:00.00  SVAL = 2000

If a large percentage of your data produces this second message in your SAS log, consider 1) reducing the interval at which you sample, although this will increase the volume of data you collect, or 2) adjusting the thresholds so that fewer missing values are produced. The macro variables listed below set the thresholds. In the example above, changing cp16pct to 0.94 (94%) would allow the second reset to be calculated, because the preceding value of 62259 exceeds 0.94 * 65536 = 61603.84.

  %let cp16pct=0.94;    /*16 bit threshold */
  %let cp32pct=0.96;    /*32 bit threshold */
  %let cp64pct=0.96;    /*64 bit threshold */

NOTE: Setting these thresholds too low could increase the risk of invalid values being calculated as a result of counters that reset more than once over an interval. Also, refer to the Base SAS Software documentation for your host for information on 'Length and Precision of Variables' when encountering 64-bit counters: on hosts that store SAS numerics as IEEE doubles, integers above 2**53 (about 9.0E15) cannot be represented exactly, so the full 64-bit counter range cannot be held precisely in an 8-byte numeric variable.


NTSMF Dictionary Updates

This section details the IT Service Vision dictionary changes for NTSMF.

New NTSMF Tables

The following tables list the new NTSMF tables and variables that have been added to the IT Service Vision dictionary. 

Table Name Description
NTASRV0 Windows NT IAS Authentication Server
NTASRVC Windows NT ACS/RSVP Service
NTASRVR Windows NT IAS Accounting Server
NTCLNT0 Windows NT IAS Authentication Clients
NTCLNTS Windows NT IAS Accounting Clients
NTCRDNT Windows NT Distributed Transaction Coordinator
NTSRVCS Windows NT Terminal Services

 

Variable Name Description
Table: NTASRV0
ACACPT0 IAS Authentication Server: Access-Accepts / sec.
ACACPTS IAS Authentication Server: Access-Accepts
ACCHLNS IAS Authentication Server: Access-Challenges / sec.
ACHLNGS IAS Authentication Server: Access-Challenges
ACRJCT0 IAS Authentication Server: Access-Rejects / sec.
ACRJCTS IAS Authentication Server: Access-Rejects
ACRQST0 IAS Authentication Server: Access-Requests / sec.
ACRQSTS IAS Authentication Server: Access-Requests
ATHNTCT IAS Authentication Server: Bad Authenticators
BATHNTS IAS Authentication Server: Bad Authenticators / sec.
DARQSTS IAS Authentication Server: Duplicate Access-Requests
DPLARQS IAS Authentication Server: Duplicate Access-Requests / sec.
DRPCKTS IAS Authentication Server: Dropped Packets
DRPPCKS IAS Authentication Server: Dropped Packets / sec.
INRQSTS IAS Authentication Server: Invalid Requests
INVRQSS IAS Authentication Server: Invalid Requests / sec.
MLFRPCS IAS Authentication Server: Malformed Packets / sec.
MLPCKTS IAS Authentication Server: Malformed Packets
PCKRCVD IAS Authentication Server: Packets Received
PCKTRCS IAS Authentication Server: Packets Received / sec.
PCKTSNS IAS Authentication Server: Packets Sent / sec.
PCKTSNT IAS Authentication Server: Packets Sent
SRVRSTM IAS Authentication Server: Server Reset Time
SRVRUTM IAS Authentication Server: Server Up Time
UNKNWTP IAS Authentication Server: Unknown Type
UNKNWTS IAS Authentication Server: Unknown Type / sec.
Table: NTASRVC
FQSSNDS ACS/RSVP Service: Failed QoS sends
INTRFCS ACS/RSVP Service: Network Interfaces
NTFCTN0 ACS/RSVP Service: QoS notifications
NTFCTNS ACS/RSVP Service: Bytes in QoS notifications
NTSCKTS ACS/RSVP Service: Network sockets
QSRQSTS ACS/RSVP Service: Failed QoS requests
QSSCKTS ACS/RSVP Service: QoS sockets
SERCVRS ACS/RSVP Service: QoS-enabled receivers
SESNDRS ACS/RSVP Service: QoS-enabled senders
SVPSRVC ACS/RSVP Service: ACS/RSVP Service
SVPSSNS ACS/RSVP Service: RSVP sessions
TIMERS ACS/RSVP Service: Timers
Table: NTASRVR
ACNRSPS IAS Accounting Server: Accounting-Responses / sec.
ACNTRQS IAS Accounting Server: Accounting-Requests / sec.
ACRQSTS IAS Accounting Server: Accounting-Requests
ARSPNSS IAS Accounting Server: Accounting-Responses
ATHNTCT IAS Accounting Server: Bad Authenticators
BATHNTS IAS Accounting Server: Bad Authenticators / sec.
DARQSTS IAS Accounting Server: Duplicate Accounting-Requests
DPACNRS IAS Accounting Server: Duplicate Accounting-Requests / sec.
DRPCKTS IAS Accounting Server: Dropped Packets
DRPPCKS IAS Accounting Server: Dropped Packets / sec.
INRQSTS IAS Accounting Server: Invalid Requests
INVRQSS IAS Accounting Server: Invalid Requests / sec.
MLFRPCS IAS Accounting Server: Malformed Packets / sec.
MLPCKTS IAS Accounting Server: Malformed Packets
NRCRDSC IAS Accounting Server: No Record / sec.
NRECORD IAS Accounting Server: No Record
PCKRCVD IAS Accounting Server: Packets Received
PCKTRCS IAS Accounting Server: Packets Received / sec.
PCKTSNS IAS Accounting Server: Packets Sent / sec.
PCKTSNT IAS Accounting Server: Packets Sent
SRVRSTM IAS Accounting Server: Server Reset Time
SRVRUTM IAS Accounting Server: Server Up Time
UNKNWTP IAS Accounting Server: Unknown Type
UNKNWTS IAS Accounting Server: Unknown Type / sec.
Table: NTCLNT0
ACACPT0 IAS Authentication Clients: Access-Accepts / sec.
ACACPTS IAS Authentication Clients: Access-Accepts
ACCHLNS IAS Authentication Clients: Access-Challenges / sec.
ACHLNGS IAS Authentication Clients: Access-Challenges
ACRJCT0 IAS Authentication Clients: Access-Rejects / sec.
ACRJCTS IAS Authentication Clients: Access-Rejects
ACRQST0 IAS Authentication Clients: Access-Requests / sec.
ACRQSTS IAS Authentication Clients: Access-Requests
ATHNTCT IAS Authentication Clients: Bad Authenticators
BATHNTS IAS Authentication Clients: Bad Authenticators / sec.
DARQSTS IAS Authentication Clients: Duplicate Access-Requests
DPLARQS IAS Authentication Clients: Duplicate Access-Requests / sec.
DRPCKTS IAS Authentication Clients: Dropped Packets
DRPPCKS IAS Authentication Clients: Dropped Packets / sec.
MLFRPCS IAS Authentication Clients: Malformed Packets / sec.
MLPCKTS IAS Authentication Clients: Malformed Packets
PCKRCVD IAS Authentication Clients: Packets Received
PCKTRCS IAS Authentication Clients: Packets Received / sec.
PCKTSNS IAS Authentication Clients: Packets Sent / sec.
PCKTSNT IAS Authentication Clients: Packets Sent
UNKNWTP IAS Authentication Clients: Unknown Type
UNKNWTS IAS Authentication Clients: Unknown Type / sec.
Table: NTCLNTS
ACNRSPS IAS Accounting Clients: Accounting-Responses / sec.
ACNTRQS IAS Accounting Clients: Accounting-Requests / sec.
ACRQSTS IAS Accounting Clients: Accounting-Requests
ARSPNSS IAS Accounting Clients: Accounting-Responses
ATHNTCT IAS Accounting Clients: Bad Authenticators
BATHNTS IAS Accounting Clients: Bad Authenticators / sec.
DARQSTS IAS Accounting Clients: Duplicate Accounting-Requests
DPACNRS IAS Accounting Clients: Duplicate Accounting-Requests / sec.
DRPCKTS IAS Accounting Clients: Dropped Packets
DRPPCKS IAS Accounting Clients: Dropped Packets / sec.
MLFRPCS IAS Accounting Clients: Malformed Packets / sec.
MLPCKTS IAS Accounting Clients: Malformed Packets
NRCRDSC IAS Accounting Clients: No Record / sec.
NRECORD IAS Accounting Clients: No Record
PCKRCVD IAS Accounting Clients: Packets Received
PCKTRCS IAS Accounting Clients: Packets Received / sec.
PCKTSNS IAS Accounting Clients: Packets Sent / sec.
PCKTSNT IAS Accounting Clients: Packets Sent
UNKNWTP IAS Accounting Clients: Unknown Type
UNKNWTS IAS Accounting Clients: Unknown Type / sec.
Table: NTCRDNT
ATRMXMM Distributed Transaction Coordinator: Active Transactions Maximum
ATRNSSC Distributed Transaction Coordinator: Aborted Transactions/sec
CTRNSSC Distributed Transaction Coordinator: Committed Transactions/sec
RSTAVRG Distributed Transaction Coordinator: Response Time -- Average
RSTMNMM Distributed Transaction Coordinator: Response Time -- Minimum
RSTMXMM Distributed Transaction Coordinator: Response Time -- Maximum
TRNSCSC Distributed Transaction Coordinator: Transactions/sec
TRNSCT0 Distributed Transaction Coordinator: Active Transactions
TRNSCT1 Distributed Transaction Coordinator: Committed Transactions
TRNSCT2 Distributed Transaction Coordinator: Force Aborted Transactions
TRNSCT3 Distributed Transaction Coordinator: Force Committed Transactions
TRNSCT4 Distributed Transaction Coordinator: In Doubt Transactions
TRNSCTN Distributed Transaction Coordinator: Aborted Transactions
Table: NTSRVCS
ACTSSNS Terminal Services: Active Sessions
INCSSNS Terminal Services: Inactive Sessions
TTLSSNS Terminal Services: Total Sessions

General Updates to NTSMF Tables

Detailed Updates to NTSMF Tables

The following lists the NTSMF tables for which new variables have been added to the IT Service Vision dictionary.

Variable Name Description
Table: NTASPGS
BREXTNG Active Server Pages: Requests Executing
DBRQSTS Active Server Pages: Debugging Requests
DSCNCTD Active Server Pages: Requests Disconnected
EDSRNTM Active Server Pages: Errors During Script Runtime
ERRSSEC Active Server Pages: Errors/Sec
NATHRZD Active Server Pages: Requests Not Authorized
NTFCTNS Active Server Pages: Template Notifications
PRPRCSR Active Server Pages: Errors From ASP Preprocessor
RQSFTTL Active Server Pages: Requests Failed Total
RQSNFND Active Server Pages: Requests Not Found
RQSSCDD Active Server Pages: Requests Succeeded
RQSTSQD Active Server Pages: Requests Queued
SCECCHD Active Server Pages: Script Engines Cached
SCMPLRS Active Server Pages: Errors From Script Compilers
SSNDRTN Active Server Pages: Session Duration
SSNSTTL Active Server Pages: Sessions Total
TMPCCHD Active Server Pages: Templates Cached
TMPCHRT Active Server Pages: Template Cache Hit Rate
TRABRTD Active Server Pages: Transactions Aborted
TRNCMTD Active Server Pages: Transactions Committed
TRNSCSC Active Server Pages: Transactions/Sec
TRNSTTL Active Server Pages: Transactions Total
TRPNDNG Active Server Pages: Transactions Pending
Table: NTLGDSK
PCTIDTM LogicalDisk: % Idle Time
Table: NTPHDSK
PCTIDTM PhysicalDisk: % Idle Time
Table: NTSSN
HNDLCNT Terminal Services Session: Handle Count
ITRERRS Terminal Services Session: Input Transport Errors
OTAPERR Terminal Services Session: Output Async Parity Error
OTRERRS Terminal Services Session: Output Transport Errors
PRBCHR0 Terminal Services Session: Protocol Brush Cache Hit Ratio
PRBCHRT Terminal Services Session: Protocol Bitmap Cache Hit Ratio
PRBCHT0 Terminal Services Session: Protocol Brush Cache Hits
PRBCHTS Terminal Services Session: Protocol Bitmap Cache Hits
PRBCRD0 Terminal Services Session: Protocol Brush Cache Reads
PRBCRDS Terminal Services Session: Protocol Bitmap Cache Reads
PRGCHRT Terminal Services Session: Protocol Glyph Cache Hit Ratio
PRGCHTS Terminal Services Session: Protocol Glyph Cache Hits
PRGCRDS Terminal Services Session: Protocol Glyph Cache Reads
SSBCHRT Terminal Services Session: Protocol Save Screen Bitmap Cache Hit Ratio
SSBCHTS Terminal Services Session: Protocol Save Screen Bitmap Cache Hits
SSBCRDS Terminal Services Session: Protocol Save Screen Bitmap Cache Reads
TPRCHRT Terminal Services Session: Total Protocol Cache Hit Ratio
TPRCHTS Terminal Services Session: Total Protocol Cache Hits
TPRCRDS Terminal Services Session: Total Protocol Cache Reads
TTRERRS Terminal Services Session: Total Transport Errors
Table: NTWSRVC
CFAUSR0 Web Service: Maximum CAL count for authenticated users
CFAUSRS Web Service: Current CAL count for authenticated users
CRQSTSC Web Service: Copy Requests/sec
LCKERSC Web Service: Locked Errors/sec
LCNCTN0 Web Service: Maximum CAL count for SSL connections
LCNCTN1 Web Service: Total count of failed CAL requests for SSL connections
LCNCTNS Web Service: Current CAL count for SSL connections
LCRQSSC Web Service: Lock Requests/sec
MKRQSSC Web Service: Mkcol Requests/sec
MRQSTS0 Web Service: Move Requests/sec
OPTRQSC Web Service: Options Requests/sec
PRPRQS0 Web Service: Proppatch Requests/sec
PRPRQSC Web Service: Propfind Requests/sec
RFAUSRS Web Service: Total count of failed CAL requests for authenticated users
SRQSTSC Web Service: Search Requests/sec
SRVUPTM Web Service: Service Uptime
TCRQSTS Web Service: Total Copy Requests
TLCERRS Web Service: Total Locked Errors
TLRQSTS Web Service: Total Lock Requests
TMRQST0 Web Service: Total Mkcol Requests
TMRQST1 Web Service: Total Move Requests
TORQSTS Web Service: Total Options Requests
TPRQST1 Web Service: Total Propfind Requests
TPRQST2 Web Service: Total Proppatch Requests
TRRQSSC Web Service: Trace Requests/sec
TSRQSTS Web Service: Total Search Requests
TURQSTS Web Service: Total Unlock Requests
UNLRQSC Web Service: Unlock Requests/sec

Creating and Installing a Collector Package

This document is now located in "Extensions to SAS IT Resource Management" in Part 2: Administration of the IT Resource Management User's Guide.


Weblog enhancements

Because IT Service Vision and Webhound are now separate products, changes have been made to the tables that contain web site data in order to streamline the IT Service Vision product. These changes have the added benefit of improving the processing and reporting efficiency of web site data. The IT Service Vision QuickStart Wizard has been changed to support these table changes. Specifically, the gallery structure has changed and new report definitions have been created. Several of the new report definitions use new features, such as vertical reference lines and interactive graphics by means of the Java device driver. In addition, all of the new report definitions use two of the new palettes added to PGMLIB.PALETTE.

This section details the table and reporting changes for the weblog collector.

Web Site table changes

Table 1.1 - List of tables whose kept status has changed to NO.

Table Name
WEBBRW
WEBORG
WEBORG1
WEBORG2
WEBORG3
WEBORG4
WEBRSC2
WEBRSC4
WEBRSC5
WEBRSC6
WEBRSC7
WEBRSC8
WEBRSC9
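
If your site still needs one of the tables in Table 1.1, its kept status can be restored with %CPDDUTL, using the same %CPCAT technique shown in the SAP R/3 section of this document. The sketch below assumes that KEPT= is the relevant option on the UPDATE TABLE statement; check the %CPDDUTL control statement documentation for your release:

%cpcat;
cards4;
update table name=webbrw kept=yes;
stop;
;;;;
%cpcat(cat=work.ddutl.weblog.source);
%cpddutl(entrynam=work.ddutl.weblog.source);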

Web Site gallery changes

The gallery is now titled "Web Site Analysis Reports" and has a new directory structure. While the Overview, Exceptions, and Trends folders remain, the other folders have been streamlined into the following folder structure:

Web Site report changes

A set of new reports has been created for the Web Site Analysis Report Gallery. The new reports begin with the letters QWS and use the naming convention described in Table 1.2. The older report definitions begin with the letters QWD and are no longer supported by IT Service Vision 2.5; they are not copied into the admin.itsvrpt folder. However, for historical purposes, you can find them in pgmlib.itsvrpt. No changes have been made to the exception rules or the exception reports.

Several of the new report definitions are set to use the new Java device driver to create an interactive graph when the output mode is set to WEB. In order to display the Java graph, the HTML page accesses the Java drivers installed at your site. To ensure remote access to these graphs, set the APPLETLOC option in the IT Service Vision QuickStart report job (wreport.sas on UNIX and PC, or WREPORT on OS/390) to point to the location where these drivers are installed.
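
For example, the following OPTIONS statement could be added to the report job; the URL is a placeholder for the location where the Java drivers are installed at your site:

  options appletloc='http://yourserver/sasweb/graph';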

Refer to Table 1.3 for a list of report names that use the Java device driver by default. Refer to the technical support document TS601 for more information on the Java device driver and for a list of supported browser versions.

If you want to de-select the Java Device Driver for a report definition, follow the steps below.

Note that unless specified otherwise, GIF733 is the default device driver for the enlarged size graph in web output.

Table 1.2 New QuickStart report naming convention

Char Pos Char Value Description
1-3 QWS New 2.5 QuickStart web site reports
4 S,F,U,T Report category: Service, Traffic, Usage, Trends
5 C,T,B,P,Q,R,L,B,P,X Analysis or class variable: Codes, Timetak, Bytes, Pages, Requests, Resources, Levels, Browsers, Platforms, Multiple
6 M,S,D,O,X,N By variable: Machine, Site, Date, Other, Multiple, None
7 H,V,P,T,L,S Graph type: Horizontal bar, Vertical Bar, Pie, Table, Plot, Spectrum
8 1,2,3 Unique Report designator number
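
For example, the report name QWSFBSV1 (see Table 1.3 below) decodes as QWS (2.5 QuickStart web site report), F (Traffic), B (Bytes), S (Site), V (vertical bar chart), and designator 1 - consistent with its description, "Site Total MBytes".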

Table 1.3 Reports where the Java Device Driver is used by default

Report Name Report Description
QWSFBSV1 Site Total MBytes
QWSFPSV1 Site Total Pages
QWSFQSV1 Site Total Requests
QWSUBNH2 All Browsers
QWSUBNH3 Microsoft IE Browsers
QWSUPNH2 All Browser Platforms

New Palettes

The new QuickStart reports use two new palettes, PGMLIB.PALETTE.SOLID and PGMLIB.PALETTE.SOLIDSYM. Both palettes define 24 unique solid colors. Note that PGMLIB.PALETTE.SOLIDSYM defines symbols and PGMLIB.PALETTE.SOLID does not. To avoid a cluttered look, many of the new plot report definitions use PGMLIB.PALETTE.SOLID. Be aware that when symbols are not used, a graph with a single point can appear empty. This may occur in the weekly trend reports of a new PDB that contains less than one week of data. In this case, you can either change the palette to PGMLIB.PALETTE.SOLIDSYM or wait until you have processed additional data, at which point two points will allow the plot to be displayed.


SMF Data Processing on UNIX and Windows

You can now use the %CPPROCES macro to process your SMF data on UNIX and Windows platforms, provided that MXG is available at your site and operates correctly on your platform. You must also transfer the SMF data to that platform. Details on both requirements can be found below.

You can also use the %CPPROCES macro to process your SMF data on OS/390. The code that is executed is the same as that used by the %CMPROCES macro. You can also continue to use the %CMPROCES macro unchanged, but you cannot use it on UNIX or Windows. The only difference between the two macros is that the name of the SMF data file cannot be specified as the first positional parameter on the %CPPROCES macro. It can, however, be specified as the value of the RAWDATA= parameter instead.

Before deciding whether to process your SMF data on UNIX or Windows, remember that the OS/390 platform is the best place to process SMF data. Not only can it handle the large volumes of data and the virtual memory required to process them, but it also provides the automated job and file management facilities (catalogs, backups, recovery procedures) that the other platforms do not - those platforms require more human intervention, not only to fix problems but also to maintain smooth day-to-day operation.

The SMF data must be downloaded correctly for it to be usable by IT Service Vision. SMF data files cannot be downloaded directly because they have a record format (RECFM) of VBS, and the downloaded data would lose its BDW (Block Descriptor Word) and RDW (Record Descriptor Word), making the data impossible to read. Instead, you should convert the files to RECFM=U by using the IEBGENER (or similar) utility. Specifying DCB=(RECFM=U,BLKSIZE=32760) on both the input and the output achieves the correct result, as follows (the necessary disk allocation parameters have been removed from this example for brevity):

        //SYSUT1 DD DSN=input.smffile,DISP=SHR,
        // DCB=(RECFM=U,BLKSIZE=32760)
        //SYSUT2 DD DSN=output.smfcopy,DISP=(NEW,CATLG),
        // DCB=(RECFM=U,BLKSIZE=32760)

This copied SMF file can then be downloaded, for example, using ftp, with no EBCDIC to ASCII conversion (sometimes referred to as BINARY mode).

Add the MXGSRC= and MXGLIB= parameters to your %CPSTART macro. These parameters point IT Service Vision to your MXG source library and formats catalog respectively. Refer to the Macro Reference documentation for details.
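
For example (a sketch only - the PDB and MXG paths are site-specific placeholders, and any other %CPSTART parameters you normally use still apply):

  %cpstart(mode=batch,                      /* non-interactive session */
           pdb=/pdbs/mvsperf,               /* your PDB location       */
           mxgsrc=/usr/local/mxg/sourclib,  /* MXG source library      */
           mxglib=/usr/local/mxg/formats);  /* MXG formats catalog     */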

Then point your %CPPROCES macro to your SMF data. This can be done in one of two ways, as sketched below:

• Code a FILENAME statement for the RAWDATA fileref that points to the SMF data file, or
• Specify the data file as the value of the RAWDATA= parameter on the %CPPROCES macro.
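
A minimal sketch of the two techniques follows. The path is a placeholder for your downloaded SMF copy, and any remaining %CPPROCES parameters required for SMF at your site are omitted here - take them from the Macro Reference:

  /* Way 1: code a FILENAME statement for the RAWDATA fileref */
  filename rawdata '/data/smf/smfcopy.bin';

  /* Way 2: name the file directly on the macro call instead  */
  %cpproces(rawdata=/data/smf/smfcopy.bin);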

Details regarding the downloading of the SMF data can be found in MXG newsletter 25, dated March 26, 1994, which can also be found by browsing the NEWSLTRS member of the MXG source library.

NOTE: If you are executing MXG 19.02 or earlier on a UNIX platform, you will receive an error message when the %CPSTART macro tries to execute the VMXGINIT MXG member. Change 19.128 (implemented in 19.03 and above) fixes a problem with the way VMXGINIT references a file called AAAAAAAA.SAS. File references are case-sensitive on UNIX, and all the MXG files have lowercase names. The fix introduced by 19.128 inserts a %LOWCASE macro into the INFILE statement on line 1929 so that it reads:

        INFILE SOURCLIB(%LOWCASE(AAAAAAAA.SAS)) LENGTH=LENGTH COL=COL;


Updated SAP R/3 Collector documentation

This is an updated version of the document describing how to extract SAP R/3 performance data that was first published with IT Service Vision 2.3.

Contents:

Overview

An SAP R/3 system writes its performance data into a file called a statistic file or stat file. CCMS (Computing Center Management System) gets its data out of this file, presummarizing it as it does so. IT Service Vision reads the statistic file directly to get more detailed data than CCMS.

Up until release 3.1h of SAP R/3 it was possible to read the stat file directly, but since release 4.0 the stat file is compressed and can only be read via SAP R/3 RFCs (Remote Function Calls) that decompress it as it is read. These function calls also allow the data to be read directly from within the R/3 system, without depending on an external copy of the stat file.

The IT Service Vision solution for reading SAP R/3 performance data consists of an ABAP program on each R/3 application server and a C program on the IT Service Vision server. The ABAP program wakes up every hour, reads the data, and sends it to a port or socket to which the C program is attached. It also records the timestamp in a control dataset and uses that timestamp as the start time for subsequent extractions. The C program receives the data and stores it in an appropriate location on the IT Service Vision server, ready to be processed overnight. The combination of the ABAP and C programs acts like a daemon process: it registers itself with the R/3 system on the application server by using an RFC destination and a unique ID.

The original support for reading the stat file is still available, but you will need to consider transitioning to the new RFC method, especially if Version 4 of R/3 is to be installed. Because the format of the extracted data files is different, you need to select different parameter values on the %CPPROCES macro to differentiate between them. To process an external copy of the stat file, continue to use TOOLNM=STATBIN. To process the newly extracted format, code TOOLNM=STATRFC. The two file formats are incompatible and cannot be mixed; that is, you cannot process RFC-generated data with TOOLNM=STATBIN, and you cannot process a stat file with TOOLNM=STATRFC. You can, however, continue to use any or all SAP R/3 IT Service Vision tables with either method.

 

Processing the Version 3 stat file

This information, which details the naming conventions necessary for processing stat files, was originally included in the Online Help. It has been copied here for your convenience.

When processing SAP R/3 stat files, some macro variables are required and some are optional. The one that is always required is SAPVER. It specifies the version of SAP R/3 that produced the stat file (the version is not detectable from within the stat file). Different versions of R/3 produce differently formatted stat files, so this variable must be specified. For example, for version 3.1h you would code:

%let SAPVER=3.1h;

One of the optional macro variables is SAPPLAT. It must be used if you are processing data that has been generated (or your SAP R/3 system runs) on a Windows platform. If this is not the case, do not specify it. Code it as follows:

%let SAPPLAT=WIN;

If you want to process multiple stat files (or a directory of raw data files), your file names must conform to the naming conventions specified below. In that case, do not specify the other optional macro variables; if you do, you will receive an error message and processing will stop.

If, within one SAS session, you call the %CPPROCES macro to process one raw data file and then call the %CPPROCES macro again to process multiple raw data files, first clear the following macro variables:

%let SAPHOST=;
%let SAPSYSNM=;
%let SAPSYSNR=;

This is because the SAPHOST, SAPSYSNM, and SAPSYSNR macro variables are global macro variables, and thus remain set until changed, so your values from the first execution might inadvertently be set for the second.

To process a single stat file, use the %CPPROCES macro as follows:

%let SAPVER=3.0c;
%let SAPSYSNM=FINANCE;
%let SAPSYSNR=01;
%let SAPHOST=sapsrv01;
%CPPROCES( sapr3s,
           collectr=sapr3,
           toolnm=statbin,
           rawdata=/usr/tmp/statfile.sap );

To process multiple stat files, use the %CPPROCES macro as follows:

%let SAPVER=3.0c;
%CPPROCES( sapr3s,
           collectr=sapr3,
           toolnm=statbin,
           rawdata=/usr/tmp/statfiles/ );

 

Components of the SAP R/3 RFC interface

The three main components of the interface are:

 

  1. The RFC program runs on the machine which will receive the stat file data. Normally, this will be the IT Service Vision server.
  2. The RFC program registers itself with the RFC gateway with a unique ID, then waits for the ABAP program to start sending the data.
  3. The batch variant of the ABAP program runs on each instance of each R/3 application server. It contains appropriate parameters for the period of the extract and the destination of the data. Each run creates a uniquely-named extract file on the machine running the RFC program.

 

Installation

Installation and registration of these features requires SAP R/3 administrator privileges.

  1. On the SAP R/3 system, set up an RFC Destination using transaction code SM59. Create a new entry under the TCP/IP section and name it as you wish. Select a connection type of "T" for "TCP" and enter the appropriate user and password information. On the next screen, select the "Registration" option and, in the "Program" field that subsequently displays, enter systemname.sasrexec, where systemname is the name of the SAP R/3 system on which you are running. Only one RFC destination definition should be required for each SAP system - the multiple batch variants defined below should share this RFC destination and the program associated with it.
  2. For each application server or instance, create an ABAP program using transaction code SE38. Name the ABAP program ZSASITSV and use the code of the same name found in the "misc" or "sasmisc" subdirectory in the IT Service Vision product image. You can either point the transaction screen to the ABAP code or use cut & paste to enter the code directly.
  3. Create a batch variant of the above ABAP program and enter the parameters for it as follows:
    • G_DEST

      This is required. It should be the name of the RFC Destination (not the program associated with the destination) created in the previous step. The destination is case sensitive - the destination name may have been upper-cased by the SAP R/3 system.
    • G_SDAT, G_STIM, G_EDAT, G_ETIM

    These are optional, since the last hour’s data will be extracted by default. If you create an online variant (for testing purposes) they are required.

    • G_PATH

    This is required. It points to the output directory on the IT Service Vision server where the RFC daemon will be running. Because the performance data contains no information about the SAP R/3 server name, sub-system number (gateway service), or system name, the directory for the data from each application server must be uniquely named, following this convention:

      /preceding-qualifiers/system-name_server-name_system-number/

      For example:

      /sapdata/c11_wktest01_20/   on a unix system
      c:\sapdata\c11_wktest01_20\   on a PC system

      where "c11" is the SAP R/3 system name, "wktest01" is the SAP server name and "20" is the SAP system number (also known as the gateway service number). The files within these directories will be dynamically named dependent on the data file being fetched and the timestamp of the fetch, as follows:

      	pfnorm_20001027_190000_20001027_200000.dat

      NOTE: The value for G_PATH must end in a slash (\ or /) because the generated file name is appended to its value. If a slash is not entered the program will look for a directory that doesn't exist.

    • G_LAST_S

    This is required. It points to the location of a control dataset that stores the timestamp of the last data extraction. The file is created (if it doesn't exist) and is written to by the SAP R/3 system (not the IT Service Vision server). The file should be called sapsave.dat, and the path should again follow the naming convention above. For example:

    /tmp/system-name_server-name_system-number/sapsave.dat         if SAP R/3 is running on unix
    c:\temp\system-name_server-name_system-number\sapsave.dat      if SAP R/3 is running on a PC

    Do not schedule this batch variant yet. Once everything has been set up and tested, you will then want to schedule it to run every hour.

  4. Create a tree of directories on the IT Service Vision server machine to receive the performance data. It is suggested that the parent directory have a name such as "sapdata"; other, higher-level qualifiers can be added as necessary, but underneath it, subdirectories must be created according to the naming convention described above in the G_PATH parameter details. You will subsequently point the RAWDATA= parameter of the %CPPROCES macro at this "sapdata" directory.
  5. Create a directory on the IT Service Vision server machine and copy into it the sasrfcex RFC program from the "misc" or "sasmisc" subdirectory in the IT Service Vision product image. On UNIX platforms, there are multiple copies of the sasrfcex executable present in that directory, each prefixed with a short abbreviation for the operating system variant - "aix_" for AIX, "alx_" for Compaq Tru64, "hpux_" for HP-UX and "sol_" for Solaris. Ensure that the correct executable is copied. On PC platforms, there is only one copy. Be sure it is stored in a location from where it can be executed by the rfcstrt.sh or rfcex.bat script (detailed in the next step). It will need to be either in the PATH list or executed directly by specifying its exact location.
  6. Copy the script called rfcstrt.sh (if IT Service Vision is running on UNIX) or rfcex.bat (on PC) from the "misc" or "sasmisc" subdirectory in the IT Service Vision product image. Store it in a convenient location from where it can be executed to call the sasrfcex program. Then tailor the script: review the path to the sasrfcex program and modify it as necessary. Then modify the execution parameters, as follows:
    • the -a parameter should be set to the name of the program associated with the RFC Destination created above,
    • the -g parameter should be set to the name of the SAP Gateway and
    • the -x parameter should be set to the name of the Gateway service.

    Note that the -a parameter refers to the Program associated with the RFC Destination, not the destination's name itself.

    The SAP Gateway is normally the same as the name of the SAP R/3 Application Server and must appear in the local /etc/hosts file. Its name and definition can be checked by scanning for its entry in the TCP services file on the R/3 Server.

    The Gateway service should already be defined in the SAP R/3 Application Server's TCP services file - it must also be defined in the IT Service Vision server's TCP services file (see the example services entry after the scripts below).

    For testing, there is another parameter, -t, that produces a trace of the RFC program's activity in the directory where the shell script resides. Simply adding the -t parameter creates a file called dev_rfc (on UNIX) or dev_rfc.trc (on PC) in the directory in which the script was started. This trace file is appended to if it already exists. Be sure to clear or delete it before rerunning the script.

    Example for the rfcstrt.sh script on Unix

    #!/bin/sh
    ##########################################################################
    RFC=/opt/rfcsdk
    export RFC
    OLDPATH=$PATH
    PATH=$RFC:$RFC/bin:/dsk/md/tmeset/d18/sasdata/bin:$PATH
    export PATH
    sasrfcex -a wktest01.sasrexec -g wktest01 -x sapgw20
    PATH=$OLDPATH
    export PATH

    Example for the rfcex.bat batchfile on Windows NT

    @set RFC=c:\programme\sappc\sapgui\rfcsdk
    @set OLDPATH=%PATH%
    @set PATH=%RFC%;%RFC%\bin;%PATH%
    sasrfcex.exe -a wktest01.sasrexec -g 194.55.88.19 -x sapgw20
    @set PATH=%OLDPATH%
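
    Example of a gateway service entry in the TCP services file (a sketch: the sapgw20 name matches the -x value above, and the 3320 port follows SAP's usual 33nn instance-number convention - verify against the actual entry on your R/3 server)

    sapgw20    3320/tcp    # SAP gateway service for instance 20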

  7. On the IT Service Vision server, start the rfcstrt.sh or rfcex.bat script. The script loads the RFC program, which will sit and wait for incoming data.

    Then, on the SAP R/3 system, execute the batch variant of the ZSASITSV program. Data should be sent to the RFC and stored in the appropriately-named directories on the IT Service Vision server. You should see messages in the log window indicating success or failure.

    If it succeeded, you should see files prefixed pfnorm in the location specified in the G_PATH parameter. You may also see files prefixed pfbtc - these can be safely ignored and should not take up much disk space, if any.

    If it failed, a return code should appear in the SAP log. The most common failure codes are:

    1 - system failure: the name of the destination was not found, or the program/script file was not running.
    2 - communications failure: an incorrect gateway or gateway service specification, or the SAP R/3 server was not found on the network.

    To correct a failure, stop the script execution, change any parameters necessary and restart the script. Then, on the SAP R/3 server, re-execute the batch variant and re-check the log. More information can be found by browsing the dev_rfc file on the IT Service Vision server, even if the script is still executing.

  8. Once you have successfully transferred data, you should consider scheduling a job to execute the ABAP code using transaction code SM36. Name the job to suit your needs and specify the job class and target host (the name of the SAP R/3 server to run on). Create a step and specify the ZSASITSV program and the name you chose for the batch variant. Then, schedule the job to start at the top of the next hour and select "periodic" to specify that you want it to run hourly thereafter. Be sure to save the job.
  9. Use transaction code SM37 to check that the job scheduled correctly. Once data starts accumulating, you can start IT Service Vision and run your process job to read all the raw data files. This process can be scheduled to run on a regular basis. After each execution, you will want to consider moving the data files out of their current location to avoid them being read in subsequent executions. Refer to the IT Service Vision Macro Reference for details on specifying the %CPPROCES macro - there is a parameter (DUPMODE=) which prevents duplicate data from being inadvertently passed into the PDB. See also below for details on using %CPPROCES.

    PC and UNIX naming convention

    To reiterate, the data must be stored in directories according to the following naming convention:

      /preceding-folders/system-name_server-name_system-number/

    The process macro will then parse the location for these values and store them in the data. For example:

    /sapdata/c11_wktest01_20/   on a unix system
    c:\sapdata\c11_wktest01_20\   on a PC system

    where "c11" is the R/3 system name, "wktest01" is the SAP server name and "20" is the SAP system number (also known as the gateway service number). The files within these directories will be named automatically dependent on the data file being fetched and the timestamp of the fetch.

     

    MVS naming convention

    If the data is to be processed on an OS/390 or MVS system, the data should be uploaded there in binary form and stored in either a sequential file or a PDS. If a sequential file is chosen then the following naming convention must be used:

    leading-qualifiers.c11.wktest01.N20.PFNORMnn

    If a PDS is chosen, then the following naming convention must be used:

    leading-qualifiers.c11.wktest01.N20(PFNORMnn)

    In both cases, "c11" is the R/3 system name, "wktest01" is the SAP server name and "20" is the SAP system number (also known as the gateway service number). Notice that the system number must be prefixed with an N since MVS dataset qualifiers cannot start with a number. Also notice that the last qualifier or PDS member name can be suffixed with any number to distinguish it from other similar files or members.

     

    How to get the data into the IT Service Vision PDB

    The SAP R/3 data must be stored in a location whose name follows the naming conventions listed above. Otherwise, the process macro will not find your data.

    If you are currently processing stat file data and are implementing this new RFC support, you must change the external name of your SAPR3S table in your IT Service Vision data dictionary. The external name is used to match the R/3 table names with those of IT Service Vision. If you do not change it, %CPPROCES will fail. Use the following code to make the change:

    %cpcat;
    cards4;
    update table name=sapr3s
    extname=PFNORM;
    stop;
    ;;;;
    %cpcat(cat=work.ddutl.sapr3.source);
    %cpddutl(entrynam=work.ddutl.sapr3.source);

    In IT Service Vision 2.3, there is a new QuickStart Wizard for SAP R/3 data which can be used to easily load data and produce web galleries. This support also includes five new pre-summarized interval tables which provide better support for handling large data volumes. They are SAPHST, SAPSMT, SAPTSK, SAPSYS, and SAPTRN. For more details, use the online GUI to explore the supplied data dictionary or just run the QuickStart Wizard.

    There are three possibilities when processing data from SAP R/3 using the new RFC method:

    • Process just one data file: IT Service Vision will ascertain the system name, server name (host name) and system number from the file's parent directory.
    • Process one directory of data files: IT Service Vision will use the directory name to ascertain the system name, server name (host name) and system number.
    • Process the parent directory that contains subdirectories of data files: IT Service Vision will open each subdirectory and use its name to ascertain the system name, server name (host name) and system number for each of the files within it.

    To specify any of these data locations, either:

    • Code a FILENAME statement for RAWDATA pointing to the location, or
    • Specify the RAWDATA= parameter on the %CPPROCES macro.

    For PC and UNIX, RAWDATA should refer to the single file, the single directory, or the parent directory. For MVS, RAWDATA should refer to the single file or the PDS name.

    Use the %CPPROCES macro to process the data into the IT Service Vision PDB. Precede the macro with a macro variable setting that specifies the version of SAP R/3 installed on the systems from which the data was extracted. Optionally, add another macro variable setting if (and only if) the data was generated on a Windows platform:

    %let sapver = 4.0a;   /* Version of SAP R/3   */
    %let sapplat = WIN;   /* Specify only if data was indeed */
                          /* generated on a Windows platform */
    %cpproces(rawdata="sapdata/",
              collectr=SAPR3,
              toolnm=STATRFC,
              dupmode=DISCARD);

    For optimum performance, remember to clean out these files and/or directories for subsequent process executions.