IT Service Vision 2.2 Collector Updates


Contents:

Preparing MeasureWare and NTSMF PDBs for use with the QuickStart Reports

NTSMF Conversion

Patrol Support

Enhanced NTSMF Support

Using Generate Source to construct Patrol and NTSMF tables

HP MeasureWare Updates

NTSMF Updates


Preparing MeasureWare and NTSMF PDBs for use with the QuickStart Reports

IT Service Vision 2.2 introduced the QuickStart Wizard to make it easier for customers to set up a new PDB and produce reports on it. The Wizard does this by creating a PDB, with the batch jobs for updating and reporting on it, based on customer responses to questions about their data. Customers with a pre-existing PDB can also use the QuickStart reports on the data in that PDB if they update the PDB as described here.

To prepare for using the QuickStart reports on an existing PDB, perform the following steps:

  1. Back up the PDB.

  2. Compare your shift definitions to the shift definitions used in PDBs created by the QuickStart wizard. The QuickStart shift definitions are:
    Shift 1, WEEKDAY: Mon-Fri, 8am-5pm
    Shift 2, WEEKNIGHT: Mon-Fri, 5pm-8am
    Shift 3, WEEKEND: Sat-Sun

    If your shift definitions are substantively different from these, some of the QuickStart reports may not appear as you expect. If you do not want to alter your existing shift definitions, you will have to update the shift references in those QuickStart reports that depend on them. Use the Manage Reports tab in the ITSV user interface to do this.

  3. (Optional performance changes) In ITSV 2.2, changes were made to the default class lists and age limits used in some tables to improve the performance of the reports. The changes necessary to get some of these performance improvements can result in a great deal of data being altered or removed from an existing PDB, so they are not included in this update process by default.

    To do just the standard updates that allow QuickStart reports to run on your existing PDB, skip to Step 4. To enable the optional performance improvement changes to a table in an existing PDB, review the contents of the catalog entry named PGMLIB.JSWIZCAT.table.SOURCE, where "table" is the name of the table in the existing PDB that you want to run QuickStart reports on. The more invasive performance changes have been commented out of that entry. Search for the string "performance" in the entry and follow the instructions there to enable the change. Save the edited version of the QS update entry in a catalog entry that you create, named PLAYPEN.JSWIZCAT.table.SOURCE. The %QSREADY macro you run in the next step will pick up the edited version of this catalog entry from PLAYPEN.JSWIZCAT instead of PGMLIB.JSWIZCAT.
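    The copy into PLAYPEN.JSWIZCAT can be done with PROC CATALOG before editing, using the same pattern shown elsewhere in this document. The table name NTLGDSK below is an example only; substitute the name of your own table:

```sas
* Copy the QS update entry for one table into your playpen catalog;
* (NTLGDSK is an example table name) ;
proc catalog cat=pgmlib.jswizcat;
copy out=playpen.jswizcat;
select ntlgdsk.source;
quit;
```

    Then edit the copied entry (for example, by typing NOTE PLAYPEN.JSWIZCAT.NTLGDSK.SOURCE on the SAS command line) to enable the performance changes before running %QSREADY.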

  4. Run the %QSREADY macro on the PDB as follows:

            * Allocate the PDB with write access and load the QSREADY macro;
    
            %CPSTART( pdb=/my/pdb, mode=batch, access=write);
            filename qsready catalog "PGMLIB.JSWIZCAT.QSREADY.SOURCE";
            %include QSREADY;
    
            * Allocate the playpen library containing your personalized JSWIZCAT entries;
            * (if you created an entry in this catalog in Step 3);
    
            libname playpen "/my/playpen";
    
            * Report on what updates are needed in your PDB;
    
            %QSREADY ( );
    
            * Update selected tables in the PDB or... ;
    
            %QSREADY ( tablename1 tablename2 ... tablenameN );
    
            * ... or update all tables in the PDB or...;
    
            %QSREADY ( _ALL_ );
    
            * ... Report on what updates are needed;
    
            %QSREADY ( );
    

    This will update your PDB and make it ready for QuickStart reports.

  5. Run the QuickStart Wizard from the ITSV user interface, specifying the existing PDB when asked for the new PDB pathname. This will create the QuickStart batch jobs and web-based report framework in directory PDB/qs but will not alter the data in existing PDB tables. Though the wizard will create both a process/reduce job and a report job, you will use only the report job.

  6. Locate and run the QuickStart batch report job according to the instructions shown in the README file created by the wizard (pathname PDB/qs/cntl/README). This job will create web-based reports in the qs directory for the data in your PDB.

  7. If a QuickStart report references a variable that is not in your PDB, or for which your collector has not been supplying a value, the report on that variable will not, of course, be produced by the QuickStart report batch job. To produce the report, you need to add the variable to the PDB (do this via the ITSV online user interface or the %CPDDUTL macro) and update your collector to start collecting the metric (do this by including the name of the metric in the report parm file used by the MeasureWare extract command). Once your PDB is populated with data for that metric and the variable is marked "KEPT" in your PDB, the QuickStart report job will produce the report.
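    As a sketch of the %CPDDUTL route, the %CPCAT pattern used later in this document can mark a variable as kept. The table and variable names here (NTLGDSK and WQLNGTH) are examples only, and the exact UPDATE VARIABLE keywords are an assumption to verify against the %CPDDUTL documentation:

```sas
* Mark an existing variable as kept so its QuickStart report is produced;
* (example names only - verify the statement keywords in the %CPDDUTL docs);
%cpcat;
cards4;
update variable name=wqlngth table=ntlgdsk kept=yes;
;;;;
%cpcat(cat=work.cpddutl.keep.source);
%cpddutl(entrynam=work.cpddutl.keep.source);
```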

  8. If you decide you want to add ALL the MeasureWare metrics used in the QuickStart PDB to your PDB, you can perform the steps described in Step 7 for each metric, or do the following:

    • Include the MeasureWare metrics named in !SASROOT/misc/cpe/reptfile (on Unix) or !SASROOT/addon/cpe/sasmisc/reptfile (on Windows) in your MeasureWare extract report parm file. Some of the metrics listed in this file probably duplicate metrics already in your extract report parm file, so use care in merging this list of metrics with your existing list.

    • Define the metrics in the new, merged extract report parm file to your PDB using the SAS program in !SASROOT/misc/cpe/rep2dic.sas (on Unix) or !SASROOT/addon/cpe/sasmisc/rep2dic.sas (on Windows).

    • Use the new, merged list with extract for your next process run. This step updates your PDB based on the contents of a MeasureWare extract report parm file. To do the reverse (create a MeasureWare extract report parm file for the variables that are KEPT=YES in your PDB), you can use the program in !SASROOT/misc/cpe/dic2rep.sas (on Unix) or !SASROOT/addon/cpe/sasmisc/dic2rep.sas (on Windows). Though this program is probably not of much use to you in preparing your PDB for the QuickStart reports, it is a useful program in some other situations.
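The rep2dic.sas step above can be run in a batch SAS session after allocating the PDB with write access. How rep2dic.sas locates your merged report parm file is not shown here, so review the program's header comments for its expected inputs (the PDB pathname below is a placeholder):

```sas
* Allocate the PDB with write access;
%cpstart(pdb=/my/pdb, mode=batch, access=write);

* Define the metrics in the merged extract report parm file to the PDB;
* (see the header comments in rep2dic.sas for its expected inputs);
%include '!SASROOT/misc/cpe/rep2dic.sas';
```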


NTSMF Conversion

Important Note to existing NTSMF customers

If you are a new customer running IT Service Vision Release 2.2, or you are upgrading to this release and have no existing NTSMF tables defined, then you can ignore this section. Also, this conversion does not apply to customers who stage their NTSMF data on MVS using the MXG tool.

It is recommended that you undertake the necessary changes to use the Enhanced NTSMF support. The existing IT Service Vision Release 2.1 support was static, in that it expected the NTSMF object record formats to remain constant, with new counters being added only to the end of each record. Unfortunately, this has not been the case: depending on the release of the software populating the NTSMF data record for a particular object, counters have been moved, deleted, inserted, and/or renamed. This is why Enhanced NTSMF support requires that your NTSMF logs contain NTSMF Discovery records.

If you have been using the NTSMF support provided in IT Service Vision 2.1 or earlier, you will have to undertake a conversion to take advantage of the new features described above. The existing NTSMF support is included in IT Service Vision Release 2.2, so your existing processes and reports will continue to work without the conversion. Please see the conversion checklist for more details.

If you currently process your NTSMF data from a file that is a concatenation of several separate log files, it is recommended that you now maintain all your NTSMF log files as individual files.

Overview

This document walks through the process of converting IT Service Vision V2.1 NTSMF tables to V2.2.

 

New Features provided in IT Service Vision V2.2 for NTSMF

NTASPRT - Windows NT Ras Port
NTASTTL - Windows NT RAS Total
NTBFCTR - Windows NT Benchmark Factory
NTCMNGR - Windows NT Caching Manager
NTCONF - NTSMF Configuration table
NTDTBS -  Windows NT Database
NTFLTRN - Windows NT Packet Filtering
NTFTPSV - Windows NT FTP Service
NTIMAGE - Windows NT Image
NTMSEES - Windows NT MSExchangeES
NTSSRVR - Windows NT WINS Server
NTTDTLS - Windows NT Thread Details
NTWSRVC - Windows NT Web Service
NTRADSR - Windows NT RADIUS Server
NTARTMS - Windows NT SNA 3270 Response times

 

Do I need to run this conversion?

 

Q1. Do you have any NTSMF tables defined in your PDB (with a COLLECTOR value of WINNT)?

Yes - You need to convert; go to Question 2.
No  - No need to convert this PDB, as no NTSMF tables have been defined.

Q2. If you have updated, or are about to update, any software on your NT servers, it is possible that it will produce different NTSMF record formats than previous software versions did. These new-format NTSMF records may not be supported by the ITSV 2.1 staging code. Have you already upgraded, or are you likely to upgrade, any software on your NT servers?

Yes - You need to convert; go to Question 3.
No  - No need to convert this PDB.

Q3. Do you want to keep the existing data in your NTSMF tables?

Yes - You need to run the conversion.
No  - Delete all your NTSMF tables (with a COLLECTOR value of WINNT) and add them again using IT Service Vision 2.2 to obtain the new definitions.

 

How to convert NTSMF tables in a PDB

This conversion process will have to be run for each PDB that contains NTSMF tables that were added using IT Service Vision 2.1 or earlier. The actual conversion process can be run interactively or in batch.

  1. Install IT Service Vision 2.2

    You will still be able to process your existing NTSMF tables as usual under this version. The conversion process requires that you are at the 2.2 release. IT Service Vision Release 2.2 contains both the original and the enhanced staging code. One of the updates made to the original staging code is the ability to handle NTSMF discovery records.
  2. Before moving on to the next stage, ensure all NTSMF logs are recording NTSMF Discovery records.

    For details on activating NTSMF Discovery records refer to your NTSMF documentation.
  3. Backup Your PDB prior to conversion

    The conversion process will potentially make a large number of updates to your PDB's data dictionary, so we recommend that a full PDB backup be taken.

    Run conversion in UPDATE=N mode.

    Running in this mode performs no updates, but it does produce a report of changes and highlights any potential conversion issues before the dictionary is updated. At this point you do not need write access to the PDB. If you already have IT Service Vision active, there is no need to run the %cpstart macro again.

    %cpstart(pdb=pdb-name,root=root-location,access=readonly,mode=batch,_rc=cpstrc);
    %put 'CPSTART Return code is ' &cpstrc;
    filename tmp catalog 'pgmlib.ntsmf.convert.source';
    %include tmp;
    %cpntcnv(update=N);
    filename tmp clear;

    Once the above has run, a report of any ERRORS or WARNINGS in the conversion is written to the SAS log. Please review this report.
  4. Run conversion in UPDATE=Y mode.

    This process will require that the PDB be allocated with write access, and it will run longer than the UPDATE=N pass due to the updates being performed.

    %cpstart(pdb=pdb-name,root=root-location,access=write,mode=batch,_rc=cpstrc);
    %put 'CPSTART Return code is ' &cpstrc;
    filename tmp catalog 'pgmlib.ntsmf.convert.source';
    %include tmp;
    %cpntcnv(update=Y);
    filename tmp clear;
  5. Update existing CPPROCES code.

    If you have defined batch files for processing your NTSMF data, you will have to update the COLLECTR= and TOOLNM= parameters to ensure that the enhanced processing code is used. These updates must be performed, as the old processing code will not work with the new definitions.

    In addition to these parameter changes, we recommend that you update the RAWDATA parameter to point to a directory rather than use wildcards. IT Service Vision will automatically process all the NTSMF logs in that directory. For example :-

    Before Conversion :-

    %CPPROCES(,COLLECTR=WINNT
              ,RAWDATA=E:\NTSMF\CURRENT\*.SMF
              ,TOOLNM=NTSMF);

    After Conversion :-

    %CPPROCES(,COLLECTR=NTSMF
              ,RAWDATA=E:\NTSMF\CURRENT\
              ,TOOLNM=SASDS);

    Note: After conversion you will be able to add the dupmode= parameter to your process macro.
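    For example, the converted call with the dupmode= parameter added might look like this (the pathname is the one used above, and DUPMODE=DISCARD is one possible setting, as shown later in the Patrol example):

```sas
* Process all NTSMF logs in the directory, discarding duplicate input;
%CPPROCES(,COLLECTR=NTSMF
          ,RAWDATA=E:\NTSMF\CURRENT\
          ,TOOLNM=SASDS
          ,DUPMODE=DISCARD);
```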
  6. Update existing CPPROCES code to use input filtering (optional).

    If you decide to use Input Filtering, we recommend that you do the following before running your first %CPPROCES :-
    1. Bring up your NTSMF PDB interactively.
    2. Copy pgmlib.ntsmf.cpdupchk.source to admin.ntsmf.cpdupchk.source.
      To do this, submit the following code from your program editor.

      proc catalog cat=pgmlib.ntsmf;
      copy out=admin.ntsmf;
      select cpdupchk.source;
      quit;

    3. Review and update, if necessary, the parameters for the %CPDUPCHK macro invocation contained in admin.ntsmf.cpdupchk.source. To do this, type NOTE ADMIN.NTSMF.CPDUPCHK.SOURCE on the command line (or in the command box), make the necessary updates, and SAVE and END out of the notepad window.

If you do not do the above, the first run of %cpproces with Input Filtering active will copy the default %CPDUPCHK invocation into the admin library automatically, and you will receive the following warning message recommending that you review the %CPDUPCHK parameter values.

WARNING: *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING
WARNING: DO NOT OVERLOOK THIS IMPORTANT WARNING - IT WILL NOT APPEAR AGAIN.
WARNING: A sample invocation of the %CPDUPCHK macro has been copied to
your ADMIN library. You should review its contents before the
next execution. To do so, start IT Service Vision with this
PDB in update mode and type "NOTE ADMIN.NTSMF.CPDUPCHK.SOURCE"
on the SAS command line. The only parameter values you need to
review and probably change are the RANGES=, SYSTEMS=, and KEEP=
settings. Review the comments therein for guidance and the
documentation on input filtering for more details.
WARNING: *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING

If you require more information on Input Filtering please refer to the How To/Macro section from the online help for IT Service Vision.

       

Explanation of Output from Conversion

You may notice that variables such as INSTANC, PARENT and SYSTEM have been renamed. INSTANC and PARENT have been renamed to be more informative. For example, for the Logical Disk object (NTLGDSK) the following changes have been made :-

INSTANC               =>          LGCLDSK
PARENT                =>          PHSCDSK

SYSTEM has been renamed to MACHINE to ensure that it matches all the other ITSV tables on all other platforms.

There is no need to update your existing reports as the old variable names will still exist in the tables as formula variables.

Below is a list of all the variables renamed, by table name.

 

                   TABLENM    TABLE LABEL                      OLD NAME   NEW NAME

                   NTCNCT0    NBT Connection                   INSTANC    BTCNCTN
                   NTCNCTN    MSExchangeMTA Connections        INSTANC    CNNCTNS
                   NTINTRF    Network Interface                INSTANC    NINTRFC
                   NTIRSRC    NetBEUI                          PARENT     NETBEUI
                              NetBEUI Resource                 INSTANC    EUIRSRC
                   NTLGDSK    PhysicalDisk                     PARENT     PHSCDSK
                              LogicalDisk                      INSTANC    LGCLDSK
                   NTMSEDB    MSExchangeDB                     INSTANC    MSEXCDB
                   NTNBEUI    NetBEUI                          INSTANC    NETBEUI
                   NTNBIOS    NWLink NetBIOS                   INSTANC    WLNBIOS
                   NTPASPC    Process Address Space            INSTANC    PRADSPC
                   NTPCMTA    MSExchangePCMTA                  INSTANC    MSPCMTA
                   NTPGNFL    Paging File                      INSTANC    PGNGFL
                   NTPHDSK    PhysicalDisk                     INSTANC    PHSCDSK
                   NTPNTM     Pentium                          INSTANC    PENTIUM
                   NTPRCS     Process                          INSTANC    PROCESS
                   NTPRCSR    Processor                        INSTANC    PRCSR
                   NTPRTCL    MSExchange Internet Protocols    INSTANC    IPRTCLS
                   NTQLSLG    SQLServer-Log                    INSTANC    SQLSRLG
                   NTSGMNT    Network Segment                  INSTANC    NTSGMNT
                   NTSRWQS    Server Work Queues               INSTANC    SRVWRQS
                   NTSUSRS    SQLServer-Users                  INSTANC    QLSUSRS
                   NTTHRD     Process                          PARENT     PROCESS
                              Thread                           INSTANC    THREAD
                   NTWLIPX    NWLink IPX                       INSTANC    NWLNIPX
                   NTWLSPX    NWLink SPX                       INSTANC    NWLNSPX
  

The following is a sample of the output from converting the Logical Disk table from 2.1 to 2.2 format (I have not shown all updates, to keep the list short).

A similar report is produced for each NTSMF table that is converted. Even if no dictionary attributes change for a variable, it is still listed in the report.

The column 'Dictionary Attribute' describes the metadata that is being changed. The remaining three columns list the existing value in the DICTLIB, the new value in IT Service Vision Release 2.2, and the value that will be (or has been) applied to the dictionary. If you have made any updates to the NTSMF tables prior to conversion, these changes will be retained in the converted dictionary.

The fields that identify the different types of stats (default, day, week, month, year) are reported for completeness only and do not need to be understood. All that we are concerned with is that the string under UPDATED VALUE is a superset of the strings in EXISTING VALUE and VALUE IN VERSION 2.2.

The last line of the report details whether SYSTEM, INSTANC and/or PARENT variables have been renamed.

Report for Table name : NTLGDSK
-------------------------------

Variable : AVDBWRT
-------------------

DICTIONARY ATTRIBUTE    Existing Value                VALUE IN VERSION 2.2          UPDATED VALUE

Format                                                BEST12.2                      BEST12.2
Subject                                               N/A                           N/A


Variable : DATETIME
-------------------

DICTIONARY ATTRIBUTE    Existing Value                VALUE IN VERSION 2.2          UPDATED VALUE


Variable : DOMAIN
-------------------

DICTIONARY ATTRIBUTE    Existing Value                VALUE IN VERSION 2.2          UPDATED VALUE

Label                   Domain Name                   Domain name                   Domain name
Description             Domain Name                   LogicalDisk: Domain name      LogicalDisk: Domain name
Length                  32                            200                           200
Format                  $CHAR.
Subject                                               N/A                           N/A


Variable : DURATION
-------------------

DICTIONARY ATTRIBUTE    Existing Value                VALUE IN VERSION 2.2          UPDATED VALUE


Variable : FRMGBTS
-------------------

DICTIONARY ATTRIBUTE    Existing Value                VALUE IN VERSION 2.2          UPDATED VALUE

Format                                                BEST12.2                      BEST12.2
Subject                                               N/A                           N/A

Variable : HOUR
-------------------

DICTIONARY ATTRIBUTE    Existing Value                VALUE IN VERSION 2.2          UPDATED VALUE

Label                   HOUR                          Hour of day                   Hour of day
Description             Hour is a default variable    Hour_of_day                   Hour_of_day
Length                  3                             4                             4
Format                  BEST12.                       2.                            2.
Subject                                               N/A                           N/A

Variable : INSTANC
-------------------

DICTIONARY ATTRIBUTE    Existing Value                VALUE IN VERSION 2.2          UPDATED VALUE

External Name           Instance                      LOGICALDISK                   LOGICALDISK
Label                   Instance                      LogicalDisk                   LogicalDisk
Description             Object Instance               LogicalDisk: LogicalDisk      LogicalDisk: LogicalDisk
Length                  40                            200                           200
Subject                                               N/A                           N/A

Variable : LSTPDATE
-------------------

DICTIONARY ATTRIBUTE    Existing Value                VALUE IN VERSION 2.2          UPDATED VALUE


Variable : PARENT
-------------------

DICTIONARY ATTRIBUTE    Existing Value                VALUE IN VERSION 2.2          UPDATED VALUE

External Name           Parent                        PHYSICALDISK                  PHYSICALDISK
Label                   Parent                        PhysicalDisk                  PhysicalDisk
Description             Object Parent                 LogicalDisk: PhysicalDisk     LogicalDisk: PhysicalDisk
Length                  40                            200                           200
Subject                                               N/A                           N/A


Variable : WQLNGTH
-------------------

DICTIONARY ATTRIBUTE    Existing Value                VALUE IN VERSION 2.2          UPDATED VALUE

Format                                                BEST12.2                      BEST12.2
Subject                                               N/A                           N/A
Variables PARENT and INSTANC are about to be renamed to  PHSCDSK  and LGCLDSK
Variable SYSTEM is about to be renamed to MACHINE.

Creating formula variables for SYSTEM, INSTANC and PARENT as
necessary to ensure that existing reports will work correctly.
  

Variable interpretation types are used to determine how the data is summarized in reduction. Upon review of the existing variable interpretation types, many have been updated to reflect their variables more accurately. For the majority of changes there is little impact in changing the interpretation type. However, the following message will appear when a variable's interpretation type has been changed and the information required to correctly re-summarize the historical data is not stored in the PDB; all new data added to the PDB will be summarized correctly.

NB: This variables interpretation type has been updated to provide more
    meaningful information at the summary levels. Although all new data
    will be summarized correctly the use of the old summarized data
    should be used with caution.      

Patrol Support

Features

Overview

This document walks through the process of recording BMC's Patrol data into IT Service Vision. The following stages will be covered, along with working examples.

Prerequisites

BMC Patrol must be installed and collecting data in the UNIX and/or Windows NT environment. The data read into IT Service Vision comes from parameter history data maintained by the PATROL Agent. Refer to your PATROL documentation for more details. The extracted data can come from the Patrol History Loader KM, if installed, or be extracted directly from the Patrol agent using the dump_hist command; both formats are recognized by IT Service Vision.

Patrol allows each metric to be sampled at its own interval, typically 30 seconds, 1 minute, 5 minutes, and so on. This interval can be set by the Patrol administrator. IT Service Vision requires that the sample rates be specified on minute boundaries; the only exception is that 30-second sample rates are also recognized. (Please refer to Notes on Patrol data and its summarization into IT Service Vision.)

IT Service Vision Server must be installed at Release 2.2 or higher on MVS, Unix or Windows NT Server.

Data Extraction from PATROL

There are two approaches to collecting the PATROL history data to a central location.

  1. PATROL History Knowledge Module - This KM organizes the collection of the history data and ensures that it is sent to a central server from where it can be extracted. Please refer to your BMC Patrol documentation on the Patrol History Loader KM for further information regarding this method.
  2. dump_hist.exe - This command extracts the same PATROL history data as option 1 but does not manage the transfer of the data to a central location. This option is useful if you prefer writing your own scripts to control the extraction and transfer of the data to a central location.

Although these two methods produce slightly different output, either or both can be processed by IT Service Vision.

It is the PATROL Operator Console that retrieves the historical data stored by the Agent, and the dump_hist line command that dumps the parameter history data maintained by the PATROL Agents. The PATROL Agent Reference Manual contains more detailed information on the dump_hist command.

The following command dumps one day's parameter history data to a file using the start and end switches for the dump_hist command, the format of which is mmddhhmm[yy]. Additional switches can be specified that further restrict the amount of data that is extracted :-

dump_hist -s 0723000098 -e 0723235998 > filename
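If you schedule the extract daily, the -s and -e values for the previous day can be generated in SAS. This sketch assumes the month-day-hour-minute-year layout used in the example above and simply writes the command to the log:

```sas
* Build dump_hist -s and -e values for yesterday (mmddhhmmyy);
data _null_;
   d = today() - 1;
   yy = substr(put(year(d), z4.), 3, 2);
   start = put(month(d), z2.) || put(day(d), z2.) || '0000' || yy;
   end   = put(month(d), z2.) || put(day(d), z2.) || '2359' || yy;
   put 'dump_hist -s ' start ' -e ' end '> filename';
run;
```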

The following is a small example of the format of the text file created by the above dump_hist command. This is the file that will be passed to %CPPROCES.

nightingale/NT_CPU.CPU_0/CPUprcrUserTimePercent
    Thu Jul 23 10:00:57 1998 26.981
    Thu Jul 23 10:01:58 1998 5.35963
    Thu Jul 23 10:02:58 1998 0.598205
    Thu Jul 23 10:03:58 1998 0.333915
nightingale/NT_CPU.CPU_0/CPUprcrPrivTimePercent
    Thu Jul 23 10:00:57 1998 61.0279
    Thu Jul 23 10:01:58 1998 1.20528
    Thu Jul 23 10:02:58 1998 1.56053
    Thu Jul 23 10:03:58 1998 1.05312
nightingale/NT_SYSTEM.NT_SYSTEM/SYSsysTotalProcTimePercent
    Thu Jul 23 10:00:57 1998 88.013
    Thu Jul 23 10:01:58 1998 6.56211
    Thu Jul 23 10:02:58 1998 2.1812
    Thu Jul 23 10:03:58 1998 1.36592

Creating a PDB and adding tables

%cpstart(pdb=pdb-name,root=root-location,access=write,mode=batch,_rc=cpstrc);

%put 'CPSTART Return Code is ' &cpstrc;

%cpcat;
cards4;
add table name=ptntcpu;
add table name=ptlgdsk;
;;;;
%cpcat(cat=work.cpddutl.add.source);
%cpddutl(entrynam=work.cpddutl.add.source);

For MVS use the following %cpstart.

%cpstart(pdb=pdb-name,
root=root-location,
disp=new,
mode=batch,
_rc=cpstrc);

 

Once the tables have been added, dictionary characteristics (age limits, variables kept status) can be modified either using the interactive interface or the %CPDDUTL macro.

 

Processing and Reducing Data into the PDB

The dumped PATROL data can be processed on any platform (MVS, UNIX, or Windows NT Server), irrespective of the platform on which it originated. Once the data is moved to the appropriate platform, the processing is identical.

Transferring the Data

The text file containing the dumped history data should be transferred to the platform on which it will be processed. If using FTP ensure that the data is transferred in ASCII mode.

Note for MVS: Typically, PATROL data has variable-length records; however, they are assumed not to exceed 200 bytes in length, so allocate an appropriate MVS file with an LRECL of 200.

Input Filtering

If you decide to use Input Filtering, we recommend that you do the following before running your first %CPPROCES :-

  1. Bring up your PATROL PDB interactively.
  2. Copy pgmlib.patrol.cpdupchk.source to admin.patrol.cpdupchk.source.
    To do this, submit the following code from your program editor.

    proc catalog cat=pgmlib.patrol;
    copy out=admin.patrol;
    select cpdupchk.source;
    quit;

  3. Review and update, if necessary, the parameters for the %CPDUPCHK macro invocation contained in admin.patrol.cpdupchk.source. To do this, type NOTE ADMIN.PATROL.CPDUPCHK.SOURCE on the command line (or in the command box), make the necessary updates, and SAVE and END out of the notepad window.

If you do not do the above, the first run of %cpproces with Input Filtering active will copy the default %CPDUPCHK invocation into the admin library automatically, and you will receive the following warning message recommending that you review the %CPDUPCHK parameter values.

WARNING: *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING
WARNING: DO NOT OVERLOOK THIS IMPORTANT WARNING - IT WILL NOT APPEAR AGAIN.
WARNING: A sample invocation of the %CPDUPCHK macro has been copied to
your ADMIN library. You should review its contents before the
next execution. To do so, start IT Service Vision with this
PDB in update mode and type "NOTE ADMIN.PATROL.CPDUPCHK.SOURCE"
on the SAS command line. The only parameter values you need to
review and probably change are the RANGES=, SYSTEMS=, and KEEP=
settings. Review the comments therein for guidance and the
documentation on input filtering for more details.
WARNING: *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING

If you require more information on Input Filtering please refer to the How To/Macro section from the online help for IT Service Vision.

Processing and Reducing the Data

The following process example should be run after a %CPSTART. For the purpose of this example I have included input filtering in this process run.

%cpproces(,collectr=patrol,rawdata=filename,toolnm=sasds,dupmode=discard,_rc=cpprc);

%put 'CPPROCES return code is ' &cpprc;

%CPREDUCE(,_RC=cprrc);

%put 'CPREDUCE return code is ' &cprrc;

 

In the SAS log you can expect to see the following :-

+--------------------------------------------------------------------+
| IT Service Vision input data duplication check report              |
| =====================================================              |
|                                                                    |
| NOTE: All input records for new machine nightingale will be added. |
|                                                                    |
+--------------------------------------------------------------------+

The above message will only appear when Input Filtering is active. The message shown will depend on whether the input data is considered duplicate.

==========================================================
The following objects were not kept to be processed but
existed in the input file.

Object Name = NT_CACHE
Object Name = NT_MEMORY
Object Name = NT_NETWORK
Object Name = NT_PAGEFILE
Object Name = NT_PHYSICAL_DISKS
Object Name = NT_SECURITY
Object Name = NT_SERVER
Object Name = NT_SYSTEM

A record is not processed for the following reasons :-

1 - The ITSV table for this object was not specified
in the PROCESS macro.
2 - The ITSV table for this object is marked KEPT=N
in the PDB.
3 - The object is a new object for which a table
definition needs to be built (see GENERATE SOURCE).
==========================================================

This report is always produced when processing PATROL data. It reports the objects that were found in the raw data but were not processed. If an object appears in this report whose data you want to collect, add the appropriate table to the PDB. If you do not want to keep the data for an object, you can update your collection process to no longer keep the history data. If an object appears for which there is no supplied table, one can be constructed using the GENERATE SOURCE function of the %CPDDUTL macro with an INTYPE= parameter of PATROL.

Notes on Patrol data and its summarization into IT Service Vision

Patrol history data presents several issues with regard to processing the data into a historical PDB.

  1. Different sample rates for each metric (see note in Prerequisites).
  2. DATETIME stamps of samples that are not exactly aligned.

Different Sample Rates for each Metric

Two metrics 'A' and 'B' need not be sampled at the same rate: 'A' may be sampled at 1 minute intervals and 'B' at 5 minute intervals. Combining these two metrics into the same observation in the PDB would be invalid, because each value should eventually be weighted by its duration (depending on the interpretation type of the metric). To resolve this problem, the staging code of IT Service Vision includes a variable called DURGRP in each Patrol table. DURGRP is a string that represents the duration group to which a metric belongs; in this example, 'A', which is sampled every minute, is included in an observation with a DURGRP value of 60 (60 seconds), and 'B' in an observation with a DURGRP of 300 (300 seconds).

The DURGRP variable is only used at the DETAIL level in the BY list to ensure that the metrics are reduced and summarized by their respective DURATION value (assuming that they are weighted by DURATION).

At first, Patrol data in IT Service Vision may appear peculiar, as numerous null values can appear in each observation. The number of DURGRPs and null values depends on the number of different sample rates applied to the metrics that belong to the same table.
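
As an illustration (the values shown here are hypothetical), detail-level observations for the example above would carry each metric only in observations for its own duration group, with the other metric's column left null:

    DATETIME            DURGRP    A        B
    10:00:00            60        26.9     .
    10:01:00            60         5.4     .
    10:00:00            300        .      61.0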

DATETIME Stamps of Samples that are not exactly aligned

In this example, two metrics 'A' and 'B' are both sampled at 1 minute intervals. From the example history data below you can see that the first sample occurred at the same time for both metrics; for the second sample, however, the datetime stamps are out by a second, with 'B' being sampled a second earlier than 'A'. The first sample for each metric will be combined into a single observation, because the duration and datetime stamps are the same, but this is not the case for the second sample.

nightingale/NT_CPU.CPU_0/A
    Thu Jul 23 10:00:57 1998 26.981
    Thu Jul 23 10:01:58 1998 5.35963
nightingale/NT_CPU.CPU_0/B
    Thu Jul 23 10:00:57 1998 61.0279
    Thu Jul 23 10:01:57 1998 1.20528

During the staging of the raw data, IT Service Vision detects that this second sample has closely related datetime values and collapses the data into one observation. As a result, the data in the PDB table is much less sparse; however, the DATETIME and DURATION values will be near approximations.
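
Using the sample values above (which of the two datetime stamps survives in the combined observation is an approximation), the effect of this collapsing on the second sample can be sketched as:

    Without collapsing (sparse):

    DATETIME              A          B
    23JUL98:10:01:58      5.35963    .
    23JUL98:10:01:57      .          1.20528

    With collapsing:

    DATETIME              A          B
    23JUL98:10:01:58      5.35963    1.20528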


Enhanced NTSMF Support

Features

Important Note to existing NTSMF customers

If you are a new customer running IT Service Vision Release 2.2 or you are upgrading to this release and have no existing NTSMF tables defined then you can ignore this section. Also, this conversion does not apply to customers who stage their NTSMF data on MVS using the MXG tool.

If none of the above applies to you then please refer to the NTSMF Conversion documentation.

Overview

This document walks through the process of loading Demand Technology's NTSMF data into IT Service Vision. The following stages are covered, along with working examples.

Prerequisites

Demand Technology's NTSMF must be installed and collecting data. It is recommended that you have installed at least version 2.1.9, although earlier releases are supported (see NTSMF Data Requirements below).

IT Service Vision Server must be installed at Release 2.2 on MVS, Unix or Windows NT Server.

NTSMF Data Requirements

Creating a PDB and adding tables

  There are three approaches you can use when starting out with NTSMF data.

%cpstart(pdb=pdb-name,root=root-location,access=write,mode=batch,_rc=cpstrc);

%put 'CPSTART Return Code is ' &cpstrc;

%cpcat;
cards4;
add table name=ntcache;
add table name=ntlgdsk;
;;;;
%cpcat(cat=work.cpddutl.add.source);
%cpddutl(entrynam=work.cpddutl.add.source);

For MVS use the following %cpstart.

%cpstart(pdb=pdb-name,
root=root-location,
disp=new,
mode=batch,
_rc=cpstrc);

Once the tables have been added, dictionary characteristics (age limits, variables kept status) can be modified either using the interactive interface or the %CPDDUTL macro.

Processing and Reducing Data into the PDB

The NTSMF data can be processed on any platform (MVS, UNIX, or Windows NT Server), irrespective of which platform it originated on. Once the data is moved to the appropriate platform, the processing is identical.

Transferring the Data

For the Enhanced NTSMF support provided in IT Service Vision Release 2.2, it is recommended that each NTSMF log file is maintained as a separate file; that is, we do not recommend concatenating them together.

Transfer the file to the appropriate platform so that it retains its text format.

Unix and PC: Place all the NTSMF log files in a single directory which will be pointed to by the %cpproces macro.

MVS: Place each log file in its own PDS member with the following DCB attributes DSORG=PO,RECFM=VB,LRECL=32756,BLKSIZE=32760. By specifying the PDS name in the %cpproces macro, each member will be picked up and processed.
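
As a sketch only (the host name, dataset names, and directory-block value here are hypothetical, and a standard MVS FTP server is assumed), the DCB attributes can be supplied during the transfer with FTP SITE commands so that the PDS is allocated correctly:

    ftp mvshost
    ftp> ascii
    ftp> quote site dsorg=po recfm=vb lrecl=32756 blksize=32760 directory=20
    ftp> put ntsmf_node1.smf 'YOURHLQ.NTSMF.LOGS(LOG1)'
    ftp> quit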

Input Filtering

If you decide to use Input Filtering, we recommend that you do the following before running your first %CPPROCES :-

  1. Bring up your NTSMF PDB interactively.
  2. Copy pgmlib.ntsmf.cpdupchk.source to admin.ntsmf.cpdupchk.source.
    To do this, submit the following code from your program editor.

    proc catalog cat=pgmlib.ntsmf;
    copy out=admin.ntsmf;
    select cpdupchk.source;
    quit;

  3. Review, and update if necessary, the following parameters for the %CPDUPCHK macro invocation contained in admin.ntsmf.cpdupchk.source. To do this, type NOTE ADMIN.NTSMF.CPDUPCHK.SOURCE on the command line (or in the command box), make the necessary updates, and SAVE and END out of the notepad window.

If you do not do the above, the first run of %cpproces with Input Filtering active will copy the default %CPDUPCHK invocation into the admin library automatically, and you will receive the following warning message recommending that you review the %CPDUPCHK parameter values.

WARNING: *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING
WARNING: DO NOT OVERLOOK THIS IMPORTANT WARNING - IT WILL NOT APPEAR AGAIN.
WARNING: A sample invocation of the %CPDUPCHK macro has been copied to
your ADMIN library. You should review its contents before the
next execution. To do so, start IT Service Vision with this
PDB in update mode and type "NOTE ADMIN.NTSMF.CPDUPCHK.SOURCE"
on the SAS command line. The only parameter values you need to
review and probably change are the RANGES=, SYSTEMS=, and KEEP=
settings. Review the comments therein for guidance and the
documentation on input filtering for more details.
WARNING: *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING

If you require more information on Input Filtering please refer to the How To/Macro section from the online help for IT Service Vision.

Processing and Reducing the Data

The following process example should be run after a %CPSTART. For the purposes of this example, input filtering is included in this process run. The rawdata parameter on the %CPPROCES macro should point to a directory containing the NTSMF logs to be processed (on Unix and PC) or a PDS (on MVS).

Note: The new revised NTSMF support uses a collector value of NTSMF and a toolnm of SASDS.

%cpproces(,collectr=ntsmf,rawdata=filename,toolnm=sasds,dupmode=discard,_rc=cpprc);

%put 'CPPROCES return code is ' &cpprc;

%CPREDUCE(,_RC=cprrc);

%put 'CPREDUCE return code is ' &cprrc;

 

In the SAS log you can expect to see the following :-

+--------------------------------------------------------------------+
| IT Service Vision input data duplication check report              |
| =====================================================              |
|                                                                    |
| NOTE: All input records for new machine nightingale will be added. |
|                                                                    |
+--------------------------------------------------------------------+

The above message will only appear when Input Filtering is active. The message shown will depend on whether the input data is considered duplicate.

==========================================================
The following objects were not kept to be processed but
existed in the input file.

Object Name = FTP Server
Object Name = MSExchangeWEB
Object Name = Memory
Object Name = Paging File
Object Name = PhysicalDisk
Object Name = Process
Object Name = Server
Object Name = Server Work Queues
Object Name = System
Object Name = WINS Server

A record is not processed for the following reasons :-

1 - The ITSV table for this object was not specified
in the PROCESS macro.
2 - The ITSV table for this object is marked KEPT=N
in the PDB.
3 - The object is a new object for which a table
definition needs to be built (see GENERATE SOURCE).
==========================================================

This report is always produced when processing NTSMF data. It reports the objects that were found in the raw data but were not processed. If an object appears in this report whose data you want to collect, add the appropriate table to the PDB. If you do not want to keep the data for an object, you can update your collection process to no longer keep the history data. If an object appears for which there is no supplied table, one can be constructed using the GENERATE SOURCE function of the %CPDDUTL macro with an INTYPE= parameter of NTSMF.


Using Generate Source to construct Patrol and NTSMF tables

Collectors such as Demand Technology's NTSMF and BMC Patrol have the potential for producing information from a large number of data sources. Although IT Service Vision will typically supply table and variable definitions for the more popular data sources, it is not practical to supply definitions for them all. To address this issue, additional functionality has been added to the GENERATE SOURCE ddutl control statement to assist in creating table and variable definitions.

The process for creating the table and variable definitions is the same for both NTSMF and PATROL; only the input differs.

Overview

  1. Collect the appropriate input data to pass to the GENERATE SOURCE ddutl statements

    For NTSMF the input to GENERATE SOURCE is the NTSMF discovery record for the object that you are creating the table and variable definitions for.
    For PATROL the input is the data file that you would normally feed to the %CPPROCES macro.
  2. Run the %CPDDUTL macro with the appropriate statements and create a 'first cut' of the table and variable definitions in a SAS catalog entry or a text file.
  3. Review the generated table and variable definitions.
  4. Run the %CPDDUTL macro with the reviewed table and variable definitions, adding the new table to the PDB.
  5. Run %CPPROCES as per any other NTSMF table.

Assume that you are a customer with NTSMF and a new object has appeared in your NTSMF log that you want to include in your PDB.

  1. Collect the appropriate input data to pass to the GENERATE SOURCE ddutl statements.

    The information required to construct table and variable definitions is the meta data information. The source of this information will vary from collector to collector.

    • NTSMF

      The meta data for NTSMF tables comes from the NTSMF Discovery records, which are collected as part of your NTSMF log when the Discovery record option is switched on. (As of IT Service Vision 2.2, we require that all logs contain NTSMF Discovery records.) The following is an example of an NTSMF Discovery record.

      3,0,5,CARYNT,nightingale,1998,4,24,17,25,8,375,0,4,LogicalDisk,0,24,28,2,PhysicalDisk,LogicalDisk,% Free Space,,Free Megabytes,Current Disk Queue Length,% Disk Time,Avg. Disk Queue Length,% Disk Read Time,Avg. Disk Read Queue Length,% Disk Write Time,Avg. Disk Write Queue Length,Avg. Disk sec/Transfer,,Avg. Disk sec/Read,,Avg. Disk sec/Write,,Disk Transfers/sec,Disk Reads/sec,Disk Writes/sec,Disk Bytes/sec,Disk Read Bytes/sec,Disk Write Bytes/sec,Avg. Disk Bytes/Transfer,,Avg. Disk Bytes/Read,,Avg. Disk Bytes/Write,,

      Although you can pass a complete NTSMF log file as input to the GENERATE SOURCE ddutl statement, it will create definitions for each NTSMF Discovery record in that file. It is recommended that you cut and paste just the NTSMF Discovery record(s) that you are interested in into a separate file and use this as the input.

    • PATROL

      The meta data for PATROL tables and variables comes from the same Patrol history data that is passed to the %CPPROCES macro. The following example is output from the dump_hist command (although data from the Patrol History Loader KM will also work). In the small example shown below, a single table for NT_CPU would be defined with 2 metrics. See your BMC Patrol documentation for more information on extracting data using the dump_hist command or through the Patrol History Loader KM.

      nightingale/NT_CPU.CPU_0/CPUprcrUserTimePercent
          Thu Jul 23 10:00:57 1998 26.981
          Thu Jul 23 10:01:58 1998 5.35963
          Thu Jul 23 10:02:58 1998 0.598205
          Thu Jul 23 10:03:58 1998 0.333915
      nightingale/NT_CPU.CPU_0/CPUprcrPrivTimePercent
          Thu Jul 23 10:00:57 1998 61.0279
          Thu Jul 23 10:01:58 1998 1.20528
          Thu Jul 23 10:02:58 1998 1.56053
          Thu Jul 23 10:03:58 1998 1.05312
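
    The cut-and-paste recommendation for NTSMF above can also be scripted. As a sketch (the filerefs and file locations are hypothetical, and it is assumed, as in the example Discovery record above, that Discovery records are those whose first comma-delimited field is 3), a small DATA step can copy just the Discovery records to a separate file:

    filename ntsmflog 'location.of.full.ntsmf.log';
    filename discover 'location.of.discovery.records';

    data _null_;
       infile ntsmflog;
       file discover;
       input;
       /* keep only records whose first field (the record type) is 3 */
       if scan(_infile_, 1, ',') = '3' then put _infile_;
    run;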


  2. Use GENERATE SOURCE to create DDUTL control statements for this object (table). See the Macro Reference guide for more information on the GENERATE SOURCE control statement and syntax.

    The following code will read the file from INFILE= and generate table and variable definitions which it will save in a SAS catalog entry (this could be a text file if the FILENAME= parameter was used).

    %cpcat;
    cards4;
    generate source infile='location.of.file.with.metadata'
                    intype=NTSMF | PATROL
                    entryname='work.cpddutl.gensrc.source';
    ;;;;
    %cpcat(cat=work.cpddutl.gen.source);

    %cpddutl(entrynam=work.cpddutl.gen.source);


  3. Review the generated source, particularly the interpretation types and statistics for each variable. For information on the interpretation types, see the 'How To/Macro Reference Guide'.

    GENERATE SOURCE scans the meta data it is fed and will attempt to match a valid interpretation type to a variable based on its label or description. If its interpretation type cannot be determined it is assigned the default interpretation type and statistics. If you feel that GENERATE SOURCE has not assigned the correct value then change it appropriately.

    Typically, the only other parameters that you are likely to need to update are the DESCRIPTION and LABEL parameters. Do NOT change the EXTNM parameter, as this is used to map the external variable name to the variable name used by IT Service Vision. We also recommend that you do not alter the CLASSVARS statements; in particular, the DURGRP variable should not be removed. The DURGRP variable is used at the summary levels to ensure that each variable is weighted using the correct duration value.

  4. Use the %CPDDUTL macro to add the table to the PDB using the reviewed ddutl statements.

    %cpddutl(entrynam=work.cpddutl.gensrc.source);

  5. Use %CPPROCES to process the raw data into the PDB.

    %cpproces(,collectr=NTSMF | PATROL
              ,rawdata=location.of.data
              ,toolnm=sasds);

HP Measureware Updates

The following HP MeasureWare tables have been updated with new variables :-

APPAPCT - APP ACTIVE PCT
APPATME - APP ACTIVE TIME
APPGICT - APP GUI INPUT COUNT
APPGIRT - APP GUI INPUT RATE
APPGKCT - APP GUI KEYBOARD COUNT
APPGKDL - APP GUI KEYBOARD DELAY
APPGKYR - APP GUI KEYBOARD RATE
APPGMCT - APP GUI MOUSE COUNT
APPGMDL - APP GUI MOUSE DELAY
APPGMRT - APP GUI MOUSE RATE

BYDASTM - BYDSK AVG SERVICE TIME

GLBALTH - GBL ALIVE THREAD
GLBDKFR - GBL DISK SPACE FREE
GLBDKSP - GBL DISK SPACE
GLBFILK - TBL FILE LOCK UTIL
GLBFITB - TBL FILE TABLE UTIL
GLBGUCT - GBL GUI INPUT COUNT
GLBGUDL - GBL GUI INPUT DELAY
GLBGUDR - GBL GUI DELAY INDEX
GLBGURT - GBL GUI INPUT RATE
GLBKBCT - GBL GUI KEYBOARD COUNT
GLBKBRT - GBL GUI KEYBOARD RATE
GLBMCPT - GBL MEM COMMIT PCT
GLBMDCR - GBL MEM DISCARD RATE
GLBMDSC - GBL MEM DISCARD
GLBMLDI - GBL MEM LOAD INDEX
GLBMPGI - GBL MEM PAGEIN
GLBMPIR - GBL MEM PAGEIN RATE
GLBMPSR - GBL MEM PG SCAN RATE
GLBMSCT - GBL GUI MOUSE COUNT
GLBMSRT - GBL GUI MOUSE RATE
GLBMSUT - GBL MEM SYS UTIL
GLBNIEP - GBL NET IN ERROR PCT
GLBNOEP - GBL NET OUT ERROR PCT
GLBNOTQ - GBL NET OUTQUEUE
GLBNTBR - GBL NET BYTE RATE
GLBRBRT - GBL RDR BYTE RATE
GLBRRRT - GBL RDR REQUEST RATE
GLBSPMN - GBL PARTITION SPACE MIN
GLBSYUP - GBL SYSTEM UPTIME HOURS
GLBTTOC - GBL TT OVERFLOW COUNT
GLBWBCR - GBL WEB CONNECTION RATE
GLBWBLF - GBL WEB LOGON FAILURES
GLBWCHP - GBL WEB CACHE HIT PCT
GLBWCRR - GBL WEB CGI REQUEST RATE
GLBWFRR - GBL WEB FILES RECEIVED RATE
GLBWFSR - GBL WEB FILES SENT RATE
GLBWGRR - GBL WEB GET REQUEST RATE
GLBWHRR - GBL WEB HEAD REQUEST RATE
GLBWIRR - GBL WEB ISAPI REQUEST RATE
GLBWNFE - GBL WEB NOT FOUND ERRORS
GLBWORR - GBL WEB OTHER REQUEST RATE
GLBWPRR - GBL WEB POST REQUEST RATE
GLBWRBR - GBL WEB READ BYTE RATE
GLBWWBR - GBL WEB WRITE BYTE RATE
TBLBFCU - TBL BUFFER CACHE USED
TBLINCU - TBL INODE CACHE USED
TBLMSTU - TBL MSG TABLE UTIL
TBLPRTU - TBL PROC TABLE UTIL
TBLSHTU - TBL SHMEM TABLE UTIL
TBLSMTU - TBL SEM TABLE UTIL

PROTHCT - PROC THREAD COUNT

TTAPPNM - TT APP NAME
TTAPPTN - TT APP TRAP NAME
TTCLADD - TT CLIENT ADDRESS
TTCLAFT - TT CLIENT ADDRESS FORMAT
TTCLTID - TT CLIENT TRAN ID
TTCTTPT - TT CPU TOTAL TIME PER TRAN
TTDLIPT - TT DISK LOGL IO PER TRAN
TTDPIPT - TT DISK PHYS IO PER TRAN
TTFAILD - TT FAILED
TTINFO - TT INFO
TTTRNID - TT TRAN ID
TTUMAV2 - TT USER MEASUREMENT AVG 2
TTUMAVG - TT USER MEASUREMENT AVG
TTUMMAX - TT USER MEASUREMENT MAX
TTUMMIN - TT USER MEASUREMENT MIN
TTUMMN2 - TT USER MEASUREMENT MIN 2
TTUMMX2 - TT USER MEASUREMENT MAX 2
TTUMNM2 - TT USER MEASUREMENT NAME 2
TTUMNME - TT USER MEASUREMENT NAME
TTUNAME - TT UNAME

If you decide that you want to add these metrics to your existing PCS* tables, you will have to perform a MAINTAIN TABLE; this migrates the variable definitions from the supplied data dictionary to your PDB's dictionary. See the Macro Reference for more details on the MAINTAIN TABLE functionality.
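
Following the %CPCAT/%CPDDUTL pattern shown earlier in this document (table-name below stands for one of your existing PCS* tables; check the exact MAINTAIN TABLE syntax in the Macro Reference before running this), the update might look like:

%cpcat;
cards4;
maintain table name=table-name;
;;;;
%cpcat(cat=work.cpddutl.maint.source);
%cpddutl(entrynam=work.cpddutl.maint.source);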


NTSMF Updates

Although you can add these metrics to your IT Service Vision 2.1 NTSMF tables, they will only be populated if you have run a conversion to use the Enhanced NTSMF support and the software that records these metrics is at the appropriate level.

If you decide that you want to add the following metrics to your existing NT* tables, you will have to perform a MAINTAIN TABLE; this migrates the definitions from the supplied data dictionary to your PDB's dictionary. See the Macro Reference for more details on the MAINTAIN TABLE functionality.

 
-------------------------------------- Table name=NTCNCTN ---------------------------------------

             Description                                                    Variable Name

             MSExchangeMTA Connections: Connector Index                     CNCINDX
             MSExchangeMTA Connections: Cumulative Inbound Associations     IASCTN0
             MSExchangeMTA Connections: Rejected Inbound Associations       IASCTN1
             MSExchangeMTA Connections: Current Inbound Associations        IASCTNS
             MSExchangeMTA Connections: Inbound Bytes Total                 INBBTTL
             MSExchangeMTA Connections: Inbound Messages Total              INBMTTL
             MSExchangeMTA Connections: Inbound Reject Reason               INBRRSN
             MSExchangeMTA Connections: Inbound Rejected Total              INRJTTL
             MSExchangeMTA Connections: Last Inbound Association            LIASCTN
             MSExchangeMTA Connections: Last Outbound Association           LOASCTN
             MSExchangeMTA Connections: Next Association Retry              NASCRTR
             MSExchangeMTA Connections: Cumulative Outbound Associations    OASCTN0
             MSExchangeMTA Connections: Failed Outbound Associations        OASCTN1
             MSExchangeMTA Connections: Current Outbound Associations       OASCTNS
             MSExchangeMTA Connections: Oldest Message Queued               OLDSMQD
             MSExchangeMTA Connections: Outbound Bytes Total                OTBBTTL
             MSExchangeMTA Connections: Outbound Failure Reason             OTBFRSN
             MSExchangeMTA Connections: Outbound Messages Total             OTBMTTL
             MSExchangeMTA Connections: Queued Bytes                        QDBYTES
             MSExchangeMTA Connections: Total Recipients Queued             TRCPNQD
             MSExchangeMTA Connections: Total Recipients Inbound            TRINBND
             MSExchangeMTA Connections: Total Recipients Outbound           TROTBND


-------------------------------------- Table name=NTMSEIS ---------------------------------------

                Description                                                 Variable Name

                MSExchangeIS: IMAP Commands Issued Rate                     MAPCIRT
                MSExchangeIS: IMAP Commands Issued                          MAPCISD
                MSExchangeIS: IMAP Messages Sent                            MAPMSNT
                MSExchangeIS: IMAP Message Send Rate                        MAPMSRT
                MSExchangeIS: Newsfeed Inbound Rejected Messages            NIRMSGS
                MSExchangeIS: NNTP Messages Read                            NNTPMRD
                MSExchangeIS: Newsfeed Outbound Rejected Messages           NORMSGS
                MSExchangeIS: NNTP Commands Issued Rate                     NTPCIRT
                MSExchangeIS: NNTP Commands Issued                          NTPCISD
                MSExchangeIS: NNTP Failed Posts Rate                        NTPFPRT
                MSExchangeIS: NNTP Messages Posted Rate                     NTPMPRT
                MSExchangeIS: NNTP Messages Read Rate                       NTPMRRT
                MSExchangeIS: Newsfeed Inbound Rejected Messages Rate       NWIRMRT
                MSExchangeIS: Newsfeed Messages Received                    NWMRCVD
                MSExchangeIS: Newsfeed Bytes Sent                           NWSBSNT
                MSExchangeIS: Newsfeed Bytes Sent/sec                       NWSBSSC
                MSExchangeIS: Newsfeed Messages Received Rate               NWSMRRT
                MSExchangeIS: Newsfeed Messages Sent                        NWSMSNT
                MSExchangeIS: Newsfeed Messages Sent/sec                    NWSMSSC
                MSExchangeIS: NNTP Current Outbound Connections             OCNCTN0
                MSExchangeIS: NNTP Outbound Connections                     OCNCTNS
                MSExchangeIS: POP3 Commands Issued Rate                     POPCIRT
                MSExchangeIS: POP3 Commands Issued                          POPCISD
                MSExchangeIS: POP3 Messages Sent                            POPMSNT
                MSExchangeIS: POP3 Messages Send Rate                       POPMSRT
                MSExchangeIS: NNTP Failed Posts                             TPFPSTS
                MSExchangeIS: NNTP Messages Posted                          TPMPSTD
                MSExchangeIS: Number of article index table rows expired    TREXPRD


-------------------------------------- Table name=NTPRTCL ---------------------------------------

                                      Description                       Variable Name

                 MSExchange Internet Protocols: Incoming Queue Size     INCMQSZ
                 MSExchange Internet Protocols: Outstanding Commands    OTCMNDS
                 MSExchange Internet Protocols: Outgoing Queue Size     OTGNQSZ
                 MSExchange Internet Protocols: Total Commands          TTCMNDS



-------------------------------------- Table name=NTSEIMC ---------------------------------------

                                      Description                   Variable Name

                     MSExchangeIMC: Total Failed Conversions        CNVRSN0
                     MSExchangeIMC: Total Successful Conversions    CNVRSNS
                     MSExchangeIMC: Total Inbound Recipients        IRCPNTS
                     MSExchangeIMC: Total Outbound Recipients       ORCPNTS
                     MSExchangeIMC: Total Loops Detected            TLDTCTD
                     MSExchangeIMC: Total Recipients Queued         TRCPNQD
                     MSExchangeIMC: Total Kilobytes Queued          TTKLBQD
                     MSExchangeIMC: Total Messages Queued           TTMSGQD


-------------------------------------- Table name=NTSEMTA ---------------------------------------

                                      Description                   Variable Name

                     MSExchangeMTA: Total Failed Conversions        CNVRSN0
                     MSExchangeMTA: Total Successful Conversions    CNVRSNS
                     MSExchangeMTA: Deferred Delivery Msgs          DFDMSGS
                     MSExchangeMTA: Inbound Bytes Total             INBBTTL
                     MSExchangeMTA: Inbound Messages Total          INBMTTL
                     MSExchangeMTA: Outbound Bytes Total            OTBBTTL
                     MSExchangeMTA: Outbound Messages Total         OTBMTTL
                     MSExchangeMTA: Total Loops Detected            TLDTCTD
                     MSExchangeMTA: Total Recipients Queued         TRCPNQD
                     MSExchangeMTA: Total Recipients Inbound        TRINBND
                     MSExchangeMTA: Total Recipients Outbound       TROTBND
                     MSExchangeMTA: Work Queue Bytes                WRKQBTS


-------------------------------------- Table name=NTSPBLC ---------------------------------------

                                      Description                               Variable Name

         MSExchangeIS Public: Total Count of Recoverable Items                  CORITMS
         MSExchangeIS Public: Number of messages expired from public folders    FPFLDRS
         MSExchangeIS Public: Replication Receive Queue Size                    RPLRQSZ
         MSExchangeIS Public: Total Size of Recoverable Items                   SORITMS


-------------------------------------- Table name=NTSPRVT ---------------------------------------

                                       Description                        Variable Name

                MSExchangeIS Private: Total Count of Recoverable Items    CORITMS
                MSExchangeIS Private: Local deliveries                    LCDLVRS
                MSExchangeIS Private: Local delivery rate                 LCDLVRT
                MSExchangeIS Private: Total Size of Recoverable Items     SORITMS

-------------------------------------- Table name=NTSSRVC ---------------------------------------

            Description                                                       Variable Name

            Web Proxy Server Service: Array Bytes Received/sec                ABTRCSC
            Web Proxy Server Service: Array Bytes Sent/sec                    ABTSNSC
            Web Proxy Server Service: Array Bytes Total/sec                   ABTTTSC
            Web Proxy Server Service: Current Average Milliseconds/request    CAMRQST
            Web Proxy Server Service: Failing Requests/sec                    FLRQSSC
            Web Proxy Server Service: Requests/sec                            RQSTSSC
            Web Proxy Server Service: Reverse Bytes Received/sec              RVBRCSC
            Web Proxy Server Service: Reverse Bytes Sent/sec                  RVRBSSC
            Web Proxy Server Service: Reverse Bytes Total/sec                 RVRBTSC
            Web Proxy Server Service: Socks Client Bytes Received/sec         SCCBRSC
            Web Proxy Server Service: Socks Client Bytes Sent/sec             SCCBSSC
            Web Proxy Server Service: Socks Client Bytes Total/sec            SCCBTSC
            Web Proxy Server Service: Socks sessions                          SCKSSNS
            Web Proxy Server Service: Total Array Fetches                     TAFTCHS
            Web Proxy Server Service: Total Failed Socks Sessions             TFSSSNS
            Web Proxy Server Service: Total Reverse Fetches                   TRFTCHS
            Web Proxy Server Service: Total Socks Sessions                    TSCSSNS
            Web Proxy Server Service: Total Successful Socks Sessions         TSSSSNS
            Web Proxy Server Service: Total Upstream Fetches                  TUFTCHS
            Web Proxy Server Service: Upstream Bytes Received/sec             UPSBRSC
            Web Proxy Server Service: Upstream Bytes Sent/sec                 UPSBSSC
            Web Proxy Server Service: Upstream Bytes Total/sec                UPSBTSC