Preparing MeasureWare and NTSMF PDBs for use with the QuickStart Reports
IT Service Vision 2.2 introduced the QuickStart Wizard to make it easier for customers to set up a new PDB and produce reports on it. The Wizard does this by creating a PDB, along with the batch jobs for updating and reporting on it, based on customer responses to questions about their data. Customers with a pre-existing PDB can also use the QuickStart reports on the data in that PDB if they update the PDB as described here.
To prepare for using the QuickStart reports on an existing PDB, perform the following steps. First, compare your PDB's shift definitions with those assumed by the QuickStart reports:
Shift 1, WEEKDAY: Mon-Fri, 8am-5pm
Shift 2, WEEKNIGHT: Mon-Fri, 5pm-8am
Shift 3, WEEKEND: Sat-Sun
If your shift definitions are substantively different from these, some of the QuickStart reports may not appear as you expect. If you do not want to alter your existing shift definitions, you will have to update the shift references in those QuickStart reports that depend on them. Use the Manage Reports tab in the ITSV user interface to do this.
To do just the standard updates that allow QuickStart reports to run on your existing PDB, skip to Step 4. To enable the optional performance improvement changes to a table in an existing PDB, review the contents of the catalog entry named PGMLIB.JSWIZCAT.table.SOURCE, where "table" is the name of the table in the existing PDB that you want to run QuickStart reports on. The more invasive performance changes have been commented out of that entry. Search for the string "performance" in the entry and follow the instructions there to enable the change. Save the edited version of the QS update entry in a catalog entry that you create, named PLAYPEN.JSWIZCAT.table.SOURCE. The %QSREADY macro that you run in the next step will pick up the edited version of this catalog entry from PLAYPEN.JSWIZCAT instead of PGMLIB.JSWIZCAT.
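For example, the following sketch copies the supplied entry into your playpen catalog so that you can edit it there; the table name NTLGDSK and the path /my/playpen are placeholders for your own values:

* Allocate the playpen library;
libname playpen "/my/playpen";

* Copy the supplied QS update entry into the playpen catalog so that it can be edited;
proc catalog cat=pgmlib.jswizcat;
   copy out=playpen.jswizcat;
   select ntlgdsk.source;
quit;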
* Allocate the PDB with write access and load the QSREADY macro;
%CPSTART( pdb=/my/pdb, mode=batch, access=write);
filename qsready catalog "PGMLIB.JSWIZCAT.QSREADY.SOURCE";
%include QSREADY;

* Allocate the playpen library containing your personalized JSWIZCAT entries
  (if you created an entry in this catalog in Step 3);
libname playpen "/my/playpen";

* Report on what updates are needed in your PDB;
%QSREADY ( );

* Update selected tables in the PDB ...;
%QSREADY ( tablename1 tablename2 ... tablenameN );

* ... or update all tables in the PDB ...;
%QSREADY ( _ALL_ );

* ... then report again on what updates are needed;
%QSREADY ( );
This will update your PDB and make it ready for QuickStart reports.
If you are a new customer running IT Service Vision Release 2.2, or you are upgrading to this release and have no existing NTSMF tables defined, then you can ignore this section. Also, this conversion does not apply to customers who stage their NTSMF data on MVS using the MXG tool.
It is recommended that you undertake the necessary changes to use the Enhanced NTSMF support. The existing IT Service Vision Release 2.1 support was static, in that it expected the NTSMF object record formats to remain constant, with new counters being added to the end of each record. Unfortunately this has not been the case: depending on the release of the software populating the NTSMF data record for a particular object, counters have been moved, deleted, inserted and/or renamed. This is why the Enhanced NTSMF support requires that your NTSMF logs contain NTSMF Discovery records.
If you have been using the NTSMF support provided in IT Service Vision 2.1 or earlier, then you will have to undertake a conversion to take advantage of the new features described above. The existing NTSMF support is included in IT Service Vision Release 2.2, so your existing processes and reports will continue to work without the conversion. Please see the conversion checklist for more details.
If you currently process your NTSMF data from a file that is a concatenation of several separate log files, it is recommended that you now maintain all your NTSMF log files as individual files.

This document walks through the process of converting IT Service Vision V2.1 NTSMF tables to V2.2.
NTASPRT - Windows NT Ras Port
NTASTTL - Windows NT RAS Total
NTBFCTR - Windows NT Benchmark Factory
NTCMNGR - Windows NT Caching Manager
NTCONF - NTSMF Configuration table
NTDTBS - Windows NT Database
NTFLTRN - Windows NT Packet Filtering
NTFTPSV - Windows NT FTP Service
NTIMAGE - Windows NT Image
NTMSEES - Windows NT MSExchangeES
NTSSRVR - Windows NT WINS Server
NTTDTLS - Windows NT Thread Details
NTWSRVC - Windows NT Web Service
NTRADSR - Windows NT RADIUS Server
NTARTMS - Windows NT SNA 3270 Response times
Q1. Do you have any NTSMF tables defined in your PDB (with a COLLECTOR value of WINNT)?
Yes - You need to convert, go to question 2.
No - No need to convert this PDB, as no NTSMF tables have been defined.
Q2. If you have been or are about to update any software on your NT servers, it is possible that they will produce different NTSMF record formats than previous software versions. These new-format NTSMF records may not be supported by the ITSV 2.1 staging code. Have you already upgraded, or are you likely to upgrade, any software on your NT servers?
Yes - You need to convert, go to question 3.
No - No need to convert this PDB.
Q3. Do you want to keep the existing data in your NTSMF tables?
Yes - You need to run the conversion.
No - Delete all your NTSMF tables (with a collector value of WINNT) and add them again using IT Service Vision 2.2 to obtain the new definitions.
This conversion process will have to be run for each PDB that contains NTSMF tables that were added using IT Service Vision 2.1 or earlier. The actual conversion process can be run interactively or in batch.
- Install IT Service Vision 2.2
You will still be able to process your existing NTSMF tables as usual under this version. The conversion process requires that you are at the 2.2 release. IT Service Vision Release 2.2 contains both the original and the enhanced staging code. One of the updates made to the original staging code is the ability to handle NTSMF discovery records.
- Before moving onto the next stage ensure all NTSMF logs are recording NTSMF Discovery records.
For details on activating NTSMF Discovery records refer to your NTSMF documentation.
- Back up your PDB prior to conversion
The conversion process will potentially make a large number of updates to your PDB's data dictionary, so we recommend that a full PDB backup be taken.
- Run conversion in UPDATE=N mode.
Running in this mode performs no updates but does produce a report of changes and highlights any potential conversion issues prior to the dictionary being updated. At this point you do not need write access to the PDB. If you already have IT Service Vision active then there is no need to run the %cpstart macro again.
%cpstart(pdb=pdb-name,root=root-location,access=readonly,mode=batch,_rc=cpstrc);
%put 'CPSTART Return code is ' &cpstrc;
filename tmp catalog 'pgmlib.ntsmf.convert.source';
%include tmp;
%cpntcnv(update=N);
filename tmp clear;
Once the above has run, a report is produced in the SAS log reporting any ERRORS or WARNINGS in the conversion. Please review this report.
- Run conversion in UPDATE=Y mode.
This process will require that the PDB be allocated with write access and will run longer than before due to the updates being performed.
%cpstart(pdb=pdb-name,root=root-location,access=write,mode=batch,_rc=cpstrc);
%put 'CPSTART Return code is ' &cpstrc;
filename tmp catalog 'pgmlib.ntsmf.convert.source';
%include tmp;
%cpntcnv(update=Y);
filename tmp clear;
- Update existing CPPROCES code.
If you have defined batch files for processing your NTSMF data you will have to update the COLLECTR= and TOOLNM= parameters to ensure that the enhanced processing code is used. These updates must be performed as the old processing code will not work with the new definitions.
In addition to these parameter changes, we recommend that you update the RAWDATA parameter to point to a directory and not use wildcards. IT Service Vision will automatically process all the NTSMF logs in that directory. For example :-
Before Conversion :-

%CPPROCES(,COLLECTR=WINNT
          ,RAWDATA=E:\NTSMF\CURRENT\*.SMF
          ,TOOLNM=NTSMF);

After Conversion :-

%CPPROCES(,COLLECTR=NTSMF
          ,RAWDATA=E:\NTSMF\CURRENT\
          ,TOOLNM=SASDS);
Note: After conversion you will be able to add the dupmode= parameter to your process macro.
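For example, adding DUPMODE=DISCARD (the value used in the processing examples later in this document) to the converted invocation gives:

%CPPROCES(,COLLECTR=NTSMF
          ,RAWDATA=E:\NTSMF\CURRENT\
          ,TOOLNM=SASDS
          ,DUPMODE=DISCARD);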
- Update existing CPPROCES code to use input filtering (optional).
If you decide to use Input Filtering, we recommend that you do the following before running your first %CPPROCES :-
- Bring up your NTSMF PDB interactively.
- Copy pgmlib.ntsmf.cpdupchk.source to admin.ntsmf.cpdupchk.source.
To do this, submit the following code from your program editor.
proc catalog cat=pgmlib.ntsmf;
copy out=admin.ntsmf;
select cpdupchk.source;
quit;

- Review and update, if necessary, the following parameters for the %CPDUPCHK macro invocation contained in admin.ntsmf.cpdupchk.source. To do this, type NOTE ADMIN.NTSMF.CPDUPCHK.SOURCE on the command line (or in the command box), make the necessary updates, and SAVE and END out of the notepad window.
- INT=interval represents the maximum interval allowed between the timestamps on any two consecutive data records from the same system. If the interval between the timestamp values exceeds the value of this parameter, a new time range is created. Default is 00:18 (18 minutes).
- SYSTEMS=number of systems represents an estimate of the maximum number of systems for which the data file will contain data. Default is 50.
- RANGES=number of ranges represents the maximum number of interval ranges that can occur during this execution of %CxPROCES. A new range is created when the difference between the datetime stamps of two consecutive records exceeds the value of the INT= parameter. This break is referred to as a gap in the data. Default is 10.
- KEEP=number of weeks represents the maximum number of weeks for which you want to retain control data. Control data is aged out (removed) when the last datetime value in a range exceeds the value of this parameter. Default is 52.
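For illustration only, an edited invocation in ADMIN.NTSMF.CPDUPCHK.SOURCE set to the default values described above might look like the sketch below; the parameter names are taken from the list above, but check the copied entry itself for the exact syntax before saving your changes.

* int=     maximum gap between consecutive timestamps (default 00:18);
* systems= estimated maximum number of systems (default 50);
* ranges=  maximum number of interval ranges per run (default 10);
* keep=    weeks of control data to retain (default 52);
%CPDUPCHK(int=00:18, systems=50, ranges=10, keep=52);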
If you do not do the above, the first run of %cpproces with Input Filtering active will copy the default %CPDUPCHK invocation into the admin library automatically, and you will receive the following warning message recommending that you review the %CPDUPCHK parameter values.
WARNING: *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING
WARNING: DO NOT OVERLOOK THIS IMPORTANT WARNING - IT WILL NOT APPEAR AGAIN.
WARNING: A sample invocation of the %CPDUPCHK macro has been copied to
your ADMIN library. You should review its contents before the
next execution. To do so, start IT Service Vision with this
PDB in update mode and type "NOTE ADMIN.NTSMF.CPDUPCHK.SOURCE"
on the SAS command line. The only parameter values you need to
review and probably change are the RANGES=, SYSTEMS=, and KEEP=
settings. Review the comments therein for guidance and the
documentation on input filtering for more details.
WARNING: *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING
If you require more information on Input Filtering please refer to the How To/Macro section from the online help for IT Service Vision.
You may notice that variables such as INSTANC, PARENT and SYSTEM have been renamed.
INSTANC and PARENT have been renamed to be more informative. For example, for the
Logical Disk object (NTLGDSK) the following changes have been made :-
INSTANC => LGCLDSK
PARENT  => PHSCDSK
SYSTEM has been renamed to MACHINE to ensure that it matches all the other ITSV tables on all other platforms.
There is no need to update your existing reports as the old variable names will still exist in the tables as formula variables.
Below is a list of all the variables renamed, by table name.
TABLENM  TABLE LABEL                      OLD NAME  NEW NAME
NTCNCT0  NBT Connection                   INSTANC   BTCNCTN
NTCNCTN  MSExchangeMTA Connections        INSTANC   CNNCTNS
NTINTRF  Network Interface                INSTANC   NINTRFC
NTIRSRC  NetBEUI                          PARENT    NETBEUI
         NetBEUI Resource                 INSTANC   EUIRSRC
NTLGDSK  PhysicalDisk                     PARENT    PHSCDSK
         LogicalDisk                      INSTANC   LGCLDSK
NTMSEDB  MSExchangeDB                     INSTANC   MSEXCDB
NTNBEUI  NetBEUI                          INSTANC   NETBEUI
NTNBIOS  NWLink NetBIOS                   INSTANC   WLNBIOS
NTPASPC  Process Address Space            INSTANC   PRADSPC
NTPCMTA  MSExchangePCMTA                  INSTANC   MSPCMTA
NTPGNFL  Paging File                      INSTANC   PGNGFL
NTPHDSK  PhysicalDisk                     INSTANC   PHSCDSK
NTPNTM   Pentium                          INSTANC   PENTIUM
NTPRCS   Process                          INSTANC   PROCESS
NTPRCSR  Processor                        INSTANC   PRCSR
NTPRTCL  MSExchange Internet Protocols    INSTANC   IPRTCLS
NTQLSLG  SQLServer-Log                    INSTANC   SQLSRLG
NTSGMNT  Network Segment                  INSTANC   NTSGMNT
NTSRWQS  Server Work Queues               INSTANC   SRVWRQS
NTSUSRS  SQLServer-Users                  INSTANC   QLSUSRS
NTTHRD   Process                          PARENT    PROCESS
         Thread                           INSTANC   THREAD
NTWLIPX  NWLink IPX                       INSTANC   NWLNIPX
NTWLSPX  NWLink SPX                       INSTANC   NWLNSPX
The following is a sample of the output from converting the Logical Disk table from 2.1 to 2.2 format (not all updates are shown, to keep the list short).
A similar report is produced for each NTSMF table that is converted. If no dictionary attributes change for a variable it is still listed in the report.
The column 'Dictionary Attribute' describes the metadata that is being changed. The remaining three columns list the existing value in the DICTLIB, the new value in IT Service Vision Release 2.2, and the value that will be (or has been) applied to the dictionary. If you have made any updates to the NTSMF tables prior to conversion, then these changes will be retained in the converted dictionary.
The fields that identify the different types of stats (default, day, week, month, year) are reported for completeness only and do not need to be understood. All that we are concerned with is that the string under UPDATED VALUE is a superset of the strings in EXISTING VALUE and VALUE IN VERSION 2.2.
The last line of the report details whether SYSTEM, INSTANC and/or PARENT variables have been renamed.
Report for Table name : NTLGDSK
-------------------------------

Variable : AVDBWRT
-------------------
DICTIONARY ATTRIBUTE  Existing Value              VALUE IN VERSION 2.2        UPDATED VALUE
Format                                            BEST12.2                    BEST12.2
Subject                                           N/A                         N/A

Variable : DATETIME
-------------------
DICTIONARY ATTRIBUTE  Existing Value              VALUE IN VERSION 2.2        UPDATED VALUE

Variable : DOMAIN
-------------------
DICTIONARY ATTRIBUTE  Existing Value              VALUE IN VERSION 2.2        UPDATED VALUE
Label                 Domain Name                 Domain name                 Domain name
Description           Domain Name                 LogicalDisk: Domain name    LogicalDisk: Domain name
Length                32                          200                         200
Format                $CHAR.
Subject                                           N/A                         N/A

Variable : DURATION
-------------------
DICTIONARY ATTRIBUTE  Existing Value              VALUE IN VERSION 2.2        UPDATED VALUE

Variable : FRMGBTS
-------------------
DICTIONARY ATTRIBUTE  Existing Value              VALUE IN VERSION 2.2        UPDATED VALUE
Format                                            BEST12.2                    BEST12.2
Subject                                           N/A                         N/A

Variable : HOUR
-------------------
DICTIONARY ATTRIBUTE  Existing Value              VALUE IN VERSION 2.2        UPDATED VALUE
Label                 HOUR                        Hour of day                 Hour of day
Description           Hour is a default variable  Hour_of_day                 Hour_of_day
Length                3                           4                           4
Format                BEST12.                     2.                          2.
Subject                                           N/A                         N/A

Variable : INSTANC
-------------------
DICTIONARY ATTRIBUTE  Existing Value              VALUE IN VERSION 2.2        UPDATED VALUE
External Name         Instance                    LOGICALDISK                 LOGICALDISK
Label                 Instance                    LogicalDisk                 LogicalDisk
Description           Object Instance             LogicalDisk: LogicalDisk    LogicalDisk: LogicalDisk
Length                40                          200                         200
Subject                                           N/A                         N/A

Variable : LSTPDATE
-------------------
DICTIONARY ATTRIBUTE  Existing Value              VALUE IN VERSION 2.2        UPDATED VALUE

Variable : PARENT
-------------------
DICTIONARY ATTRIBUTE  Existing Value              VALUE IN VERSION 2.2        UPDATED VALUE
External Name         Parent                      PHYSICALDISK                PHYSICALDISK
Label                 Parent                      PhysicalDisk                PhysicalDisk
Description           Object Parent               LogicalDisk: PhysicalDisk   LogicalDisk: PhysicalDisk
Length                40                          200                         200
Subject                                           N/A                         N/A

Variable : WQLNGTH
-------------------
DICTIONARY ATTRIBUTE  Existing Value              VALUE IN VERSION 2.2        UPDATED VALUE
Format                                            BEST12.2                    BEST12.2
Subject                                           N/A                         N/A

Variables PARENT and INSTANC are about to be renamed to PHSCDSK and LGCLDSK.
Variable SYSTEM is about to be renamed to MACHINE.
Creating formula variables for SYSTEM, INSTANC and PARENT as necessary to ensure that existing reports will work correctly.
Variable interpretation types are used to determine how the data is summarized in reduction. Upon review of the existing variable interpretation types, many have been updated to more accurately reflect the variable. For the majority of changes, there is little impact in changing the interpretation type. However, the following message will appear when a variable's interpretation type has been changed and the information required to correctly re-summarize the historical data is not stored in the PDB; all new data added to the PDB will be summarized correctly.
NB: This variables interpretation type has been updated to provide more meaningful information at the summary levels. Although all new data will be summarized correctly the use of the old summarized data should be used with caution.
This document walks through the process of reading BMC's Patrol data into IT Service Vision. The following stages will be covered, along with working examples.
BMC Patrol must be installed and collecting data in the UNIX and/or Windows NT environment. The data read into IT Service Vision comes from parameter history data maintained by the PATROL Agent. Refer to your PATROL documentation for more details. The extracted data can come from the Patrol History Loader KM, if installed, or can be extracted directly from the Patrol agent using the dump_hist command; both formats are recognized by IT Service Vision.
Patrol allows each metric to be sampled at its own interval, typically 30 seconds, 1 minute, 5 minutes, and so on. This interval can be set by the Patrol administrator. IT Service Vision requires that the sample rates be specified on minute boundaries; the only exception is that 30-second sample rates are also recognized. (Please refer to Notes on Patrol data and its summarization into IT Service Vision.)
IT Service Vision Server must be installed at Release 2.2 or higher on MVS, Unix or Windows NT Server.
There are two approaches to collecting the PATROL history data to a central location.
Although these two methods produce slightly different output, either or both can be processed by IT Service Vision.
It is the PATROL Operator Console that retrieves the historical data stored by the Agent, and the dump_hist line command that dumps the parameter history data maintained by the PATROL Agents. The PATROL Agent Reference Manual contains more detailed information on the dump_hist command.
The following command dumps parameter history data for one day to a file, using the start (-s) and end (-e) switches of the dump_hist command; their format is mmddhhmm[yy]. Additional switches can be specified to further restrict the amount of data that is extracted :-
dump_hist -s 0723000098 -e 0723235998 > filename
The following is a small example of the format of the text file created by the above dump_hist command. This is the file that will be passed to %CPPROCES.
nightingale/NT_CPU.CPU_0/CPUprcrUserTimePercent
Thu Jul 23 10:00:57 1998 26.981
Thu Jul 23 10:01:58 1998 5.35963
Thu Jul 23 10:02:58 1998 0.598205
Thu Jul 23 10:03:58 1998 0.333915
nightingale/NT_CPU.CPU_0/CPUprcrPrivTimePercent
Thu Jul 23 10:00:57 1998 61.0279
Thu Jul 23 10:01:58 1998 1.20528
Thu Jul 23 10:02:58 1998 1.56053
Thu Jul 23 10:03:58 1998 1.05312
nightingale/NT_SYSTEM.NT_SYSTEM/SYSsysTotalProcTimePercent
Thu Jul 23 10:00:57 1998 88.013
Thu Jul 23 10:01:58 1998 6.56211
Thu Jul 23 10:02:58 1998 2.1812
Thu Jul 23 10:03:58 1998 1.36592
%cpstart(pdb=pdb-name,root=root-location,access=write,mode=batch,_rc=cpstrc);
%put 'CPSTART Return Code is ' &cpstrc;
%cpcat;
cards4;
add table name=ptntcpu;
add table name=ptlgdsk;
;;;;
%cpcat(cat=work.cpddutl.add.source);
%cpddutl(entrynam=work.cpddutl.add.source);
For MVS use the following %cpstart.
%cpstart(pdb=pdb-name,
root=root-location,
disp=new,
mode=batch,
_rc=cpstrc);
Once the tables have been added, dictionary characteristics (age limits, variables kept status) can be modified either using the interactive interface or the %CPDDUTL macro.
The dumped PATROL data can be processed on any platform (MVS, UNIX or Windows NT Server), irrespective of the platform on which it originated. Once the data is moved to the appropriate platform, the processing is identical.
The text file containing the dumped history data should be transferred to the platform on which it will be processed. If using FTP ensure that the data is transferred in ASCII mode.
Note (MVS): Typically, PATROL data has variable-length records; however, they are assumed not to exceed 200 bytes in length, so allocate an appropriate MVS file with an LRECL of 200.
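As a sketch only (the data set name and space values are placeholders, assuming the MVS FILENAME statement host options), such a file could be allocated from SAS like this:

* Allocate a new MVS data set to receive the transferred dump_hist output;
filename patraw 'YOURHLQ.PATROL.HISTDATA'
         disp=(new,catlg) unit=sysda space=(cyl,(5,5))
         recfm=vb lrecl=200;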
If you decide to use Input Filtering, we recommend that you do the following before running your first %CPPROCES :-
proc catalog cat=pgmlib.patrol;
copy out=admin.patrol;
select cpdupchk.source;
quit;
- INT=interval represents the maximum interval allowed between the timestamps on any two consecutive data records from the same system. If the interval between the timestamp values exceeds the value of this parameter, a new time range is created. Default is 00:18 (18 minutes).
- SYSTEMS=number of systems represents an estimate of the maximum number of systems for which the data file will contain data. Default is 50.
- RANGES=number of ranges represents the maximum number of interval ranges that can occur during this execution of %CxPROCES. A new range is created when the difference between the datetime stamps of two consecutive records exceeds the value of the INT= parameter. This break is referred to as a gap in the data. Default is 10.
- KEEP=number of weeks represents the maximum number of weeks for which you want to retain control data. Control data is aged out (removed) when the last datetime value in a range exceeds the value of this parameter. Default is 52.
If you do not do the above, the first run of %cpproces with Input Filtering active will copy the default %CPDUPCHK invocation into the admin library automatically, and you will receive the following warning message recommending that you review the %CPDUPCHK parameter values.
WARNING: *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING
WARNING: DO NOT OVERLOOK THIS IMPORTANT WARNING - IT WILL NOT APPEAR AGAIN.
WARNING: A sample invocation of the %CPDUPCHK macro has been copied to
your ADMIN library. You should review its contents before the
next execution. To do so, start IT Service Vision with this
PDB in update mode and type "NOTE ADMIN.PATROL.CPDUPCHK.SOURCE"
on the SAS command line. The only parameter values you need to
review and probably change are the RANGES=, SYSTEMS=, and KEEP=
settings. Review the comments therein for guidance and the
documentation on input filtering for more details.
WARNING: *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING
If you require more information on Input Filtering please refer to the How To/Macro section from the online help for IT Service Vision.
The following process example should be run after a %CPSTART. For the purposes of this example, input filtering has been included in this process run.
%cpproces(,collectr=patrol,rawdata=filename,toolnm=sasds,dupmode=discard,_rc=cpprc);
%put 'CPPROCES return code is ' &cpprc;
%CPREDUCE(,_RC=cprrc);
%put 'CPREDUCE return code is ' &cprrc;
In the SAS log you can expect to see the following :-
+-----------------------------------------------------------------------------------------+
| IT Service Vision input data duplication check report                                    |
| ========================================================                                 |
|                                                                                           |
| NOTE: All input records for new machine nightingale will be added.                        |
|                                                                                           |
+-----------------------------------------------------------------------------------------+
The above message will only appear when Input Filtering is active. The message shown will depend on whether the input data is considered duplicate.
==========================================================
The following objects were not kept to be processed but
existed in the input file.

Object Name = NT_CACHE
Object Name = NT_MEMORY
Object Name = NT_NETWORK
Object Name = NT_PAGEFILE
Object Name = NT_PHYSICAL_DISKS
Object Name = NT_SECURITY
Object Name = NT_SERVER
Object Name = NT_SYSTEM

A record is not processed for the following reasons :-
1 - The ITSV table for this object was not specified
in the PROCESS macro.
2 - The ITSV table for this object is marked KEPT=N
in the PDB.
3 - The object is a new object for which a table
definition needs to be built (see GENERATE SOURCE).
==========================================================
This report is always produced when processing PATROL data. It reports the objects that were found in the rawdata but were not processed. If an object appears in this report for which you want to collect the data, then you should add the appropriate table to the PDB. If you do not want to keep the data for an object, you can update your collection process to no longer keep the history data. If an object appears for which there is no supplied table, then one can be constructed using the GENERATE SOURCE function of the %CPDDUTL macro and an INTYPE= parameter of PATROL.
Patrol history data presents several issues with regard to processing the data into a historical PDB.
Two metrics 'A' and 'B' do not have to be sampled at the same rate: 'A' may be sampled every 1-minute interval and 'B' every 5-minute interval. To combine these two metrics into the same observation in the PDB would be invalid, as each value should eventually be weighted by the duration (depending on the interpretation type of the metric). To resolve this problem, the staging code of IT Service Vision includes a variable in each Patrol table called DURGRP. DURGRP is a string that represents the duration group that a metric belongs to. In this example, 'A', which is sampled every minute, is included in an observation with a DURGRP value of 60 (60 seconds), and 'B' in an observation with a DURGRP of 300 (300 seconds).
The DURGRP variable is only used at the DETAIL level in the BY list to ensure that the metrics are reduced and summarized by their respective DURATION value (assuming that they are weighted by DURATION).
At first, Patrol data in IT Service Vision may appear peculiar, as there is the possibility of numerous null values appearing in each observation. The number of DURGRPs and null values will depend on the number of different sample rates applied to metrics that belong to the same table.
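Continuing the 'A'/'B' illustration above (where 'A' is sampled every 60 seconds and 'B' every 300 seconds), the detail-level observations would look something like the following sketch; the values shown are hypothetical and simply illustrate how each metric is populated only in the observations for its own duration group, with the other metric left null:

DATETIME            DURGRP   A        B
23JUL98:10:00:00    60       26.98    .
23JUL98:10:01:00    60       5.36     .
23JUL98:10:00:00    300      .        61.03
23JUL98:10:05:00    300      .        58.20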
In the next example, two metrics 'A' and 'B' are both sampled at 1-minute intervals. From the example history data below you can see that the first sample occurred at the same time for both metrics; for the second sample, however, the datetime stamps are out by a second, with 'B' being sampled a second before 'A'. Obviously the first sample for each metric will be combined into a single observation, as the duration and datetime stamps are the same; this is not the case for the second sample.
nightingale/NT_CPU.CPU_0/A
Thu Jul 23 10:00:57 1998 26.981
Thu Jul 23 10:01:58 1998 5.35963
nightingale/NT_CPU.CPU_0/B
Thu Jul 23 10:00:57 1998 61.0279
Thu Jul 23 10:01:57 1998 1.20528
During the staging of the raw data, IT Service Vision detects that this second sample has related datetime values and collapses the data into one observation. The result is that the data in the PDB table is much less sparse; however, the DATETIME and DURATION values will be close approximations.
If you are a new customer running IT Service Vision Release 2.2, or you are upgrading to this release and have no existing NTSMF tables defined, then no conversion is needed and you can ignore this note. The conversion also does not apply to customers who stage their NTSMF data on MVS using the MXG tool.
If none of the above applies to you, then please refer to the NTSMF Conversion documentation.
This document walks through the process of reading Demand Technology's NTSMF data into IT Service Vision. The following stages will be covered, along with working examples.
Demand Technology's NTSMF must be installed and collecting data. It is recommended that you have installed at least version 2.1.9, although earlier releases are supported (see NTSMF Data Requirements below).
IT Service Vision Server must be installed at Release 2.2 on MVS, Unix or Windows NT Server.
There are three approaches you can use when starting out with NTSMF data.
%cpstart(pdb=pdb-name,root=root-location,access=write,mode=batch,_rc=cpstrc);
%put 'CPSTART Return Code is ' &cpstrc;
%cpcat;
cards4;
add table name=ntcache;
add table name=ntlgdsk;
;;;;
%cpcat(cat=work.cpddutl.add.source);
%cpddutl(entrynam=work.cpddutl.add.source);
For MVS use the following %cpstart.
%cpstart(pdb=pdb-name,
root=root-location,
disp=new,
mode=batch,
_rc=cpstrc);
Once the tables have been added, dictionary characteristics (age limits, variables kept status) can be modified either using the interactive interface or the %CPDDUTL macro.
The NTSMF data can be processed on any platform (MVS, UNIX or Windows NT Server), irrespective of the platform on which it originated. Once the data is moved to the appropriate platform, the processing is identical.
For the Enhanced NTSMF support provided in IT Service Vision Release 2.2, it is recommended that each NTSMF log file be maintained as a separate file; that is, we do not recommend that they be concatenated together.
Transfer the file to the appropriate platform so that it retains its text format.
Unix and PC: Place all the NTSMF log files in a single directory which will be pointed to by the %cpproces macro.
MVS: Place each log file in its own PDS member with the following DCB attributes DSORG=PO,RECFM=VB,LRECL=32756,BLKSIZE=32760. By specifying the PDS name in the %cpproces macro, each member will be picked up and processed.
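As a sketch only (the data set name and space values are placeholders, assuming the MVS FILENAME statement host options), the PDS could be allocated from SAS like this before the log files are transferred into it:

* Allocate a new PDS with the DCB attributes given above;
filename ntsmflgs 'YOURHLQ.NTSMF.LOGS'
         disp=(new,catlg) unit=sysda space=(cyl,(50,10,20))
         dsorg=po recfm=vb lrecl=32756 blksize=32760;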
If you decide to use Input Filtering, we recommend that you do the following before running your first %CPPROCES :-
proc catalog cat=pgmlib.ntsmf;
copy out=admin.ntsmf;
select cpdupchk.source;
quit;
- INT=interval represents the maximum interval allowed between the timestamps on any two consecutive data records from the same system. If the interval between the timestamp values exceeds the value of this parameter, a new time range is created. Default is 00:18 (18 minutes).
- SYSTEMS=number of systems represents an estimate of the maximum number of systems for which the data file will contain data. Default is 50.
- RANGES=number of ranges represents the maximum number of interval ranges that can occur during this execution of %CxPROCES. A new range is created when the difference between the datetime stamps of two consecutive records exceeds the value of the INT= parameter. This break is referred to as a gap in the data. Default is 10.
- KEEP=number of weeks represents the maximum number of weeks for which you want to retain control data. Control data is aged out (removed) when the last datetime value in a range exceeds the value of this parameter. Default is 52.
If you do not do the above, the first run of %cpproces with Input Filtering active will copy the default %CPDUPCHK invocation into the admin library automatically, and you will receive the following warning message recommending that you review the %CPDUPCHK parameter values.
WARNING: *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING
WARNING: DO NOT OVERLOOK THIS IMPORTANT WARNING - IT WILL NOT APPEAR AGAIN.
WARNING: A sample invocation of the %CPDUPCHK macro has been copied to
your ADMIN library. You should review its contents before the
next execution. To do so, start IT Service Vision with this
PDB in update mode and type "NOTE ADMIN.NTSMF.CPDUPCHK.SOURCE"
on the SAS command line. The only parameter values you need to
review and probably change are the RANGES=, SYSTEMS=, and KEEP=
settings. Review the comments therein for guidance and the
documentation on input filtering for more details.
WARNING: *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING
If you require more information on Input Filtering please refer to the How To/Macro section from the online help for IT Service Vision.
The following process example should be run after a %CPSTART. For the purposes of this example, input filtering has been included in this process run. The rawdata parameter on the %CPPROCES macro should point to a directory (on Unix and PC) or a PDS (on MVS) containing the NTSMF logs to be processed.
Note: The new revised NTSMF support uses a collector value of NTSMF and a toolnm of SASDS.
%cpproces(,collectr=ntsmf,rawdata=filename,toolnm=sasds,dupmode=discard,_rc=cpprc);
%put 'CPPROCES return code is ' &cpprc;
%CPREDUCE(,_RC=cprrc);
%put 'CPREDUCE return code is ' &cprrc;
In the SAS log you can expect to see the following :-
+-----------------------------------------------------------------------------------------+
| IT Service Vision input data duplication check report |
| =================================================== |
| |
| NOTE: All input records for new machine nightingale will be added. |
| |
+-----------------------------------------------------------------------------------------+
The above message will only appear when Input Filtering is active. The message shown will depend on whether the input data is considered duplicate.
==========================================================
The following objects were not kept to be processed but
existed in the input file.

Object Name = FTP Server
Object Name = MSExchangeWEB
Object Name = Memory
Object Name = Paging File
Object Name = PhysicalDisk
Object Name = Process
Object Name = Server
Object Name = Server Work Queues
Object Name = System
Object Name = WINS Server

A record is not processed for the following reasons :-
1 - The ITSV table for this object was not specified
in the PROCESS macro.
2 - The ITSV table for this object is marked KEPT=N
in the PDB.
3 - The object is a new object for which a table
definition needs to be built (see GENERATE SOURCE).
==========================================================
This report is always produced when processing NTSMF data. It reports the objects that were found in the rawdata but were not processed. If an object appears in this report for which you want to collect the data, then you should add the appropriate table to the PDB. If you do not want to keep the data for an object, you can update your collection process to no longer keep the history data. If an object appears for which there is no supplied table, then one can be constructed using the GENERATE SOURCE function of the %CPDDUTL macro and an INTYPE= parameter of NTSMF.
Collectors such as Demand Technology's NTSMF and BMC Patrol have the potential for producing information from a large number of data sources. Although IT Service Vision will typically supply table and variable definitions for the more popular data sources, it is not practical to supply definitions for them all. To address this issue, additional functionality has been added to the GENERATE SOURCE ddutl control statement to assist in creating table and variable definitions.
The process for creating the table and variable definitions is the same for both NTSMF and PATROL; it is only the input that is different.
Assume that you are a customer with NTSMF and a new object has appeared in your NTSMF log that you want to include in your PDB.
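As a hedged sketch only: the GENERATE SOURCE statement and its INTYPE= parameter are named above, but the remaining details of the statement (for example, how the raw data file and the new object are identified) are not shown here, so check the %CPDDUTL macro reference for the full syntax. The general pattern follows the %cpcat/%cpddutl examples used earlier:

%cpcat;
cards4;
* INTYPE= identifies the collector; other GENERATE SOURCE parameters omitted here;
generate source intype=ntsmf;
;;;;
%cpcat(cat=work.cpddutl.gen.source);
%cpddutl(entrynam=work.cpddutl.gen.source);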
The following HP MeasureWare tables have been updated with new variables :-
APPAPCT - APP ACTIVE PCT
APPATME - APP ACTIVE TIME
APPGICT - APP GUI INPUT COUNT
APPGIRT - APP GUI INPUT RATE
APPGKCT - APP GUI KEYBOARD COUNT
APPGKDL - APP GUI KEYBOARD DELAY
APPGKYR - APP GUI KEYBOARD RATE
APPGMCT - APP GUI MOUSE COUNT
APPGMDL - APP GUI MOUSE DELAY
APPGMRT - APP GUI MOUSE RATE
BYDASTM - BYDSK AVG SERVICE TIME
GLBALTH - GBL ALIVE THREAD
GLBDKFR - GBL DISK SPACE FREE
GLBDKSP - GBL DISK SPACE
GLBFILK - TBL FILE LOCK UTIL
GLBFITB - TBL FILE TABLE UTIL
GLBGUCT - GBL GUI INPUT COUNT
GLBGUDL - GBL GUI INPUT DELAY
GLBGUDR - GBL GUI DELAY INDEX
GLBGURT - GBL GUI INPUT RATE
GLBKBCT - GBL GUI KEYBOARD COUNT
GLBKBRT - GBL GUI KEYBOARD RATE
GLBMCPT - GBL MEM COMMIT PCT
GLBMDCR - GBL MEM DISCARD RATE
GLBMDSC - GBL MEM DISCARD
GLBMLDI - GBL MEM LOAD INDEX
GLBMPGI - GBL MEM PAGEIN
GLBMPIR - GBL MEM PAGEIN RATE
GLBMPSR - GBL MEM PG SCAN RATE
GLBMSCT - GBL GUI MOUSE COUNT
GLBMSRT - GBL GUI MOUSE RATE
GLBMSUT - GBL MEM SYS UTIL
GLBNIEP - GBL NET IN ERROR PCT
GLBNOEP - GBL NET OUT ERROR PCT
GLBNOTQ - GBL NET OUTQUEUE
GLBNTBR - GBL NET BYTE RATE
GLBRBRT - GBL RDR BYTE RATE
GLBRRRT - GBL RDR REQUEST RATE
GLBSPMN - GBL PARTITION SPACE MIN
GLBSYUP - GBL SYSTEM UPTIME HOURS
GLBTTOC - GBL TT OVERFLOW COUNT
GLBWBCR - GBL WEB CONNECTION RATE
GLBWBLF - GBL WEB LOGON FAILURES
GLBWCHP - GBL WEB CACHE HIT PCT
GLBWCRR - GBL WEB CGI REQUEST RATE
GLBWFRR - GBL WEB FILES RECEIVED RATE
GLBWFSR - GBL WEB FILES SENT RATE
GLBWGRR - GBL WEB GET REQUEST RATE
GLBWHRR - GBL WEB HEAD REQUEST RATE
GLBWIRR - GBL WEB ISAPI REQUEST RATE
GLBWNFE - GBL WEB NOT FOUND ERRORS
GLBWORR - GBL WEB OTHER REQUEST RATE
GLBWPRR - GBL WEB POST REQUEST RATE
GLBWRBR - GBL WEB READ BYTE RATE
GLBWWBR - GBL WEB WRITE BYTE RATE
TBLBFCU - TBL BUFFER CACHE USED
TBLINCU - TBL INODE CACHE USED
TBLMSTU - TBL MSG TABLE UTIL
TBLPRTU - TBL PROC TABLE UTIL
TBLSHTU - TBL SHMEM TABLE UTIL
TBLSMTU - TBL SEM TABLE UTIL
PROTHCT - PROC THREAD COUNT
TTAPPNM - TT APP NAME
TTAPPTN - TT APP TRAP NAME
TTCLADD - TT CLIENT ADDRESS
TTCLAFT - TT CLIENT ADDRESS FORMAT
TTCLTID - TT CLIENT TRAN ID
TTCTTPT - TT CPU TOTAL TIME PER TRAN
TTDLIPT - TT DISK LOGL IO PER TRAN
TTDPIPT - TT DISK PHYS IO PER TRAN
TTFAILD - TT FAILED
TTINFO - TT INFO
TTTRNID - TT TRAN ID
TTUMAV2 - TT USER MEASUREMENT AVG 2
TTUMAVG - TT USER MEASUREMENT AVG
TTUMMAX - TT USER MEASUREMENT MAX
TTUMMIN - TT USER MEASUREMENT MIN
TTUMMN2 - TT USER MEASUREMENT MIN 2
TTUMMX2 - TT USER MEASUREMENT MAX 2
TTUMNM2 - TT USER MEASUREMENT NAME 2
TTUMNME - TT USER MEASUREMENT NAME
TTUNAME - TT UNAME
If you decide that you want to add these metrics to your existing PCS* tables, then you have to perform a MAINTAIN TABLE; this will migrate the variable definitions from the supplied data dictionary to your PDB's dictionary. See the Macro reference for more details on the MAINTAIN TABLE functionality.
Although you can add these metrics to your IT Service Vision 2.1 NTSMF tables, they will only be populated if you have run a conversion to use the Enhanced NTSMF support and the software that records these metrics is at the appropriate level.
If you decide that you want to add the following metrics to your existing NT* tables, then you have to perform a MAINTAIN TABLE; this will migrate the definitions from the supplied data dictionary to your PDB's dictionary. See the Macro reference for more details on the MAINTAIN TABLE functionality.
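As a sketch only, assuming that MAINTAIN TABLE takes the same NAME= form as the ADD TABLE statements shown earlier (check the macro reference for the exact syntax), the update could be driven through %cpddutl for two of the tables listed below like this:

%cpcat;
cards4;
maintain table name=ntcnctn;
maintain table name=ntmseis;
;;;;
%cpcat(cat=work.cpddutl.mnt.source);
%cpddutl(entrynam=work.cpddutl.mnt.source);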
--------------------------------------
Table name=NTCNCTN
--------------------------------------
Description                                                    Variable Name
MSExchangeMTA Connections: Connector Index                     CNCINDX
MSExchangeMTA Connections: Cumulative Inbound Associations     IASCTN0
MSExchangeMTA Connections: Rejected Inbound Associations       IASCTN1
MSExchangeMTA Connections: Current Inbound Associations        IASCTNS
MSExchangeMTA Connections: Inbound Bytes Total                 INBBTTL
MSExchangeMTA Connections: Inbound Messages Total              INBMTTL
MSExchangeMTA Connections: Inbound Reject Reason               INBRRSN
MSExchangeMTA Connections: Inbound Rejected Total              INRJTTL
MSExchangeMTA Connections: Last Inbound Association            LIASCTN
MSExchangeMTA Connections: Last Outbound Association           LOASCTN
MSExchangeMTA Connections: Next Association Retry              NASCRTR
MSExchangeMTA Connections: Cumulative Outbound Associations    OASCTN0
MSExchangeMTA Connections: Failed Outbound Associations        OASCTN1
MSExchangeMTA Connections: Current Outbound Associations       OASCTNS
MSExchangeMTA Connections: Oldest Message Queued               OLDSMQD
MSExchangeMTA Connections: Outbound Bytes Total                OTBBTTL
MSExchangeMTA Connections: Outbound Failure Reason             OTBFRSN
MSExchangeMTA Connections: Outbound Messages Total             OTBMTTL
MSExchangeMTA Connections: Queued Bytes                        QDBYTES
MSExchangeMTA Connections: Total Recipients Queued             TRCPNQD
MSExchangeMTA Connections: Total Recipients Inbound            TRINBND
MSExchangeMTA Connections: Total Recipients Outbound           TROTBND

--------------------------------------
Table name=NTMSEIS
--------------------------------------
Description                                                    Variable Name
MSExchangeIS: IMAP Commands Issued Rate                        MAPCIRT
MSExchangeIS: IMAP Commands Issued                             MAPCISD
MSExchangeIS: IMAP Messages Sent                               MAPMSNT
MSExchangeIS: IMAP Message Send Rate                           MAPMSRT
MSExchangeIS: Newsfeed Inbound Rejected Messages               NIRMSGS
MSExchangeIS: NNTP Messages Read                               NNTPMRD
MSExchangeIS: Newsfeed Outbound Rejected Messages              NORMSGS
MSExchangeIS: NNTP Commands Issued Rate                        NTPCIRT
MSExchangeIS: NNTP Commands Issued                             NTPCISD
MSExchangeIS: NNTP Failed Posts Rate                           NTPFPRT
MSExchangeIS: NNTP Messages Posted Rate                        NTPMPRT
MSExchangeIS: NNTP Messages Read Rate                          NTPMRRT
MSExchangeIS: Newsfeed Inbound Rejected Messages Rate          NWIRMRT
MSExchangeIS: Newsfeed Messages Received                       NWMRCVD
MSExchangeIS: Newsfeed Bytes Sent                              NWSBSNT
MSExchangeIS: Newsfeed Bytes Sent/sec                          NWSBSSC
MSExchangeIS: Newsfeed Messages Received Rate                  NWSMRRT
MSExchangeIS: Newsfeed Messages Sent                           NWSMSNT
MSExchangeIS: Newsfeed Messages Sent/sec                       NWSMSSC
MSExchangeIS: NNTP Current Outbound Connections                OCNCTN0
MSExchangeIS: NNTP Outbound Connections                        OCNCTNS
MSExchangeIS: POP3 Commands Issued Rate                        POPCIRT
MSExchangeIS: POP3 Commands Issued                             POPCISD
MSExchangeIS: POP3 Messages Sent                               POPMSNT
MSExchangeIS: POP3 Messages Send Rate                          POPMSRT
MSExchangeIS: NNTP Failed Posts                                TPFPSTS
MSExchangeIS: NNTP Messages Posted                             TPMPSTD
MSExchangeIS: Number of article index table rows expired       TREXPRD

--------------------------------------
Table name=NTPRTCL
--------------------------------------
Description                                                    Variable Name
MSExchange Internet Protocols: Incoming Queue Size             INCMQSZ
MSExchange Internet Protocols: Outstanding Commands            OTCMNDS
MSExchange Internet Protocols: Outgoing Queue Size             OTGNQSZ
MSExchange Internet Protocols: Total Commands                  TTCMNDS

--------------------------------------
Table name=NTSEIMC
--------------------------------------
Description                                                    Variable Name
MSExchangeIMC: Total Failed Conversions                        CNVRSN0
MSExchangeIMC: Total Successful Conversions                    CNVRSNS
MSExchangeIMC: Total Inbound Recipients                        IRCPNTS
MSExchangeIMC: Total Outbound Recipients                       ORCPNTS
MSExchangeIMC: Total Loops Detected                            TLDTCTD
MSExchangeIMC: Total Recipients Queued                         TRCPNQD
MSExchangeIMC: Total Kilobytes Queued                          TTKLBQD
MSExchangeIMC: Total Messages Queued                           TTMSGQD

--------------------------------------
Table name=NTSEMTA
--------------------------------------
Description                                                    Variable Name
MSExchangeMTA: Total Failed Conversions                        CNVRSN0
MSExchangeMTA: Total Successful Conversions                    CNVRSNS
MSExchangeMTA: Deferred Delivery Msgs                          DFDMSGS
MSExchangeMTA: Inbound Bytes Total                             INBBTTL
MSExchangeMTA: Inbound Messages Total                          INBMTTL
MSExchangeMTA: Outbound Bytes Total                            OTBBTTL
MSExchangeMTA: Outbound Messages Total                         OTBMTTL
MSExchangeMTA: Total Loops Detected                            TLDTCTD
MSExchangeMTA: Total Recipients Queued                         TRCPNQD
MSExchangeMTA: Total Recipients Inbound                        TRINBND
MSExchangeMTA: Total Recipients Outbound                       TROTBND
MSExchangeMTA: Work Queue Bytes                                WRKQBTS

--------------------------------------
Table name=NTSPBLC
--------------------------------------
Description                                                            Variable Name
MSExchangeIS Public: Total Count of Recoverable Items                  CORITMS
MSExchangeIS Public: Number of messages expired from public folders    FPFLDRS
MSExchangeIS Public: Replication Receive Queue Size                    RPLRQSZ
MSExchangeIS Public: Total Size of Recoverable Items                   SORITMS

--------------------------------------
Table name=NTSPRVT
--------------------------------------
Description                                                    Variable Name
MSExchangeIS Private: Total Count of Recoverable Items         CORITMS
MSExchangeIS Private: Local deliveries                         LCDLVRS
MSExchangeIS Private: Local delivery rate                      LCDLVRT
MSExchangeIS Private: Total Size of Recoverable Items          SORITMS

--------------------------------------
Table name=NTSSRVC
--------------------------------------
Description                                                            Variable Name
Web Proxy Server Service: Array Bytes Received/sec                     ABTRCSC
Web Proxy Server Service: Array Bytes Sent/sec                         ABTSNSC
Web Proxy Server Service: Array Bytes Total/sec                        ABTTTSC
Web Proxy Server Service: Current Average Milliseconds/request         CAMRQST
Web Proxy Server Service: Failing Requests/sec                         FLRQSSC
Web Proxy Server Service: Requests/sec                                 RQSTSSC
Web Proxy Server Service: Reverse Bytes Received/sec                   RVBRCSC
Web Proxy Server Service: Reverse Bytes Sent/sec                       RVRBSSC
Web Proxy Server Service: Reverse Bytes Total/sec                      RVRBTSC
Web Proxy Server Service: Socks Client Bytes Received/sec              SCCBRSC
Web Proxy Server Service: Socks Client Bytes Sent/sec                  SCCBSSC
Web Proxy Server Service: Socks Client Bytes Total/sec                 SCCBTSC
Web Proxy Server Service: Socks sessions                               SCKSSNS
Web Proxy Server Service: Total Array Fetches                          TAFTCHS
Web Proxy Server Service: Total Failed Socks Sessions                  TFSSSNS
Web Proxy Server Service: Total Reverse Fetches                        TRFTCHS
Web Proxy Server Service: Total Socks Sessions                         TSCSSNS
Web Proxy Server Service: Total Successful Socks Sessions              TSSSSNS
Web Proxy Server Service: Total Upstream Fetches                       TUFTCHS
Web Proxy Server Service: Upstream Bytes Received/sec                  UPSBRSC
Web Proxy Server Service: Upstream Bytes Sent/sec                      UPSBSSC
Web Proxy Server Service: Upstream Bytes Total/sec                     UPSBTSC