The SAS Model Manager model comparison, validation, and summary reports are tools that you can use to evaluate and compare the candidate models in a version, or across versions, to help you select and approve the champion model that moves to production status. The model comparison reports are analytical tools that project managers, statisticians, and analysts can use to assess the structure, performance, and resilience of candidate models. The model validation reports use statistical measures to validate the stability, performance, and calibration of Basel II risk models and parameters. The training summary data set report creates frequency and distribution charts that summarize the variables in the train table.
The reports present information about a number of attributes that can affect model performance. Together, the reports provide information that can serve as the analytical basis for choosing and monitoring a champion model.
Here is a description of the comparison reports:

For a single model, this report displays the profile data that is associated with the input, output, and target variables. Profile data includes the variable name, type, length, label, SAS format, measurement level, and role.

This report compares the profile data for two models and notes the differences.
The Dynamic Lift report provides visual summaries of the performance of one or more models in predicting a binary outcome variable.
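The idea behind a lift report can be sketched as follows. This is an illustrative Python sketch of how a cumulative lift curve is typically computed, not SAS Model Manager code; the function name and sample data are hypothetical.

```python
# Illustrative sketch (not SAS code): cumulative lift for a binary-outcome
# model, computed from predicted probabilities and actual event labels.

def cumulative_lift(scores, labels, n_bins=10):
    """Return the cumulative lift for each score decile."""
    # Rank records from highest to lowest predicted probability.
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    total = len(ranked)
    overall_rate = sum(label for _, label in ranked) / total  # baseline rate
    lifts = []
    for b in range(1, n_bins + 1):
        cutoff = round(total * b / n_bins)
        captured = sum(label for _, label in ranked[:cutoff])
        # Lift = event rate in the top b deciles / overall event rate.
        lifts.append((captured / cutoff) / overall_rate)
    return lifts

# Hypothetical example: a model that ranks events well has lift > 1
# in the top deciles and lift = 1 over the whole population.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
labels = [1,   1,   0,   1,   0,   0,   0,   0,   0,   0]
```

A model with no predictive power produces a flat lift of 1 across all deciles, which is why the report is a quick visual check of ranking ability.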
The Interval Target Variable report creates two plots that show the actual versus predicted values and the actual versus residual values for a model. The Interval Target Variable report can be created only for prediction models.
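The two quantities that such a report plots can be sketched in a few lines. This is an illustrative Python sketch, not SAS code; the variable names and sample values are hypothetical.

```python
# Illustrative sketch (not SAS code): the two pairings an
# actual-versus-predicted report plots for an interval (prediction) target.

def residuals(actual, predicted):
    """Residual = actual - predicted, one per scored record."""
    return [a - p for a, p in zip(actual, predicted)]

# Hypothetical actual and predicted values for an interval target.
actual    = [10.0, 12.5, 9.0, 15.0]
predicted = [ 9.5, 13.0, 9.5, 14.0]

pairs_actual_predicted = list(zip(actual, predicted))            # plot 1
pairs_actual_residual  = list(zip(actual, residuals(actual, predicted)))  # plot 2
```

In the first plot, points near the diagonal indicate accurate predictions; in the second, residuals scattered evenly around zero indicate no systematic bias.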
These are the Basel II model validation reports:
The Loss Given Default (LGD) report calculates the amount that might be lost on an investment and calculates the economic or regulatory capital for Basel II compliance.
The Probability of Default (PD) Validation report estimates the probability of defaulting on a debt that is owed. Probability of default is used to calculate economic or regulatory capital for Basel II compliance.
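PD and LGD estimates feed the standard Basel expected-loss calculation, EL = PD &times; LGD &times; EAD. The sketch below is an illustrative Python rendering of that standard formula, not SAS code; the exposure figures are hypothetical.

```python
# Illustrative sketch (not SAS code): the standard Basel expected-loss
# formula that PD and LGD estimates feed into.

def expected_loss(pd, lgd, ead):
    """Expected loss = probability of default (PD)
    * loss given default (LGD)
    * exposure at default (EAD)."""
    return pd * lgd * ead

# Hypothetical exposure: 2% default probability, 45% loss severity,
# and an exposure at default of 1,000,000.
el = expected_loss(pd=0.02, lgd=0.45, ead=1_000_000)
```

This is why the validation reports focus on PD and LGD accuracy: errors in either estimate propagate directly into the capital calculation.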
The model validation reports use statistical measures to assess the following:
- The model stability measures track the change in distribution between the modeling data and the scoring data.
- The model performance measures check the model's ability to distinguish between accounts that have defaulted and accounts that have not, and report on the relationship between actual default probability and predicted default probability.
- The model calibration measures check the accuracy of the selected models for the LGD and PD reports by comparing the quantification of the risk components with the available standards.
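The stability measures above can be illustrated with the population stability index (PSI), a widely used statistic for comparing a development-data score distribution with a recent scoring-data distribution. This is a generic Python sketch, not SAS code; the bin counts are hypothetical.

```python
import math

# Illustrative sketch (not SAS code): the population stability index (PSI),
# a common model-stability measure comparing the score distribution of the
# modeling (development) data with that of recent scoring data.

def psi(expected_counts, actual_counts):
    """PSI = sum over bins of (actual% - expected%) * ln(actual% / expected%)."""
    e_total = sum(expected_counts)
    a_total = sum(actual_counts)
    value = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = e / e_total   # share of development records in this bin
        a_pct = a / a_total   # share of scoring records in this bin
        value += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return value

# Hypothetical per-bin record counts: development data vs. current data.
baseline = [100, 200, 300, 400]
current  = [110, 190, 310, 390]
```

Identical distributions give a PSI of 0; a common rule of thumb treats values above roughly 0.25 as a significant population shift that warrants investigation.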
This is the train table data set summary report:
The Training Summary Data Set report creates frequency and distribution charts to validate the data set variables.
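The kind of frequency summary such a report tabulates for a categorical training variable can be sketched briefly. This is an illustrative Python sketch, not SAS code; the variable name and values are hypothetical.

```python
from collections import Counter

# Illustrative sketch (not SAS code): a frequency summary of the kind a
# training-data report tabulates for a categorical variable.

def frequency_table(values):
    """Return (value, count, percent) rows sorted by descending count."""
    counts = Counter(values)
    total = len(values)
    return [(value, n, 100.0 * n / total) for value, n in counts.most_common()]

# Hypothetical categorical train-table variable.
credit_grade = ["A", "B", "A", "C", "B", "A"]
```

Reviewing these frequencies against expectations is a quick way to catch miscoded levels or unexpected missing-value patterns before training.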
You create the model comparison, validation, and summary reports by using the New Report window, which you open from a version's Reports node: