Designing Custom Validation Tests

Most existing SAS sites run custom tests at installation, and periodically thereafter, to validate the behavior of the software. Even though SAS provides some excellent self-validating tests (see the previous page), you are advised to continue running your own custom tests, especially those that exercise a business-critical process.

Designing Benchmarked Tests

A good way to validate a software migration is to benchmark the software's behavior in the source installation and then compare that benchmark to the behavior of the target installation. The most effective way to collect benchmark measurements depends on whether the SAS code runs in batch or under a graphical user interface, how output is rendered, and what data structures are processed. The rest of this page gives you some ideas, but you will want to consult with experts or texts about general testing techniques.

Best practice

Create a sample subset of data. Often, you can accomplish more testing in a given period of time by using a smaller volume of data. (On the other hand, this might not be advisable if very large data sets are common in your processes.) Retain the test data in the target installation so that you can revalidate at any later date, for example, after applying a hot fix or a maintenance release.
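
As a minimal sketch, a DATA step can draw a repeatable random subset; the librefs, paths, and data set names here are examples only:

    /* Build a smaller, repeatable test subset of production data.   */
    /* The librefs, paths, and data set names are examples only.     */
    libname prod 'path-to-production-data';
    libname bench 'path-to-retained-test-data';

    data bench.claims_sample;
       set prod.claims;
       if ranuni(12345) < 0.10;   /* keep roughly a 10% random sample */
    run;

Because the seed is fixed, rerunning the step against the same input draws the same subset, which keeps later revalidation runs comparable.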

Creating Benchmarks for Batch SAS Programs

For a batch SAS job, you can make benchmarks from saved output files, such as log files, listing files, and reports. Some user-defined benchmarked tests are supported by the SAS Operational Qualification Tool (SAS OQ; formerly the FTT).
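
As a sketch of one way to save those files, PROC PRINTTO can route the log and listing of a run to benchmark files (the file paths are examples only):

    /* Route the SAS log and procedure output to benchmark files. */
    proc printto log='bench/source_run.log'
                 print='bench/source_run.lst' new;
    run;

    /* ...the batch program under test executes here... */

    /* Restore the default log and output destinations. */
    proc printto;
    run;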

Creating Benchmarks for Interactive SAS Applications

For an interactive application, creating benchmarks is more complex. One method is to design a script of a typical process flow and benchmark the resulting user interface elements; this is called a simulated usage scenario. Design the script as an ordered series of user input events that should produce the same results each time the script is followed.

To benchmark the application's behavior, you can take screen captures of important windows, save HTML source from a browser page, or save reports and other output. In deciding what behavior to study, consider that the exact color or labeling of a button might not be significant, for example, but the contents of a text field that appears after the button is clicked would be worth recording.
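
For the saved-reports approach, a minimal sketch using ODS, assuming the hypothetical test data set created earlier, might look like this:

    /* Save a rendered report as HTML so that the same report   */
    /* from the target installation can be compared against it. */
    ods html body='bench/summary_benchmark.html';

    proc means data=bench.claims_sample n mean min max;
    run;

    ods html close;

When comparing saved output files, consider suppressing run-specific details such as dates (for example, with OPTIONS NODATE) so that only meaningful differences remain.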

Best practice

Use existing test scripts. If you created the custom SAS application under a software development methodology, then you probably designed a validation plan to test the finished application. You can use those same test scripts and results as benchmarks for validating the migration.

Comparing Source to Target

Complete the validation by running the same tests against the same data in the target installation. To compare the benchmarked source output to the target output, you can visually compare the output, run PROC CONTENTS in the source and then in the target, run PROC COMPARE between the source and target data sets, or use a third-party differencing tool.
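
For example, here is a minimal sketch of the PROC CONTENTS and PROC COMPARE steps, assuming the source test data set was retained as described above (the librefs are hypothetical):

    /* Describe the structure of the target data set for a      */
    /* side-by-side check against the source PROC CONTENTS run. */
    proc contents data=target.claims_sample varnum;
    run;

    /* Compare the target data set observation by observation   */
    /* against the benchmark copy saved from the source.        */
    proc compare base=bench.claims_sample
                 compare=target.claims_sample listall;
    run;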

Running a Pilot or Beta Test

Running a pilot test means releasing the target installation in beta mode. You make the software environment available to a limited number of users who pay careful attention to output and logs, reporting any errors.

Validating Multiuser Environments

If your application supports multiple simultaneous users (for example, with SAS/SHARE software), then take time to verify that functionality. Even after all single-user testing has been completed, new problems can arise under a multiuser load.
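
As a sketch under the assumption that a SAS/SHARE server is running, each test session can reach the shared data through that server while the benchmarked tests are rerun concurrently (the server ID and path below are examples only):

    /* Point this session at the shared library through the      */
    /* SAS/SHARE server; repeat from several sessions at once.   */
    /* Server ID 'shrserv' and the path are examples only.       */
    libname shared 'path-to-shared-data' server=shrserv;

    /* Rerun the benchmarked tests from each session while the   */
    /* libref is active, and watch the logs for locking errors.  */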