
Error Detection Using Path Testing and Static Analysis

IP.com Disclosure Number: IPCOM000131432D
Original Publication Date: 1979-Aug-01
Included in the Prior Art Database: 2005-Nov-11
Document File: 6 page(s) / 28K

Publishing Venue

Software Patent Institute

Related People

Carolyn Gannon: AUTHOR

Abstract

[Figure containing following caption omitted: How many types of errors can be detected through static analysis and branch testing? How many man-hours and machine hours do these techniques require? Here are some empirically determined answers.] Two software testing techniques -- static analysis and dynamic path (branch) testing [1] -- are receiving a great deal of attention in the world of software engineering these days. However, empirical evidence of their ability to detect errors is very limited, as is data concerning the resource investment their use requires. Researchers such as Goodenough [2] and Howden [3] have estimated or graded these testing methods, as well as such other techniques as interface consistency, symbolic testing, and special values testing. However, this paper seeks (1) to demonstrate empirically the types of errors one can expect to uncover and (2) to measure the engineering and computer time which may be required by the two testing techniques for each class of errors during system-level testing.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 16% of the total text.


THIS DOCUMENT IS AN APPROXIMATE REPRESENTATION OF THE ORIGINAL.

This record contains textual material that is copyright © 1979 by the Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Contact the IEEE Computer Society http://www.computer.org/ (714-821-8380) for copies of the complete work that was the source of this textual material and for all use beyond that as a record from the SPI Database.

Error Detection Using Path Testing and Static Analysis

Carolyn Gannon

General Research Corporation

(Image Omitted: How many types of errors can be detected through static analysis and branch testing? How many man-hours and machine hours do these techniques require? Here are some empirically determined answers.)

Two software testing techniques -- static analysis and dynamic path (branch) testing [1] -- are receiving a great deal of attention in the world of software engineering these days. However, empirical evidence of their ability to detect errors is very limited, as is data concerning the resource investment their use requires. Researchers such as Goodenough [2] and Howden [3] have estimated or graded these testing methods, as well as such other techniques as interface consistency, symbolic testing, and special values testing. However, this paper seeks (1) to demonstrate empirically the types of errors one can expect to uncover and (2) to measure the engineering and computer time which may be required by the two testing techniques for each class of errors during system-level testing.

The experiment

To provide the material for our testing demonstration, a medium-size, 5000-source-statement Fortran program was seeded with errors one at a time and analyzed by a testing tool that has both static analysis and dynamic path analysis capabilities.
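The dynamic side of such a tool works by instrumenting each decision point and recording which branches execute during a run. The paper's tool operated on Fortran; the sketch below is a hypothetical, minimal illustration in Python of the same branch-recording idea, with invented names (`branch`, `branch_coverage`) that do not come from the paper.

```python
# Hypothetical sketch of branch testing: each arm of a decision is tagged
# with an identifier, and a recorder notes which arms actually ran.
# (Illustrative only -- not the tool used in the study.)

covered = set()  # branch identifiers exercised so far

def branch(branch_id):
    """Record that a tagged branch was taken."""
    covered.add(branch_id)

def classify(x):
    # An instrumented decision: both arms carry a branch tag.
    if x < 0:
        branch("classify:neg")
        return "negative"
    else:
        branch("classify:nonneg")
        return "non-negative"

def branch_coverage(all_branches):
    """Fraction of the known branches exercised so far."""
    return len(covered & all_branches) / len(all_branches)

ALL = {"classify:neg", "classify:nonneg"}
classify(5)                  # only the non-negative arm runs
print(branch_coverage(ALL))  # 0.5 -- one of two branches covered
classify(-1)
print(branch_coverage(ALL))  # 1.0 -- full branch coverage
```

A branch-testing campaign then amounts to adding test data until the reported coverage reaches the target (here, 1.0).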

The test object. The test object program was chosen carefully. In addition to size and functional variety, we needed a program as error-free as possible, to avoid camouflaging the seeded errors. The Fortran program chosen consists of 56 utility routines that compute coordinate translation, integration, flight maneuvers, input and output handling, and data base presetting and definition. The collection of routines has been extensively used in software projects for over 10 years. The main program directs the functional action according to the user's input data. The program is highly computational and has complex control logic. Sample data and expected output were provided with the program's functional description, but specifications and requirements for the program's algorithms were not available. For this experiment, the correct output for the supplied data sets was used as a specification of proper program behavior.

Error seeding.

Of the several studies describing error types and frequencies, TRW's [6] is the most relevant. We have used the Project 5 data from that report as the basis for selecting error types and frequencies. Several error categories (namely documentation,...
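Error seeding of this kind introduces one known fault at a time into an otherwise correct program and then asks whether a testing technique exposes it; with no written specification, the correct program's own output serves as the oracle. The sketch below is a hypothetical, minimal Python illustration of that one-fault-at-a-time scheme, with invented routine names (`correct_mean`, `seeded_mean`, `fault_detected`) not taken from the paper.

```python
# Hypothetical sketch of error seeding: plant a single computation error
# in a correct routine, then see whether a given data set exposes it by
# comparing against the correct output (the oracle).
# (Illustrative only; the study seeded a 5000-statement Fortran program.)

def correct_mean(xs):
    """The original, correct routine -- also serves as the oracle."""
    return sum(xs) / len(xs)

def seeded_mean(xs):
    # Seeded computation error: divides by len(xs) - 1 instead of len(xs).
    return sum(xs) / (len(xs) - 1)

def fault_detected(test_input):
    """A seeded error counts as detected when the faulty version's
    output differs from the oracle's on this test input."""
    return correct_mean(test_input) != seeded_mean(test_input)

print(fault_detected([2.0, 4.0, 6.0]))  # True: this data set exposes the fault
```

Repeating this for each error class (computation, logic, data handling, and so on) yields per-class detection counts of the kind the experiment reports.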