
Experience with Automated Testing Analysis

IP.com Disclosure Number: IPCOM000131433D
Original Publication Date: 1979-Aug-01
Included in the Prior Art Database: 2005-Nov-11
Document File: 6 page(s) / 27K

Publishing Venue

Software Patent Institute

Related People

Mark A. Holthouse: AUTHOR

Mark J. Hatch: AUTHOR

Abstract

The Analytic Sciences Corp. [Figure containing following caption omitted: Automated testing analyzers are popular software test tools. Such an analyzer, using a branch testing strategy, provides a cost-effective way of increasing confidence in software behavior.] One of the more popular tools for supporting software testing is the automated testing analyzer, or ATA (also referred to as an "execution verifier" or "automated verification system"). A number of such tools are presently in use in a variety of applications, and perform functions such as static analysis, assertion processing, and test data generation. Most provide a testing "coverage" measure based on the effect a set of tests has on the internal control flow of the program under test. This article describes some experience with using an ATA tool to measure testing coverage according to a particular measure -- that of "branch testing" (or "decision-to-decision path testing"). The emphasis is not on the theoretical reliability of this test strategy -- Howden offers a good discussion of this aspect. We instead discuss the benefits and problems associated with the technique, drawing from experience in using the tool on several medium-sized software development efforts.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 19% of the total text.


THIS DOCUMENT IS AN APPROXIMATE REPRESENTATION OF THE ORIGINAL.

This record contains textual material that is copyright © 1979 by the Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Contact the IEEE Computer Society http://www.computer.org/ (714-821-8380) for copies of the complete work that was the source of this textual material and for all use beyond that as a record from the SPI Database.

Experience with Automated Testing Analysis

Mark A. Holthouse

Mark J. Hatch

The Analytic Sciences Corp.

(Image Omitted: Automated testing analyzers are popular software test tools. Such an analyzer, using a branch testing strategy, provides a cost-effective way of increasing confidence in software behavior.)

One of the more popular tools for supporting software testing is the automated testing analyzer, or ATA (also referred to as an "execution verifier" or "automated verification system"). A number of such tools are presently in use in a variety of applications, and perform functions such as static analysis, assertion processing, and test data generation. Most provide a testing "coverage" measure based on the effect a set of tests has on the internal control flow of the program under test.
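To make one of these functions concrete, the short Python sketch below illustrates assertion processing: executable assertions attached to a routine and checked on every call. The decorator and the sample square-root routine are hypothetical illustrations, not the interface of any ATA tool discussed here.

# A minimal sketch of assertion processing: a precondition is checked
# on entry and a postcondition on exit; a violated assertion flags the
# test case. Names here are illustrative inventions.

def with_assertions(pre, post):
    def decorate(func):
        def wrapper(*args):
            assert pre(*args), f"precondition failed for {args}"
            result = func(*args)
            assert post(result, *args), f"postcondition failed for {args}"
            return result
        return wrapper
    return decorate

@with_assertions(pre=lambda x: x >= 0,
                 post=lambda r, x: abs(r * r - x) <= 1e-6 * max(x, 1.0))
def my_sqrt(x):
    # Newton's method; serves only as a test subject for the assertions.
    guess = x if x > 0 else 1.0
    for _ in range(60):
        guess = 0.5 * (guess + x / guess)
    return guess

print(my_sqrt(2.0))    # passes both assertions
# my_sqrt(-1.0)        # would trip the precondition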

This article describes some experience with using an ATA tool to measure testing coverage according to a particular measure -- that of "branch testing" (or "decision-to-decision path testing"). The emphasis is not on the theoretical reliability of this test strategy -- Howden offers a good discussion of this aspect. We instead discuss the benefits and problems associated with the technique, drawing from experience in using the tool on several medium-sized software development efforts.
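The following Python sketch suggests how a branch testing coverage figure of this kind could be computed: a probe is planted on each outcome of each decision, the tests are run, and the fraction of outcomes exercised is reported. The class, probe names, and sample routine are illustrative assumptions, not the actual tool's interface.

# A minimal sketch of branch ("decision-to-decision path") coverage:
# each outcome of each decision gets a probe; coverage is the fraction
# of probes hit by the test set.

class BranchCoverage:
    def __init__(self):
        self.hits = {}

    def declare(self, *branch_ids):
        for b in branch_ids:
            self.hits[b] = False

    def probe(self, branch_id):
        self.hits[branch_id] = True

    def report(self):
        covered = sum(self.hits.values())
        total = len(self.hits)
        missed = [b for b, hit in self.hits.items() if not hit]
        print(f"branch coverage: {covered}/{total} "
              f"({100.0 * covered / total:.0f}%), missed: {missed}")

cov = BranchCoverage()
cov.declare("clip:low", "clip:high", "clip:in-range")

def clip(x, lo, hi):
    # Sample routine, instrumented by hand here; a real ATA would
    # insert the probes automatically.
    if x < lo:
        cov.probe("clip:low")
        return lo
    elif x > hi:
        cov.probe("clip:high")
        return hi
    else:
        cov.probe("clip:in-range")
        return x

# Two tests exercise only the low and in-range branches.
clip(-5, 0, 10)
clip(3, 0, 10)
cov.report()   # -> branch coverage: 2/3 (67%), missed: ['clip:high']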

Background

Automated testing analyzers and their associated branch testing strategy developed out of a need to provide some structure to what was often an ad hoc method of software testing. They rely on an underlying model of a program and related error conditions. This model is called a conceptual fault model.

Conceptual fault model.

The purpose of testing is to remove errors in a software product. A single software test consists of an input stimulus and a corresponding output. A comprehensive test would therefore drive the program with all possible inputs and show all outputs to be correct. Since any nontrivial software product has an infinite or near-infinite number of inputs, this simple model of software as an input-output black box is insufficient for effective testing.
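A back-of-the-envelope calculation makes the point; the Python sketch below assumes a hypothetical routine taking just two 32-bit integer arguments and an optimistic testing rate.

# Why the black-box "all possible inputs" model breaks down: even a
# hypothetical routine with two 32-bit integer arguments has an
# astronomically large input space. The test rate is an assumption.

inputs = (2 ** 32) ** 2            # every pair of 32-bit values
tests_per_second = 10 ** 9         # assumed, optimistically
seconds_per_year = 60 * 60 * 24 * 365

years = inputs / (tests_per_second * seconds_per_year)
print(f"{inputs:.3e} inputs -> about {years:,.0f} years to enumerate")
# -> 1.845e+19 inputs -> about 585 years to enumerate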

Testing for errors, however, relates primarily to unexpected events in the program, that is, events which are inadvertently built-in, incorrectly built-in, or inadvertently not built-in. By viewing software through its control structure (often modeled as a directed graph), events can be related to a particular path, or sequence, of program instructions to be executed. The components of a test thus include not only its input and output, but also its corresponding
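As a sketch of this directed-graph view, the Python fragment below builds a trivial control-flow graph by hand and enumerates its entry-to-exit paths, each corresponding to one sequence of decision-to-decision branches. The graph and node names are illustrative assumptions, not the article's notation.

# A hand-built control-flow graph for a routine with one if/else, and
# an enumeration of its entry-to-exit paths.

cfg = {
    "entry":    ["decision"],
    "decision": ["then", "else"],   # two decision-to-decision branches
    "then":     ["exit"],
    "else":     ["exit"],
    "exit":     [],
}

def paths(node, path=()):
    # Yield every path from `node` to the exit node.
    path = path + (node,)
    if not cfg[node]:
        yield path
        return
    for nxt in cfg[node]:
        yield from paths(nxt, path)

for p in paths("entry"):
    print(" -> ".join(p))
# entry -> decision -> then -> exit
# entry -> decision -> else -> exit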
