
Program Testing: Guest Editor's Introduction

IP.com Disclosure Number: IPCOM000131291D
Original Publication Date: 1978-Apr-01
Included in the Prior Art Database: 2005-Nov-10
Document File: 7 page(s) / 25K

Publishing Venue

Software Patent Institute

Related People

Edward F. Miller, Jr.: AUTHOR [+3]

Abstract

What is testing? Nearly everyone in the computer business is concerned about quality. It's probably fair to say there'll be a pot of gold or its currency equivalent awaiting the first person to figure out how to package up some "software quality" and provide it in large quantities at substantial OEM discounts! For computer software systems, quality appears to be a characteristic that can be neither built in with assurance at system creation time nor retrofitted with certainty after a product is in use. Apart from the issues of logistics (copying and distributing software), there appears to be a continuing need to demonstrate some minimum level of quality in typical wide-use software systems. At the same time, this appears to be impossible to accomplish: the best-designed systems have had errors revealed many years after introduction, and several programs that had been publicly "proved correct" contained outright mistakes!1,2 Program testing as a discipline appears to stand midway between wide user acceptance and a not-quite-practical laboratory curiosity. Certainly it can be argued that program testing "works" in the sense that many millions of lines of software -- the programs that keep the world working -- didn't get to that state by any other technical means (such as having been proved correct)! In this sense the task of researchers is to discover what was done and to design methods that accomplish the same (or better) effects at less cost.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 24% of the total text.


THIS DOCUMENT IS AN APPROXIMATE REPRESENTATION OF THE ORIGINAL.

This record contains textual material that is copyright © 1978 by the Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Contact the IEEE Computer Society http://www.computer.org/ (714-821-8380) for copies of the complete work that was the source of this textual material and for all use beyond that as a record from the SPI Database.

Program Testing: Guest Editor's Introduction

Edward F. Miller, Jr.

Software Research Associates

What is testing?

Nearly everyone in the computer business is concerned about quality. It's probably fair to say there'll be a pot of gold or its currency equivalent awaiting the first person to figure out how to package up some "software quality" and provide it in large quantities at substantial OEM discounts!

For computer software systems, quality appears to be a characteristic that can be neither built in with assurance at system creation time nor retrofitted with certainty after a product is in use. Apart from the issues of logistics (copying and distributing software), there appears to be a continuing need to demonstrate some minimum level of quality in typical wide-use software systems. At the same time, this appears to be impossible to accomplish: the best-designed systems have had errors revealed many years after introduction, and several programs that had been publicly "proved correct" contained outright mistakes!1,2

Program testing as a discipline appears to stand midway between wide user acceptance and a not-quite-practical laboratory curiosity. Certainly it can be argued that program testing "works" in the sense that many millions of lines of software -- the programs that keep the world working -- didn't get to that state by any other technical means (such as having been proved correct)! In this sense the task of researchers is to discover what was done and to design methods that accomplish the same (or better) effects at less cost.

Elements of testing technology

The technology of testing computer programs appears to divide into some natural categories, all of which contribute to the basic objective of systematically analyzing the actual behavior of programs:

Static analysis seeks to demonstrate the truth of certain allegations about program properties without necessarily having to execute the programs.

Dynamic analysis seeks to understand the internal relationships between a program test and the parts of a program that are activated (exercised) during the test; a small coverage-tracing sketch follows this list.

Test case design attempts to figure out how to construct and/or organize tests to get the best testing effect (highest likelihood of discovering errors) with the least effort; a boundary-value sketch also follows this list.

Symbolic evaluation attempts to determine properties of programs (with a quality level quite close to proof-of-correctness) without actually executing them.

Automated tools provide the technical means to set up, measure, record, and archive the results of testing.
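
As a concrete illustration of the dynamic-analysis idea above, the following minimal sketch runs a test and records which lines of a small program were actually exercised. The program under test (absolute_value), the use of Python's sys.settrace hook, and the chosen inputs are illustrative assumptions added here, not material from the original article.

import sys

def absolute_value(x):
    # Hypothetical program under test (not from the article).
    if x < 0:
        return -x
    return x

def run_with_coverage(func, *args):
    """Run func(*args) and record the line numbers of func that execute."""
    executed = set()

    def tracer(frame, event, arg):
        # Record only 'line' events that occur inside the function under test.
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, executed

if __name__ == "__main__":
    # A single test with input 5 exercises only the non-negative path;
    # a second test (-5) is needed to reach the "x < 0" branch, the kind
    # of gap that dynamic analysis is meant to reveal.
    for value in (5, -5):
        result, lines_hit = run_with_coverage(absolute_value, value)
        print(f"input {value}: result {result}, lines executed {sorted(lines_hit)}")
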
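As a hedged illustration of test case design, the sketch below selects one representative input per class plus boundary cases for a small triangle-classification routine, rather than testing arbitrary inputs. The routine and the particular cases are assumptions made for illustration only; they are not drawn from the article.

def classify_triangle(a, b, c):
    """Classify a triangle by its side lengths (hypothetical example)."""
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# One representative per input class plus boundary cases: the
# "best testing effect with the least effort" idea in the text.
TEST_CASES = [
    ((3, 4, 5), "scalene"),
    ((2, 2, 3), "isosceles"),
    ((2, 2, 2), "equilateral"),
    ((1, 2, 3), "invalid"),   # degenerate: a + b == c (boundary)
    ((0, 1, 1), "invalid"),   # zero-length side (boundary)
    ((-1, 2, 2), "invalid"),  # negative side
]

if __name__ == "__main__":
    for args, expected in TEST_CASES:
        actual = classify_triangle(*args)
        status = "ok" if actual == expected else "ERROR"
        print(f"{status}: classify_triangle{args} -> {actual} (expected {expected})")
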

IEEE Computer...