
Testing Numerical Software

IP.com Disclosure Number: IPCOM000128304D
Original Publication Date: 1983-Dec-31
Included in the Prior Art Database: 2005-Sep-15
Document File: 10 page(s) / 37K

Publishing Venue

Software Patent Institute

Related People

Timothy A. Budd: AUTHOR [+4]


This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 11% of the total text.


Testing Numerical Software*

Timothy A. Budd

Webb Miller

TR 83-18

ABSTRACT

Some errors in numerical software (i.e., programs to perform floating-point computation) are quite easy to detect, while others are extremely elusive. This paper presents empirical evidence to support several claims about the degree of difficulty of detecting certain types of errors in certain types of numerical programs using certain types of data. To formulate the claims, we propose tentative classifications of errors and software; to corroborate the claims, we introduce an experimental method.

November 6, 1983

Department of Computer Science

The University of Arizona

Tucson, Arizona 85721

*This work was supported in part by the National Science Foundation under Grants MCS-8109547 and MCS-7926441.

1. Introduction

How difficult is it to detect a given type of program error using test inputs? The question can be construed in the following ways, depending on whether our interest lies in exposing, recognizing, or repairing the error.

1. Given a program containing a typical error of the type in question, what is the likelihood that a typical set of data will cause the program to execute in an unacceptable manner?

2. How hard is it to decide if a given set of data causes a given program to execute in an unacceptable manner?

3. How difficult is it to determine a program modification that will rectify a given example of unacceptable program behavior?

All of these questions are important, but the last two are outside the scope of this paper. Except for [16], question 2 has been largely ignored in the software engineering literature. With numerical software there may not exist a foolproof procedure to determine the acceptability of a solution, or if such a procedure exists it may be too complex or time-consuming to use in practice [14]. Although the work described in this paper requires that acceptability criteria be given for certain computations, we have made no attempt to advance the understanding of such questions. Similarly, we will not treat question 3, which depends on such factors as the programmer's familiarity with the code, algorithm, problem domain and testing techniques [9, 11]. Our interest will be with questions of type 1. In general the answer to such a question is difficult to obtain, since it requires information about the distributions of errors and test values in practice. Three types of evidence will be offered to support our claims.

1. ("Natural errors") Errors that have been discovered in production software will be examined.

2. ("Experiments or analyses using a fixed error and random data") Given a program containing an error of the type in question, we will determine the size of the space of input values on which the program will perform unacceptably because of this error.

3. ("Experiments with...