COLLECTING AND CATEGORIZING SOFTWARE ERROR DATA IN AN INDUSTRIAL ENVIRONMENT

IP.com Disclosure Number: IPCOM000128212D
Original Publication Date: 1982-Dec-31
Included in the Prior Art Database: 2005-Sep-15
Document File: 10 page(s) / 40K

Publishing Venue

Software Patent Institute

Related People

THOMAS J. OSTRAND: AUTHOR
ELAINE J. WEYUKER: AUTHOR


COLLECTING AND CATEGORIZING SOFTWARE ERROR DATA IN AN INDUSTRIAL ENVIRONMENT

BY THOMAS J. OSTRAND* AND ELAINE J. WEYUKER**

TECHNICAL REPORT #4, AUGUST 1982

* Systems and Software Research, Sperry Univac, Blue Bell, PA 19424

** Department of Computer Science, Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, New York

This research was supported in part by the National Science Foundation under Grant MCS-82-001167.

1. Introduction

The assessment of many software development and validation methods suffers from a lack of factual and statistical information characterizing their effectiveness, particularly in a real production environment. For the past two years, we have been involved in the design and execution of a study of software errors at Sperry Univac. We have gathered information describing types of errors, the methods used to detect their presence and isolate problems, and the difficulty of detection, isolation, and correction. We also attempted to determine when an error enters the product being developed, how much time elapses until it is detected, and, perhaps most difficult of all, the real underlying cause of the error's existence. The study's specific goals include finding answers to the following questions:

-- how do the types, frequencies, and difficulty of detection and isolation of errors vary over the steps of development and use of software?

-- can the trends in software error statistics reported in earlier studies be verified?

-- can a usable and practical method of error categorization be developed?

In addition, error information collected now will later be used as baseline data for evaluating the effect of different software development methods and tools on the number and types of errors.
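To make the collected attributes concrete, the sketch below models a single error report with fields for the information described above: error type, detection method, the difficulty of detection, isolation, and correction, the phases in which the error entered and was detected, the elapsed time to detection, and the judged underlying cause. The field names, phase values, and difficulty scale are illustrative assumptions, not the actual report form used in the Sperry Univac study.

```python
from dataclasses import dataclass
from enum import Enum

class Phase(Enum):
    # Development steps tracked by the study (values are illustrative).
    DESIGN = "design"
    CODING = "coding"
    FUNCTION_TEST = "function test"
    SYSTEM_TEST = "system test"
    FIELD_USE = "field use"

@dataclass
class ErrorReport:
    # One collected error record; the fields mirror the information
    # described in the text, with hypothetical names and scales.
    error_type: str         # e.g. "logic", "data handling", "interface"
    detection_method: str   # how the error's presence was detected
    phase_entered: Phase    # when the error entered the product
    phase_detected: Phase   # when its presence was detected
    days_to_detection: int  # elapsed time between entry and detection
    detect_difficulty: int  # 1 (trivial) .. 5 (very hard), assumed scale
    isolate_difficulty: int # difficulty of isolating the problem
    correct_difficulty: int # difficulty of making the correction
    underlying_cause: str   # judged real cause of the error's existence
```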

In this paper we report the results obtained from collecting error data for about nine months. One medium-sized project has been followed from approximately midway through coding to the completion of system testing. The product consists of about 10,000 lines of high-level source code and 70,000 bytes of object code. Program design and coding for the project were done by three programmers over ten months, after the initial specification had been completed. The implementation represents approximately 15 person-months of effort. An additional five months were used for function and system testing by independent test groups, and for the consequent changes to the product. The product has now passed both function and system testing and has just been released to customers.
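The first goal question, how error types and frequencies vary over the steps of development, then reduces to tabulating such records by the phase in which each error was detected. A minimal sketch of that tabulation, reusing the hypothetical ErrorReport record above:

```python
from collections import Counter

def frequencies_by_phase(reports: list[ErrorReport]) -> dict[Phase, Counter]:
    # Count occurrences of each error type within each detection phase,
    # so type and frequency trends can be compared across phases.
    table = {phase: Counter() for phase in Phase}
    for report in reports:
        table[report.phase_detected][report.error_type] += 1
    return table
```

A similar pass over phase_entered against phase_detected would yield the elapsed-time and error-origin statistics the study also set out to collect.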

2. The Collection of Data

Previous error studies have given us valuable insights into the way to organize data collection, and the type of information to collect [3, 5, 8, 10, 13, 14, 16, 17, 18, 20, 21]. We have drawn especially from Basili [4], Basili et al. [6], Thayer et al. [18], and Weiss [20, 21] in developing our collection goals and method. Our two-...