
Assuring software quality by automatic generation of test cases and automatic analysis of test case results for deviation and regression error determination

IP.com Disclosure Number: IPCOM000180300D
Original Publication Date: 2009-Mar-06
Included in the Prior Art Database: 2009-Mar-06
Document File: 5 page(s) / 31K

Publishing Venue

IBM

Abstract

Assuring the quality of a given piece of software requires performing regression tests regularly and evaluating the results. Today this is done manually and is therefore a costly procedure. This article describes how the lion's share of this process can be automated. First, regression tests are created automatically by introducing new statements into the source code which store a snapshot of the current variables in a database. These values are compared with stored values from a reference level to identify problem areas and deviations. Second, the results of existing, more sophisticated function verification test cases are used to determine a semantic map of the methods in the source code. The main idea is that errors in certain methods also lead to errors in semantically dependent methods. This semantic map is used to quickly identify the root cause of a problem.



Automatic generation of regression tests

In real-life scenarios parts of software development are outsourced, so different development teams work on different components at the same time, and one team relies on the behavior of the other team's component. Because the teams are separated, it often happens that changes in one component cause regressions in the other. The root cause of such a problem is often not evident, and analyzing the test cases to find it is costly. There is therefore a demand for a set of reference tests for a single method of a component, simply to ensure that its expected behavior does not change.

We want to achieve an automatic evaluation of test cases to determine possible regressions. The result should be a list that rates the methods in the given source code, where the rating indicates the probability that an error was introduced in a method and that this method is therefore the root cause of a failing test case. Existing approaches focus on comparing given trace files: if they detect a deviation either in the sequence of called methods or in the number of times a method was called, the respective method is considered to cause the problem. In reality there are cases in which neither the sequence of called methods in a trace file nor the number of calls changes, namely when only the implementation of a method was changed. This is the common case in real-life scenarios. Unfortunately, existing approaches cannot perform an analysis in this case, since the trace did not change.

The following steps describe how such a situation can be analyzed. The input is a set of trace files (Ti) describing which methods were called by a test case and whether the test case was successful.

For example:

T1 = A B C F G E S H ok

T2 = A B C D A E S H .. error

Each symbol represents a method in the source code.
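A minimal sketch of how such a trace file could be represented in code is given below, written in Java. The class name Trace, its fields, and the fromLine parsing helper are assumptions made for illustration; only the idea of a method sequence plus a pass/fail verdict is taken from the text above.

import java.util.Arrays;
import java.util.List;

// Hypothetical representation of a trace file Ti: the sequence of called
// methods (one symbol per method) plus the test case verdict.
class Trace {
    final String name;
    final List<String> calledMethods;
    final boolean successful;

    Trace(String name, List<String> calledMethods, boolean successful) {
        this.name = name;
        this.calledMethods = calledMethods;
        this.successful = successful;
    }

    // Parses a line such as "A B C F G E S H ok": the last token is the
    // verdict, everything before it is the sequence of called methods.
    static Trace fromLine(String name, String line) {
        String[] tokens = line.trim().split("\\s+");
        boolean ok = tokens[tokens.length - 1].equalsIgnoreCase("ok");
        List<String> methods = Arrays.asList(Arrays.copyOf(tokens, tokens.length - 1));
        return new Trace(name, methods, ok);
    }
}

For instance, Trace.fromLine("T1", "A B C F G E S H ok") would yield the method sequence A B C F G E S H with successful set to true.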

The main idea is to store the input and output values of a method and compare them to a predefined reference. All methods in the source code implement one of the following semantics:
1) x := f(y)
2) f(y)

In the first case both variables x and y are stored; in the second case it is sufficient to store y. Each variable can also represent a vector of two or more variables.

For this purpose a new class, called the comparer, is introduced which manages all of the storage and comparison tasks. The comparer is implemented as a singleton to ensure that there is only one instance. Each method in the source code is then modified so that this new component is called.
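A minimal sketch of such a comparer, written in Java under the assumption of an in-memory map rather than the database mentioned in the abstract, could look as follows. Only the names storeEntry and storeExit appear in the disclosure; getInstance, compareWithReference, and the map-based storage are illustrative assumptions. The instrumented methods shown in the example that follows then call storeEntry on entering and storeExit on leaving a method.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the comparer: a singleton that stores snapshots of method
// input and output values and compares them with a reference level.
public final class Comparer {

    private static final Comparer INSTANCE = new Comparer();

    // Snapshots recorded during the current test run, keyed by method index.
    private final Map<Integer, List<Object>> entries = new HashMap<>();
    private final Map<Integer, List<Object>> exits = new HashMap<>();

    private Comparer() { }

    public static Comparer getInstance() {
        return INSTANCE;
    }

    // Called at the beginning of an instrumented method with its input value(s).
    public void storeEntry(Object value, int indexOfMethod) {
        entries.computeIfAbsent(indexOfMethod, k -> new ArrayList<>()).add(value);
    }

    // Called at the end of an instrumented method with its output value(s).
    public void storeExit(Object value, int indexOfMethod) {
        exits.computeIfAbsent(indexOfMethod, k -> new ArrayList<>()).add(value);
    }

    // Compares the recorded output snapshots with the reference level and
    // returns the indexes of methods whose values deviate.
    public List<Integer> compareWithReference(Map<Integer, List<Object>> reference) {
        List<Integer> deviatingMethods = new ArrayList<>();
        for (Map.Entry<Integer, List<Object>> e : exits.entrySet()) {
            List<Object> referenceValues = reference.get(e.getKey());
            if (referenceValues == null || !referenceValues.equals(e.getValue())) {
                deviatingMethods.add(e.getKey());
            }
        }
        return deviatingMethods;
    }
}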


For example:
doX(x) returns y {
    comparer.storeEntry(x, IndexOfThisMethod)
    ..... Code of method
    comparer.storeExit(y, IndexOfThisMethod)
}

or

doX(x) {
    comparer.storeEntry(x, IndexOfThisMethod)
    ..... Code of method
    comparer.storeExit(x, IndexOfThisMethod)
}
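To illustrate how the recorded snapshots could be evaluated, a hedged usage sketch follows, building on the Comparer sketch above. The RegressionCheck driver and the loadReferenceLevel placeholder are hypothetical; they merely illustrate comparing a test run against a stored reference level, as described in the abstract.

import java.util.List;
import java.util.Map;

// Hypothetical driver: run an instrumented test case, then compare the
// recorded input/output snapshots against the stored reference level.
public class RegressionCheck {

    public static void main(String[] args) {
        Comparer comparer = Comparer.getInstance();

        // Executing a test case against the instrumented source code would go
        // here; every instrumented method then reports to the comparer.

        // loadReferenceLevel() stands for reading the values stored from a
        // known-good run, e.g. from the database mentioned in the text.
        Map<Integer, List<Object>> referenceLevel = loadReferenceLevel();

        List<Integer> deviating = comparer.compareWithReference(referenceLevel);
        System.out.println("Methods with deviating values: " + deviating);
    }

    // Placeholder for loading the reference snapshot from persistent storage.
    private static Map<Integer, List<Object>> loadReferenceLevel() {
        return java.util.Collections.emptyMap();
    }
}

Such a list of methods with deviating values could then serve as input for the rating of methods described above.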