
System for Ranking Test Case Unexpected Results

IP.com Disclosure Number: IPCOM000237172D
Publication Date: 2014-Jun-06
Document File: 4 page(s) / 212K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method of processing multiple inputs to evaluate the probability that a software test failure is due to a product error. The core idea of this method is to order test case results using a variety of input data, where the highest ranked test results have the highest probability of being related to a code defect.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 52% of the total text.


Test case automation is a vital part of efficient software development, particularly for regression testing. Regression test buckets can grow to include thousands of test cases over the product lifecycle. Most automation contains a test result validation phase, which flags each result as valid or invalid (i.e., needing more investigation).

A complete regression run could generate hundreds of invalid results (i.e., invalids). Evaluating these invalids is a time-consuming manual process. Some invalids are related to code defects, while others may simply reflect changes in the test environment.

Described herein is a method for ranking the invalid results that assigns the highest ranking to results that are most likely related to code defects. The method enables a tester to process results in ranked order, leading to faster defect identification and resolution, thereby improving productivity and reducing schedule risk.

The core idea of this method is to order test case results using a variety of input data, where the highest ranked test results have the highest probability of being related to a code defect. These inputs include information such as:
• A mapping of historical defect rates for particular test cases
• A mapping of historical defect rates for particular code modules
• A test case coverage mapping relating test cases to code modules
• A list of modified code modules from recent builds
• Tester-created rules based on the analysis of unexpected results
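As a rough illustration, the inputs above could feed a single weighted scoring function that orders the invalids. The following sketch is an assumption for illustration only; the function name, data shapes, and weightings are not part of the disclosure, and the actual weightings would be user-defined.

```python
# Hypothetical sketch: combine the listed inputs into one ranking score
# per invalid Test Result. All names and weightings are illustrative.

def rank_invalids(invalids, case_defect_rate, module_defect_rate,
                  coverage, modified_modules, weights):
    """Return invalid Test Case ids ordered by descending defect likelihood."""
    def score(test_case_id):
        # Historical defect rate of the test case itself
        s = weights["case"] * case_defect_rate.get(test_case_id, 0.0)
        # Defect history of each code module the test case covers
        for module in coverage.get(test_case_id, []):
            s += weights["module"] * module_defect_rate.get(module, 0.0)
            # Boost tests that cover recently modified modules
            if module in modified_modules:
                s += weights["modified"]
        return s
    return sorted(invalids, key=score, reverse=True)
```

A tester would then walk the returned list from the top, investigating the highest-scoring invalids first.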

The novelty of this idea is the method of processing multiple inputs to evaluate the probability that a test failure is due to a product error.

Prior art describes methods for determining which subset of test cases to run or for automatically determining risk areas to retest. Both of these methods only apply to choosing which tests to run. The new method addresses sifting through the large number of potential errors generated by a test run and ranking the possible test failures.

This method is based on the following assumptions:

• Each Test Case is executed in a test environment, producing a Test Result
• A Test Result is stored in a data repository along with a Test Case identifier
• A Test Result can be inspected to determine Test Case success or failure
• A Test Result showing a known success has been generated and saved during Test Case creation

These known-successful Test Results and related Test Case identifiers exist in a data repository. The usefulness of this invention increases as the set of Test Results increases.
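A minimal sketch of such a repository, assuming Test Results are stored as plain text keyed by Test Case identifier (the class and method names are assumptions for illustration, not part of the disclosure):

```python
from dataclasses import dataclass, field

# Illustrative data repository: each Test Case identifier maps to its
# known-successful baseline and to any new results awaiting evaluation.

@dataclass
class TestResultRepository:
    baselines: dict[str, str] = field(default_factory=dict)      # case id -> known-good result
    results: dict[str, list[str]] = field(default_factory=dict)  # case id -> new results

    def save_baseline(self, case_id: str, result: str) -> None:
        """Save the known-successful result captured at Test Case creation."""
        self.baselines[case_id] = result

    def record_result(self, case_id: str, result: str) -> None:
        """Store a new Test Result for later comparison and ranking."""
        self.results.setdefault(case_id, []).append(result)
```

As the disclosure notes, the method becomes more useful as this store of results grows.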

The invention is comprised of a program that evaluates and ranks the Test Results as described below. User-defined weightings are determined via experience, intuition, and experimentation.

1. The program compares each new Test Result to the known-successful Test Result. This can be easily accomplished with known tools.
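One such known tool is a text differ; a minimal sketch of step 1 using Python's difflib, assuming Test Results are stored as plain text:

```python
import difflib

def result_differences(known_good: str, new_result: str) -> list[str]:
    """Return unified-diff lines between a known-successful Test Result
    and a new Test Result; an empty list means the result matches."""
    return list(difflib.unified_diff(known_good.splitlines(),
                                     new_result.splitlines(),
                                     lineterm=""))
```

An empty diff corresponds to the success case in step A below; any remaining lines mark the result as invalid and feed it into the ranking.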


A. If no differences are found, then the Test Result is marked as suc...