A mechanism for determining a level of confidence for software test activities that do not complete successfully.

IP.com Disclosure Number: IPCOM000246360D
Publication Date: 2016-Jun-02

Publishing Venue

The IP.com Prior Art Database

Abstract

This disclosure describes a mechanism that can be used to give a confidence rating for the result of running a test suite, where the test suite does not complete one hundred percent successfully. It uses historical information and current results to determine whether failed tests are likely to be real issues.

Disclosed is a system to accept the result of running a test suite as successful, even when the pass rate of that test suite run is less than one hundred percent. Typically, if a test suite run contains failures, the whole run is rejected as failed. The system described here, however, allows a run of a test suite to be accepted as passed if it can be determined that the failures do not indicate problems in the product being tested.

Most computer software systems rely on running test activities to prove that they function correctly. The more tests there are in a system, the more likely it is that a test will fail for a reason that does not necessarily mean the test case has found a problem in the product. For example, if a test case relies on an external database and that database instance is unavailable for some reason, the test will fail. This causes the full test run to fail, as there is not a 100% pass rate. If the test run is part of a DevOps/build pipeline, this failure can stop the whole pipeline, even though there was no actual problem in the product.

This disclosure is a mechanism to allow test runs to be accepted as successful even when the pass rate is not 100%. Using a combination of manual tagging and historical information, a confidence rating can be determined for a test run in which not all tests complete successfully. Instead of simply allowing an absolute test pass rate of, say, 98%, this confidence rating takes into account the specific test cases that failed and determines whether or not they are likely to be real issues with the product.
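
As a minimal sketch, the gating decision might look like the following Python. The helper confidence_for_run() stands in for the rating calculation described below, and min_confidence is the value the user chooses to accept; the names and numbers here are illustrative assumptions, not part of the disclosure.

def accept_run(results, confidence_for_run, min_confidence=0.95):
    """Accept a test run if every test passed, or if the confidence
    rating for the failed tests meets the user-defined threshold."""
    failures = [r for r in results if not r["passed"]]
    if not failures:
        return True                               # 100% pass rate: accept outright
    confidence = confidence_for_run(failures)     # rating between 0.0 and 1.0
    return confidence >= min_confidence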

This confidence rating will take into account how and why a test has failed previously and whether it has any relationship to other test cases. For example, suppose all of the failing tests use a specific database instance. If that database instance has been historically unreliable, then it is likely that the database instance was simply unavailable, and the multiple failures are not necessarily significant. If, however, one test fails out of a set that all use the same database instance, then that single failure is likely to indicate a real problem. A user can specify what confidence rating they will accept as successful.
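
A rough Python illustration of this correlated-failure heuristic is shown below. The shapes of the results and history inputs, the 0.9 availability cutoff, and the numeric scores are all assumptions made for the example; the disclosure does not prescribe them.

from collections import defaultdict

def environment_confidence(results, history, unreliable_below=0.9):
    """Score how likely it is that the failures in a run are environmental
    rather than real product issues. `results` is a list of dicts with
    'name', 'environment' and 'passed' keys; `history` maps an environment
    name to its historical availability (0.0-1.0). Both shapes are
    illustrative assumptions."""
    by_env = defaultdict(list)
    for r in results:
        by_env[r["environment"]].append(r)

    scores = []
    for env, tests in by_env.items():
        failed = [t for t in tests if not t["passed"]]
        if not failed:
            continue
        all_failed = len(failed) == len(tests)
        historically_flaky = history.get(env, 1.0) < unreliable_below
        if all_failed and historically_flaky:
            # Every test on a historically unreliable environment failed:
            # most likely the environment itself was unavailable.
            scores.append(0.9)
        else:
            # An isolated failure among otherwise-passing tests on the same
            # environment is more likely to be a real product issue.
            scores.append(0.2)
    return min(scores) if scores else 1.0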

In order to generate this confidence rating, a number of attributes of the failed test or tests need to be taken into account (a sketch of how they might be represented and combined follows the list). These include:

1. Whether the failure is likely to be caused by a coding issue.

2. Whether the failure is likely to be caused by an environmental issue that is expected under some circumstances.

3. Whether a set of tests that all use the same environment all fail for the same environmental reason.

4. Whether historical information from previous test runs indicates that a test that has failed for an environmental reason can be accepted as OK.

5. Whether a...
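
One way these attributes could be represented and combined is sketched below in Python. The field names and weights are hypothetical and only illustrate how a per-failure confidence could be derived from the attributes above; the per-failure values could then be aggregated (for example by taking the minimum) into the run-level rating used in the earlier gating sketch.

from dataclasses import dataclass

@dataclass
class FailedTestAttributes:
    """Illustrative container for the attributes listed above; field names
    and the weighting below are assumptions, not part of the disclosure."""
    name: str
    likely_coding_issue: bool       # attribute 1
    likely_environmental: bool      # attribute 2
    correlated_env_failure: bool    # attribute 3: peers on the same environment also failed
    historically_accepted: bool     # attribute 4: past runs accepted this kind of failure

def failure_confidence(attrs: FailedTestAttributes) -> float:
    """Map the attributes to a per-failure confidence that the failure is
    NOT a real product problem (simple weighting chosen for illustration)."""
    if attrs.likely_coding_issue:
        return 0.0
    score = 0.0
    if attrs.likely_environmental:
        score += 0.4
    if attrs.correlated_env_failure:
        score += 0.3
    if attrs.historically_accepted:
        score += 0.3
    return min(score, 1.0)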