A Method and System to Minimize the Cost of Test Execution by Identifying the Likely Cause of Failures Through Code Coverage Disclosure Number: IPCOM000240266D
Publication Date: 2015-Jan-20
Document File: 6 page(s) / 45K

Publishing Venue

The Prior Art Database


This invention describes a method to detect whether a test failed while executing modified test code or modified product code. It also provides a system to report a test failure as caused by modified test code, modified product code, or the configuration/environment, and a system to optimally re-run tests.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 50% of the total text.


Automated tests are run with the goal of finding defects in the software, and the number of automated tests created and run grows significantly as a software project grows. When an automated test reports a failure, it takes significant computational and human resources to find the cause of the failure and then either make the fixes needed to get the test running again or file a defect against the software product and resolve it. The goal of finding a defect is not met when the automation code itself fails. The reasons are numerous and include bad test code, changes to the automation framework that cause regressions in existing tests, intermittent timing problems in the automation (common in legacy code), and environmental or configuration problems. Such a failure can mislead the tester and waste time, because the cause must be tracked down to either the test code or the product code.

This invention uses code coverage to detect whether a test failed while covering code that has changed, or failed before covering any changed code. To know whether a new change is being covered during testing, a comparison between artifacts must be made. An artifact may be developer code, test code, or a binary. To compare artifacts, an artifact containing the new changes must be stored, along with a baseline artifact to compare against. Once the difference between the two is found, it forms the basis from which code coverage can determine whether any new changes were covered. During a test run, code coverage is used to keep track of which areas of the artifact are being executed. This coverage data can then be compared to the difference previously found between the baseline artifact and the artifact containing the new changes.
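The comparison described above can be sketched in Python. This is a minimal, illustrative sketch, assuming line-oriented text artifacts and a coverage tool that reports the set of executed line numbers; the function names and artifact contents are hypothetical, not from the disclosure.

```python
import difflib

def changed_lines(baseline: str, modified: str) -> set[int]:
    """Return the 1-based line numbers in the modified artifact that
    differ from the baseline artifact (the stored diff)."""
    changed = set()
    matcher = difflib.SequenceMatcher(None,
                                      baseline.splitlines(),
                                      modified.splitlines())
    for tag, _i1, _i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # inserted or replaced lines are "new changes"
            changed.update(range(j1 + 1, j2 + 1))
    return changed

def covered_changes(changed: set[int], covered: set[int]) -> set[int]:
    """Intersect the diff with the lines the coverage tool reports as
    executed during the failing test run."""
    return changed & covered

# Example: lines 2 and 4 were modified; the failing test executed lines 1-3,
# so only changed line 2 was actually covered before the failure.
baseline = "a\nb\nc\nd"
modified = "a\nB\nc\nD"
print(covered_changes(changed_lines(baseline, modified), {1, 2, 3}))  # {2}
```

A non-empty intersection means the failure occurred while executing recent changes; an empty one means the test failed without ever reaching changed code.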

The comparison of code coverage to the artifact difference should then reveal whether or not a failure occurred during the execution of recent changes. This piece of information can then be used to help determine the course of action to take. The information gained from this invention helps testers save time and resources in identifying the general problem and choosing the steps to take to deal with the failure. This is particularly useful given the trend towards continuous integration, where efficiency is key to making timely deliverables.
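The course-of-action step can be sketched as a simple classifier over the coverage/diff comparison. This is an assumption-laden sketch: the three categories mirror the abstract's distinction between test-code, product-code, and configuration/environment failures, but the exact decision rules and messages are illustrative.

```python
def likely_cause(covered_product_changes: set[int],
                 covered_test_changes: set[int]) -> str:
    """Map the coverage-vs-diff result to a suggested course of action.
    The inputs are the changed lines of each artifact that the failing
    test actually covered (hypothetical categories, for illustration)."""
    if covered_product_changes:
        return "product-code change covered: open a defect against the product"
    if covered_test_changes:
        return "test-code change covered: fix the automation code"
    return "no changed code covered: suspect configuration or environment"

# A failure while covering modified test code points at the automation itself.
print(likely_cause(set(), {12}))
```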

One existing method to assist developers and testers in classifying failures is to create signatures for bugs that are found. The log file contains the error messages from which a regular-expression query can be created to help identify when that particular bug appears again. This is a useful method for tracking when and how often known bugs re-appear, but it relies on the expertise of the user to determine which part of the log file can identify the bug. It is also up...
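The log-signature approach described above can be sketched as follows. The bug IDs, error messages, and regular expressions here are entirely hypothetical; in practice the user must hand-craft each pattern from a real log file, which is the expertise burden the text notes.

```python
import re

# Hypothetical signatures: each known bug maps to a regex derived from the
# distinctive error message observed in its log file.
SIGNATURES = {
    "BUG-1234": re.compile(r"NullPointerException .* OrderService\.submit"),
    "BUG-5678": re.compile(r"Timeout waiting for node \S+ to join cluster"),
}

def match_known_bugs(log_text: str) -> list[str]:
    """Return the IDs of known bugs whose signature appears in the log."""
    return [bug for bug, pattern in SIGNATURES.items()
            if pattern.search(log_text)]

log = "ERROR: Timeout waiting for node worker-3 to join cluster"
print(match_known_bugs(log))  # ['BUG-5678']
```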