
Automated software test results analysis using archived results

IP.com Disclosure Number: IPCOM000231145D
Publication Date: 2013-Sep-30
Document File: 2 page(s) / 38K

Publishing Venue

The IP.com Prior Art Database

Abstract

We propose automatic analysis of bugs detected by automated testing, using archived results of all previous test runs and integration with the bug reporting system.




Automated software testing tools are fundamental to developers, but most automated testing systems are designed around the problems of running tests and presenting their results; they do not provide automatic analysis or suggestions for fixes. The automated testing tools available today focus on running tests reliably and efficiently, in parallel or in a distributed form, sometimes continually, and they provide little or no support for the remediation of detected errors.

    Presentation of data is confined to which tests have failed right now; yet errors can be intermittent, cropping up again and again over time but at a frequency low enough that the human developer misses the pattern. Such errors may be difficult or impossible to spot as a result. Other errors may strongly resemble past, solved errors, but unless the developer remembers them, the system offers no reminder of this.

    Our core proposal is automatic analysis of a current test run's results in comparison to a current baseline and an archive of past baselines and past known errors.

    In the normal course of use, a developer would trigger an automated test run against their project. When the run completes, our system would read the test output and compare it against an expected baseline result. If the output did not match, the system would examine the differences and determine whether they represent a test that failed in the baseline but now runs correctly, in which case nothing need be done, or a new failure relative to that baseline. In the latter case, the system would examine the archive of past errors, find the closest available match to the new error, and present it to the developer along with an estimate of how likely the match is. Thus, instead of a mere error message, the developer receives the error message, the closest error yet seen to this new error, and possibly a link to the bug report that may well detail how it was fixed last time. This has the potential to speed up diagnosis and repair enormously.
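
    A minimal sketch of this comparison-and-lookup step follows. All names here (TestResult, analyse_run, and an archive object exposing a closest_match method, sketched further below) are hypothetical illustrations and do not appear in the original disclosure.

# Sketch of the baseline-comparison step described above; all names are
# hypothetical, not part of the original disclosure.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class TestResult:
    test_id: str    # e.g. "group::test_name"
    passed: bool
    output: str     # captured output or failure message


def analyse_run(current: List[TestResult],
                baseline: Dict[str, TestResult],
                archive) -> List[dict]:
    """Compare a run against its baseline and look up any new failures."""
    findings = []
    for result in current:
        expected = baseline.get(result.test_id)
        if result.passed:
            # Either the test matches the baseline, or a previously failing
            # test now runs correctly; in both cases nothing need be done.
            continue
        if expected is not None and not expected.passed:
            # The failure is already present in the baseline, so it is not new.
            continue
        # A new failure relative to the baseline: ask the archive for the
        # closest previously seen error and an estimate of how likely the
        # match is, then surface both alongside the raw error message.
        match, confidence = archive.closest_match(result)
        findings.append({
            "test_id": result.test_id,
            "output": result.output,
            "closest_known_error": match,     # may carry a bug-report link
            "match_confidence": confidence,   # rough 0-1 estimate
        })
    return findings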

    All automated testing systems break testing down into a series of individual tests, and some organize these into groups. The exact group and test that fails, as well as any output from that test, can act as a fingerprint of a known problem. A fuzzy match against this data, using for example hidden Markov models or other known techniques, would give the most likely candidate from all known errors to explain this failure, and would also provide an estimate of how close the match is. The amount of time since an er...
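
    A sketch of the fingerprint and fuzzy-match step follows. The disclosure names hidden Markov models as one possible technique; purely for brevity, this sketch substitutes a plain string-similarity ratio (Python's difflib) as a stand-in, and the names KnownError and ErrorArchive are hypothetical.

# Sketch of the archive lookup: group, test, and output form the fingerprint,
# and a simple similarity ratio stands in for the fuzzy-matching technique.
import difflib
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class KnownError:
    test_id: str            # "group::test_name" of the failing test
    output: str             # output captured when the error was recorded
    bug_report_url: str     # link to the report describing the earlier fix


class ErrorArchive:
    """Archive of past failures, matched on a group/test/output fingerprint."""

    def __init__(self, known_errors: List[KnownError]):
        self.known_errors = known_errors

    def closest_match(self, result) -> Tuple[Optional[KnownError], float]:
        """Return the most similar archived error and a 0-1 match estimate."""
        query = "{}\n{}".format(result.test_id, result.output)
        best, best_score = None, 0.0
        for known in self.known_errors:
            candidate = "{}\n{}".format(known.test_id, known.output)
            score = difflib.SequenceMatcher(None, query, candidate).ratio()
            if score > best_score:
                best, best_score = known, score
        return best, best_score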