Intelligent root-cause and defect analysis

IP.com Disclosure Number: IPCOM000243596D
Publication Date: 2015-Oct-05
Document File: 3 page(s) / 117K

Publishing Venue

The IP.com Prior Art Database

Abstract

This article proposes a novel technique for analysing test logs using machine learning and user feedback. It requires no up-front cost to bootstrap the system before deployment. User feedback is then solicited to improve the internal resources used for mining and understanding insights in test logs. Quantified uncertainty is used to rank the set of elicited root causes, reducing the user effort spent searching through the results.


Machine learning has been widely studied in artificial intelligence, and many learning techniques have been developed and applied to problem solving in a variety of fields. At the same time, test automation has been widely adopted by software projects to reduce the manual effort of test failure and root cause analysis. The bottleneck of this process is that test automation still requires a significant amount of manual effort to analyse the pages of logs recorded during a test run. Test automation scenarios are usually composed of a set of software components, such as a database management system, an operating system, or network management tools, and the majority of such scenarios test the interaction between the components. At the end of a test, each component generates its own log, and all of the logs are returned together for analysis. Analysing the logs is a time-consuming task and requires knowledge of the different types of software involved.

    This publication describes a machine learning system for analysing logs generated by software test automation tools, with incremental improvement driven by user feedback. The system does not require up-front manual effort to bootstrap the knowledge base supporting the analysis process. Feedback can be provided in a pay-as-you-go style as a way to teach the system, so that the precision of root cause and defect analysis improves in line with the demands of the test targets. Furthermore, experience gained from analysing one test can be reused when analysing future test scenarios; such experience is described, archived and shared among test engineers.
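    As a rough illustration only (the disclosure gives no code), the following Python sketch shows how textual terms might be mined from the logs of several tested components and matched against a knowledge base that starts empty, so no up-front bootstrapping is required. All names and heuristics here are assumptions made for the sketch, not details from the publication.

    import re
    from collections import Counter

    def mine_terms(log_text):
        """Extract candidate textual terms (error codes, keywords) from a raw test log."""
        # Assumed heuristic: keep alphanumeric tokens of three or more characters.
        tokens = re.findall(r"[A-Za-z][A-Za-z0-9_\-]{2,}", log_text)
        return Counter(token.lower() for token in tokens)

    # The knowledge base starts empty: no up-front manual bootstrapping is required.
    knowledge_base = []  # each entry maps a set of log terms to a suspected root cause

    def analyse(component_logs):
        """Merge the logs of all tested components and look up matching rules."""
        observed = Counter()
        for log_text in component_logs.values():  # e.g. {"database": "...", "network": "..."}
            observed += mine_terms(log_text)
        candidates = [rule["root_cause"] for rule in knowledge_base
                      if rule["terms"] <= set(observed)]
        return observed, candidates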

    According to the system herein, logs generated from software test automation are analysed using machine learning techniques. The system can significantly reduce manual up-front cost. In addition, knowledge learned from previous test scenarios can be reused in future scenarios. Finally, it improves the reliability of root cause analysis through interaction with users via feedback. The system comprises text mining of the textual terms available in test logs, a knowledge base that holds pass-and-failure rules to support reasoning about the root cause of test failures, a ranking mechanism to rank potential causes derived from rule-based reasoning, and a component that collects feedback from users of the test automation in order to automatically improve the rules in the knowledge base and the statistics used to rank the cause list.
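    One possible reading of the pass-and-failure rules, the uncertainty-based ranking, and the feedback-driven updates is sketched below in Python. The smoothed confidence statistic and the update rule are illustrative assumptions rather than the exact mechanism of the disclosure.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        """A pass-and-failure rule: if these terms appear in the logs, suspect this cause."""
        terms: frozenset
        root_cause: str
        hits: int = 0    # times the rule fired and the user confirmed the cause
        misses: int = 0  # times the rule fired and the user rejected the cause

        @property
        def confidence(self):
            # Assumed smoothed estimate standing in for the "quantified uncertainty".
            return (self.hits + 1) / (self.hits + self.misses + 2)

    def rank_causes(rules, observed_terms):
        """Rank the potential root causes of a failure by rule confidence."""
        fired = [rule for rule in rules if rule.terms <= observed_terms]
        return sorted(fired, key=lambda rule: rule.confidence, reverse=True)

    def apply_feedback(rules, observed_terms, confirmed_cause):
        """Pay-as-you-go feedback: update rule statistics, or learn a new rule on demand."""
        matched = False
        for rule in rules:
            if rule.terms <= observed_terms:
                if rule.root_cause == confirmed_cause:
                    rule.hits += 1
                    matched = True
                else:
                    rule.misses += 1
        if not matched:
            rules.append(Rule(frozenset(observed_terms), confirmed_cause, hits=1))

    In this reading, a cause that users repeatedly confirm rises in the ranking while rejected causes fall, which is one way the feedback could reduce the effort of searching through the result list.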

    The test environment contains, according to Figure 1, (1) one or a set of software components to be tested, (2) the test automation tools, (3) a repository or database that holds the test logs, (4) the test log analysis software component, and (5) a view that presents the test analysis result to the user and from which user feedback is solicited. Each of the software components can comprise code, a database management system, and so on. The components interact with each other. One or several of them gen...