Browse Prior Art Database

Black hole testing - Machine learning program behaviour patterns from massive automated test results

IP.com Disclosure Number: IPCOM000243987D
Publication Date: 2015-Nov-04
Document File: 7 page(s) / 78K

Publishing Venue

The IP.com Prior Art Database

Abstract

Testers must have deep knowledge of an application because they need to know the expected behaviour in order to verify results. This means much time is spent learning and understanding the detailed inner workings of the business logic, exception flows, etc. In this article, a new method is proposed to help testers start working faster and more efficiently by utilizing machine learning technologies.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 20% of the total text.


Black hole testing

Testers must have deep knowledge of an application because they need to know the expected behaviour in order to verify results. This means much time is spent learning and understanding the detailed inner workings of the business logic, exception flows, etc. That knowledge is difficult to transfer when a tester leaves, because of the level of detail involved: a new tester cannot absorb all the details except by experiencing them, which means running the tests.

Furthermore, depending on the size and complexity of the application, it can take weeks to design the right test cases during development, due to communication, changing requirements, and various review processes. This means that for enterprise software, or software with a long history, testing is a huge effort and is often the bottleneck in the development cycle.

Instead of manually designing test cases with specific inputs and expected outputs, testers can define a few base test cases with various parameters, and then provide multiple values for each parameter. Test cases can then be automatically generated by using different combinations of values for each parameter, using combinatorial testing techniques (to keep the number of test cases under control). The testers do not know ahead of time what the expected outputs are for each test case, but the actual outputs of each run must be recorded.
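As a minimal sketch of the generation step, the following Python code expands a base test case into one concrete test case per combination of parameter values. The parameter names and values are illustrative, not from the original disclosure; a full cartesian product is shown for simplicity, whereas a combinatorial technique such as pairwise testing would select a smaller covering subset.

```python
from itertools import product

# Hypothetical parameter values for one base test case
parameters = {
    "currency": ["USD", "EUR", "JPY"],
    "account_type": ["savings", "checking"],
    "amount": [0, 100, -1],
}

def generate_test_cases(params):
    """Generate one test case per combination of parameter values.

    The full cartesian product is used here; pairwise (all-pairs)
    selection would keep the number of test cases under control.
    """
    names = list(params)
    return [dict(zip(names, combo)) for combo in product(*params.values())]

cases = generate_test_cases(parameters)
print(len(cases))  # 3 currencies * 2 account types * 3 amounts = 18
```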

After test cases are run and the output is recorded for each test case, a massive amount of data will be available on the application's behaviour. This data can be analyzed using machine learning to find patterns in the data (relationships between inputs and outputs). Some patterns may be pre-defined - for example, if a NullPointerException is thrown, it might always be labeled as a "defect" pattern. Many patterns will not be known ahead of time.
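The pattern-finding step can be sketched as follows. A real implementation would apply machine learning (e.g. clustering) over input/output features; this simplified version only groups test cases by their output and attaches pre-defined labels, mirroring the NullPointerException example above. All names and data are hypothetical.

```python
from collections import defaultdict

# Hypothetical (inputs, output) records from automated test runs
results = [
    ({"amount": 100}, "transfer_ok"),
    ({"amount": 0}, "transfer_ok"),
    ({"amount": -1}, "NullPointerException"),
]

# Pre-defined patterns: outputs with a known label. Anything else is
# an "unknown" pattern left for the tester to judge.
PREDEFINED = {"NullPointerException": "defect"}

def find_patterns(results):
    """Group test cases by output pattern and attach known labels."""
    groups = defaultdict(list)
    for inputs, output in results:
        groups[output].append(inputs)
    return {out: (PREDEFINED.get(out, "unknown"), cases)
            for out, cases in groups.items()}

patterns = find_patterns(results)
print(patterns["NullPointerException"][0])  # defect
```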

The tester can then look at the patterns in the data to see the general behaviour of the software, and decide which patterns are valid or invalid. In general, we can consider a few types of patterns:

- The same pattern occurs in a large number of test cases. There will be only a few of these, so it is easy to verify whether each represents valid behaviour.

- The same pattern occurs in only a few test cases. These patterns will be more common, and verifying them will take most of the time.

- The pattern occurs in only one test case. It is rare for a single set of inputs to produce a result that fits no other pattern; usually this indicates a defect.
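The three pattern types above can be triaged automatically by counting how many test cases exhibit each pattern. The threshold below is illustrative (a real value would depend on suite size), and the sample data is hypothetical:

```python
def triage_patterns(patterns, common_threshold=10):
    """Bucket patterns by how many test cases exhibit them."""
    buckets = {"common": [], "rare": [], "singleton": []}
    for name, cases in patterns.items():
        if len(cases) == 1:
            buckets["singleton"].append(name)   # usually a defect
        elif len(cases) < common_threshold:
            buckets["rare"].append(name)        # bulk of the review effort
        else:
            buckets["common"].append(name)      # few of these, quick to verify
    return buckets

# Hypothetical pattern -> test case mapping
patterns = {"ok": list(range(50)), "rounding": [1, 2], "crash": [3]}
print(triage_patterns(patterns))
```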

In future testing of the same application, the patterns can be used directly to verify test results. At that point, if any new pattern is discovered, it is relatively fast to verify whether it is a defect or matches the behaviour of a new feature.
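Verification against previously confirmed patterns can then be reduced to a lookup. In this hedged sketch, known_patterns maps an output pattern to its earlier verdict, and anything unseen is flagged for manual review; the pattern names are hypothetical.

```python
def verify_result(output, known_patterns):
    """Check a new test result against previously verified patterns.

    known_patterns maps a pattern to its verdict ("valid" or "defect");
    unseen patterns are flagged for manual review.
    """
    return known_patterns.get(output, "new pattern - review")

known = {"transfer_ok": "valid", "NullPointerException": "defect"}
print(verify_result("transfer_ok", known))   # valid
print(verify_result("timeout", known))       # new pattern - review
```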

Advantages over traditional approaches:


Less effort is required to understand the details of the application behaviour ahead of time. The teste...