
Method and system to extract test-cases dependencies

IP.com Disclosure Number: IPCOM000220019D
Publication Date: 2012-Jul-18
Document File: 3 page(s) / 52K

Publishing Venue

The IP.com Prior Art Database


Many test case suites, especially those used to perform Build Verification Test or Component Verification Test, are mainly based on product APIs and leverage those APIs to perform a comprehensive test. This disclosure presents a method able to identify and create dependencies between test cases. These dependencies are then used to decide which tests to focus on when a product failure is detected, and which ones can be analyzed later because they very likely share a common root cause with the former tests or are strictly related to the already analyzed failures.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 47% of the total text.


During the life cycle of a software product, new functions are developed and added to the product from release to release. During the development phase of a release, new functions are added from driver to driver (or from sprint to sprint when an agile methodology is used).

Modern software development methodologies make extensive use of test automation procedures that are developed in parallel with the software product itself. When a new function is developed in the product, a new test case is also developed in order to test it. The test automation suite will therefore tend to grow and reach a high degree of complexity as the number of test cases increases over time.

A test automation suite is composed of several test cases that exercise the product by means of APIs in order to determine whether the product response is the expected one. When a given product function is not working properly (because a bad code change has been entered in the source stream), all the test cases that have a dependency on this function will fail. It is common for a single product malfunction to lead to many test case failures, which makes test result evaluation difficult.
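To make this concrete, the triage problem can be sketched as follows. The mapping of test cases to the APIs they exercise, and the test-case and API names, are hypothetical; the idea is simply that an API shared by all failing test cases is the most likely common root cause.

```python
from collections import defaultdict

# Hypothetical mapping of test cases to the product APIs they exercise.
deps = {
    "tc_create": ["login", "createResource"],
    "tc_query":  ["login", "createResource", "queryResource"],
    "tc_delete": ["login", "deleteResource"],
}

def suspects(failed_tests, deps):
    """Count, for each API, how many of the failed test cases depend on it.
    APIs appearing in every failure are the strongest root-cause candidates."""
    counts = defaultdict(int)
    for tc in failed_tests:
        for api in deps[tc]:
            counts[api] += 1
    # Most-shared APIs first (sort is stable, so ties keep insertion order).
    return sorted(counts.items(), key=lambda kv: -kv[1])

print(suspects(["tc_create", "tc_query"], deps))
# [('login', 2), ('createResource', 2), ('queryResource', 1)]
```

Here both failing test cases depend on `login` and `createResource`, so a single malfunction in either API would explain both failures at once.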

The proposed solution provides a dynamic test-case dependency resolver that supplies valuable information when analyzing test case failures. Many test case suites, especially those used to perform BVT (Build Verification Test) or CVT (Component Verification Test), are mainly based on product APIs and leverage those APIs to perform a comprehensive test.

Many test case report analysis methods rely on dependency information that often has to be provided manually by the user. Obtaining this information automatically from the test case definitions saves effort and is less error prone.

The final goal of this solution is to build a hierarchical tree of test cases, providing a complete view of the dependencies between the scenarios, so that the user can easily determine whether a failure in a test case is a new problem or whether the issue was probably already identified while analyzing another test case failure. This can be determined by observing that two test cases use the same set of API calls and then reasoning about those ties. In particular, the relationship given by the common API calls can be analyzed from a static point of view as well as a dynamic one. The former takes into account similar API calls or similar blocks of API calls; the latter evaluates the runtime similarity of the test cases.
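The static and dynamic views described above could be scored, for example, as follows. This is only a sketch under assumed metrics (the disclosure does not specify formulas): the static view is modeled as a Jaccard index over the sets of API calls, the dynamic view as a similarity ratio over the runtime call sequences, and the trace contents are hypothetical.

```python
from difflib import SequenceMatcher

def static_similarity(calls_a, calls_b):
    """Static view: compare the *sets* of API calls two test cases make,
    ignoring order and repetition (Jaccard index, assumed metric)."""
    a, b = set(calls_a), set(calls_b)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

def dynamic_similarity(trace_a, trace_b):
    """Dynamic view: compare the *runtime sequences* of API calls, so
    shared blocks of calls issued in the same order score higher."""
    return SequenceMatcher(None, trace_a, trace_b).ratio()

# Hypothetical runtime traces from two test cases:
tc1 = ["login", "createResource", "queryResource", "logout"]
tc2 = ["login", "createResource", "deleteResource", "logout"]

print(static_similarity(tc1, tc2))   # 0.6
print(dynamic_similarity(tc1, tc2))  # 0.75
```

A high score on either measure suggests that two failing test cases are tied to the same product function and can be analyzed together rather than independently.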

1. A first step consists in identifying an API subset that contains only those APIs that are the most relevant for the following analysis. Once this subset has been identified by the user, only those APIs will be taken into account to create the test-case dependencies. This phase can be named the "Filter phase".
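The filter phase above can be sketched as a simple projection of each test case's call trace onto the user-selected API subset. The relevant-API set and the trace are hypothetical examples, not part of the disclosure:

```python
def filter_calls(trace, relevant_apis):
    """Filter phase: keep only the API calls the user marked as relevant,
    so later dependency extraction ignores noise such as setup or logging."""
    return [call for call in trace if call in relevant_apis]

# Hypothetical user-selected subset and raw trace:
relevant = {"createResource", "queryResource", "deleteResource"}
trace = ["login", "log", "createResource", "log", "queryResource", "logout"]

print(filter_calls(trace, relevant))
# ['createResource', 'queryResource']
```

Filtering before the dependency analysis keeps the resulting tree small and focused on the APIs the user actually cares about.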

2. Then a second phase of "...