Test Case Generation Driven by Generated Metadata from Semantic Analysis of Language/Technology Documentation

IP.com Disclosure Number: IPCOM000241622D
Publication Date: 2015-May-18
Document File: 4 page(s) / 38K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed are a method and system for automated test case generation based on semantic analysis of coding documentation comments. The solution enables the testing tool to analyze the Javadoc of new or modified code and then automatically generate the test cases based on the semantic analysis.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 38% of the total text.


In a typical software development cycle, developers add new code or modify existing code for the new version of the product. The code changes are delivered to a code repository and then the testing team runs a suite of test cases against the new code. Existing test tools such as code coverage tools, static analysis tools, and dynamic analysis tools are also run for the code changes.

Despite thorough testing, the same code can still manifest software bugs once it reaches the customer. This is due to gaps that existing testing tools leave in the test cases. A communication gap between the developer and the tester means some bugs are not caught in the testing phase of the project. Often, a particular method/function has an exceptional condition or behavior that the tester does not exercise simply because the tester does not know about it; such exceptional behavior is not always communicated to the tester.

Current tools for the automatic generation of test cases take into account the arguments that are passed to the method/function and the associated return type; however, these available automated test case generation techniques do not convey to the tester critical method behavior information, which is part of the method logic.

Thus, the test cases that are automatically generated do not take into account the exceptional conditions/behaviors in the business logic of the method/function. This results in an oversight of critical test cases during testing and an increased probability of bugs going into the field.

A method is needed to convey information about exceptional conditions and behaviors to the testing tool, and to ensure that testers address all such cases.

Prior art offers automatic generation of test cases; however, these approaches do not cover the analysis of Javadoc* to identify any gaps in test coverage.

The novel contribution is a method and system for automated test case generation based on semantic analysis of coding documentation comments. With the proposed new solution, testing tools can generate test cases after analyzing the semantics of the @Test tag in the Javadoc. The solution enables the testing tool to analyze the Javadoc of new or modified code and then automatically generate the test cases based on the semantic analysis.
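As an illustration, a developer might record exceptional behavior in the proposed @Test tag alongside the standard Javadoc tags. The Account class, method, and comment wording below are hypothetical; only the @Test tag itself comes from the disclosure:

```java
// Hypothetical example of a method whose Javadoc carries @Test tags
// describing exceptional behavior a tester would otherwise not know about.
public class Account {
    private double balance;

    public Account(double openingBalance) {
        this.balance = openingBalance;
    }

    /**
     * Withdraws the given amount from the account.
     *
     * @param amount the amount to withdraw
     * @return the remaining balance
     * @Test amount less than or equal to zero throws IllegalArgumentException
     * @Test amount greater than balance throws IllegalStateException
     */
    public double withdraw(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        if (amount > balance) {
            throw new IllegalStateException("insufficient funds");
        }
        balance -= amount;
        return balance;
    }
}
```

A signature-based generator sees only `double withdraw(double)`; the two @Test comments are what would let a tool generate the negative-amount and overdraft test cases.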

Algorithm of the method, for the input parameters:

1. Get the input parameters of the @Test tag, parsed from the Javadoc of the method, into a String array[comment]. The comment is the statement specified after the @Test tag.

2. For each such input parameter, get the corresponding comment in the same array[comment]. Now, the comment contains the comment string for the tests to be generated.

3. Semantic_Analysis_Engine parses and executes each element in the String array.

4. Semantic_Analysis_Engine checks for any conditions involved in the input para...
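Steps 1 and 2 above can be sketched as a small extractor that pulls each @Test comment string out of the raw Javadoc text into a String array. This is a minimal sketch, assuming one @Test tag per Javadoc line; the class name, regex, and method name are assumptions, not the disclosure's implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of steps 1-2: collect the comment string following each @Test
// tag into a String array for the Semantic_Analysis_Engine to process.
public class TestTagExtractor {

    // Matches "@Test <comment>" and captures the comment text on that line.
    private static final Pattern TEST_TAG = Pattern.compile("@Test\\s+(.+)");

    /** Returns the comment string after each @Test tag in the Javadoc text. */
    public static String[] extractTestComments(String javadoc) {
        List<String> comments = new ArrayList<>();
        Matcher m = TEST_TAG.matcher(javadoc);
        while (m.find()) {
            comments.add(m.group(1).trim());
        }
        return comments.toArray(new String[0]);
    }
}
```

Each element of the returned array is one comment string; the engine would then parse it (step 3) and check it for conditions on the input parameters (step 4).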