
Automation Interface for Software Quality Assessment

IP.com Disclosure Number: IPCOM000132003D
Original Publication Date: 2005-Nov-28
Included in the Prior Art Database: 2005-Nov-28
Document File: 4 page(s) / 59K

Publishing Venue

IBM

Abstract

Automating the testing of software functionality, commonly known as Functional Verification Testing (FVT), has greatly helped increase test coverage of software products. However, the main challenge facing automation is keeping up with code changes made by developers. As the software changes, the automation framework must change to continue functioning properly; otherwise, the automated tests will report "errors" where the product has in fact intentionally changed.

This is the abbreviated version, containing approximately 38% of the total text.




There are a variety of solutions that let FVT team members create scripts to automate their test cases, but none seems to address the need to track changes made to the software application during development. Current automation solutions depend on FVT team members continuously updating their scripts to reflect code changes they may or may not be aware of.

The types of automated testcases considered here are 'screen-scraping' applications: they locate specific output elements from the application and then take action based on them. In a Web-based application, the testcases are typically driven by specific markup patterns. For example, consider a Web page that presents a table view with 'next' and 'previous' buttons. An automated testcase might contain code that says "locate the anchor containing the image whose 'alt' text is 'next page', and click that link". If the developer changes the text on that button to 'Go to next page', the automated testcase will fail, and the testcase developer will need to spend time determining why it failed, then additional time fixing the testcase and validating it. The root of the problem is that there is no reliable link between the conceptual elements of the application (such as the 'next page' link in the example above) and the exact output of the application (the alt text, in this example).
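The brittleness described above can be sketched in a few lines. The following is an illustrative assumption, not code from the disclosure: the class name, HTML samples, and regular expression are all hypothetical, but they show how a locator tied to exact markup silently breaks when a developer renames the button text.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of a 'screen-scraping' locator of the kind described above.
public class ScrapingLocator {
    // Matches an anchor wrapping an <img> whose alt text is exactly "next page".
    private static final Pattern NEXT_LINK = Pattern.compile(
        "<a\\s+href=\"([^\"]+)\"[^>]*>\\s*<img[^>]*alt=\"next page\"[^>]*>\\s*</a>");

    // Returns the href of the 'next page' anchor, or null if the markup changed.
    public static String findNextPageHref(String html) {
        Matcher m = NEXT_LINK.matcher(html);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String page =
            "<a href=\"/results?page=2\"><img src=\"n.gif\" alt=\"next page\"></a>";
        System.out.println(findNextPageHref(page)); // prints /results?page=2

        // After the developer renames the alt text, the same locator fails:
        String changed =
            "<a href=\"/results?page=2\"><img src=\"n.gif\" alt=\"Go to next page\"></a>";
        System.out.println(findNextPageHref(changed)); // prints null -> testcase breaks
    }
}
```

The testcase embeds the application's exact output ("next page") rather than the concept it represents, which is precisely the gap the automation interface below is meant to close.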

To solve the problem stated above, we propose a new approach that better connects developers of the software application with the FVT team members charged with testing it. We suggest that the integrated development environment the developer uses contain a feature that creates what we are calling an automation interface to their code. This automation interface changes whenever the developer changes the code, and functions as the liaison between the developer and the tester working on automation. The tester would use the automation interface whenever automating FVT of the software product. An example of this sort of interface can be found in Eclipse, which contains a wizard that extracts a Java interface from Java code. Here we are describing a more involved interface, but the idea is the same: you mark the components that go into the interface, and then the tool extracts those to an interface definit...
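The extract-interface idea might look roughly like the following sketch. All names here (the interface, the generated class, and the locator strings) are hypothetical assumptions for illustration; the disclosure does not specify a concrete form. The point is that testcases depend only on conceptual element names, while the generated implementation is regenerated by the IDE whenever the developer's markup changes.

```java
// Hypothetical sketch of an extracted 'automation interface'.
public class AutomationInterfaceSketch {

    // The extracted interface: testcases code against conceptual elements.
    interface ResultsPageAutomation {
        String nextPageLocator();     // how to find the 'next' control today
        String previousPageLocator();
    }

    // Generated implementation, kept in sync with the developer's code.
    // When the alt text changes to "Go to next page", only this class changes;
    // testcases written against the interface keep working across the rename.
    static class ResultsPageAutomationImpl implements ResultsPageAutomation {
        public String nextPageLocator()     { return "img[alt='Go to next page']"; }
        public String previousPageLocator() { return "img[alt='previous page']"; }
    }

    public static void main(String[] args) {
        ResultsPageAutomation page = new ResultsPageAutomationImpl();
        System.out.println("clicking: " + page.nextPageLocator());
    }
}
```

In this sketch, the interface plays the liaison role described above: the developer's IDE regenerates the implementation on each code change, and the tester's automation never touches the raw markup directly.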