
Method and Process for Automatically Generating Development and Test Estimates From Early Text Descriptions of the Product

IP.com Disclosure Number: IPCOM000125888D
Original Publication Date: 2005-Jun-20
Included in the Prior Art Database: 2005-Jun-20
Document File: 2 page(s) / 36K

Publishing Venue

IBM

Abstract

This tool takes a project's early text documents and identifies coding and test case candidates. It identifies certain coding and test attributes from the documents' vocabulary and constructs. It then lists the "Best Practice Coverage Points" for coding and testing specific application entities such as tool bars, dialog screens, input/output peripherals, data transmission, and other similarly categorized components. At that point, it calculates/estimates how many planned components and test cases you will need to properly cover the product. This is only an estimate, but it is based on "Best Practice Coverage Points" and the number and types of changes you plan to make to your application.


Different teams may design and implement their applications in different ways, but the majority still start with text documentation. This invention scans the early text, created in MS Word, Wordpad, or any other text-based editor, for design and test vocabulary and constructs. The vocabulary and constructs found are then interpreted into the appropriate code or test category/entity. Examples of code/test entities/categories are dialogues, fields, input/output methods, menus, file transfers, etc.
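A minimal sketch of this scanning step is shown below in Python. The vocabulary map, category names, and prefix-matching rule are purely hypothetical illustrations; the disclosure does not specify the tool's actual vocabulary, constructs, or implementation.

import re

# Hypothetical vocabulary map from terms found in early design text to the
# code/test entity category each term suggests. The tool's real vocabulary
# and constructs are not listed in this disclosure.
VOCABULARY = {
    "dialog":   "dialogue",
    "dialogue": "dialogue",
    "field":    "field",
    "menu":     "menu",
    "toolbar":  "toolbar",
    "save":     "input/output",
    "open":     "input/output",
    "browse":   "tree/directory UI",
    "transfer": "file transfer",
}

def scan_document(text):
    """Count the code/test entity categories suggested by the document text."""
    counts = {}
    for word in re.findall(r"[a-z]+", text.lower()):
        for term, category in VOCABULARY.items():
            if word.startswith(term):  # crude prefix match: "saved", "opens", "browses"
                counts[category] = counts.get(category, 0) + 1
                break
    return counts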

Once the categories are interpreted from the text, a repeatable "cookbook" of development and test activities is calculated for each type of code/test entity found and highlighted in the starting document. For instance -- dialogues have certain attributes that we already know need to be coded and tested. Once we've identified that a dialogue is required, several Best Practices coding activities and use cases can be automatically generated for that dialogue.
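One way such a cookbook could feed the estimate is sketched below. The "Best Practice Coverage Points" figures here are invented for illustration; in practice they would come from an organization's own best-practice data, which this disclosure does not enumerate.

# Hypothetical "Best Practice Coverage Points": how many planned code
# components and test cases the cookbook assumes each entity type needs.
# These numbers are illustrative only.
COVERAGE_POINTS = {
    "dialogue":          {"components": 3, "test_cases": 8},
    "field":             {"components": 1, "test_cases": 4},
    "menu":              {"components": 2, "test_cases": 5},
    "toolbar":           {"components": 2, "test_cases": 5},
    "input/output":      {"components": 2, "test_cases": 6},
    "tree/directory UI": {"components": 3, "test_cases": 7},
    "file transfer":     {"components": 2, "test_cases": 6},
}

def estimate(counts):
    """Multiply each entity count by its coverage points to get overall totals."""
    totals = {"components": 0, "test_cases": 0}
    for category, n in counts.items():
        points = COVERAGE_POINTS.get(category, {"components": 1, "test_cases": 2})
        totals["components"] += n * points["components"]
        totals["test_cases"] += n * points["test_cases"]
    return totals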

The same process is used to quickly generate design models for the next step of the development and test cycle. Although the estimates generated will be modified throughout the development cycle, they are likely to be more complete and solid than those generated manually.

*Note -- early estimates can only be as accurate and detailed as the starting text documentation.

Example from a real Business Use Case document that was created in the early stages of PerformanceTester 6.1 in Atlantic:

1. The LT test is saved.
2. The user browses to the LT test previously generated using the Test Navigator and opens the test suite.

Save and Open would be triggers to generate input/output type code and test cases for these activities. Browse would be a trigger to denote that a tree or directory structure UI/Dialogue is present; therefore we would generate tree code and test cases for this activity.
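Continuing the sketches above, and reusing the hypothetical scan_document and estimate helpers, applying these trigger rules to the sample use-case text might look like this (the resulting numbers reflect only the illustrative coverage points, not the real tool's output):

use_case = (
    "1. The LT test is saved. "
    "2. The user browses to the LT test previously generated using "
    "the Test Navigator and opens the test suite."
)

counts = scan_document(use_case)
print(counts)            # {'input/output': 2, 'tree/directory UI': 1}
                         # 'saved'/'opens' -> input/output, 'browses' -> tree UI
print(estimate(counts))  # {'components': 7, 'test_cases': 19} with the sample figures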

Platforms, environment, operating systems names and languages wou...