
A method and system to intelligently evolve test cases

IP.com Disclosure Number: IPCOM000240119D
Publication Date: 2015-Jan-05
Document File: 6 page(s) / 93K

Publishing Venue

The IP.com Prior Art Database

Abstract

This describes a system that helps testers better interpret the results of their automated test cases and, when necessary, adapt those cases to fit the latest version of the product.



The process of software development normally requires a large number of test cases to verify a product's robustness. But those test cases -- whether they are automated programs or manual scripts -- cannot evolve by themselves as development progresses, which can leave many invalid test cases after a few development cycles.

Another aspect of test cases is that their authors are normally not the people who actually run them. In a worldwide software company, the two groups may not even share the same background, which makes the test cases even harder to understand. Sometimes a test case reports a false alarm that a tester unfamiliar with it is unable to identify.

By hooking into existing test cases and software lifecycle management tools, we are able to help testers identify the root cause of test results and even suggest updated test cases that fit the current state of the product. This is done by analysing the code changes and differences between product development iterations in the SCM tools, together with the running history from previous records.
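
As a concrete illustration, the Python sketch below shows one way the analysis step could work: pull the set of files that changed between the last passing build level and the current one from the SCM tool, then intersect it with a per-test coverage map. Git as the SCM, the function names, and the coverage map are all illustrative assumptions, not part of the disclosure.

    # A minimal sketch, assuming Git as the SCM tool and a previously
    # recorded coverage map (test case -> files it exercises). A failing
    # test that overlaps the changed files is likely reacting to a
    # product change rather than to a genuine defect it has found.
    import subprocess

    def changed_files(repo_path: str, last_good_rev: str, current_rev: str) -> set[str]:
        """Return the files that changed between two product build levels."""
        out = subprocess.run(
            ["git", "-C", repo_path, "diff", "--name-only", last_good_rev, current_rev],
            capture_output=True, text=True, check=True,
        )
        return {line for line in out.stdout.splitlines() if line}

    def suspect_tests(coverage: dict[str, set[str]],
                      changed: set[str]) -> dict[str, set[str]]:
        """Map each test case to the changed files it touches; non-empty
        intersections point at tests that may need to be updated."""
        return {test: files & changed
                for test, files in coverage.items() if files & changed}

In a real system the coverage map would come from the recorded running history, and the two revisions from the build levels stored in the execution snapshots described in the use cases below.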

This invention can be applied to any software lifecycle management tool.

USE CASE

At the beginning, a tester is about to initiate an execution of his test buckets. The system takes a snapshot of the execution by recording related information, including the product build level, the test bucket build level, environment information, and the arguments imported, and marks the execution result status on it (failed or successful).
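
The exact shape of the snapshot is not specified; the following Python sketch is one hedged interpretation, with illustrative field names covering the pieces of information listed above.

    from dataclasses import dataclass
    from enum import Enum

    class ResultStatus(Enum):
        SUCCESSFUL = "successful"
        FAILED = "failed"

    @dataclass
    class ExecutionSnapshot:
        # All field names are illustrative; the disclosure only lists
        # the kinds of information recorded for each execution.
        product_build_level: str            # e.g. a build tag or SCM revision
        test_bucket_build_level: str        # version of the test buckets themselves
        environment: dict[str, str]         # OS, hardware, middleware levels, ...
        arguments: list[str]                # arguments imported for this run
        status: ResultStatus | None = None  # marked failed/successful after the run
        previous_failure_id: str | None = None  # set on re-runs (see use case 1)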

Use case 1:

All tests passed. The system will mark the execution result as a success. If this is not a re-run, nothing more needs to be done. Otherwise, if this is a re-run of a previous test execution, the tester needs to link the previous failed test result to it in the system, as sketched below.
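
A minimal sketch of that bookkeeping, reusing the hypothetical ExecutionSnapshot type from the sketch above; the function name and the id-based linking are assumptions, since the disclosure only says the two results are linked.

    def record_success(snapshot: ExecutionSnapshot,
                       previous_failure_id: str | None = None) -> ExecutionSnapshot:
        # Mark the run successful; on a re-run, link back to the failed
        # execution so the system can later compare the two snapshots
        # (build levels, environment, arguments) and learn what changed.
        snapshot.status = ResultStatus.SUCCESSFUL
        if previous_failure_id is not None:
            snapshot.previous_failure_id = previous_failure_id
        return snapshot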

Use case 2:

Some or all of the test buckets failed. The system will mark the execution result as failed, and testers have a clear clue on...