
Method for test scenario confidence level calculation Disclosure Number: IPCOM000243543D
Publication Date: 2015-Sep-30
Document File: 2 page(s) / 69K

Publishing Venue

The Prior Art Database


Nowadays, when most manual testing has been replaced with automation, the problem is no longer test automation itself but the maintenance of already automated scenarios. The effort needed for maintenance is significant, so it is important to find and handle inefficient scenarios. In this paper we present a simple method for calculating a confidence level for automated test scenarios. The confidence value makes it clear whether a particular scenario requires modification or even removal from the test suite.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 52% of the total text.


Method for test scenario confidence level calculation

When an automated test fails, it can mean one of a few things:
1. there is a defect in the software under test
2. the test itself is broken
3. the environment used for the test is broken

With thousands of tests running on every build, the number of failed tests is enormous. Which of the failed tests should be reviewed first (we want to find software defects first)? Which failing tests should be removed from the test suite because they are unreliable? We propose a method and formula for confidence level calculation. The confidence level value indicates the importance of failed test scenarios and the order in which they should be investigated.

false positive - a defect in the test or environment, but not in the tested product
item - a task or defect created in the work items / source repository (for example Rational Team Concert)
RTC - Rational Team Concert - work items and source repository
Item Summary - a short description containing the package.scenario.testcase name
Item Status - information on whether the item is RESOLVED or UNRESOLVED
Item Resolution - information on whether the item is RESOLVED as FIXED, FIXED UPSTREAM, DUPLICATE, and so on
Item Creation date, Resolution date - the dates on which the item was created and resolved
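The item fields defined above can be modelled as a small record type. The sketch below is a hypothetical illustration, not part of the disclosure; the class and attribute names are assumptions that mirror the definitions.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Item:
    """Hypothetical work item mirroring the fields defined above."""
    summary: str               # "package.scenario.testcase" name
    status: str                # "RESOLVED" or "UNRESOLVED"
    resolution: Optional[str]  # e.g. "FIXED", "FIXED UPSTREAM", "DUPLICATE"
    created: date              # Item Creation date
    resolved: Optional[date] = None  # Resolution date, if any

# Example item for one failed test case (names are illustrative).
item = Item(summary="com.example.login.LoginScenario.testInvalidPassword",
            status="RESOLVED", resolution="FIXED",
            created=date(2015, 9, 1), resolved=date(2015, 9, 3))
```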

1. When a test fails, the system automatically opens an item (task/defect) in the code/item repository (for example RTC)

2. A component of the system analyses historical data - previously opened tasks/defects - and calculates the confidence level according to the formula below
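The formula itself does not survive in this abbreviated extract, so the sketch below substitutes one plausible definition as a labeled assumption: confidence is the fraction of a scenario's previously resolved items that turned out to be real product defects (resolution FIXED or FIXED UPSTREAM) rather than false positives. The function and history structure are illustrative, not the disclosure's actual formula.

```python
# Assumption: confidence = items resolved as real product defects /
# all resolved items for the same package.scenario.testcase name.
# This stands in for the disclosure's formula, which is not included
# in the abbreviated text.
def confidence_level(history, test_name):
    resolved = [resolution for name, resolution in history
                if name == test_name and resolution is not None]
    if not resolved:
        return 0.5  # no history yet: neutral confidence (assumption)
    defects = sum(1 for r in resolved if r in ("FIXED", "FIXED UPSTREAM"))
    return defects / len(resolved)

# Hypothetical historical data: (test name, resolution or None if open).
history = [
    ("pkg.scn.testA", "FIXED"),      # real product defect
    ("pkg.scn.testA", "DUPLICATE"),  # false positive
    ("pkg.scn.testA", "FIXED"),
    ("pkg.scn.testB", None),         # still unresolved
]
print(round(confidence_level(history, "pkg.scn.testA"), 2))  # → 0.67
```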
3. The confidence level value is written back to the items opened in step 1, matched by package.scenario.testcase name (an extra field in the item called "Confidence Level"). Based on that field's value it is easy to sort items awaiting investigation (in the first place we can take care of failed tests with high co...
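The triage ordering in step 3 can be sketched as a simple sort on the "Confidence Level" field, highest first so likely product defects are investigated before suspected false positives. The item tuples and values below are illustrative assumptions.

```python
# Hypothetical open items awaiting investigation, each carrying the
# "Confidence Level" value written back in step 3.
open_items = [
    ("pkg.scn.testA", 0.9),  # historically a real product defect
    ("pkg.scn.testB", 0.1),  # historically a false positive
    ("pkg.scn.testC", 0.6),
]

# Investigate high-confidence failures first, as the method suggests.
triage_order = sorted(open_items, key=lambda item: item[1], reverse=True)
print([name for name, _ in triage_order])
# → ['pkg.scn.testA', 'pkg.scn.testC', 'pkg.scn.testB']
```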