Processing Data from Automated Test Cases that have a Hierarchical Organization

IP.com Disclosure Number: IPCOM000109613D
Original Publication Date: 1992-Sep-01
Included in the Prior Art Database: 2005-Mar-24
Document File: 5 page(s) / 239K

Publishing Venue

IBM

Related People

Carpenter, ER: AUTHOR [+5]

Abstract

Disclosed is a method for processing automated test results when the test cases are organized in a hierarchical manner. This method yields results that are not skewed by unexpected occurrences during the test cases' execution and can be used in statistical analyses to compare the results of various executions.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 31% of the total text.


      A new testing model has been developed to emulate more closely
the working environment of IBM's AIX* customers.

      Research was performed on the types of environments IBM
customers build, leading to the definition of several "environment
scenarios" (engineering, scientific research, commercial, and
financial).  For each environment scenario, a network topology was
designed and each machine in the network was assigned a "machine
scenario" to be executed on it.  Each machine scenario comprised
several "person scenarios," which are executed concurrently to
emulate the activity generated by customers (application programmers,
secretaries, cashiers, etc.) using that machine.  Finally, several
"task scenarios" were written for each person scenario.  Task
scenarios perform a set of steps that a customer would have to
perform to accomplish a specific piece of work (compiling a program,
sending mail, performing a transaction, etc.).
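The four-level hierarchy described above (environment, machine, person, task) can be sketched as nested data structures. This is an illustrative model only; the class and field names below are hypothetical and do not appear in the disclosure.

```python
from dataclasses import dataclass, field

# Hypothetical model of the scenario hierarchy: an environment scenario
# assigns machine scenarios to machines; each machine scenario holds
# person scenarios run concurrently; each person scenario holds the
# task scenarios that person loops through.

@dataclass
class TaskScenario:
    name: str                # e.g., "compile a program", "send mail"

@dataclass
class PersonScenario:
    role: str                                    # e.g., "application programmer"
    tasks: list = field(default_factory=list)    # TaskScenario objects

@dataclass
class MachineScenario:
    persons: list = field(default_factory=list)  # PersonScenario objects, run concurrently

@dataclass
class EnvironmentScenario:
    name: str                                    # e.g., "engineering"
    machines: dict = field(default_factory=dict) # machine name -> MachineScenario
```

For example, an "engineering" environment would map each workstation to a machine scenario whose person scenarios carry the task lists for the programmers and secretaries using it.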

      In addition, a new test driver was created to execute the
multiple levels of scenarios for each environment.  This driver
causes each person scenario's list of task scenarios to be executed
on the appropriate machine in a loop for a specified period of time
and records in a central journal file the result (pass or fail) of
each task.
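The driver's inner loop for one person scenario might look like the sketch below: run the task list repeatedly until the specified period elapses, appending one pass/fail record per task execution to a shared journal. The function and parameter names are assumptions for illustration, not the disclosure's actual driver interface.

```python
import time

def run_person_scenario(machine, person, tasks, run_task, duration_s, journal):
    """Illustrative driver loop (names hypothetical): execute the person
    scenario's task list in a loop for duration_s seconds, recording the
    result of each task execution in the shared journal.  The list always
    completes at least one full pass, and each pass finishes before the
    deadline is checked."""
    deadline = time.monotonic() + duration_s
    while True:
        for task in tasks:
            ok = run_task(task)   # True = pass, False = fail
            journal.append((machine, person, task, "pass" if ok else "fail"))
        if time.monotonic() >= deadline:
            break
```

In the real driver, one such loop would run concurrently for every person scenario on every machine, all writing to the central journal file.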

      The problem that remained was how to calculate the "success
rate" of each environment--a numerical value for each execution of
the environment scenario--that would have the following properties:
- Result of each task scenario is considered equally
      Each task scenario represents something an IBM customer wants
to do on an IBM computer. Each task scenario that cannot be performed
correctly should have an equal chance of affecting the success rate.
- Overall result cannot be skewed by spurious conditions
      The success rate should not be affected by oddities introduced
by the fact that this is really a simulation of a customer's
environment.
- Results of separate executions can be meaningfully compared
      The success rate calculated for one execution of an environment
scenario should be comparable to success rates from other executions.
From such comparisons one should be able to derive conclusions
about the rate of improvement (or lack thereof) within the
environment scenario, and thus of the quality of the version of the
AIX operating system being tested.
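The disclosure's own program, cetxlate, is described next and is only partially preserved in this extract. As a sketch of a computation that satisfies the three properties above, one could aggregate the journal records per task scenario first and then average the per-task pass rates, so that a task scenario looped many times carries no more weight than one looped a few times. This is an assumed formulation, not cetxlate's actual algorithm.

```python
from collections import defaultdict

def success_rate(journal):
    """Hypothetical success-rate computation: each journal record is
    (machine, person, task, "pass" | "fail").  Pass rates are computed
    per task scenario and then averaged, so every task scenario is
    weighted equally regardless of how many loop iterations it ran."""
    per_task = defaultdict(lambda: [0, 0])        # task -> [passes, total]
    for _machine, _person, task, result in journal:
        per_task[task][1] += 1
        if result == "pass":
            per_task[task][0] += 1
    rates = [passes / total for passes, total in per_task.values()]
    return sum(rates) / len(rates) if rates else 0.0
```

Averaging per-task rates, rather than dividing total passes by total records, keeps one task that happened to loop far more often (or to fail repeatedly from a single spurious condition) from dominating the overall figure.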
Solution

      The program written to calculate success rates is called
cetxlate.  The basic syntax of the program is:
           cetxlate (-t scenar...