
Load dependent time-outs for test applications Disclosure Number: IPCOM000032447D
Original Publication Date: 2004-Nov-05
Included in the Prior Art Database: 2004-Nov-05
Document File: 3 page(s) / 50K


System testing often involves exercising a product using large and varying volumes of data over a series of iterations, whilst running on systems where multiple workloads are being executed in order to simulate customer-like environments. As a result, not only do we expect the product to be "well behaved" in such an environment, i.e. it does not use all system resources to the exclusion of other programs, but we also need the tests themselves to be "well behaved". This article describes an algorithm for making tests "well behaved", whilst optimising their ability to execute to completion and avoiding unnecessary re-execution.



Load dependent time-outs for test applications

An algorithm is disclosed that enables batches of system tests to be executed with a high level of confidence that they will run to completion, whilst minimising their use of system resources, leaving those resources available to the product under test, i.e. the tests are "well behaved". This is achieved by intermittently monitoring system performance and adjusting the time-outs in the tests accordingly.

The usual approaches to achieving this are:

Making the hard-coded time-outs so large that the tests are guaranteed to run to completion, no matter what volume of data they are given:

  If a failure occurs whilst a test is in the time-out period, it can wait a significant amount of time before aborting: system resources are wasted waiting for something that is never going to happen.

  Even if a test is successful, it may sit waiting for the full time-out period before checking whether a successful result has been returned.

  The overall execution time of a test, and of any batch of tests to which it belongs, will increase.

  If a batch of tests has a fixed window in which to complete, for example, if they have to be run during a particular shift, subsequent tests may not be allowed to run.
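The second drawback above can be sketched as follows (all names and values are illustrative, not from the disclosure): a test that sleeps for the full hard-coded time-out before checking its result wastes time even when the product responds quickly, whereas a test that polls returns as soon as a result is available.

```python
import time

def wait_fixed(check_result, timeout_s):
    """Naive approach: sleep for the entire hard-coded time-out, then check once."""
    time.sleep(timeout_s)
    return check_result()

def wait_polling(check_result, timeout_s, poll_interval_s=0.01):
    """Poll periodically: return True as soon as a result is available,
    or False once the time-out expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check_result():
            return True
        time.sleep(poll_interval_s)
    return False
```

With a generously sized time-out, `wait_fixed` always pays the full cost; `wait_polling` only pays it in the failure case, which is the behaviour the bullets above describe.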

Providing a parameter to the tests, where the tester can specify the size of the time-out based upon their knowledge of the volumes of data that will be used:

  This requires the tester to know how long the tests will take to run when given a particular workload to process.

  This is often done by guessing, resulting either in aborted tests or in the tester setting the time-outs so high that they have the drawbacks described above.

  Whilst the tester may have a good idea of what it will take for tests to handle a particular workload on a particular system, tests are often run on different systems, with different performance characteristics.

  The test will often abort because the system on which it is running is performing badly due to other workload, a different configuration, etc.

The algorithm described here provides a dynamic time-out mechanism that:

Establishes an initial time-out based upon the workload that a test or batch of tests is given.

Monitors the performance of the tests and adjusts the time-out accordingly. For instance, if a test is working but is taking longer than expected, the time-out is increased by a calculated amount. If a test starts to run more quickly, the time-out can be decreased by a calculated amount.
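The two steps above can be sketched as a small helper. This is a minimal illustration, not the disclosed implementation: the class name, the 2x safety margin, and the 50/50 smoothing of the per-item rate are all assumptions chosen for clarity.

```python
class DynamicTimeout:
    """Derive an initial time-out from the workload size, then adjust it
    as observed per-item completion times drift from the estimate."""

    def __init__(self, workload_items, est_seconds_per_item, margin=2.0):
        self.margin = margin              # safety factor over the raw estimate
        self.rate = est_seconds_per_item  # current per-item time estimate
        self.timeout = workload_items * est_seconds_per_item * margin

    def record(self, items_done, elapsed_seconds, items_remaining):
        # Update the per-item rate from what has actually been observed,
        # smoothed so one slow batch does not swing the estimate wildly.
        observed = elapsed_seconds / items_done
        self.rate = 0.5 * self.rate + 0.5 * observed
        # Re-derive the time-out for the remaining work: it grows when the
        # system is slower than expected and shrinks when it speeds up.
        self.timeout = items_remaining * self.rate * self.margin
        return self.timeout
```

For example, 1000 items estimated at 0.1 s each with a 2x margin gives an initial time-out of 200 s; if the first 100 items actually take 30 s, the smoothed rate rises and the time-out for the remaining 900 items is re-derived upwards, mirroring the adjustment the algorithm describes.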

The advantages of this approach are:

It is possible to write a single test program to perform a large variety of tests, which vary only in the workload they process. The traditional approach has been to write a number of separate tests, each using a different workload and having a hard-coded time-out specific to that test. As a result, maintenance costs can be reduced.
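This advantage can be sketched as a single test driver that takes the workload as a parameter and derives its time-out from the workload's size, rather than hard-coding one per test. The function name, the per-item estimate, and the 2x margin are illustrative assumptions:

```python
import time

def run_volume_test(workload, process_item, est_seconds_per_item=0.1):
    """One driver covers many tests that differ only in workload.
    The time-out is derived from the workload size instead of being
    hard-coded separately for each test."""
    deadline = time.monotonic() + len(workload) * est_seconds_per_item * 2
    for item in workload:
        if time.monotonic() > deadline:
            raise TimeoutError("test aborted: time-out exceeded")
        process_item(item)
    return True
```

The same driver can then be invoked with small or large workloads, and the time-out scales automatically rather than being tuned by hand for each case.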

Test programs using this approach can "self-tune" themselves to the d...