
Performance testing of cloud-based applications based on constant schedule and best result analysis

IP.com Disclosure Number: IPCOM000250089D
Publication Date: 2017-May-31
Document File: 1 page(s) / 14K

Publishing Venue

The IP.com Prior Art Database


TITLE: Performance testing of cloud-based applications based on constant schedule and best result analysis

ABSTRACT

Performance testing usually requires a stable environment with dedicated hardware resources that are not shared with other workloads, in order to produce valid and repeatable results. This can be a problem in virtualized environments (such as cloud environments) that offer little control over the underlying resources. A simple, low-cost approach to mitigate the issue is presented, which is especially well suited to testing during cloud application development.

Performing tests in an unstable environment introduces noise and/or false information into the results, because delays are not always caused by the code execution path but also by, e.g., network or CPU contention with other tasks. Most solutions to this problem rely on statistical analysis of multiple runs of the same test: the test is run many times (e.g., 100 runs), execution time is measured, and standard metrics such as the mean, median, standard deviation, and 95th percentile are calculated over all results. Because the performance of a cloud-based application can fluctuate, the mean or average may not reflect the true performance.

To improve the quality of the baseline performance metric, the different approach described below can give better results. A better baseline can be calculated by:

- taking the average of the best results (usually 3 or 5), or just the single best result (see the sketch after this section)
- using a constant performance test schedule

Proposed solution for improved performance baseline calculation:

1) Do multip...
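The following is a minimal sketch, not taken from the disclosure itself, of the baseline calculation it describes: run a test many times, compute the usual statistics over all runs, then derive the baseline from the average of the k best (fastest) results. The function names and the simulated run_test() workload are hypothetical, introduced only for illustration.

```python
import random
import statistics


def run_test() -> float:
    """Hypothetical stand-in for one timed test execution (seconds).

    Real code would invoke the application under test; here the true
    cost is 1.0 s plus random noise simulating contention in a shared
    cloud environment.
    """
    return 1.0 + random.expovariate(1 / 0.3)


def standard_metrics(times: list[float]) -> dict:
    """The usual approach: mean/median/stddev/95th percentile over all runs."""
    ordered = sorted(times)
    p95_index = min(len(ordered) - 1, int(0.95 * len(ordered)))
    return {
        "mean": statistics.mean(ordered),
        "median": statistics.median(ordered),
        "stddev": statistics.stdev(ordered),
        "p95": ordered[p95_index],
    }


def best_of_baseline(times: list[float], k: int = 3) -> float:
    """Baseline from the average of the k best (fastest) runs.

    With k=1 this is simply the best result; the text suggests a k
    of 3 or 5.
    """
    return statistics.mean(sorted(times)[:k])


if __name__ == "__main__":
    runs = [run_test() for _ in range(100)]  # e.g. 100 runs, as in the text
    print(standard_metrics(runs))
    print("baseline (best of 3):", best_of_baseline(runs, k=3))
```

The rationale for preferring the best results follows from the text's own observation: environmental interference (network or CPU contention) can only slow a run down, never speed it up, so the fastest observations are the ones least distorted by noise and track the code's intrinsic cost more closely than the mean over all runs.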