
Dynamic Performance Stubs

IP.com Disclosure Number: IPCOM000171523D
Published in the IP.com Journal: Volume 8 Issue 6B (2008-06-25)
Included in the Prior Art Database: 2008-Jun-25
Document File: 3 page(s) / 55K

Publishing Venue

Siemens

Related People

Juergen Carstens: CONTACT

Abstract

In software engineering, performance testing is performed, from one perspective, to determine how fast some aspect of a system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability and resource usage. The process of software performance measurement itself follows a strict sequence of steps: analysis - test - improvement - verification. First, the software system is analyzed for bottlenecks using standard tools such as profiling and tracing software. Once a bottleneck has been found, the person performing the optimization tries to improve the affected software part based only on his or her experience, without any well-founded procedure, and the software then has to be tested again to determine the actual performance gain.

This approach has several disadvantages. It depends strongly on the experience of the person performing the optimization and supports only rough estimations, which makes a cost-benefit analysis of the improvement possibilities difficult: an expert would have to know the complete system at a very detailed level, which is hardly possible in large software projects. Moreover, the optimization has to be carried out without knowing how much of the module actually needs to be optimized. After some parts have been optimized, other bottlenecks typically still delay the execution of the component under study (CUS); without knowing all bottlenecks in advance, the improvement effort may therefore yield an unexpectedly small gain. Because the achievable improvement factor is not known beforehand, software is often over-optimized, which leads to unstructured, barely maintainable code and in turn increases the testing effort. The available optimization effort is thus not used efficiently: with a well-founded procedure, either effort could be saved or a higher degree of optimization could be achieved. Additionally, large software systems often use third-party software to reach the overall performance targets, and this third-party software also has to meet strict performance targets; with the current approach it is hardly possible to validate whether it does.
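
To illustrate the measurement step described above, the following is a minimal sketch in C of timing a component under study with a monotonic clock. The function name and the synthetic workload are illustrative assumptions, not taken from the disclosure.

    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical component under study (CUS): stands in for any
     * function that profiling has flagged as a candidate bottleneck. */
    static void component_under_study(void)
    {
        volatile unsigned long sum = 0;
        for (unsigned long i = 0; i < 50000000UL; i++)
            sum += i;
    }

    int main(void)
    {
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        component_under_study();
        clock_gettime(CLOCK_MONOTONIC, &end);

        /* Elapsed wall-clock time of the CUS in seconds. */
        double elapsed = (end.tv_sec - start.tv_sec)
                       + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("CUS execution time: %.3f s\n", elapsed);
        return 0;
    }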

Idea: Rudolf Bauer, DE-Munich; Christian Facchi, DE-Munich; Peter Trapp, DE-Munich

In the following, a novel procedure called the "dynamic performance stubs" is presented. This method combines performance measurements with the well-known stubbing techniques used in the implementation and test phases of software development. To develop dynamic performance stubs, the performance of the already exis...
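
To make the combination of stubbing and performance measurement concrete, the following is a minimal sketch in C: the component under study is replaced by a stub that consumes a configurable fraction of the CUS's previously measured execution time, so that the system-level effect of a hypothetical optimization can be estimated before any real optimization effort is invested. The sketch assumes a previously profiled CUS run time and a chosen speed-up factor; all names and numbers are illustrative.

    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <time.h>

    /* Real component under study: a stand-in for the bottleneck found
     * by profiling. The workload is synthetic and purely illustrative. */
    static void real_cus(void)
    {
        volatile unsigned long sum = 0;
        for (unsigned long i = 0; i < 50000000UL; i++)
            sum += i;
    }

    /* Assumed inputs to the stub: the CUS run time obtained from a
     * prior profiling run and the speed-up factor to be simulated
     * ("what if the CUS were twice as fast?"). Both are illustrative. */
    static const double measured_cus_seconds = 0.120;
    static const double simulated_speedup    = 2.0;

    /* Dynamic performance stub: burns CPU for measured_cus_seconds /
     * simulated_speedup, i.e. it behaves (time-wise) like an already
     * optimized CUS without any real optimization having been done. */
    static void cus_stub(void)
    {
        struct timespec start, now;
        double target = measured_cus_seconds / simulated_speedup;

        clock_gettime(CLOCK_MONOTONIC, &start);
        do {
            clock_gettime(CLOCK_MONOTONIC, &now);
        } while ((now.tv_sec - start.tv_sec)
                 + (now.tv_nsec - start.tv_nsec) / 1e9 < target);
    }

    /* Time one run of a component. */
    static double run_timed(void (*component)(void))
    {
        struct timespec s, e;
        clock_gettime(CLOCK_MONOTONIC, &s);
        component();
        clock_gettime(CLOCK_MONOTONIC, &e);
        return (e.tv_sec - s.tv_sec) + (e.tv_nsec - s.tv_nsec) / 1e9;
    }

    int main(void)
    {
        /* Baseline measurement with the real component, then the same
         * measurement with the stub simulating an optimized CUS. The
         * difference estimates the gain the optimization could deliver,
         * before any effort is spent on it. */
        double before = run_timed(real_cus);
        double after  = run_timed(cus_stub);

        printf("baseline: %.3f s, with stub: %.3f s, estimated gain: %.1f%%\n",
               before, after, 100.0 * (before - after) / before);
        return 0;
    }

In a real system the stub would also have to reproduce the functional behaviour of the CUS so that the rest of the system keeps working; the sketch above simulates only the timing behaviour.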