Performance Prototyping - Generating and Simulating a distributed IT-System from UML models Disclosure Number: IPCOM000011984D
Original Publication Date: 2003-Apr-25
Included in the Prior Art Database: 2003-Apr-25

Larger IT-Systems or products, particularly distributed and networked ones, often exhibit complex interactions between the various hardware (HW) entities (hosts, network nodes and links, peripherals); the different layers of operating system, execution environments and server processes (such as web, servlet, application or database servers), which in this paper we will summarily call middleware (MW); and finally the main behavioural software components implementing the business logic of the system (SW). With the trend towards standardized off-the-shelf products with standard interfaces and protocols, the division between the teams responsible for “SW-development” and “HW planning, installation, configuration and operation” tends to lie somewhere within the middleware layers: the “SW-team” focuses on functionality and interfaces, the “HW-team” on configuration issues.

For most large systems (e.g. ERP systems, intra-/internet portals, online shops, information and control systems), performance is a central criterion and critical success factor. With rising user expectations of perceived performance, unsatisfactory performance might - and frequently does - endanger the system’s, product’s or project’s success irrespective of functionality and design. Since these systems are often business critical and/or highly image critical, insufficient performance can incur heavy costs (e.g. compensation, penalties, superfluous hardware, loss of market share and value), delay the going-live, and reduce the system’s benefit (e.g. through lack of user acceptance and retention, or sub-optimal decisions based on out-of-date information).

Performance problems can derive from a variety of sources, including sub-optimal configuration, insufficient computational power, inefficient implementations of individual modules, and design flaws. The first two lie within the responsibility of the HW-team and can be rectified by tuning or - although more expensive - by additional HW.
The latter two are not only more difficult to detect; they are also caused much earlier in the project and are therefore far more difficult to correct in time if only detected close to the release date - quite apart from the much higher cost. One reason behind the numerous performance failures of IT-Projects (and the ensuing mutual accusations) is the far-too-late assessment of performance, which derives partly from a lack of performance awareness and partly from the restrictions of the predictive methods that could be used to measure and control progress in terms of performance goals. Ideally, for large and performance-critical systems, there should be the role of an overall “performance engineer”, who gathers performance assumptions and requirements, predicts the overall final performance based on the current implementation progress, coordinates and mediates between the conflicting interests of the HW and SW teams, and executes in-development and pre-release load-tests to substantiate development and release decisions. We group the methods for predicting the live performance coarsely into “benchmarking”, “simulation”, “prototyping” and “load-testing” (ref. Fig. 1):