
A System and Method for Predicting Scalability of Parallel Applications in Presence of Operating System (OS) Jitter using Trace Driven Simulation

IP.com Disclosure Number: IPCOM000200892D
Publication Date: 2010-Oct-29
Document File: 7 page(s) / 152K

Publishing Venue

The IP.com Prior Art Database

Abstract

A method and system for predicting scalability of parallel applications in presence of Operating System (OS) jitter is disclosed. OS jitter is emulated using trace-driven simulation. Thereafter, multiple Message Passing Interface (MPI) tasks are simulated by choosing indices at one or more points in the trace. Once the MPI tasks are done with their computations, a slowdown is calculated by comparing the total consumed cycles to the compute cycles.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 36% of the total text.



Disclosed is a method and system for predicting scalability of parallel applications in presence of Operating System (OS) jitter. A jitter simulation framework is used to simulate the effects of jitter that is characteristic of a given OS, using a given trace. Thereafter, scalability up to an arbitrarily large number of parallel tasks is predicted in presence of jitter.

The method involves collecting a jitter trace from a core/set of cores on a single node.

A single-node timestamp register reader benchmark (referred to as the TraceCollector) is used for collecting the jitter trace. The TraceCollector is run on a single node that is running the OS with the specific configuration under which the scalability of parallel applications is to be predicted. The TraceCollector has a tight loop that repeatedly reads a timestamp counter register and computes the timestamp deltas between successive readings. It then compares each timestamp delta with the minimum timestamp delta (tmin) observed on that platform to decide whether the delta constitutes a jitter instance. This is used to create a user-level jitter distribution as well as a user-level jitter trace.
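The tight loop described above can be sketched in Python. This is a minimal illustration only: the disclosure's TraceCollector reads the hardware timestamp counter register directly, whereas this sketch uses the standard high-resolution clock, and the threshold value passed as tmin is a placeholder that would be calibrated per platform.

```python
import time

def collect_jitter_trace(n_samples, t_min_ns):
    """Read a high-resolution clock in a tight loop and record each
    delta that exceeds the platform minimum t_min_ns as a jitter
    instance (timestamp, duration) pair."""
    trace = []
    prev = time.perf_counter_ns()
    for _ in range(n_samples):
        now = time.perf_counter_ns()
        delta = now - prev
        # A delta larger than the minimum observed in an undisturbed
        # loop means the OS took the CPU away: count it as jitter.
        if delta > t_min_ns:
            trace.append((prev, delta))
        prev = now
    return trace

trace = collect_jitter_trace(100_000, t_min_ns=10_000)
```

The recorded pairs give both the user-level jitter distribution (histogram of durations) and the user-level jitter trace (the time-ordered sequence itself).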

When parallel tasks run on the same physical node in a system, they typically communicate over shared memory, whereas when they run on different physical nodes, they communicate over the network using the high-speed interconnect. In order to simulate a tree-based synchronization point (for example, a barrier) across parallel tasks, one or more of the one-hop network latency, the MPI stack latency, and the shared memory latency are measured. To measure these latencies, two MPI tasks are run either on two cores of the same node (for measuring the shared memory access latency or the MPI stack latency) or on two different physical nodes (for measuring the one-hop network latency); MPI SEND and RECV messages of varying sizes are passed between these two MPI tasks, and finally an average is calculated. The methodology is illustrated in Fig. 1.
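The ping-pong measurement above can be sketched as follows. This is a stand-in, not the disclosure's setup: it uses two threads exchanging messages through queues rather than two MPI tasks exchanging SEND/RECV messages, and the function names are illustrative. The averaging over round trips and message sizes, and the halving of the round-trip time to get a one-way estimate, follow the methodology described.

```python
import time
import threading
import queue

def echo(ping, pong, n_rounds):
    # The echo side of the ping-pong: receive each message and
    # send it straight back.
    for _ in range(n_rounds):
        pong.put(ping.get())

def pingpong_latency(size_bytes, n_rounds=500):
    """Estimate the average one-way latency (seconds) for messages of
    size_bytes as half of the mean round-trip time."""
    ping, pong = queue.Queue(), queue.Queue()
    worker = threading.Thread(target=echo, args=(ping, pong, n_rounds))
    worker.start()
    payload = bytes(size_bytes)
    t0 = time.perf_counter()
    for _ in range(n_rounds):
        ping.put(payload)
        pong.get()
    rtt = (time.perf_counter() - t0) / n_rounds
    worker.join()
    return rtt / 2.0

# Average over varying message sizes, as in the methodology above.
sizes = [64, 1024, 16384]
avg_latency = sum(pingpong_latency(s) for s in sizes) / len(sizes)
```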



Figure 1

The MPI stack latency is divided in half to obtain estimates of the SEND and RECV latencies. MPI tasks are allocated sequentially, first among the cores of a node and then across nodes.
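A minimal sketch of this sequential placement rule and the resulting choice of communication latency follows. The helper names and the latency constants are illustrative placeholders, not values from the disclosure.

```python
def place_tasks(n_tasks, cores_per_node):
    """Sequential allocation: fill every core of a node before
    moving on to the next node. Returns (node, core) per rank."""
    return [(rank // cores_per_node, rank % cores_per_node)
            for rank in range(n_tasks)]

def link_latency(rank_a, rank_b, placement, shared_mem_lat, network_lat):
    """Tasks on the same physical node communicate over shared memory;
    tasks on different nodes pay the one-hop network latency."""
    node_a, _ = placement[rank_a]
    node_b, _ = placement[rank_b]
    return shared_mem_lat if node_a == node_b else network_lat

placement = place_tasks(8, cores_per_node=4)
# Ranks 0-3 land on node 0, ranks 4-7 on node 1.
lat = link_latency(0, 5, placement, shared_mem_lat=1e-6, network_lat=5e-6)
```

Applied pairwise along the edges of the simulated synchronization tree, this gives the per-hop cost of a barrier across the placed tasks.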

Different portions of the trace can be thought of as multiple traces collected on different nodes and hence can be used to model jitter experienced by multiple tasks.
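This reuse of a single trace can be sketched as follows. The rotation-with-wrap-around policy and the random choice of start indices are illustrative assumptions, anticipating the start-point discussion below.

```python
import random

def per_task_trace(trace, task_id, start_indices):
    """View one collected jitter trace as many: each simulated task
    replays the trace from its own start index, wrapping around."""
    start = start_indices[task_id]
    return trace[start:] + trace[:start]

trace = list(range(10))   # stand-in for recorded jitter durations
n_tasks = 4
# Unsynchronized jitter: each simulated task starts at a random index,
# so jitter activities hit different tasks at different points in time.
starts = [random.randrange(len(trace)) for _ in range(n_tasks)]
task_traces = [per_task_trace(trace, t, starts) for t in range(n_tasks)]
```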

Choosing the point in the jitter trace from which the different tasks start executing is an important decision, and it can have interesting ramifications. In a cluster that has unsynchronized jitter, different kinds of jitter activities will hit each node at different points in time. On the other hand, in a cluster that has employed a mechan...