Analysis of Asynchronous Multiprocessor Algorithms with Applications to Sorting

IP.com Disclosure Number: IPCOM000147948D
Original Publication Date: 1977-Jul-31
Included in the Prior Art Database: 2007-Mar-28

Publishing Venue

Software Patent Institute

Related People

Robinson, John T.: AUTHOR [+2]

Abstract

Efficient algorithms for asynchronous multiprocessor systems must achieve a balance between low process communication and high adaptability to variations in process speed. Algorithms which employ problem decomposition can be classified as static and dynamic; static and dynamic algorithms are particularly suited for low process communication and high adaptability, respectively. In order to find the "best" method, something about mean execution times must be known. Techniques for the analysis of the mean execution time are developed for each type of algorithm, including applications of order statistics and queueing theory. These techniques are applied in detail to (1) static generalizations of quicksort, (2) static generalizations of merge sort, and (3) a dynamic generalization of quicksort.

This text was extracted from a PDF file.
At least one non-text object (such as an image or picture) has been suppressed.
This is the abbreviated version, containing approximately 12% of the total text.

Page 1 of 20

Analysis of Asynchronous Multiprocessor Algorithms with Applications to Sorting

John T. Robinson

Department of Computer Science Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213

July 1977

Abstract

Efficient algorithms for asynchronous multiprocessor systems must achieve a balance between low process communication and high adaptability to variations in process speed. Algorithms which employ problem decomposition can be classified as static and dynamic; static and dynamic algorithms are particularly suited for low process communication and high adaptability, respectively. In order to find the "best" method, something about mean execution times must be known. Techniques for the analysis of the mean execution time are developed for each type of algorithm, including applications of order statistics and queueing theory. These techniques are applied in detail to (1) static generalizations of quicksort, (2) static generalizations of merge sort, and (3) a dynamic generalization of quicksort.

This research was supported in part by the National Science Foundation under grant MCS75-222-55 and the Office of Naval Research under contract N00014-764-0370, NR044-422.


1 - Introduction

     We consider the design and analysis of k-process algorithms for an asynchronous multiprocessor system, which consists of k or more processors sharing a common memory by means of a switch or connecting network. In addition there is an operating system providing such functions as process creation, scheduling of processes, allocation of memory, synchronization, etc. A real example of such a system is described in [7], and a general discussion of asynchronous parallel algorithms is presented in [5]. A k-process algorithm will be presented by giving the procedure each process executes when assigned a processor. We will assume that a processor is always available for any of the k processes that is runnable.
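The k-process model above can be sketched in modern terms: k workers, each executing the same procedure, operating on a common memory with synchronization supplied by the runtime. This is a minimal illustrative sketch, not from the paper; the procedure name `process_body` and the shared counter are assumptions, with Python threads standing in for processes.

```python
# Sketch of the k-process model: k workers share a common memory
# (the dict `shared`) and each runs the same procedure when scheduled.
# All names here are illustrative, not the paper's notation.
import threading

K = 4                      # number of processes in the k-process algorithm
shared = {"sum": 0}        # common memory shared by all processes
lock = threading.Lock()    # synchronization primitive (cf. the OS functions above)

def process_body(pid, data):
    # Each process works on its own interleaved slice of the input,
    # then merges its partial result into shared memory under mutual exclusion.
    partial = sum(data[pid::K])
    with lock:
        shared["sum"] += partial

data = list(range(100))
workers = [threading.Thread(target=process_body, args=(i, data)) for i in range(K)]
for w in workers:
    w.start()
for w in workers:
    w.join()
# shared["sum"] == 4950, i.e. sum(range(100))
```

The lock models the synchronization facilities the paper attributes to the operating system; without it, the read-modify-write of the shared sum could interleave incorrectly.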

     Given a task we wish to execute on such a system, in order to exploit parallelism we must decompose the task into a set of subtasks. Some subtasks cannot begin until others upon which they depend finish; this establishes a precedence relation between subtasks. Inefficiency in an algorithm arises when some process must spend too much time waiting for other processes to complete subtasks, and again towards the end of execution when there are fewer than k subtasks. Attempts to remedy this by "evenly" dividing the original task are hopeless, since subtask execution time will vary due to variations in the input, the effects of other users, properties of the operating system, processor-memory interference, and many other causes. Any efficient algorithm must adapt to these variations. However, this adaptation is ex...