"An Optimal Selection Algorithm" Disclosure Number: IPCOM000128243D
Original Publication Date: 1986-Dec-31
Included in the Prior Art Database: 2005-Sep-15
Document File: 7 page(s) / 22K

Publishing Venue

Software Patent Institute

Related People

Richard Cole: AUTHOR


We give an optimal parallel algorithm for selection on the EREW PRAM. It requires a linear number of operations and (Equation Omitted) time.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 26% of the total text.

Page 1 of 7


"An Optimal Selection Algorithm"

by Richard Cole

Ultracomputer Note #97
Technical Report #209
March, 1986

This work was supported in part by NSF grant DCR-84-01633 and by an IBM faculty development award.


We give an optimal parallel algorithm for selection on the EREW PRAM. It requires a linear number of operations and

(Equation Omitted)
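For selection, the sequential benchmark is linear: the k-th smallest of n elements can be found in O(n) worst-case time, for example by the classical median-of-medians method. A minimal sketch of that sequential baseline follows (this is an illustration of the benchmark, not the paper's parallel algorithm):

```python
def select(a, k):
    """Return the k-th smallest element (0-indexed) of a in O(n) worst-case time."""
    if len(a) <= 25:
        return sorted(a)[k]
    # Pivot: the median of the medians of groups of 5 elements.
    groups = [sorted(a[i:i + 5]) for i in range(0, len(a), 5)]
    medians = [g[len(g) // 2] for g in groups]
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in a if x < pivot]   # strictly below the pivot
    hi = [x for x in a if x > pivot]   # strictly above the pivot
    if k < len(lo):
        return select(lo, k)
    if k >= len(a) - len(hi):
        return select(hi, k - (len(a) - len(hi)))
    return pivot                        # k falls in the band equal to the pivot
```

The median-of-medians pivot guarantees that each recursive call discards a constant fraction of the input, which is what makes the worst case linear rather than quadratic.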


1. Introduction

The models of parallel computation used in this paper are the exclusive-read exclusive-write (EREW) and the concurrent-read exclusive-write (CREW) parallel random access machine (PRAM). A PRAM employs p synchronous processors, all having access to a common memory. An EREW PRAM does not allow simultaneous access by more than one processor to the same memory location for read or write purposes, while a CREW PRAM allows concurrent access for reads but not for writes. See [Vi-83b] for a survey of results concerning PRAMs.

Let Seq(n) be the fastest known worst-case running time of a sequential algorithm, where n is the length of the input for the problem at hand. Obviously, the best upper bound on the parallel time achievable using p processors, without improving the sequential result, is of the form O(Seq(n)/p). A parallel algorithm that achieves this running time is said to have optimal speed-up or, more simply, to be optimal. A primary goal in parallel computation is to design optimal algorithms that also run as fast as possible. The following notions of "fast" and "efficient" are in line with this goal.

Given a parallel algorithm that runs in time T using p processors, we say that it performs (a total of) pT operations. (Recall that a theorem due to [B-74] implies that, under quite general conditions, any algorithm that performs x operations in time t can be implemented so as to achieve time O(t) using x/t processors.) Let A (resp. B) be a parallel algorithm that performs x1 operations (resp. x2 operations) in time T1 (resp. time T2) on a given input. We say that algorithm A is more efficient than algorithm B if

x1 = o(x2)

(regardless of the relation between T1 and T2). Also, algorithm A is faster than algorithm B if

T1 = o(T2)

(regardless of the relation between x1 and x2).
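To illustrate the two definitions, consider a pair of hypothetical algorithms; the operation and time bounds below are made-up numbers for illustration, not results from the paper:

```python
import math

# Two hypothetical algorithms on input size n (illustrative figures only):
#   A: x1 = n       operations in time T1 = (log n)^2
#   B: x2 = n log n operations in time T2 = log n
# Then x1 = o(x2), so A is more efficient; T2 = o(T1), so B is faster.
def ratios(n):
    lg = math.log2(n)
    x1, T1 = n, lg * lg
    x2, T2 = n * lg, lg
    return x1 / x2, T2 / T1   # both ratios tend to 0 as n grows

for n in (2 ** 10, 2 ** 20, 2 ** 30):
    eff, spd = ratios(n)
    print(f"n = 2^{int(math.log2(n))}: x1/x2 = {eff:.4f}, T2/T1 = {spd:.4f}")
```

Note that neither algorithm dominates the other: efficiency compares operation counts regardless of time, and speed compares times regardless of operation counts, exactly as in the definitions above.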

New York University Page 1 Dec 31, 1986


In view of our interest in efficiency, we shall give all our results in terms of the following two parameters: operations performed and running time. To obtain the number of processors used, simply divide the operation count by the running time. This presentation has two advantages. First, it provides for a comparison "at a glance" with the best sequential algorithm. Second, it may make the description of the algorithm simpler: at times it is convenient to describe the algorithm as if different numbers of processors are being used for different steps of the algorithm, and simply to count the number of operations performed. We are assured, by B...
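The accounting just described can be sketched concretely. With hypothetical per-step operation counts (the numbers below are illustrative, not from the paper), the total operation count divided by the number of steps gives the processor count, and a Brent-style schedule in the spirit of [B-74] achieves time proportional to the original step count:

```python
import math

# Hypothetical operation counts for each step of a 4-step parallel algorithm
# (illustrative numbers only, not from the paper).
ops_per_step = [8, 4, 2, 2]
total_ops = sum(ops_per_step)   # x = 16 operations in all
T = len(ops_per_step)           # t = 4 parallel steps
processors = total_ops // T     # the paper's convention: x / t = 4 processors

# Brent-style simulation: with p processors, step i takes ceil(ops_i / p)
# rounds, so the whole run takes sum_i ceil(ops_i / p) <= x/p + t = O(t)
# time when p = x/t.
p = processors
sim_time = sum(math.ceil(ops / p) for ops in ops_per_step)
print(f"{p} processors simulate the {T}-step schedule in {sim_time} steps")
```

This is why describing an algorithm by its operation count alone loses nothing: the processor-bounded implementation can always be recovered afterwards.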