
A Parallel Programming Environment

IP.com Disclosure Number: IPCOM000128274D
Original Publication Date: 1984-Dec-31
Included in the Prior Art Database: 2005-Sep-15
Document File: 16 page(s) / 50K

Publishing Venue

Software Patent Institute

Related People

John R. Allen: AUTHOR [+4]


This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 8% of the total text.

A Parallel Programming Environment*

John R. Allen Ken Kennedy

Rice COMP TR84-8 July 1984

Department of Computer Science Rice University P.O. Box 1892 Houston, Texas 77251

*Support for this work was provided by IBM Corporation.

A Parallel Programming Environment

John R. Allen Ken Kennedy

Abstract

Because humans tend to think sequentially rather than concurrently, program development is most naturally done in a sequential language such as Fortran. While the programs resulting from this method of development are usually very efficient on a scalar machine, they are often incapable of directly making effective use of parallel processors.

Typically, the only language support for parallel processing is a set of very simple language primitives or system calls that permit concurrent programming in Fortran. As a result, the programmer is responsible for explicitly handling all synchronization. The problem with this approach is that concurrent programming is unnatural for many scientific programmers. Not only is writing such a program a tedious process, but it also presents many opportunities for creating bugs that are almost impossible to find: race conditions, deadlock, and programs which produce different results on the same data.
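The burden described here can be made concrete with a minimal sketch in a modern setting (Python's threading module stands in for the Fortran-level primitives of the era; the function and parameter names are illustrative, not from the paper). Every read-modify-write of shared state must be guarded by hand; forgetting the lock produces exactly the kind of silent, irreproducible race the authors warn about.

```python
import threading

def parallel_count(n_threads=4, n_increments=100_000):
    """Increment a shared counter from several threads at once.

    The explicit lock is the programmer's responsibility: omit it and
    the interleaved read-modify-write sequences can lose updates, so
    the final total silently comes up short on some runs -- a race
    condition that rarely reproduces under a debugger.
    """
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(n_increments):
            with lock:  # hand-written synchronization around the update
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

Even in this toy, correctness rests entirely on the programmer remembering one `with lock:` line; the paper's thesis is that the environment, not the human, should generate such synchronization.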

This paper investigates automatic techniques for converting Fortran programs to parallel form. Using this approach, the programmer would write loops in which the parallelism could be uncovered naturally. The programming environment would take responsibility for generating appropriate synchronization primitives.
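Whether a loop's parallelism can be "uncovered" hinges on loop-carried dependences: does iteration i read a value that an earlier iteration wrote? A real parallelizer decides this symbolically at compile time; the sketch below (Python, with illustrative names chosen here, not taken from the paper) merely demonstrates the concept dynamically by running a loop body in two different iteration orders.

```python
def has_order_dependence(loop_body, n, init):
    """Run the loop forward and in reverse iteration order and compare.

    Differing results prove the iterations are order-dependent, so the
    loop as written is unsafe to run in parallel. Identical results do
    NOT prove independence -- this is a dynamic illustration, not the
    compile-time dependence analysis an automatic parallelizer performs.
    """
    forward = list(init)
    for i in range(1, n):
        loop_body(forward, i)

    backward = list(init)
    for i in reversed(range(1, n)):
        loop_body(backward, i)
    return forward != backward

def scale(a, i):
    a[i] = 2 * a[i]      # touches only element i: no carried dependence

def recurrence(a, i):
    a[i] = a[i - 1] + 1  # reads what iteration i-1 wrote: carried dependence
```

The `scale` loop can be distributed across processors with no synchronization at all, while the `recurrence` loop cannot be naively parallelized; distinguishing the two automatically is the core task of the environment the paper proposes.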

Although automatic techniques will not, in the foreseeable future, free the human programmer from thinking about parallelism, they can permit him to view it at a higher level of abstraction and hence make programming more effective.

1. Introduction

It seems clear that the next generation of supercomputers will be based upon the multiple-processor paradigm. Vector instruction sets have provided substantial improvements in the running times of suitable programs, but we seem to be at the limits of their applicability. In order to achieve further speed-up, multiple processor systems must and will be tried. The newest Cray supercomputers offer facilities to apply multiple processors to a single job [Lars 84].

Two designs, both of the familiar shared-memory class, appear plausible for the near and medium-term future. In the first design, the CPU of a single high-speed vector computer is replicated to produce a single shared-memory system with two, four, eight, or even sixteen processors. Cray has adopted this strategy in the Cray X-MP. In the second design, many small processors, say fifty to a thousand, are interconnected to a shared memory via some sort of network. The NYU Ultracomputer [GGKM 83, Schw 80] is one example of this design.

Programming machines...