Super Von Neumann Computer

IP.com Disclosure Number: IPCOM000060769D
Original Publication Date: 1986-May-01
Included in the Prior Art Database: 2005-Mar-09
Document File: 2 page(s) / 14K

Publishing Venue

IBM

Related People

Folberth, OG: AUTHOR

Abstract

Present general-purpose computers differ vastly in hardware and performance, but they are invariably based on the same basic design principle, namely, the von Neumann processor. Such computers are sequential. They perform one operation at a time, using a single processing element, a sequential centralized control unit, low-level sequential machine language, and a linearly addressed fixed-width memory. Their sequential operation limits the performance of such computers. Further improvements in performance might be achieved by extended parallel processing: instead of using one computer for one task at a time, the task is divided into parts and assigned to a plurality of computing units operating in parallel (non-von Neumann computers). However, parallel processing has its unique problems and limitations.

Super Von Neumann Computer

Present general-purpose computers differ vastly in hardware and performance, but they are invariably based on the same basic design principle, namely, the von Neumann processor. Such computers are sequential. They perform one operation at a time, using a single processing element, a sequential centralized control unit, low-level sequential machine language, and a linearly addressed fixed-width memory. Their sequential operation limits the performance of such computers. Further improvements in performance might be achieved by extended parallel processing: instead of using one computer for one task at a time, the task is divided into parts and assigned to a plurality of computing units operating in parallel (non-von Neumann computers). However, parallel processing has its unique problems and limitations.

While many aspects of such problems are not as yet fully understood, agreement exists about one general dilemma of highly parallel processing: in order to achieve generality, a complex and costly hardware interconnection scheme is required. The long interconnection lines and the large overhead of such arrangements are, however, counterproductive with regard to the desired high performance. Therefore, little or nothing is gained from this philosophy. On the other hand, highly efficient parallel hardware with few interconnections and little control overhead (such as "systolic arrays") can be constructed for "special" computing tasks. However, by proceeding in that way, the usually highly desirable "generality" is to a large extent - or even totally - lost. This dilemma can be expressed in general terms: high-efficiency parallel processing is coupled with limited generality, and, vice versa, high generality is obtainable only at a low degree of parallelism and thus low performance.

To overcome this dilemma, it is proposed that flexible special hardware be provided which can be reconfigured dynamically to be optimally suited for the calculations to be performed. Such hardware could consist, for example, of reconfigurable systolic arrays using LSSD (Level Sensitive Scan Design) techniques. Reconfiguration could be effected dynamically such that a sequenc...
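
To make the systolic-array side of this trade-off concrete, the following sketch (not part of the original disclosure; all names such as N, acc, a_reg and b_reg are illustrative assumptions) simulates in C a small 4x4 output-stationary systolic array computing a matrix product. Operands are fed in skewed from the left and top edges; on every cycle each processing element multiplies the pair of operands arriving from its left and upper neighbours, adds the product to a local accumulator, and forwards the operands rightward and downward. After 3N-2 cycles each element of the product sits in one processing element, obtained with no centralized control and only nearest-neighbour interconnections.

#include <stdio.h>

#define N 4

int main(void) {
    int A[N][N], B[N][N];
    int acc[N][N]   = {0};   /* per-PE accumulator: the result stays in place */
    int a_reg[N][N] = {0};   /* operand latched from the left neighbour       */
    int b_reg[N][N] = {0};   /* operand latched from the upper neighbour      */

    /* Small test case: B is the identity, so the result must equal A.        */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            A[i][j] = i * N + j + 1;
            B[i][j] = (i == j);
        }

    /* Run 3N-2 cycles, enough for all skewed operands to cross the array.    */
    for (int t = 0; t < 3 * N - 2; t++) {
        /* Sweep from bottom-right to top-left so that each PE still sees the
           value its neighbour latched on the previous cycle.                 */
        for (int i = N - 1; i >= 0; i--)
            for (int j = N - 1; j >= 0; j--) {
                int k = t - i - j;   /* index of the skewed operands arriving now */
                int a_in = (j == 0)
                         ? ((k >= 0 && k < N) ? A[i][k] : 0)  /* fed at left edge */
                         : a_reg[i][j - 1];
                int b_in = (i == 0)
                         ? ((k >= 0 && k < N) ? B[k][j] : 0)  /* fed at top edge  */
                         : b_reg[i - 1][j];
                acc[i][j] += a_in * b_in;  /* multiply-accumulate               */
                a_reg[i][j] = a_in;        /* forwarded to the right next cycle */
                b_reg[i][j] = b_in;        /* forwarded downward next cycle     */
            }
    }

    /* Print the accumulated product, which should reproduce A.               */
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%5d", acc[i][j]);
        printf("\n");
    }
    return 0;
}

In the spirit of the proposal, reconfiguring such an array for a different special task would amount to changing the function performed in each processing element and the pattern in which operands are forwarded, rather than rewriting a sequential program.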