
Pipelining in Floating-Point Processors

IP.com Disclosure Number: IPCOM000062544D
Original Publication Date: 1986-Dec-01
Included in the Prior Art Database: 2005-Mar-09
Document File: 4 page(s) / 50K

Publishing Venue

IBM

Related People

Hornung, LM: AUTHOR [+2]

Abstract

A method of boosting system performance in low-cost desk-top computers by providing concurrent execution of multiple floating-point operations is described. In the desk-top computer market, there is increasing application for floating-point processors (FPPs) which support general-purpose microprocessors. These LSI (large-scale integration) components, while low in cost, provide the capability of adding, subtracting, multiplying, dividing, converting, etc., floating-point numbers (numbers defined using scientific notation, such as 52 x 10^4) at very high rates. Certain applications such as graphics (static and dynamic frame), image processing, fast Fourier transforms, etc., require this numeric performance for acceptable operation. However, the design of these low-cost LSIs is such that only one numeric operation at a time can be executed.




In many cases, the host microprocessor is idle during the floating-point operation because the system architecture does not provide a recovery mechanism in the event that the floating-point operation is unsuccessful. This can occur due to numeric anomalies (such as "divide-by-zero") or system errors. The following definitions are employed in describing the improved arrangement.

1. Operand = floating-point number

2. Operator = numeric transform such as +, -, x, %

3. Floating-Point (FP) Operation = the process of applying an operator to one or more operands.

4. Pipelining = the technique whereby an operation is started before a preceding sequential operation is completed. This results in performance improvements when properly implemented.

5. Floating-Point Operation Phases - Each operation consists of three phases as follows:

   a) Instruction Phase - Operators and operands are transferred from the system memory to the FP subsystem.

   b) Execution Phase - The FPP performs the specified calculation.

   c) Reply Phase - Feedback data is transferred to the system processor from the FPP. Specifics included are as follows:

      . Positive Acknowledge - indicates that the instruction was received.

      . Error Status - indicates whether an abnormal condition was detected in the previous instruction or execution.

      . Result - the reply includes a numeric result if the instruction was a read or input instruction from the programmer's viewpoint.

Note that the three phases must occur in the order specified (relative to the start only), but they do not have to be continuous. Reference to Fig. 1, which is the system block diagram, will aid in understanding the listed definitions.

The normal method of pipelining floating-point operations is such that the execution of one operation occurs simultaneously with the reply and instruction phases of adjacent operations. The result is improved floating-point performance and decoupling of the host microprocessor during the FPP execution phase, thereby allowing additional instruction processing by the host. In the case of operation errors, a method of recovery is used which is simil...
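As a rough modern illustration of the scheme described above (not the original implementation; all names and the one-phase-per-cycle timing are hypothetical), the three phases and their overlap can be sketched in Python. Each operation is staggered by one cycle, so operation k's Execution coincides with operation k-1's Reply and operation k+1's Instruction, keeping both the FPP and the host bus busy:

```python
# Hypothetical sketch of the three-phase pipelined FP operation flow
# described in the disclosure.  Timings and names are illustrative only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Reply:
    """Contents of the Reply phase, per the definitions above."""
    ack: bool                      # Positive Acknowledge - instruction received
    error: bool                    # Error Status - abnormal condition detected
    result: Optional[float] = None # numeric result, for read/input instructions

def run_pipelined(ops):
    """Schedule each op's Instruction (I), Execution (E), and Reply (R)
    phase, one phase per cycle.  Starting op k at cycle k means op k's
    Execution (cycle k+1) overlaps op k-1's Reply and op k+1's
    Instruction - the overlap the disclosure describes."""
    timeline = []  # (cycle, phase, op index)
    for k, _ in enumerate(ops):
        start = k  # without pipelining this would be 3 * k
        timeline += [(start, "I", k), (start + 1, "E", k), (start + 2, "R", k)]
    return timeline

ops = [("+", (1.0, 2.0)), ("x", (3.0, 4.0)), ("-", (5.0, 1.0))]
for cycle, phase, k in sorted(run_pipelined(ops)):
    print(f"cycle {cycle}: op {k} phase {phase}")
```

With three operations, the pipelined schedule finishes in 5 cycles instead of the 9 a strictly serial FPP would need, which is the performance gain the text attributes to pipelining.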