
Pipelined Data Storage Fetches

IP.com Disclosure Number: IPCOM000060455D
Original Publication Date: 1986-Apr-01
Included in the Prior Art Database: 2005-Mar-08
Document File: 3 page(s) / 73K

Publishing Venue

IBM

Related People

Awsienko, O: AUTHOR [+2]

Abstract

A technique is described whereby an algorithm provides a pipelined, single-cycle data storage fetch without the need for a data cache. In a reduced instruction set computer architecture, instruction operands are usually register based, with simple load and store instructions providing the flow of operands between memory and the general purpose registers. Consequently, utilization of load/store instructions is very high, and performance optimization of these instructions is essential. For this reason, data cache memories are commonly used; however, cache memories are costly, and their control circuitry can be complex. Therefore, pipelining was introduced to perform data storage operations at a single-cycle rate.

At least one non-text object (such as an image or picture) has been suppressed.
This is the abbreviated version, containing approximately 46% of the total text.



In a pipelined architecture, the destination of data returning from memory must be resolved many cycles after the request is made, usually after the processing unit has executed a string of additional instructions. In addition, execution of the instructions issued between the fetch request and the return of the data must be conditioned on those instructions not requiring the data. This is time consuming and requires an extensive amount of control circuitry to enforce the proper operational sequence.

The concept presented herein allows pipelining of data storage fetch requests at a single-cycle request rate through the implementation of a program algorithm and a minimum of hardware. The use of the algorithm is based on two assumptions. First, the storage control unit will return data for the fetch requests in the sequential order in which the requests were made. Second, instruction execution can, in most cases, proceed for several cycles without requiring the requested data. As a result, the data may be retained in a first-in, first-out (FIFO) stack and instruction execution can continue, since the outstanding data is not needed for several cycles.

The four-layer FIFO stack 10, shown in Fig. 1 and optimized to the memory data storage fetch rate, is used to store the destination of the outstanding data. Each entry is ten bits wide: five bits hold the general purpose register address, four bits hold the information needed to perform incoming data alignment, and a flag bit is set whenever the stack position contains a valid entry. The flag is reset at all other times. Stack control 11 detects the condition requiring a push, i.e., execution of an outstanding load instruction, or a pop, i.e., return of outstanding data from memory. Data pending storage or bus unit (DPS/BU) 12 is a latch which stores the outstanding data queued on the stack. Data load storage or bus unit (DLS/BU) 13 stores the data needed to execute the current instruction. Source operand comparator 14 compares the general purpose register source address with all valid entries on the stack to determine if execution of the current instructi...