
Local Branching

IP.com Disclosure Number: IPCOM000044534D
Original Publication Date: 1984-Dec-01
Included in the Prior Art Database: 2005-Feb-06
Document File: 2 page(s) / 46K

Publishing Venue

IBM

Related People

Capozzi, AJ: AUTHOR [+3]




A processor's performance is improved by prefetching subsequent instructions while current instructions are executing. Overlap can be almost complete if (1) the instructions are sequential and (2) the subsequent instructions reside in readily accessible storage. Processors generally bring instructions from ready storage (the cache) into the processor, where they can be executed directly (the instruction buffer). On processors with no prefetch, this is done for each instruction as it is needed. When a branch instruction causes execution to jump somewhere other than the next sequential instruction, the current contents of the instruction buffer, including all prefetched instructions, are discarded, and the instruction buffer is reloaded with new data. This takes considerable time and occurs quite frequently.

Many, if not most, branches go to an instruction that is (1) within the same cache page (a segment of contiguous data residing in the cache) or (2) within an adjacent cache page. It is thus advantageous to construct an instruction buffer capable of holding one or more complete cache pages, as in the drawing. A register is maintained for each cache page currently within the instruction buffer, containing that page's address. Whenever a successful branch is executed (or even for next-sequential-instruction (NSI) prefetch), specific high-order bits of the target address are compared simultaneously with each cache page register. If a matc...
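The lookup described above can be sketched in software. The following is a minimal illustration, not the disclosed hardware: the page size (64 bytes), the two-page buffer, the FIFO replacement policy, and all names are assumptions chosen for the example. The essential idea it models is that the high-order bits of a branch target are compared against a register kept for each buffered cache page, and on a match the branch completes without discarding the instruction buffer.

```python
PAGE_BITS = 6  # assumed: low-order 6 bits address within a 64-byte cache page

class InstructionBuffer:
    """Toy model of an instruction buffer holding whole cache pages."""

    def __init__(self, num_pages=2):
        self.page_regs = [None] * num_pages  # one address register per buffered page
        self.next_slot = 0                   # FIFO replacement pointer (assumed policy)
        self.reloads = 0                     # full-page fetches from the cache

    def branch_to(self, target):
        """Return True if the target is already buffered (a local branch)."""
        page = target >> PAGE_BITS           # high-order bits = cache-page address
        if page in self.page_regs:           # compared against every page register
            return True                      # hit: branch without reloading the buffer
        # Miss: discard one buffered page and reload with the target's page.
        self.page_regs[self.next_slot] = page
        self.next_slot = (self.next_slot + 1) % len(self.page_regs)
        self.reloads += 1
        return False

buf = InstructionBuffer()
buf.branch_to(0x100)                  # miss: page loaded into the buffer
print(buf.branch_to(0x13C))           # same 64-byte page, so a local branch: True
print(buf.branch_to(0x200))           # different page, buffer slot reloaded: False
print(buf.reloads)                    # 2
```

A hardware implementation would perform the register comparisons simultaneously rather than scanning a list; the sequential membership test here stands in for that parallel compare.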