Sequential Program Prefetching in Memory Hierarchies Disclosure Number: IPCOM000131354D
Original Publication Date: 1978-Dec-01
Included in the Prior Art Database: 2005-Nov-10
Document File: 19 page(s) / 64K

Publishing Venue

Software Patent Institute

Related People

Alan Jay Smith: AUTHOR [+3]


University of California, Berkeley

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 6% of the total text.


This record contains textual material that is copyright © 1978 by the Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Contact the IEEE Computer Society (714-821-8380) for copies of the complete work that was the source of this textual material and for all use beyond that as a record from the SPI Database.

Sequential Program Prefetching in Memory Hierarchies

  (Image Omitted: Memory transfers due to a cache miss are costly. Prefetching all memory references in very fast computers can increase the effective CPU speed by 10 to 25 percent.)

Alan Jay Smith

University of California, Berkeley

Transfers of information between levels of an automatically managed memory hierarchy at the time the program references it (a miss) are usually costly in overhead operations and idle time. The fact that patterns of program execution and data access are largely sequential provides the opportunity to set up some means for predicting which sections (pages) of a program's memory address space are likely to be accessed in the near future. By prefetching these pages before they are actually needed, system efficiency can be significantly improved.

The problem is to relate the type of prefetching to the page size and the memory size. We simulated these variables using several kinds of program address traces. It was found that prefetching for small page sizes, such as those used in cache memories, is potentially effective, but is critically dependent on the details of how the cache memory is implemented. This led us to investigate the architecture of the cache memories for the IBM 370/168 and Amdahl 470V/6, and we show how prefetching might be properly implemented in these machines.
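The kind of trace-driven simulation described above can be sketched in a few lines. The fragment below is a hypothetical illustration (not the paper's simulator): an LRU-managed cache level that, on each miss to block b, also prefetches block b+1 — the classic one-block-lookahead sequential policy. The function name, trace, and parameters are assumptions chosen for the example.

```python
from collections import OrderedDict

def prefetch_miss_ratio(trace, capacity, block_size):
    """Miss ratio of an LRU cache with one-block-lookahead
    sequential prefetch: on a miss to block b, block b+1 is
    fetched as well (one of several possible variants)."""
    cache = OrderedDict()  # block number -> None, ordered by recency
    misses = 0

    def install(b):
        cache[b] = None
        cache.move_to_end(b)                  # mark most recently used
        if len(cache) > capacity:
            cache.popitem(last=False)         # evict least recently used

    for addr in trace:
        b = addr // block_size
        if b in cache:
            cache.move_to_end(b)              # hit: refresh recency
        else:
            misses += 1                       # demand miss
            install(b)
            if b + 1 not in cache:
                install(b + 1)                # prefetch next sequential block
    return misses / len(trace)

# Purely sequential trace: prefetching halves the demand miss ratio.
trace = list(range(64))
print(prefetch_miss_ratio(trace, capacity=8, block_size=4))  # -> 0.125
```

On this purely sequential trace, only every other block incurs a miss (its successor having been prefetched), so the miss ratio is 0.125 versus 0.25 under pure demand fetching; real program traces, as the article notes, make the benefit sensitive to page size and implementation details.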

Memory hierarchies and prefetching

Large modern computer systems frequently employ an automatically managed memory hierarchy such as the one illustrated in Figure 1. Information is usually transferred to higher levels (or "fetched") on a demand basis whereby, when a datum is referenced and is found to be absent from the highest level of the hierarchy, it is copied from the highest level at which it is resident to some or all higher levels. The event of a missing datum is called a "miss"; it is also known, in the case of a datum missing from main memory, as a page fault. The miss ratio (to a given level of the hierarchy) is the fraction of all memory references resulting in a miss (that is, a fetch from a lower level).
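The miss ratio under pure demand fetching can be illustrated with a minimal sketch (hypothetical Python, assuming one LRU-managed level of the hierarchy; the names and trace are invented for the example):

```python
from collections import OrderedDict

def demand_miss_ratio(trace, capacity, block_size):
    """Fraction of references that miss in an LRU-managed level
    when blocks are fetched only on demand."""
    cache = OrderedDict()  # block number -> None, ordered by recency
    misses = 0
    for addr in trace:
        block = addr // block_size
        if block in cache:
            cache.move_to_end(block)          # hit: refresh recency
        else:
            misses += 1                       # miss: fetch from lower level
            cache[block] = None
            if len(cache) > capacity:
                cache.popitem(last=False)     # evict least recently used
    return misses / len(trace)

# 64 sequential references, 4 addresses per block: 16 blocks,
# each missed exactly once.
print(demand_miss_ratio(list(range(64)), capacity=8, block_size=4))  # -> 0.25
```

Each of the 16 distinct blocks misses once out of 64 references, giving a miss ratio of 0.25; sequential prefetching aims to remove many of these compulsory misses before the program reaches them.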

Memory reference patterns govern prefetching

Certain patterns are commonly found in the memory referencing behavior of computer programs, and it is possible to use these patterns to attempt to predict which sections of a program's address space will be referenced next. Information that is fetched before it is actually accessed is said to be prefetched. Prefetching is possible between any two levels of the memory hierarchy. Because demand fetches usually involve high...