
Dynamic prefetcher adjustment interface

IP.com Disclosure Number: IPCOM000233783D
Publication Date: 2013-Dec-19
Document File: 3 page(s) / 45K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a dynamic prefetcher adjustment interface with an algorithm that adjusts the prefetcher on the fly by monitoring the cache hit rate.

Background:

Modern microprocessors are much faster than the memory where the program is kept, meaning that the program's instructions/data cannot be read fast enough to keep the microprocessor busy. Adding a cache can provide faster access to needed instructions/data.

Prefetching occurs when a processor requests an instruction/data from main memory before it is actually needed. Once the instruction/data comes back from memory, it is placed in a cache. When an instruction/data is actually needed, the instruction can be accessed much more quickly from the cache than if a request had to be made to main memory.


Problem:

Unlike software prefetching, which depends on programmers writing code to load the required data into the cache, a hardware prefetcher relies on many different algorithms to predict the program's memory requirements.
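For contrast, the following is a minimal sketch of software prefetching in C, in which the programmer explicitly requests data a few iterations before it is used. The GCC/Clang builtin __builtin_prefetch and the prefetch distance of 16 elements are illustrative choices, not details taken from this disclosure.

    #include <stddef.h>

    /* Software prefetch: the programmer explicitly requests data that will be
     * needed a few iterations ahead, so it is already in the cache when the
     * loop reaches it.  The distance of 16 elements is an assumed tuning value. */
    long sum_with_software_prefetch(const long *data, size_t n)
    {
        long sum = 0;
        for (size_t i = 0; i < n; i++) {
            if (i + 16 < n)
                __builtin_prefetch(&data[i + 16], /* rw = */ 0, /* locality = */ 3);
            sum += data[i];
        }
        return sum;
    }

A hardware prefetcher tries to achieve the same effect without such hints by observing the address stream and predicting which lines to fetch next.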

However, incorrect predictions impose a performance penalty when the workload or object sizes differ from what the prediction algorithm expects.
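One workload where such mispredictions arise is pointer chasing. The sketch below is illustrative only and does not come from the disclosure: it traverses a linked list whose node addresses cannot be anticipated by a stride- or stream-based hardware prefetcher, so fetches issued on wrong guesses can evict useful cache lines and waste memory bandwidth instead of hiding latency.

    #include <stddef.h>

    /* Linked-list traversal: each node's address is only known after the
     * previous node has been loaded, so a stride- or stream-based hardware
     * prefetcher has no regular pattern to predict. */
    struct node {
        long value;
        struct node *next;
    };

    long sum_list(const struct node *head)
    {
        long sum = 0;
        for (const struct node *p = head; p != NULL; p = p->next)
            sum += p->value;   /* next address unknown until p is in the cache */
        return sum;
    }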


Idea:

Modern microprocessors are much faster than the memory where the program is kept, meaning that the program's instructions/data cannot be read fast enough to keep the microprocessor busy. Adding a cache can provide faster access to needed instructions/data.

Prefetching occurs when a processor requests an instruction/data from main memory before it is actually needed. Once the instruction/data comes back from memory, it is placed in a cache. When an instruction/data is actually needed, the instruction can...
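According to the abstract, the disclosed interface monitors the cache hit rate and adjusts the prefetcher on the fly. The following C sketch illustrates one possible sampling step under that reading; the counter-reading stubs, the 0-3 aggressiveness scale, and the hit-rate thresholds are hypothetical placeholders for illustration and are not values taken from this disclosure.

    #include <stdint.h>
    #include <stdio.h>

    /* Placeholder counter reads -- on real hardware these would come from
     * performance-monitoring counters or model-specific registers.  The
     * stub values below exist only so the sketch runs. */
    static uint64_t read_cache_accesses(void) { return 1000000; }
    static uint64_t read_cache_hits(void)     { return  820000; }

    static void set_prefetch_aggressiveness(int level)
    {
        /* A real implementation would program a prefetcher control register. */
        printf("prefetcher aggressiveness -> %d\n", level);
    }

    /* One sampling step of a hit-rate-driven adjustment loop: if the hit rate
     * observed since the last sample falls below a low threshold, back the
     * prefetcher off; if it stays above a high threshold, let it be more
     * aggressive.  The 0.70/0.90 thresholds and the 0-3 scale are assumptions. */
    static void adjust_prefetcher(int *level)
    {
        static uint64_t prev_accesses, prev_hits;

        uint64_t accesses = read_cache_accesses();
        uint64_t hits     = read_cache_hits();
        uint64_t d_acc    = accesses - prev_accesses;
        uint64_t d_hit    = hits     - prev_hits;

        prev_accesses = accesses;
        prev_hits     = hits;

        if (d_acc == 0)
            return;                          /* nothing observed this interval */

        double hit_rate = (double)d_hit / (double)d_acc;

        if (hit_rate < 0.70 && *level > 0)
            (*level)--;                      /* mispredictions likely: back off */
        else if (hit_rate > 0.90 && *level < 3)
            (*level)++;                      /* predictions paying off: be more eager */

        set_prefetch_aggressiveness(*level);
    }

    int main(void)
    {
        int level = 2;                       /* assumed default aggressiveness */
        adjust_prefetcher(&level);           /* would normally run on a timer tick */
        return 0;
    }

In practice, such a step would run periodically, and the placeholder hooks would map onto platform-specific performance counters and prefetcher control registers.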