
Method for maintaining a multi-level cache directory

IP.com Disclosure Number: IPCOM000198061D
Publication Date: 2010-Jul-24
Document File: 2 page(s) / 19K

Publishing Venue

The IP.com Prior Art Database


Disclosed is a method for maintaining the directories of a high-level cache and multiple lower-level caches under cache inclusion, where any address present in a low-level cache is also present in the high-level cache. The method consists of a conventional directory for the high-level cache and a directory for each low-level cache in which the tags are not addresses, but indexes into the high-level directory.



A modern computer uses several levels of cache to improve the latency of memory access while reducing the bandwidth required. Low-level caches are smaller, closer to the processor, and provide the lowest-latency access. High-level caches are much larger, farther from the processor, and take much longer to access. The data held in such a hierarchy of caches must be kept coherent. When a processor needs data that is not in one of its caches, it sends the request to all of its peers. Each peer inspects its own caches and responds if the requested data is found. In a multi-level cache hierarchy, each level in the hierarchy must be inspected.
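The snoop flow described above can be sketched as follows. This is an illustrative model only, not taken from the disclosure; the function name, the peer/level data layout, and the return convention are all assumptions made for the example.

```python
# Hedged sketch of the snoop interrogation described above: a requester
# broadcasts a missed address to its peers, and each peer must inspect
# every level of its own cache hierarchy before it can respond.
def snoop(peers, address):
    """Return (peer_id, level) of the first cache holding `address`,
    or None if no peer has it. `peers` maps a peer name to a list of
    per-level sets of cached addresses (level 1 first)."""
    for peer_id, cache_levels in peers.items():
        # Without a combined directory, every level must be searched.
        for level, contents in enumerate(cache_levels, start=1):
            if address in contents:
                return (peer_id, level)
    return None
```

For example, with two peers each holding two cache levels, a snoop for an address present only in a peer's level-2 cache still requires inspecting that peer's level-1 cache first, which is the cost the disclosure aims to reduce.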

The need to support such interrogations complicates the design of lower-level caches, making them larger (e.g., by adding additional read ports to the SRAM arrays) or slower (e.g., a read port must be shared between processor requests and snoop interrogations).

One existing alternative design maintains cache inclusion between the higher-level cache and the lower-level cache. This is often combined with a write-through design for the lower-level cache. This means that the most recently modified copy of any datum is guaranteed to reside in the upper-level cache, obviating the need to interrogate the lower-level cache. The write-through design, however, requires considerable bandwidth: every write operation must be transmitted to the upper-level cache. This problem is exacerbated...
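The directory scheme stated in the abstract can be sketched as follows. This is a minimal software model, not the disclosed hardware implementation; the class name, directory sizes, and method names are assumptions made for illustration. The key point it demonstrates is that each low-level directory entry stores a small index into the high-level directory instead of a wide address tag, which is consistent with inclusion because every low-level line must already have a high-level entry.

```python
# Illustrative model (not from the disclosure): a two-level cache
# directory in which low-level "tags" are indexes into the high-level
# directory rather than addresses.
class TwoLevelDirectory:
    def __init__(self, hi_entries, lo_entries):
        # High-level directory: conventional, tags are addresses.
        self.hi = [None] * hi_entries   # hi[i] = address, or None
        # Low-level directory: tags are indexes into self.hi.
        self.lo = [None] * lo_entries   # lo[j] = index into hi, or None

    def hi_install(self, index, address):
        # Install a line in the high-level directory.
        self.hi[index] = address

    def lo_install(self, lo_slot, hi_index):
        # Inclusion guarantee: a line can be tracked at the low level
        # only if it is already present in the high-level directory.
        assert self.hi[hi_index] is not None
        self.lo[lo_slot] = hi_index

    def lo_lookup(self, address):
        # A low-level hit is detected by following each low-level tag
        # to the high-level entry it indexes, so the wide address tag
        # is stored only once, in the high-level directory.
        for slot, hi_index in enumerate(self.lo):
            if hi_index is not None and self.hi[hi_index] == address:
                return slot
        return None
```

As a usage example, installing an address at high-level index 3 and pointing a low-level slot at that index lets a low-level lookup of the address succeed while the low-level directory itself never stores the address.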