One-Cycle CACHE Design
Original Publication Date: 1988-Dec-01
Included in the Prior Art Database: 2005-Feb-15
A technique is described whereby a cache memory is organized to achieve one-cycle cache operation using off-the-shelf static random-access memory (RAM) devices. The method achieves one-cycle cache accessing by exploiting the most recently used (MRU) hit-ratio concepts of the prior art [*].

(Image Omitted)

The cache array, as shown in Fig. 1, is organized to support double-word (DW) accessing between the CPU, the main memory unit (MMU), and the cache. The main storage data bus width is reduced to 64 bits, so any data movement between the cache and storage must go through the cache data bus. Therefore, even with line buffers in the MMU, the advantage of a main storage bus wider than the cache bus is very small in a uniprocessor environment.
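The MRU idea above can be sketched in software: the cache speculatively reads the most recently used way of a set in the same cycle as the tag compare, so an MRU hit completes in one cycle, while a hit in another way or a miss costs extra cycles. The following is a minimal illustrative model, not the disclosed circuit; the set count, way count, double-word width, and cycle costs are assumptions chosen for the example.

```python
class MRUCache:
    """Toy 2-way set-associative cache with MRU-way prediction (assumed parameters)."""

    def __init__(self, num_sets=64, ways=2):
        self.num_sets = num_sets
        self.ways = ways
        # tags[set][way] holds the tag cached in each way; None = invalid.
        self.tags = [[None] * ways for _ in range(num_sets)]
        # mru[set] points at the most recently used way of that set.
        self.mru = [0] * num_sets

    def access(self, addr):
        """Return the assumed cycle count for a double-word access at byte address `addr`.

        1 cycle  : MRU way holds the line (speculative read was correct)
        2 cycles : another way holds it (re-read after the tag compare)
        10 cycles: miss; line refilled over the 64-bit cache data bus (assumed penalty)
        """
        index = (addr // 8) % self.num_sets   # 8-byte (double-word) granules
        tag = addr // (8 * self.num_sets)
        ways = self.tags[index]
        if ways[self.mru[index]] == tag:
            return 1                          # one-cycle MRU hit
        for w, t in enumerate(ways):
            if t == tag:
                self.mru[index] = w           # non-MRU hit: pay one extra cycle
                return 2
        victim = 1 - self.mru[index]          # replace the least recently used way
        ways[victim] = tag
        self.mru[index] = victim
        return 10                             # assumed miss penalty
```

Because programs tend to re-touch the same line repeatedly, most accesses land on the MRU way and finish in a single cycle, which is the hit-ratio argument the technique relies on.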