Distributed Cache for Network Computers Disclosure Number: IPCOM000014155D
Original Publication Date: 2000-Mar-01
Included in the Prior Art Database: 2003-Jun-19

Publishing Venue



Disclosed is a method for automatically sizing a Network Computer L2 (Level 2) cache in proportion to the size of the system's memory. As computing devices shrink, users continue to expect increases in function and performance. It is increasingly difficult to implement all required computing functions on ever-smaller circuit boards, so techniques must be developed that increase performance while decreasing the physical size of the computing system. This invention implements and distributes the L2 cache across pluggable system memory modules, saving room on the system board, saving power and cost in smaller systems, and improving (or at least maintaining) performance when a user increases system memory size.

According to this invention, cache is implemented on and across memory modules (e.g., DIMMs, SIMMs). This cache can be implemented in discrete DRAM or SRAM technology, and cache-enabled memory modules can have a form factor and footprint similar to many present-day DIMMs and SIMMs. Thus, as a user increases system memory, the L2 cache size automatically grows with the insertion of each additional DIMM.

In the preferred embodiment, the cache is distributed at a finer granularity than described above: it is implemented at the chip level, potentially improving system performance beyond any implementation at the memory module (DIMM or SIMM) level. In the case of DRAM, all of the cache can be implemented as a single page on each memory chip. The tag and store functions of the cache are implemented directly in the memory chip architecture, alongside the traditional row and column blocks of memory. This "memory chip integrated" cache page will get hits similar to a traditional or "on planar" cache for most cache accesses, so the page will remain open for most accesses.
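The automatic scaling behavior can be illustrated with a small sketch. The per-module memory and cache sizes below are illustrative assumptions; the disclosure does not specify a particular ratio of cache to memory per module.

```python
# Sketch: total L2 cache grows automatically as memory modules are added.
# Sizes are illustrative assumptions, not taken from the disclosure.

class CacheEnabledDIMM:
    """A memory module that carries its own slice of the distributed L2 cache."""
    def __init__(self, memory_mb, cache_kb):
        self.memory_mb = memory_mb
        self.cache_kb = cache_kb

class MemorySystem:
    def __init__(self):
        self.slots = []

    def insert(self, dimm):
        self.slots.append(dimm)

    @property
    def total_memory_mb(self):
        return sum(d.memory_mb for d in self.slots)

    @property
    def total_l2_cache_kb(self):
        # Cache capacity is the sum of the per-module slices, so adding
        # memory automatically adds L2 cache in the same proportion.
        return sum(d.cache_kb for d in self.slots)

system = MemorySystem()
for _ in range(2):
    system.insert(CacheEnabledDIMM(memory_mb=64, cache_kb=256))

print(system.total_memory_mb)    # 128
print(system.total_l2_cache_kb)  # 512
```

Inserting a third module would raise both totals together, with no change to the system board.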
When a cache hit is detected at a particular DRAM chip, the traditional row and column precharge and set-up times can be skipped, and the data becomes immediately available to the processor on the memory bus. If a memory word is spread across several DRAM chips, the integrated cache on each chip works in unison to quickly provide the accessed data.

The traditional L2 cache solution provides a single cache next to the processor on the motherboard, which must grow as system memory size increases if performance is to be maintained. With this invention, a user automatically gets a larger cache with larger memory, and the system motherboard can be designed in a smaller form factor because the cache function moves out across the system memory modules.
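The latency benefit of the integrated cache page can be sketched as follows, modeling the open page on each chip as a one-entry cache. The cycle counts are illustrative assumptions, not values from the disclosure.

```python
# Sketch: a DRAM chip whose most recently opened row serves as the
# "memory chip integrated" cache page. A hit skips the row/column
# precharge and set-up steps; a miss pays the full access sequence.
# Cycle counts are illustrative assumptions.

PRECHARGE_CYCLES = 3      # close the previously open row
ACTIVATE_CYCLES = 3       # open (activate) the requested row
COLUMN_ACCESS_CYCLES = 2  # read the word from the open row

class DRAMChip:
    def __init__(self):
        self.open_row = None  # tag identifying the cached (open) page

    def access(self, row):
        """Return the cycle cost of reading a word from `row`."""
        if row == self.open_row:
            # Hit in the integrated cache page: data goes straight to
            # the memory bus, no precharge or activation needed.
            return COLUMN_ACCESS_CYCLES
        # Miss: close the old page, open the new one, then read.
        self.open_row = row
        return PRECHARGE_CYCLES + ACTIVATE_CYCLES + COLUMN_ACCESS_CYCLES

chip = DRAMChip()
print(chip.access(row=7))  # first access misses: 8 cycles
print(chip.access(row=7))  # page hit: 2 cycles
```

When a word is striped across several chips, each chip performs the same check in parallel, so a hit on all chips delivers the full word at the reduced latency.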