
Distributed Cache for Network Computers

IP.com Disclosure Number: IPCOM000014155D
Original Publication Date: 2000-Mar-01
Included in the Prior Art Database: 2003-Jun-19
Document File: 1 page(s) / 39K

Publishing Venue

IBM

Abstract

Disclosed is a method for automatically sizing a Network Computer L2 (Level 2) cache with respect to the size of the system's memory. As computing devices shrink, users continue to expect increases in function and performance. It is increasingly difficult to implement all required computing functions on ever-smaller circuit boards, so techniques must be developed that increase performance while decreasing the physical size of the computing system. This invention implements the L2 cache distributed on and across pluggable system memory modules, saving room on the system board, saving power and cost in smaller systems, and improving (or maintaining) performance when a user decides to increase the system memory size.

This is the abbreviated version, containing approximately 55% of the total text.


According to this invention, cache is implemented on and across memory modules (e.g., DIMMs, SIMMs). This cache can be implemented in discrete DRAM or SRAM technology, and cache-enabled memory modules can have a form factor and footprint similar to many present-day DIMMs and SIMMs. Thus, as a user increases the system memory size, the L2 cache size automatically grows with the insertion of each additional DIMM.
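The scaling behavior described above can be sketched in a small software model. This is an illustrative sketch only, not part of the disclosure: the module sizes, the per-module cache capacity, and the class names are all assumptions chosen to show how total L2 capacity tracks installed memory when each pluggable module carries its own cache slice.

```python
# Hypothetical model of module-resident L2 cache (names and sizes are
# illustrative assumptions, not taken from the disclosure).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class MemoryModule:
    dram_mb: int   # main-memory capacity carried by the module
    cache_kb: int  # L2 cache slice implemented on the same module

@dataclass
class System:
    modules: list = field(default_factory=list)

    @property
    def memory_mb(self) -> int:
        return sum(m.dram_mb for m in self.modules)

    @property
    def l2_cache_kb(self) -> int:
        # The cache grows automatically with each inserted module;
        # no board-level cache resizing is needed.
        return sum(m.cache_kb for m in self.modules)

# Inserting a second DIMM doubles both memory and L2 cache capacity.
base = System([MemoryModule(dram_mb=32, cache_kb=256)])
upgraded = System(base.modules + [MemoryModule(dram_mb=32, cache_kb=256)])
print(base.memory_mb, base.l2_cache_kb)          # 32 MB memory, 256 KB L2
print(upgraded.memory_mb, upgraded.l2_cache_kb)  # 64 MB memory, 512 KB L2
```

The key property the model captures is that cache capacity is a function of the installed modules rather than a fixed board-level resource, so an upgrade that adds memory also adds proportional cache.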

In the preferred embodiment, the cache is distributed at a level finer than that described above. Here, the cache is implemented at the chip level, potentially improving system performance even beyond any implementation at the higher memory module (DIMM or SIMM) level...