Partial Good Cache Support Through Variable Set Size

IP.com Disclosure Number: IPCOM000100669D
Original Publication Date: 1990-May-01
Included in the Prior Art Database: 2005-Mar-16
Document File: 2 page(s) / 70K

Publishing Venue

IBM

Related People

Dorfman, BL: AUTHOR [+2]

Abstract

Disclosed is a design supporting partial good cache through variable set size. In processor systems with cache designs, the yield and reliability of the cache arrays are a problem. This article provides a novel solution.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 75% of the total text.

Partial Good Cache Support Through Variable Set Size
      This processor design implements a 64K-byte, 4-way set-associative
data cache.  The data cache is contained on four VLSI chips, where each
chip holds one quarter of the cache for each of the four sets.  The
yield for a chip with 16K bytes of array is low.  To solve this
problem, the processor was designed to support partial good cache
arrays on a per-set basis.  The cache chips are built, tested, and then
marked with the number and position of the good array sets.  A
processor card is built by mixing and matching the good array sets,
and the customer is charged a premium according to the amount of good
data cache contained on the processor card.  This design gives the
added bonus of increased system reliability: since the specification of
good/bad cache array sets is programmable by software, the system
diagnostics, upon finding a previously good set bad, can reconfigure
the processor to avoid the bad cache array.

      The design for supporting partial good cache through variable
set size is illustrated in the following figure.  All caches work on
the same principle of cache line replacement through an LRU/MRU (Least
Recently Used/Most Recently Used) algorithm.  This de...