
Alternative method for maintaining a non-volatile cache Disclosure Number: IPCOM000031018D
Original Publication Date: 2004-Sep-07
Included in the Prior Art Database: 2004-Sep-07
Document File: 5 page(s) / 72K

This article describes a method of preserving a large data cache across a power outage. The advantages compared to the current state of the art are described.

This is the abbreviated version, containing approximately 27% of the total text.


Alternative method for maintaining a non-volatile cache

Storage controllers and appliances often make use of an area of memory called a "fast write cache". Once a write destined for a disk drive has arrived in this memory, the storage controller/appliance can signal to the initiating system that the write has completed, as if it had actually reached the disk. This can result in substantial performance benefits, as the write to cache memory will be significantly faster than the write to the disk. The write data can then be destaged from cache memory to disk at a later point in time.
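The fast-write behaviour described above can be sketched in a few lines. This is an illustrative model only, not an implementation from the disclosure: the class and method names (`FastWriteCache`, `write`, `destage`) are invented for the example.

```python
class FastWriteCache:
    """Toy model of a fast write cache: writes are acknowledged from
    volatile memory and copied down to disk later."""

    def __init__(self):
        self.dirty = {}  # block address -> data awaiting destage

    def write(self, lba, data):
        # Store in fast (volatile) cache memory and acknowledge at once;
        # the host sees the write as complete before any disk I/O occurs.
        self.dirty[lba] = data
        return "ack"

    def destage(self, disk):
        # At a later point in time, copy dirty blocks down to the slow disk.
        for lba, data in list(self.dirty.items()):
            disk[lba] = data
            del self.dirty[lba]

disk = {}
cache = FastWriteCache()
assert cache.write(42, b"payload") == "ack"  # acknowledged immediately
assert 42 not in disk                        # data not yet on disk
cache.destage(disk)
assert disk[42] == b"payload"                # now persisted
```

The model also makes the hazard discussed next visible: if power is lost while `self.dirty` is non-empty, acknowledged data that never reached the disk would be lost.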

    The operation as described above is only valid if the data cache can be considered non-volatile: that is, the data will be preserved in the case of an unexpected power-down. If this is not the case, then the customer data that the controller/appliance has signalled as safely written to disk will be lost. It is rare to find a cache system built from memory which is inherently non-volatile: these types of storage have too slow a write rate to provide a performance advantage. Instead, a volatile memory medium is used (such as fast DRAM), which is then made non-volatile in the case of an unexpected power outage.

    There are two methods in common use for preserving the contents of volatile memory in the event of an unexpected power-down.

Method (1) uses a backup battery to keep power applied solely to the cache memory. Typically, this memory will be placed in a low-power state, in which its contents are preserved but read/write operations are disabled. The main disadvantages of this scheme are:

Data can only be preserved for a finite length of time (until the battery runs out). Most storage systems implementing this method of backup will guarantee 72 hours of cache retention.

As memory densities have increased, cache sizes have grown. Although memory densities have grown exponentially with time, the power required per memory bit to hold the memory in its standby state has remained more or less constant. Thus, the standby battery power required to preserve the cache data is growing with time, and at a rate faster than the rate of growth of battery technology energy density. This means that the volume required to hold the backup battery is getting larger. For example, a 1998 product with a 32MB EDO DRAM cache required 2 AAA (10.5dia x 44.5mm) cells to achieve a 168-hour backup; a 2004 product proposal has a 4GB DDR-2 SDRAM cache and requires 24 long-fat AA (18.2dia x 67mm) cells. This equates to a volumetric increase of 110x to support a cache increase of 128x for the same backup period, so the relationship between cache size and required backup battery volume has been almost linear.
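The growth figures above can be sanity-checked from the stated cell dimensions. Note that treating each cell as a bare cylinder ignores holders and pack enclosure, so the ratio computed here is only a lower bound on the roughly 110x pack-volume increase quoted in the text; the 54x figure below reflects bare-cell volume alone.

```python
import math

def cylinder_volume_mm3(diameter_mm, length_mm):
    """Volume of a cylindrical cell, ignoring packaging."""
    return math.pi * (diameter_mm / 2) ** 2 * length_mm

# Cache growth: 4GB DDR-2 SDRAM vs 32MB EDO DRAM.
cache_growth = (4 * 1024) / 32

# Battery packs: 2 x AAA (1998) vs 24 x long-fat AA (2004),
# using the cell dimensions given in the text.
aaa_pack = 2 * cylinder_volume_mm3(10.5, 44.5)
aa_pack = 24 * cylinder_volume_mm3(18.2, 67.0)

print(round(cache_growth))          # 128x more cache
print(round(aa_pack / aaa_pack))    # 54x more bare-cell volume
```

Either way, the battery volume grows roughly in step with cache size rather than being amortised, which is the disadvantage the text is describing.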

Method (2) uses a backup battery to keep power applied to the entire system only for long enough for the system to write the cache data to some non-volatile medium, e.g. a disk drive. The battery power is then removed and the system powers down. When mains power returns,...
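The dump-and-restore cycle of method (2) can be sketched as follows. The event handlers, the JSON dump format, and the file path are all invented for illustration; a real controller would write the cache image to a reserved region of a disk drive or other non-volatile medium.

```python
import json

# Dirty write data held in volatile cache memory (illustrative contents).
VOLATILE_CACHE = {"0x10": "data-a", "0x20": "data-b"}

# Stand-in for a reserved area on a non-volatile medium such as a disk drive.
DUMP_PATH = "cache_dump.json"

def on_power_fail():
    # The backup battery keeps the whole system running just long enough
    # to write the cache image out, after which power can be removed.
    with open(DUMP_PATH, "w") as f:
        json.dump(VOLATILE_CACHE, f)

def on_power_restore():
    # When mains power returns, reload the preserved image into cache
    # memory before resuming normal operation.
    with open(DUMP_PATH) as f:
        return json.load(f)

on_power_fail()
assert on_power_restore() == VOLATILE_CACHE
```

Unlike method (1), the battery here only has to bridge the dump interval, not the whole retention period, which is why the data can then be preserved indefinitely.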