
Multi-Section Method for Data Cache Management

IP.com Disclosure Number: IPCOM000120618D
Original Publication Date: 1991-May-01
Included in the Prior Art Database: 2005-Apr-02
Document File: 3 page(s) / 85K

Publishing Venue

IBM

Related People

Devarakonda, M: AUTHOR [+2]

Abstract

Disclosed is a method for data cache management, where the term "data cache" is used here to refer to memory managed by a file system, database system, disk control unit, etc., to store recently used file or disk blocks for possible reuse. Currently, data caches are typically organized as a single section where blocks are logically arranged in the least recently used (LRU) order, and the least recently used block is replaced when necessary. The multi-section organization described here can improve performance by making better replacement choices.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 56% of the total text.

Multi-Section Method for Data Cache Management

      Disclosed is a method for data cache management, where
the term "data cache" is used here to refer to memory managed by a
file system, database system, disk control unit, etc., to store
recently used file or disk blocks for possible reuse. Currently, data
caches are typically organized as a single section where blocks are
logically arranged in the least recently used (LRU) order, and the
least recently used block is replaced when necessary.  The
multi-section organization described here can improve performance by
making better replacement choices.

      In this multi-section method the data cache is divided into
four sections, referred to as NEW, SAVE1, SAVE2, and OLD, as shown in
Fig. 1.  Cache blocks in each section are arranged in LRU order, so
that each section has a most-recently used (MRU) and LRU position.
When a referenced file block is found in the cache (i.e., a cache
hit), the block is moved to the MRU position of a section according
to the transition-target function shown in Fig. 2. When a referenced
file block is not in the cache (i.e., a cache miss), the block is
brought into the cache and placed at the MRU position of the NEW
section.  Also, on a cache miss, a cache block may have to be
replaced if unused cache blocks are not available; in such a case,
the cache block at the LRU position of the OLD section is replaced.
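The hit, miss, and replacement behavior described above can be sketched as follows. Note that the transition-target function of Fig. 2 is not reproduced in this extracted text, so the TRANSITION table below is a purely illustrative assumption (promoting NEW -> SAVE1 -> SAVE2), as is the eviction fallback when the OLD section happens to be empty; the disclosure's actual transition and demotion rules are given in the figures and the soft-limit discussion.

```python
from collections import OrderedDict

class MultiSectionCache:
    """Sketch of the four-section data cache: NEW, SAVE1, SAVE2, OLD.

    Each section is an OrderedDict kept in LRU order (last entry is
    the MRU position, first entry is the LRU position).
    """

    SECTIONS = ("NEW", "SAVE1", "SAVE2", "OLD")
    # Hypothetical stand-in for the transition-target function of
    # Fig. 2 (not present in the extracted text): on a hit, a block
    # is promoted one SAVE level and capped at SAVE2.
    TRANSITION = {"NEW": "SAVE1", "SAVE1": "SAVE2",
                  "SAVE2": "SAVE2", "OLD": "SAVE2"}

    def __init__(self, capacity):
        self.capacity = capacity
        self.sections = {name: OrderedDict() for name in self.SECTIONS}

    def _find(self, key):
        for name, section in self.sections.items():
            if key in section:
                return name
        return None

    def reference(self, key):
        """Reference a block; return True on a cache hit."""
        name = self._find(key)
        if name is not None:
            # Cache hit: move the block to the MRU position of the
            # section chosen by the transition-target function.
            value = self.sections[name].pop(key)
            self.sections[self.TRANSITION[name]][key] = value
            return True
        # Cache miss: replace a block if no unused blocks remain,
        # then place the new block at the MRU position of NEW.
        if sum(len(s) for s in self.sections.values()) >= self.capacity:
            self._evict()
        self.sections["NEW"][key] = True
        return False

    def _evict(self):
        # Replace the block at the LRU position of the OLD section.
        # If OLD is empty, fall back to the next non-empty section
        # (an assumption for this sketch; the disclosure's soft
        # limits would normally keep blocks flowing into OLD).
        for name in reversed(self.SECTIONS):
            if self.sections[name]:
                self.sections[name].popitem(last=False)  # LRU end
                return
```

For example, with a capacity of three blocks, referencing a block twice promotes it out of NEW, and a subsequent miss in a full cache replaces a block rather than growing the cache.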

      Each section has a soft limit on its size.  Initially, each
sec...