
Direct Access Storage Device Cache Segment Management

IP.com Disclosure Number: IPCOM000113445D
Original Publication Date: 1994-Aug-01
Included in the Prior Art Database: 2005-Mar-27
Document File: 6 page(s) / 233K

Publishing Venue

IBM

Related People

Beardsley, BC: AUTHOR [+4]

Abstract

A method for DASD cache management is disclosed. A Segment LRU cache management algorithm is proposed as a refinement to managing data on a track basis. When the format of a track can be predicted by the DASD cache subsystem, a Segment Caching scheme is proposed. A method to determine when to use Segment Caching versus track caching is given. Both cache management policies can coexist with each other and can also coexist with track caching.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 22% of the total text.

Direct Access Storage Device Cache Segment Management

      High performance DASD subsystems today include random access
memory to store recently referenced data in hopes of not having to
suffer the mechanical delays of DASD on subsequent accesses.  This
memory, commonly called a cache, is usually managed on a physical
DASD track basis, typically with a least recently used (LRU)
replacement algorithm.  Advanced subsystems such as the IBM 3990
Models 3 and 6 use segmentation hardware to make more efficient use
of the cache.  Data is stored from the first record referenced in the
chain to the end of the track, a technique called partial track
staging.  Disjoint cache segments can be used to store a track image,
and the segments for a track are usually managed as a single, logical
entity.
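As an illustration of partial track staging only (segment size, track geometry, and the function name here are assumptions, not the 3990's actual layout), a track image built from disjoint, segment-sized pieces might be sketched as:

```python
# Hypothetical model of partial track staging: a track image is a set
# of disjoint cache segments holding records from the first record
# referenced through the end of the track.  Sizes are illustrative.
SEGMENT_SIZE = 4    # records per cache segment (assumption)
TRACK_RECORDS = 14  # records per physical track (assumption)

def stage_partial_track(first_record):
    """Return the segments holding records first_record..end of track."""
    records = list(range(first_record, TRACK_RECORDS))
    # Split the staged records into disjoint, segment-sized pieces.
    return [records[i:i + SEGMENT_SIZE]
            for i in range(0, len(records), SEGMENT_SIZE)]

# Staging from record 5 of a 14-record track fills three segments;
# together those segments form one track image.
segments = stage_partial_track(first_record=5)
```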

      The simplest form of segment caching can be performed by
chaining each segment associated with a physical track onto the LRU
list rather than having one entry on the LRU list for the entire
track.  When a segment is re-referenced by an application, it is
promoted to the top of the LRU list independently of the other
segments associated with the physical track it represents.  This
means that all segments of a track's cache image are subject to
individual LRU demotion, which tends to keep the most recently
referenced data in cache at segment granularity rather than physical
track granularity.  Improved cache hit ratios should result from this
"Segment LRU" cache management algorithm, which in turn leads to
increased I/O performance.  A cache hit ratio is simply the ratio of
accesses to a cached subsystem that were found in the cache, called
hits, to all accesses to the cached subsystem, both hits and misses.
The higher the cache hit ratio, the more effective the cache
management policy.
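The per-segment promotion and the hit-ratio definition above can be sketched as follows.  This is a minimal Python model that assumes an ordered map as the LRU list; names and structure are illustrative, not the subsystem's implementation:

```python
from collections import OrderedDict

class SegmentLRU:
    """Each (track, segment) pair is an independent LRU entry."""
    def __init__(self, capacity):
        self.capacity = capacity      # max segments held in cache
        self.lru = OrderedDict()      # oldest entry first, newest last
        self.hits = self.misses = 0

    def reference(self, track, segment):
        key = (track, segment)
        if key in self.lru:
            self.hits += 1
            self.lru.move_to_end(key)         # promote only this segment
        else:
            self.misses += 1
            if len(self.lru) >= self.capacity:
                self.lru.popitem(last=False)  # demote the LRU segment
            self.lru[key] = True

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

Because each segment is promoted independently, a re-referenced segment of a track survives in cache while that track's cold segments fall to the bottom of the list and are demoted individually.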

The Segment LRU cache algorithm can be described in more detail as
follows:

o   When a cache track miss occurs, allocate cache segments as is
    currently done for a partial track stage, but chain each segment
    to the LRU list individually.  Promote each segment to the top of
    the LRU list at the time of track image allocation and staging.

o   All segments of a track's cache image are subject to individual
    LRU demotion.  Once all of its segments have been demoted, the
    track image itself has been demoted.

o   As individual segments reach the bottom of the LRU list:

    1.  Modified records for the segment must be written to DASD.
        Fo...
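As far as the steps above go (the source text is truncated after the destage requirement), demotion of a segment at the bottom of the LRU list might be sketched as follows.  The data layout and names are assumptions, and only the destage-before-free behavior stated above is modeled:

```python
# Hypothetical demotion handler; names and data layout are assumptions.
def demote_segment(track_segments, dasd, track, seg_index, segment):
    """Handle a segment reaching the bottom of the LRU list."""
    # Modified records for the segment must be written (destaged)
    # to DASD before the segment is freed.
    for rec, data in segment["records"].items():
        if rec in segment["modified"]:
            dasd[(track, rec)] = data
    # Free the segment.  When the last segment is gone, the whole
    # track image has been demoted from cache.
    track_segments[track].discard(seg_index)
    return len(track_segments[track]) == 0  # True => track image demoted
```

A segment with no modified records can simply be freed; the return value reflects the earlier point that demoting a track's final segment demotes the track image as a whole.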