Disk Caching Bypass And Overlapped Request Synchronization In an Interrupt-driven, Multitasking Environment

IP.com Disclosure Number: IPCOM000099702D
Original Publication Date: 1990-Feb-01
Included in the Prior Art Database: 2005-Mar-15
Document File: 9 page(s) / 343K

Publishing Venue

IBM

Related People

Krantz, JI: AUTHOR [+4]

Abstract

This article describes the use of a cache bypass and the "in use" data structure in a multiprocessing system.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 20% of the total text.


      A multiprogramming/multitasking operating system that supports
functions specifically for moving data between primary storage and
disk (i.e., a segment swapper supporting memory overcommit) can flood
a disk cache and flush useful pages to which the principle of
locality applies.  The design disclosed herein includes a cache
bypass interface that allows functions such as the swapper to
circumvent the caching support for disk I/O in a coordinated fashion.
This interface is surfaced at the file system application programming
interface (API) layer and conveyed through to the interface layer
between the file system and the disk device driver.
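This excerpt does not specify the bypass interface itself.  The C sketch
below shows one plausible way such a request could be conveyed from the
file-system API layer down to the device driver; all names here
(REQ_BYPASS_CACHE, fs_read, driver_read, struct disk_request) are
hypothetical illustrations, not the disclosed interface.

```c
#include <string.h>

#define REQ_BYPASS_CACHE 0x01   /* hypothetical flag: skip the disk cache */

struct disk_request {
    unsigned long sector;       /* starting sector of the request */
    unsigned int  count;        /* number of 512-byte sectors */
    unsigned int  flags;        /* REQ_* flags, conveyed through each layer */
    char         *buffer;       /* caller's data buffer */
};

/* Device-driver layer: honors the flag by going straight to the disk,
 * neither consulting nor populating the cache. */
static int driver_read(struct disk_request *req, const char *disk, char *cache)
{
    if (req->flags & REQ_BYPASS_CACHE) {
        memcpy(req->buffer, disk + req->sector * 512, req->count * 512);
        return 0;
    }
    /* Normal cached path: fill the cache, then satisfy the request. */
    memcpy(cache + req->sector * 512, disk + req->sector * 512,
           req->count * 512);
    memcpy(req->buffer, cache + req->sector * 512, req->count * 512);
    return 0;
}

/* File-system API layer: a caller such as the swapper requests bypass,
 * and the flag is passed through unchanged to the driver. */
static int fs_read(unsigned long sector, unsigned int count, char *buf,
                   int bypass, const char *disk, char *cache)
{
    struct disk_request req;
    req.sector = sector;
    req.count  = count;
    req.flags  = bypass ? REQ_BYPASS_CACHE : 0;
    req.buffer = buf;
    return driver_read(&req, disk, cache);
}
```

Carrying the flag inside the request structure lets each intermediate
layer pass it through untouched, which captures the idea of surfacing
the bypass at the API and conveying it to the driver.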

      The multiprogramming/multitasking operating system problem of
synchronizing the processing of multiple threads of execution in
various stages of disk I/O affecting the cache is also solved.  The
threads include an interrupt thread signaling the completion of an
I/O processing phase, multiple task time threads for normal disk I/O
with caching, and task time threads attempting to bypass the cache to
perform disk I/O.  A cache page "in use" data structure is defined
for combined use with the conventional cache page "cached" indication
data structure.  Together, these two data structures define separate
states for the cache pages: "in use", "cached", and "cached and in
use".  These data structures are based upon the physical organization
of the physical disk space and can span multiple disk drives.
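As a rough illustration of the combined-map idea, the C sketch below
derives the per-page states from two bit maps, one bit per physical 2K
page.  The map names, the page count, and the helper functions are
assumptions for illustration, not the disclosed data structures.

```c
#define NPAGES 8192   /* physical 2K pages covered (illustrative size) */

static unsigned char in_use_map[NPAGES / 8];   /* one bit per page */
static unsigned char cached_map[NPAGES / 8];   /* one bit per page */

enum page_state { P_NEITHER, P_IN_USE, P_CACHED, P_CACHED_IN_USE };

static void set_bit(unsigned char *map, unsigned p)
{
    map[p >> 3] |= (unsigned char)(1u << (p & 7));
}

static int test_bit(const unsigned char *map, unsigned p)
{
    return (map[p >> 3] >> (p & 7)) & 1;
}

/* The combination of the two maps yields one of the separate states. */
static enum page_state page_state(unsigned p)
{
    int u = test_bit(in_use_map, p);
    int c = test_bit(cached_map, p);
    if (u && c) return P_CACHED_IN_USE;  /* cached page with I/O in flight */
    if (u)      return P_IN_USE;         /* I/O in flight, not cached */
    if (c)      return P_CACHED;         /* resident in the cache, idle */
    return P_NEITHER;
}
```

Because both maps are indexed by physical page number, a thread on any
path (interrupt, cached task time, or bypass task time) can consult the
same two bits to decide whether a page conflict exists.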

      Referring to the drawings, Fig. 1 illustrates in block diagram
the caching data structures of bit maps and descriptors.  Fig. 2
illustrates in block diagram the caching data structures.  Fig. 3
shows the least recently used (LRU) chain within the conflict lists.
The cache memory is organized into three sections as follows:
      1.   This section contains two bit maps holding information on
every physical 2K page of physical disk storage.  The size of this
section is determined by the amount of physical fixed disk storage.
These two bit maps are called the "in use" physical page map and the
"cached" physical page map.  For 140 megabytes of fixed disk storage,
less than 18K of storage is needed for these maps.
      2.   This section describes the cache organization.  It is
limited to 64K in size.  It contains cache page descriptors.  There
is one cache page descriptor for each 2K cache page.  It also
contains the hashing table and the linked list anchors for the cache
page descriptors.  The cache page descriptors are separated from the
cache pages for two reasons.  First, in a bimodal device driver
environment the overhead associated with establishing addressability
to many different segments above th...