Scheme to Bypass Cache for Big, One Time Reads

IP.com Disclosure Number: IPCOM000106494D
Original Publication Date: 1993-Nov-01
Included in the Prior Art Database: 2005-Mar-21
Document File: 2 page(s) / 78K

Publishing Venue

IBM

Related People

Knox, T: AUTHOR [+3]

Abstract

It was found that big data reads from a LAN server were very slow when the data was not already in cache and had to come from disk, and slower still when the cache was full.  Also, caching one-time reads was a costly waste of the cache.

This is the abbreviated version, containing approximately 52% of the total text.

      It was found that big data reads from a LAN server were very
slow when the data was not already in cache and had to come from
disk, and slower still when the cache was full.  Also, caching
one-time reads was a costly waste of the cache.

      The IBM LAN Server 2.0 Advanced Server package includes a
module (HPFS386) that is both a filesystem and a file server.  The
filesystem keeps an in-memory cache of the file data on disk.  Of
course, the cache can hold only a small portion of the disk data;
therefore, only the most recently read or written data is kept in
cache.

      The filesystem provides an interface to the server that allows
the server to transmit data directly from cache onto the network.
Without transmitting directly from cache, data must be copied from
cache into a server buffer from which the data can be transmitted
onto the network.  For big read requests, for example 64KB, the
server uses the cache transfer mechanism, saving the costly time it
takes to copy the data.

      A performance analysis showed that there were times when the
cache transfer mechanism for big read requests was extremely slow.
These were times when the data being requested was not in cache, and
thus had to be read off disk before it could be transmitted.  These
poorly performing reads of big data off disk were made even worse
when the cache was full, which meant existing cache data had to be
written to disk before the read could commence.

      Therefore, it was realized that performance could be improved
if the cache could be bypassed in these big read cases.  Instead of
reading into cache, the data would be read into a server buffer and
transmitted onto the network from the buffer.  But the idea of not
reading a file into cache defeats the purpose of the cache if there
will be requests for that file in the near future.  Therefore, it was
realized that big read requests should bypass the cache only on files
that are most likely being read just one time.

      The following section describes the scheme used to bypass the
cache for big read requests.

      The server receive...