Multiple Caching Schemes in a LAN-Attached Server

IP.com Disclosure Number: IPCOM000115258D
Original Publication Date: 1995-Apr-01
Included in the Prior Art Database: 2005-Mar-30
Document File: 4 page(s) / 121K

Publishing Venue

IBM

Related People

Eshel, M: AUTHOR [+4]

Abstract

Disclosed is the process by which the Workstation LAN File Services/VM (WLFS) Front End Processor ("FEP") uses two different caching algorithms to optimize memory usage and throughput to clients.

This is the abbreviated version, containing approximately 47% of the total text.


      Disclosed is the process by which the Workstation LAN File
Services/VM (WLFS) Front End Processor ("FEP") uses two different
caching algorithms to optimize memory usage and throughput to
clients.

      In the WLFS product, SMB-based clients (clients using the
Server Message Block protocol to access remote services from a
server) are handled by the host server through an enhanced OS/2 LAN
Server "Front End Processor".  This front-end server handles the
NetBIOS LAN communications and can provide file and other services
to clients from local disks and devices as it normally does.
Additionally, with the WLFS enhancements, it serves as a
pipeline and filter for client requests for host file services.

      Clients of LAN Server and WLFS will request file operations,
typically opening a file, reading and/or writing the contents of that
file, and then closing the file.

The flow of information in the WLFS case is then:
  o  from the client to the LAN-attached server (FEP)
  o  from the FEP to the host
  o  response from the host to the FEP
  o  response forwarded from the FEP to the client

      In order to speed responses to a client, a caching subsystem
was added to the WLFS FEP.  In the original WLFS release this caching
subsystem was designed to cache file data blocks in 4K increments,
and to be persistent.  It was thus designed to mimic as closely as
possible a disk-block cache maintained by a device driver for a file
system.  However, note that the FEP file cache is remote from the
actual disk hardware and thus unaware of hardware concepts
such as disk block number.  Instead, the data blocks were labeled by
file and offset rather than by actual disk block number.  Since SMB
file read and write requests are made by clients on the basis of a
file handle and offset/length, this caching algorithm is able to
service client SMB file read/write requests efficiently.

      Note that the cache data blocks are 4K in length and are kept
on a 4K boundary.  Thus, arbitrary client reads may be serviced by
assembling data from multiple cache blocks.
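The mapping of an arbitrary offset/length read onto 4K-aligned cache blocks can be sketched as follows.  This is a minimal illustration, not the WLFS code; the `cache` dictionary and function names are assumptions, and handling of a cache miss (fetching from the host) is omitted.

```python
BLOCK_SIZE = 4096  # FEP cache blocks are 4K and kept on 4K boundaries


def blocks_covering(offset, length):
    """Return the block indices whose 4K blocks cover [offset, offset+length)."""
    first = offset // BLOCK_SIZE
    last = (offset + length - 1) // BLOCK_SIZE
    return list(range(first, last + 1))


def assemble_read(cache, file_id, offset, length):
    """Assemble an arbitrary client read from one or more cached 4K blocks.

    `cache` maps (file_id, block_index) to a 4096-byte block; in the real
    FEP a missing block would be fetched from the host (omitted here).
    """
    out = bytearray()
    for idx in blocks_covering(offset, length):
        block = cache[(file_id, idx)]
        base = idx * BLOCK_SIZE
        start = max(offset, base) - base              # clip to this block
        end = min(offset + length, base + BLOCK_SIZE) - base
        out += block[start:end]
    return bytes(out)
```

For example, a 10-byte read at offset 4090 spans the boundary between block 0 and block 1, so the result is spliced from the tail of one cached block and the head of the next.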

      Data cached in this algorithm was available to all clients
operating on the same file, and would be persistent across client
close of the file, until the blocks were reclaimed in an LRU scheme.
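A shared, persistent block cache with LRU reclaim, as described above, might look like the following sketch (an illustration under assumed names, not the WLFS implementation).  Because blocks are keyed by file and block index rather than by client, they survive a client's close of the file and remain visible to every client of that file until reclaimed.

```python
from collections import OrderedDict


class PersistentBlockCache:
    """Illustrative LRU cache of 4K file blocks shared by all clients.

    Blocks are keyed by (file_id, block_index), so cached data persists
    across a client's close of the file; when the cache is full, the
    least recently used block is reclaimed first.
    """

    def __init__(self, max_blocks):
        self.max_blocks = max_blocks
        self._blocks = OrderedDict()  # (file_id, block_index) -> data

    def put(self, file_id, block_index, data):
        key = (file_id, block_index)
        self._blocks[key] = data
        self._blocks.move_to_end(key)         # newest = most recently used
        while len(self._blocks) > self.max_blocks:
            self._blocks.popitem(last=False)  # reclaim least recently used

    def get(self, file_id, block_index):
        key = (file_id, block_index)
        if key not in self._blocks:
            return None                       # miss: fetch from the host
        self._blocks.move_to_end(key)         # touch for LRU ordering
        return self._blocks[key]
```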

      This caching scheme was most effective when clients attempt to
pass "small" (less than 32K) block requests.  At larger request
sizes, this caching scheme loses its advantage relative to a direct
pass-through to the host due to the time spent moving data into and
out of 4K cache buffers.
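The size-based trade-off above amounts to a simple dispatch: requests under the 32K figure quoted in the text go through the block cache, and larger requests bypass it.  The sketch below assumes hypothetical `cache_read` and `passthrough_read` callables standing in for the FEP's two service paths.

```python
SMALL_REQUEST_LIMIT = 32 * 1024  # threshold quoted in the text


def service_read(request, cache_read, passthrough_read):
    """Route small reads through the 4K block cache; send larger reads
    straight through to the host, avoiding the cost of copying data
    into and out of 4K cache buffers."""
    if request["length"] < SMALL_REQUEST_LIMIT:
        return cache_read(request)
    return passthrough_read(request)
```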

      Clients attempting to use multimedia interfaces tend to use
large amounts of data that must be delivered with minimal skew in
delivery rates.  Multimedia files, particularly video, are also
typically not going to benefit from a persistent data cache because
once a frame has been displayed, it is p...