
Efficient Management of Remote Disk Subsystem Data Duplexing

IP.com Disclosure Number: IPCOM000117168D
Original Publication Date: 1996-Jan-01
Included in the Prior Art Database: 2005-Mar-31
Document File: 4 page(s) / 129K

Publishing Venue

IBM

Related People

Cohn, O: AUTHOR [+9]

Abstract

A technique is disclosed for efficiently maintaining an update
consistent copy of a file system spanning many disks and control
units at a remote site.  The disclosed technique uses the notion of
Consistency Groups and manages update activity in a unique way such
that it has the following advantages:
  o  It does not require a complete serialization of all updates to
      the primary file system.
  o  It reduces the amount of interaction required for the purpose of
      serializing the different Control Units in the primary file
      system.
  o  It enables concurrent updates in the Remote Site.

This is the abbreviated version, containing approximately 41% of the total text.

Efficient Management of Remote Disk Subsystem Data Duplexing


      A file system is update consistent if it represents a state of
its files after applying a series of update operations in their
logical sequence.

Let L = [U_1, U_2, ..., U_n] be a logical sequence of updates,
where each U_i = (D_i, R_i) is an update to record R_i of device
D_i.  Given a logical sequence of updates L, we partition it into a
segmented sequence SL = [S_1, S_2, ..., S_l] of Consistency Groups,
S_1 = [U_{1,1}, U_{1,2}, ..., U_{1,m_1}],
S_2 = [U_{2,1}, U_{2,2}, ..., U_{2,m_2}], ...,
S_l = [U_{l,1}, U_{l,2}, ..., U_{l,m_l}], such that:
  1.  Applying the updates in the order defined by SL yields a
       file system equivalent to the one obtained by applying
       them in the order defined by L.
  2.  After the application of each Consistency Group S_i,
       the file system will be update consistent.
  3.  The order of applying updates within a given Consistency
       Group S_i is arbitrary.
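
The three properties above can be checked with a small sketch
(hypothetical Python, not part of the disclosure; the device and
record names are invented).  Updates are modeled as (device, record,
value) triples with last-writer-wins semantics, and the assertions
verify that applying the Consistency Groups in order, with any
ordering inside each group, reproduces the result of the logical
sequence L:

```python
from itertools import permutations

def apply_updates(state, updates):
    """Apply updates in order; a later write to the same (device, record) wins."""
    state = dict(state)
    for device, record, value in updates:
        state[(device, record)] = value
    return state

# A logical sequence L of updates U_i = (D_i, R_i), plus an illustrative value.
L = [("D1", "r1", "a"), ("D2", "r5", "b"),
     ("D1", "r1", "c"), ("D2", "r7", "d")]

# A segmented sequence SL of two Consistency Groups.
SL = [L[:2], L[2:]]

sequential = apply_updates({}, L)

for g1 in permutations(SL[0]):
    for g2 in permutations(SL[1]):
        mid = apply_updates({}, list(g1))
        # Property 2: the file system is update consistent after each group.
        assert mid == apply_updates({}, L[:2])
        state = apply_updates(mid, list(g2))
        # Properties 1 and 3: any within-group order reproduces L's result.
        assert state == sequential
```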

      The algorithm for Consistency Group creation is implemented by
cooperating processes: one running in one of the Primary Site
computers that is connected to all the Primary Site Control Units
(hereafter, this computer will be denoted the Serializer), and one
running in each of the Primary Site Control Units.  The general
outline of the algorithm is as follows:
  1.  Each Control Unit locally keeps track of the addresses of all
       the updates for the part of the file system under its control.
       If an update which covers another update is received, the
       address that used to identify the covered update will be used
       hereafter to identify the covering update.
  2.  Periodically, the Serializer retrieves the addresses of the
       updates accumulated by the Control Units.  Using these
       addresses, the Serializer reads the updated records and forms
       one Consistency Group.  Whenever the Serializer retrieves
       addresses of updates from a Control Unit, the Control Unit
       discards the addresses it has kept so far and begins new
       tracking.
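
The outline above can be sketched as two cooperating classes
(hypothetical Python; the names ControlUnit and Serializer and the
modeling of an address as a (device, record) pair are invented, and
the sketch omits the cross-Control-Unit serialization the disclosure
goes on to describe):

```python
class ControlUnit:
    """Tracks the addresses of updates for its part of the file system."""

    def __init__(self):
        self.records = {}   # simulated on-disk records: address -> value
        self.tracked = {}   # addresses of updates since the last retrieval

    def write(self, address, value):
        self.records[address] = value
        # A covering update reuses the address entry of the update it
        # covers (re-inserting a dict key keeps its original position).
        self.tracked[address] = True

    def drain(self):
        """Hand the tracked addresses to the Serializer and begin new tracking."""
        addresses = list(self.tracked)
        self.tracked = {}
        return addresses

class Serializer:
    """Periodically collects tracked addresses and forms one Consistency Group."""

    def __init__(self, control_units):
        self.control_units = control_units

    def form_consistency_group(self):
        group = []
        for cu in self.control_units:
            for address in cu.drain():
                # Read the updated record for each retrieved address.
                group.append((address, cu.records[address]))
        return group

# Usage: two updates to the same record collapse into one group entry.
cu = ControlUnit()
cu.write(("D1", "r1"), "a")
cu.write(("D1", "r1"), "c")      # covers the first update
serializer = Serializer([cu])
group = serializer.form_consistency_group()
```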

To merge...