
A method to use file locking to ensure data integrity in a recoverable data processing system in a highly available environment

IP.com Disclosure Number: IPCOM000183397D
Original Publication Date: 2009-May-21
Included in the Prior Art Database: 2009-May-21
Document File: 6 page(s) / 82K

Publishing Venue

IBM

Abstract

The article describes a method that employs file locking in networked storage to ensure the data integrity of a recoverable data processing system in a highly available environment, without the need for a separate high availability coordinator.



Commercial data-processing software that manages persistent data must often ensure the integrity of that data in the event of failures. To this end, particularly when transactions are involved, the software uses some kind of disk-writing protocol to ensure that the data written to disk is recoverable after a failure. This guarantees that the volatile state of the system can be rebuilt accurately from the persistent state in the file system. If this guarantee is not met, the integrity of the data will be compromised. For example, a transfer of funds between two bank accounts might be performed only partially, with the money leaving one account but never arriving in the other.

One well-known example of such a disk-writing protocol is write-ahead logging.

In this protocol, each update to a piece of recoverable data is written first to a sequential file called a log, which is a historical record of the activity of the system, and then to the real copy of the data in a data file. The protocol requires that the data be stably recorded in the log file, rather than merely held in a file system buffer, before the data is written to the data file. If these two write operations were to happen in the wrong order, failures could compromise data integrity. Successful recovery depends on the data in the log being at least as up to date as the actual data files, or conversely, on the data files never being more up to date than the log.
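As an illustrative sketch only (the file names and record format below are assumptions, not taken from this disclosure), the following C fragment shows the ordering the protocol requires: the log record is forced to stable storage with fsync() before the corresponding update is applied to the data file.

    /* Minimal write-ahead logging sketch. The log write is forced to
     * disk before the data file is touched, so the log is never less
     * up to date than the data. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static void die(const char *msg) { perror(msg); exit(1); }

    int main(void)
    {
        /* Hypothetical update record; the format is an assumption. */
        const char *record = "UPDATE account=42 balance=100\n";

        /* 1. Append the record to the sequential log file. */
        int logfd = open("system.log", O_WRONLY | O_APPEND | O_CREAT, 0644);
        if (logfd < 0) die("open log");
        if (write(logfd, record, strlen(record)) < 0) die("write log");

        /* 2. Force the log to disk; a successful fsync() means the
         *    record is stably recorded, not merely sitting in a
         *    file system buffer. */
        if (fsync(logfd) < 0) die("fsync log");

        /* 3. Only now is it safe to update the real copy of the data. */
        int datafd = open("accounts.dat", O_WRONLY | O_CREAT, 0644);
        if (datafd < 0) die("open data");
        if (write(datafd, record, strlen(record)) < 0) die("write data");

        close(datafd);
        close(logfd);
        return 0;
    }

If the process fails after step 2 but before step 3, recovery can replay the log record; if it fails before step 2, neither copy was updated, so integrity is preserved either way.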

In a highly available (HA) computing environment, the same principles apply, but they must be implemented with particular care. Hardware and software technologies are typically combined to provide quick recovery of critical programs from hardware and software failures, and the environment is designed to eliminate single points of failure. For example, the environment generally consists of a number of loosely coupled computers sharing resources such as disk drives. The critical programs are capable of running on any of a set of computers, and hardware resources such as disk drives are shared among the computers. A hardware or software failure that makes a critical program unavailable can be remedied by rapidly moving it to another computer, restoring its availability.

Environments such as this were historically managed by special pieces of software known as high availability coordinators or frameworks; examples include IBM's PowerHA*. The HA framework provides management of the hardware and software components: it monitors the components and takes responsibility for moving resources in response to failures. With the availability of modern network file systems, it is possibl...
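Based on the abstract, one way file locking on networked storage could serve in place of a separate HA coordinator is for each instance to attempt an exclusive lock on a file in the shared file system before recovering and writing the log and data files. The sketch below is an assumption-laden illustration, not the disclosure's specification: the lock-file path, the use of fcntl() advisory locks, and the startup behaviour are all hypothetical.

    /* Hypothetical sketch: guard the recoverable files with an
     * exclusive fcntl() write lock on a file in the shared network
     * file system. Only the lock holder may recover and serve. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Assumed lock-file location on the shared mount. */
        int fd = open("/shared/app.lock", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open lock file"); return 1; }

        struct flock fl = {0};
        fl.l_type = F_WRLCK;    /* exclusive write lock */
        fl.l_whence = SEEK_SET;
        fl.l_start = 0;
        fl.l_len = 0;           /* zero length locks the whole file */

        /* F_SETLK fails immediately if another node holds the lock,
         * so a standby instance knows a primary is still running. */
        if (fcntl(fd, F_SETLK, &fl) < 0) {
            fprintf(stderr, "another instance holds the lock; not starting\n");
            return 1;
        }

        /* ... recover from the log, then serve requests, holding the
         * lock for the life of the process ... */
        pause();
        return 0;
    }

On a lease-based network file system such as NFSv4, a lock held by a failed node is released by the server once the client's lease expires, allowing a standby instance on another computer to acquire the lock and run recovery, preserving the write-ahead logging guarantee without a separate coordinator.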