
Disaster recovery management with event logging

IP.com Disclosure Number: IPCOM000237492D
Publication Date: 2014-Jun-19
Document File: 3 page(s) / 49K

Publishing Venue

The IP.com Prior Art Database

Abstract

Replicating data at a backup site requires substantial network bandwidth. The approach shown in this article is to record the instructions applied to the data on disc since the last backup and to apply the same instructions at the backup site at regular intervals of time. This approach uses less network bandwidth and less time during periodic replication of the data.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 52% of the total text.


Replicating data to a disaster recovery site with traditional approaches consumes significant network bandwidth and time. Periodically finding the delta of the data and re-applying it at the backup site is the best of the traditional procedures. However, finding the delta of the data is itself time consuming, transferring that data consumes network bandwidth, and re-applying the delta at the backup site takes further time.

The instruction recording and replaying approach provided here uses less network bandwidth over a period of time compared to transferring the delta of the data.

Procedure:

The conventional procedure to create or modify data is through a flow of requests from application to middleware to operating system environment to compute layer to storage, i.e., the data center. The operating environment or compute layer modifies the data on the storage disc. The middleware layer server processes the requests from various users and applications and sends requests to the operating environment/compute layer, which in turn prepares the instructions that will be applied to the discs in the data center.

Disclosed is an approach that sits between the operating environment/compute layer and the data center. It records the instructions being applied to the data center discs and applies the recorded instructions at the backup site at periodic intervals, or at intervals determined by the size of the recorded instruction log.
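The recording and replay layer described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the instruction format (operation, offset, payload), the JSON serialization, and the list-of-blocks model of a disc are all assumptions made for the example.

```python
import json


class InstructionLog:
    """Records storage-level write instructions applied at the primary
    site since the last replication, for later replay at the backup site."""

    def __init__(self):
        self.entries = []

    def record(self, op, offset, payload):
        # Append one instruction instead of tracking the changed data itself.
        self.entries.append({"op": op, "offset": offset, "payload": payload})

    def serialize(self):
        # The serialized log is what travels over the network to the backup site.
        return json.dumps(self.entries)

    def clear(self):
        # Called after a successful replication interval.
        self.entries = []


def replay(serialized_log, disc):
    """Apply the recorded instructions, in order, to the backup-site disc
    (modelled here as a mutable list of blocks)."""
    for entry in json.loads(serialized_log):
        if entry["op"] == "write":
            disc[entry["offset"]] = entry["payload"]
        elif entry["op"] == "delete":
            disc[entry["offset"]] = None


# Primary site: record instructions as the data is modified.
log = InstructionLog()
primary = [None] * 4
log.record("write", 0, "block-A"); primary[0] = "block-A"
log.record("write", 2, "block-B"); primary[2] = "block-B"

# Backup site: replay the same instructions at the next interval.
backup = [None] * 4
replay(log.serialize(), backup)
assert backup == primary
```

Only the compact instruction entries cross the network; the backup site reconstructs the same disc state by replaying them in order.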




Advantages:

- Smaller size of the recorded instructions compared to the delta of the data

- Less network bandwidth consumed

- Easy to find the differences and record the instructions

- Easy to apply at the backup site

- No dependency on external tools, except the instruction processing layer at the backup site

- It is not mandatory to apply the instructions at regular intervals of time; it can be better to apply them at regular intervals of log size
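The last advantage, triggering replication by log size rather than by a timer, can be sketched as below. The threshold value, the `ship` callback, and the use of string length as a stand-in for instruction size are illustrative assumptions, not part of the disclosure.

```python
class SizeTriggeredReplicator:
    """Ships the accumulated instruction log when its size crosses a
    threshold, rather than on a fixed timer."""

    def __init__(self, threshold_bytes, ship):
        self.threshold = threshold_bytes
        self.ship = ship          # callable that sends a batch to the backup site
        self.buffer = []
        self.size = 0

    def record(self, instruction):
        self.buffer.append(instruction)
        self.size += len(instruction)  # instruction size approximated by length
        if self.size >= self.threshold:
            # Threshold reached: ship the whole batch and start a fresh log.
            self.ship(list(self.buffer))
            self.buffer.clear()
            self.size = 0


shipped = []
rep = SizeTriggeredReplicator(threshold_bytes=20, ship=shipped.append)
for ins in ["write 0 A", "write 1 B", "write 2 C"]:
    rep.record(ins)
# With a 20-byte threshold, the first two 9-byte instructions stay buffered;
# the third pushes the total to 27 bytes and triggers one shipment of all three.
```

Sizing the interval this way bounds how much log data each transfer carries, regardless of how bursty the write workload is.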

Instruction logging/recording can also benefit big data and analytics workloads.

Boundaries considered:

Processing the instruction at backup site is ti...