Optimal checkpoint interval based on intelligent model using database interval factors.

IP.com Disclosure Number: IPCOM000214727D
Publication Date: 2012-Feb-03
Document File: 3 page(s) / 75K

Publishing Venue

The IP.com Prior Art Database

Abstract

In this paper we derive a formula for calculating the optimal checkpoint interval. The formula takes failure rate, checkpoint overhead, redo time, and number of log records as input variables. For checkpoint overhead and redo time, we analyze the effect of disk parameters such as sector size, number of sectors per track, spindle speed, and log record entry size. The proposed formula has been simulated in Java.



As established in the prior art of computer science (e.g., Wikipedia), ACID (atomicity, consistency, isolation, durability) is a set of properties that guarantees that database transactions are processed reliably. Atomicity requires that database modifications follow an "all or nothing" rule: each transaction is atomic, and if one part of the transaction fails, the entire transaction fails and the database state is left unchanged. The consistency property ensures that any transaction the database performs takes it from one consistent state to another; only consistent data (valid according to all the defined rules) is written to the database. Isolation requires that no transaction is able to interfere with another transaction. Durability requires that once a transaction is committed, it is reflected in the database.

As mentioned in [1], to accomplish durability, specific processes transfer all updated pages from the main-memory buffer to secondary storage once a transaction is committed. However, if a page in the buffer is modified very frequently in a highly transactional system, transferring that page to stable storage with the same frequency introduces very high overhead on the database. To reduce this overhead, the database maintains log records. If the system fails, the database recovery manager reads these log records and replays the transactions in the redo log files; during this recovery all uncommitted transactions are rolled back and all committed transactions are rolled forward, so that at the completion of recovery the database is in a consistent state. If the system maintained only log records and never updated the data in stable storage, the restart recovery overhead after a system failure would be very high. To keep the database durable and consistent while keeping restart recovery overhead to a minimum, an additional measure called a checkpoint is used. Generating a checkpoint means periodically collecting information in a safe place, which has the effect of defining and limiting the amount of redo recovery required after a crash [1].

The important question is therefore the checkpoint frequency, or checkpoint interval, and the factors that determine its optimal value. More frequent checkpoints mean additional overhead on the system, while less frequent checkpoints mean that database crash recovery takes longer, since more redo log records have to be applied. Finding an optimal checkpoint frequency based on the volume and nature of the transactions in a database is therefore important. Although good research has been done in [2] to derive a formula for the optimal checkpoint interval, the formula in [2] does not consider redo time, number of log records, or disk parameters. A method de...
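Since this abbreviated extract cuts off before the derivation itself, the following Java sketch only illustrates the kind of calculation the disclosure describes: estimating checkpoint overhead and redo time from the named disk parameters (sector size, sectors per track, spindle speed, log record entry size) and then choosing an interval. It is not the disclosure's formula; the disk model is a simplified assumption, all parameter values are hypothetical, and the classic Young approximation T_opt = sqrt(2C/λ) is used as a stand-in for the final interval calculation.

```java
// Minimal sketch, not the disclosure's derivation. Disk model and all
// numeric values are hypothetical; the interval uses Young's classic
// approximation sqrt(2 * C / lambda) as a stand-in formula.
public class CheckpointIntervalSketch {

    public static void main(String[] args) {
        // Hypothetical disk parameters
        double sectorSizeBytes  = 512.0;    // sector size
        double sectorsPerTrack  = 63.0;     // number of sectors per track
        double spindleSpeedRpm  = 7200.0;   // spindle speed
        double logRecordBytes   = 200.0;    // log record entry size

        // Hypothetical workload parameters
        double failureRatePerSec = 1.0 / (24 * 3600); // one failure per day
        double dirtyPages        = 5000;              // pages flushed per checkpoint
        double pageSizeBytes     = 8192;
        double logRecordsSinceCk = 100000;            // records replayed on redo

        // Simplified model: one track is transferred per disk revolution
        double revolutionSec       = 60.0 / spindleSpeedRpm;
        double transferBytesPerSec = (sectorsPerTrack * sectorSizeBytes) / revolutionSec;

        // Checkpoint overhead: time to flush dirty pages to stable storage
        double checkpointOverheadSec = (dirtyPages * pageSizeBytes) / transferBytesPerSec;

        // Redo time: time to read and replay log records written since the last checkpoint
        double redoTimeSec = (logRecordsSinceCk * logRecordBytes) / transferBytesPerSec;

        // Stand-in optimal interval (Young's approximation)
        double optimalIntervalSec = Math.sqrt(2.0 * checkpointOverheadSec / failureRatePerSec);

        System.out.printf("Checkpoint overhead : %.2f s%n", checkpointOverheadSec);
        System.out.printf("Redo time           : %.2f s%n", redoTimeSec);
        System.out.printf("Optimal interval    : %.2f s%n", optimalIntervalSec);
    }
}
```

The sketch shows why the disk parameters matter: both the checkpoint overhead and the redo time scale with the disk's effective transfer rate, which in turn is driven by sector size, sectors per track, and spindle speed, so any interval formula that ignores them fixes only part of the trade-off.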