
Method And Apparatus For Implementing In-memory Database Logging System Disclosure Number: IPCOM000200489D
Publication Date: 2010-Oct-15
Document File: 4 page(s) / 60K

Publishing Venue

The Prior Art Database


This invention provides a novel method and apparatus for implementing an in-memory logging system for current DBMSs.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 46% of the total text.


Logging and recovery mechanisms play an important role in the modern database management systems (DBMS) to guarantee the atomicity, consistency, isolation and durability properties (ACID properties) of transactions over crashes or hardware failures.

A database log is one or more files stored on disk, which are updated before any changes are made to the data volumes (write-ahead logging) [1]. If, after a start, the system finds the DBMS in an inconsistent state or it was not shut down properly, the DBMS reviews the database log from the last checkpoint to the end of the log and reapplies logged changes that have not yet been made to the corresponding data. Additionally, for uncommitted transactions, the DBMS rolls back the changes made by those transactions. These are called the "redo" and "undo" phases of recovery, and they ensure the atomicity and durability of transactions.
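The redo and undo phases described above can be sketched as follows. This is a minimal illustration, not any particular DBMS's recovery code; the log record layout and the helper name are assumptions made for the example.

```python
# Minimal sketch of the redo/undo recovery phases of write-ahead logging.
# The log record format ({"op", "txn", "key", "old", "new"}) is an
# illustrative assumption, not taken from any particular DBMS.

def recover(log_records, data):
    """Replay a write-ahead log: redo committed changes, undo the rest."""
    committed = {r["txn"] for r in log_records if r["op"] == "commit"}

    # Redo phase: reapply every update whose transaction committed,
    # in case the change never reached the data volume before the crash.
    for r in log_records:
        if r["op"] == "update" and r["txn"] in committed:
            data[r["key"]] = r["new"]

    # Undo phase: roll back updates of uncommitted transactions,
    # scanning the log backwards so later changes are reverted first.
    for r in reversed(log_records):
        if r["op"] == "update" and r["txn"] not in committed:
            data[r["key"]] = r["old"]
    return data
```

For example, if transaction 1 committed but transaction 2 did not, recovery reapplies transaction 1's update and restores the old value overwritten by transaction 2.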

Writing and flushing logs to log files on disk is a step that cannot be bypassed while the database processes transactions. However, because of the slow disk I/O barrier, writing and flushing log files incurs significant overhead in current DBMSs. This is a major problem, especially in today's market environment, where large enterprises and web companies must process huge numbers of transactions in a short period of time. For instance, financial companies must be able to support a large number of transaction requests from their websites and local branches.

To avoid issuing too many I/O writes in too little time, most modern DBMSs use in-memory caches (log buffers) to buffer disk writes of logs, and a system-throughput-oriented approach called "group committing" tries to work around high database transaction latency by packing multiple application transactions into a single database transaction. However, from the perspective of an individual application transaction, group committing degrades average response time even further, and leads to greater lock contention and memory usage. Worse, it complicates application recovery when system transactions abort, because simply retrying one application transaction is not sufficient.
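The group-committing idea can be illustrated with a small sketch: several transactions append records to a shared log buffer, and a single flush (one simulated disk write) makes the whole group durable at once. The class and method names here are assumptions for illustration only.

```python
# Illustrative sketch of group committing: log records from several
# transactions accumulate in a shared buffer, and one physical flush
# covers all of them. All names are assumptions for this example.

class GroupCommitLog:
    def __init__(self):
        self.buffer = []      # pending, not-yet-durable log records
        self.durable = []     # records that have "reached disk"
        self.flush_count = 0  # number of physical flushes issued

    def append(self, record):
        """A transaction appends its log record; nothing is flushed yet."""
        self.buffer.append(record)

    def commit_group(self):
        """Make every buffered record durable with one (simulated) flush."""
        if self.buffer:
            self.durable.extend(self.buffer)
            self.buffer.clear()
            self.flush_count += 1  # one fsync-like write for the group
```

With three transactions buffered before `commit_group()` is called, only one flush is issued instead of three, which is the throughput gain, while each transaction's latency now depends on when the group is flushed.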

Furthermore, traditional storage protocols require that disk data be accessed in multiples of fixed-size blocks. In the log buffer in main memory, log data is therefore also formatted to follow disk block data structures. When log data is written to the log buffer, it must be transformed into block-aligned data structures, which adds either latency or implementation complexity to the system.
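The block-alignment requirement amounts to padding each log payload out to the next block boundary before it can be written. A minimal sketch, assuming a 512-byte block size and zero-byte padding (both illustrative choices):

```python
# Sketch of the block-alignment transformation: log payloads must be
# padded to a multiple of the storage block size before a disk write.
# BLOCK_SIZE and the zero-padding scheme are illustrative assumptions.

BLOCK_SIZE = 512

def to_block_aligned(payload: bytes) -> bytes:
    """Pad a log payload with zero bytes up to the next block boundary."""
    remainder = len(payload) % BLOCK_SIZE
    if remainder == 0:
        return payload  # already aligned, no padding needed
    return payload + b"\x00" * (BLOCK_SIZE - remainder)
```

A 100-byte record is padded to 512 bytes, so small records waste most of a block; avoiding this transformation is part of the motivation for the in-memory logging design.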

For the modern DBMS, finding a method that avoids disk I/O latency, improves database performance, and simplifies system design and implementation has become an important issue.

Today, with high-speed networks, data persistence in main memory can be guaranteed by network replication. In the near future, a new class of persistent memory, storage class memory (SCM...