
Consistent and Integral log stream partition access in a 'single reader/deleter' / 'multi writer' environment

IP.com Disclosure Number: IPCOM000010143D
Original Publication Date: 2002-Oct-25
Included in the Prior Art Database: 2002-Oct-25
Document File: 1 page(s) / 40K

Publishing Venue

IBM

Abstract

Provided is a method for a single reader/deleter to manage the processing of a log stream whilst not affecting the many writers and casual readers which concurrently share the log stream. The method uses two cursors, one to record the oldest valid entry in the log and another to record the most recent entry read from the log.



Definitions of terms: A log stream is a chronological sequence of data blocks, each block identified by its unique block ID and unique timestamp. A log stream partition is any contiguous subset of a log stream.
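The definitions above can be illustrated with a short Python sketch. This is a minimal model, not part of the disclosure: the names LogBlock, LogStream and partition are hypothetical, and here a partition is delimited by block ID.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class LogBlock:
    """One data block in a log stream: unique block ID plus unique timestamp."""
    block_id: int       # unique; assigned in increasing order
    timestamp: float    # unique; chronological order matches block_id order
    payload: bytes

class LogStream:
    """A chronological sequence of LogBlocks."""
    def __init__(self) -> None:
        self._blocks: List[LogBlock] = []

    def write(self, block: LogBlock) -> None:
        self._blocks.append(block)

    def partition(self, start_id: int, end_id: int) -> List[LogBlock]:
        """A log stream partition: a contiguous subset, delimited here by block ID."""
        return [b for b in self._blocks if start_id <= b.block_id <= end_id]
```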

Many log streams are shared by several systems running continuously (24x7), which leaves no window for offline processing of the data on those log streams.

A log stream partition can be delimited by date/timestamp or by block ID. In practice both lead to many operational problems. Automatic submission of offline processing jobs needs user input of the block ID or date/timestamp, which undermines its automatic nature. Using a timestamp to delimit a log stream partition appears to work at first, only to fall foul of the potential need to re-run or re-process the data: the re-run may not actually execute until the following day, thereby changing the log stream partition end point. Consequently there is a need:

1. to process (read/delete) a log stream while it is being written to, or read (read only) by, other systems;
2. to be able to repeat the processing on exactly the same log stream data (as in 1);
3. to be able to automate the offline processing of log streams;
4. to be able to control when and how often each block is processed, without affecting the writers and 'casual' readers;
5. to be able to delete safely only the data that has been processed successfully.

      This can be achieved by exten...
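The two-cursor method described in the abstract can be sketched as follows. This is a hypothetical single-process illustration (TwoCursorReader and its member names are not from the disclosure), with the shared stream modelled as a plain list of (block_id, payload) pairs; a real shared log stream would also need the concurrency control this sketch omits. One cursor records the oldest valid entry, the other the most recent entry read, so the partition between them can be re-processed exactly, and deletion trims only data already read.

```python
class TwoCursorReader:
    """Single reader/deleter over a shared log stream (a list that writers
    append to). Two cursors: 'oldest' records the oldest valid entry for
    this processor; 'last_read' records the most recent entry read."""

    def __init__(self, stream: list):
        self.stream = stream      # shared list of (block_id, payload)
        self.oldest = None        # oldest valid entry (None until first read)
        self.last_read = None     # most recent entry read

    def read_new(self) -> list:
        """Read everything written since last_read, advancing the read cursor.
        Writers may keep appending; they are never blocked by this."""
        new = [(bid, p) for bid, p in self.stream
               if self.last_read is None or bid > self.last_read]
        if new:
            if self.oldest is None:
                self.oldest = new[0][0]
            self.last_read = new[-1][0]
        return new

    def reread(self) -> list:
        """Repeat processing on exactly the same partition [oldest, last_read],
        even if writers have appended further blocks in the meantime."""
        return [(bid, p) for bid, p in self.stream
                if self.oldest <= bid <= self.last_read]

    def delete_processed(self) -> None:
        """Safely delete only data that has been processed (<= last_read);
        newer blocks remain available to writers and casual readers."""
        self.stream[:] = [(bid, p) for bid, p in self.stream
                          if bid > self.last_read]
        self.oldest = None        # next read_new() re-establishes it
```

For example, after reading blocks 1 and 2, a writer may append block 3; reread() still returns exactly blocks 1 and 2, and delete_processed() removes only those two.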