Limiting Data Loss on Large Information Blocks Stored with Adaptive Data Compression Techniques

IP.com Disclosure Number: IPCOM000105139D
Original Publication Date: 1993-Jun-01
Included in the Prior Art Database: 2005-Mar-19
Document File: 4 page(s) / 155K

Publishing Venue

IBM

Related People

Mosser, H: AUTHOR [+2]

Abstract

When adaptive data compression techniques are used to process an information block, proper decompression of the information block requires that all encoded data be available. More specifically, modification of any single byte in the encoded data can result in the loss of all subsequent data in the information block.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 37% of the total text.

Limiting Data Loss on Large Information Blocks Stored with Adaptive Data Compression Techniques

      When adaptive data compression techniques are used to process
an information block, proper decompression of the information block
requires that all encoded data be available.  More specifically,
modification of any single byte in the encoded data can result in the
loss of all subsequent data in the information block.
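      This failure mode can be demonstrated with any adaptive
dictionary coder.  The sketch below uses DEFLATE via Python's zlib
purely as an illustrative stand-in for the adaptive algorithm (the
disclosure does not name a specific coder): flipping one byte in the
middle of the encoded stream makes everything from that point on
unrecoverable.

```python
import zlib

# Compress a block with an adaptive (DEFLATE) coder.
original = b"sensor reading: 1023\n" * 200
encoded = bytearray(zlib.compress(original))

# Corrupt a single byte in the middle of the encoded stream.
encoded[len(encoded) // 2] ^= 0xFF

# Decompression now fails; the decoder's adaptive state diverges at
# the corrupted byte and all subsequent data in the block is lost.
try:
    zlib.decompress(bytes(encoded))
except zlib.error as exc:
    print("decode failed:", exc)
```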

      Certain applications may record large quantities of data (e.g.
real time sensor or telemetry data) for which the loss of small parts
of the data can be tolerated, but loss of a significant quantity of
the data cannot be tolerated.  Other applications may also record
large quantities of data and additionally, have a requirement to not
lose any data.  For these applications, the effort required to
recover from a loss of data is generally related to the amount of
data lost and as such, limiting the extent of the data loss is also
desirable.

      In applications which use large information blocks (i.e. > 64
bytes) in conjunction with adaptive data compression techniques, the
mechanism described here limits, to an arbitrary bound, the amount of
data which could be lost as the result of an error in the encoded
data, without significantly increasing the quantity of data required
to represent the original information block in encoded form.

      This article describes a method to limit the amount of data
which can be lost as the result of an error within a string of
encoded bytes which were produced as the result of some adaptive
algorithm processing an information block.  The technique was
initially developed for use in the recording of very large records on
a tape device in conjunction with an adaptive data compression
algorithm.  However, the technique is applicable to other device
types and recording media (e.g. telecommunications) and is also
applicable to other adaptive algorithms (e.g. encryption).

      In order to prevent an error in the string of encoded data from
affecting all subsequent encoded bytes for the entire information
block, the information block is broken into subsets of data called
sub-blocks.  Each sub-block is then processed by the adaptive
algorithm as if it were a separate information block, creating its
own set of control data which is used subsequently to decode the data.
As such, any error within a given sub-block can only affect the
decoding of subsequent data within that sub-block.  Other sub-blocks
associated with the information block are not affected by the error.
In order to utilize this concept, a mechanism is required to allow
the device to associate the independent sub-blocks such that they are
treated as a single information block when communicating with the
host as described subsequently.
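      A minimal sketch of the sub-blocking idea follows, again using
zlib as a stand-in for the adaptive algorithm; the sub-block size,
function names, and the policy of replacing a damaged sub-block with
an empty string are all illustrative assumptions, not part of the
original disclosure.

```python
import zlib

SUB_BLOCK_SIZE = 4096  # illustrative; the disclosure leaves the size open

def encode_block(block: bytes) -> list[bytes]:
    """Split an information block into sub-blocks and encode each one
    independently, so each carries its own adaptive control data."""
    return [zlib.compress(block[i:i + SUB_BLOCK_SIZE])
            for i in range(0, len(block), SUB_BLOCK_SIZE)]

def decode_block(sub_blocks: list[bytes]) -> bytes:
    """Decode each sub-block on its own.  An error in one sub-block
    loses only that sub-block's data; the others decode normally."""
    out = []
    for sb in sub_blocks:
        try:
            out.append(zlib.decompress(sb))
        except zlib.error:
            out.append(b"")  # data loss is confined to this sub-block
    return b"".join(out)
```

Corrupting one encoded sub-block and decoding again shows the data
loss bounded by the sub-block size, which is the property claimed for
the mechanism.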

      In the implementation developed for a tape device, all
sub-blocks except the last one associated with a given information
block are the same size.  However, the selection o...