
Adaptive Data Compression Scheme

IP.com Disclosure Number: IPCOM000075489D
Original Publication Date: 1971-Sep-01
Included in the Prior Art Database: 2005-Feb-24
Document File: 4 page(s) / 75K

Publishing Venue

IBM

Related People

Kobayashi, H: AUTHOR

Abstract

An adaptive capability is introduced into a data compression system of the prediction-comparison type. A learning machine or adaptive threshold logic is used as an adaptive predictor, which tries to minimize the number of prediction errors by adapting its prediction rule to the local statistics of the data. The predictor can always operate in adaptive mode, and the updated prediction rule is conveyed automatically to the recovery side. Additional advantages of the proposed scheme are its low cost and simplicity of hardware implementation.

Adaptive Data Compression Scheme

Among the many possible approaches to data compression, predictive coding (Elias [1]) has been known as one of the most practical and yet efficient source coding techniques. Applications of predictive coding to the compression of pictorial data have been reported in several papers [2, 3]. A basic diagram of such a system is shown in Fig. 1, in which the source sequence {s(n)} is discrete both in time and in amplitude. In this disclosure we assume, for simplicity, that s(n) takes on only two possible values, 0 (= white) and 1 (= black). A generalization to multilevel cases is straightforward.

In Fig. 1, the memory stores a finite past of the sequence {s(n)}, and the predictor predicts the value of s(n) based on these stored values. The predicted value ŝ(n) is then compared with the actual value s(n), and the corresponding error sequence takes e(n) = 1 if ŝ(n) ≠ s(n) and e(n) = 0 if ŝ(n) = s(n). An algebraic expression for this rule is simply e(n) = s(n) - ŝ(n) (mod 2), i.e., the exclusive-OR of s(n) and ŝ(n). The error sequence {e(n)} is encoded (usually by some type of run-length coding) and is then sent over a channel or stored in a file, depending on the application.
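As an illustration only (not part of the original disclosure), the following Python sketch shows this prediction-comparison step; the function name encode and the predict argument, a stand-in for the memory-plus-predictor block of Fig. 1, are ours.

    def encode(s, predict):
        """Prediction-comparison encoding of a binary sequence.

        s       -- source bits s(n), each 0 (= white) or 1 (= black)
        predict -- maps the past samples s(0)..s(n-1) to a predicted bit
        """
        e = []
        for n in range(len(s)):
            s_hat = predict(s[:n])   # predicted value from the finite past
            e.append(s[n] ^ s_hat)   # e(n) = s(n) - s_hat(n) mod 2
        return e                     # {e(n)} would then be run-length coded

For highly predictable data, {e(n)} consists mostly of zeros, which is what makes the subsequent run-length coding effective.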

At the recovery end, the inverse operations of encoding and prediction are performed and the original data {s(n)} are recovered. The transformation from {e(n)} back to {s(n)} is usually done by a predictor at the recovery side that has the same structure and prediction rule as the one at the transmission side (Fig. 1).
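A matching sketch of the recovery side (again illustrative, with hypothetical names): since e(n) = s(n) - ŝ(n) (mod 2), adding ŝ(n) back mod 2 recovers s(n), provided the decoder applies the same prediction rule to the already-recovered past.

    def decode(e, predict):
        """Recover {s(n)} from {e(n)}: s(n) = e(n) + s_hat(n) mod 2,
        using the same prediction rule as the transmission side."""
        s = []
        for n in range(len(e)):
            s_hat = predict(s)       # same rule, applied to the recovered past
            s.append(e[n] ^ s_hat)
        return s

For any fixed prediction rule p, decode(encode(s, p), p) reproduces s exactly, since addition mod 2 is its own inverse.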

Difficulties Associated with Predictor Design

Consider a two-dimensional predictor as shown in Fig. 2, in which the signal value at the present position, s(n), is predicted based on the values of its four neighbors. These four points correspond to s(n-1), s(n-L+1), s(n-L), and s(n-L-1) stored in the memory, where L is the number of data points contained in each horizontal scan line. We may call these four elements the memory set M(n), i.e.,
M(n) = {s(n-1), s(n-L+1), s(n-L), s(n-L-1)}. Since each element takes on 0 or 1, M(n) may take 2^4 = 16 different values (patterns). If the data from the source is stationary, the following statements hold (a sketch of such a pattern-based predictor follows the list).
A) An optimal predictor (in the sense of the minimum prediction error rate) is a fixed predictor which predicts s(n) based on the conditional probability Pr(s(n) | M(n)).

B) The prediction error probability decreases by extending the memory set. In other words, the predictor performance improves by including other points, as s...
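To make this concrete, here is an illustrative Python sketch of an adaptive predictor over the 16 patterns of M(n). It estimates Pr(s(n) | M(n)) with running counters and predicts the more probable value; a conditional-count table is used here in place of the disclosure's learning machine or adaptive threshold logic, and all names are ours.

    class AdaptivePredictor:
        """Adaptive estimate of Pr(s(n) | M(n)) over the 16 patterns of
        the four-neighbor memory set (an illustrative simplification of
        the disclosure's adaptive threshold logic)."""

        def __init__(self, L):
            self.L = L               # L = data points per horizontal scan line
            self.ones = [0] * 16     # per pattern: occurrences of s(n) = 1
            self.total = [0] * 16    # per pattern: total occurrences

        def pattern(self, past):
            # Pack M(n) = {s(n-1), s(n-L+1), s(n-L), s(n-L-1)} into 4 bits;
            # neighbors falling outside the data seen so far default to 0.
            n, L = len(past), self.L
            bits = [past[n + k] if n + k >= 0 else 0
                    for k in (-1, -L + 1, -L, -L - 1)]
            return bits[0] | (bits[1] << 1) | (bits[2] << 2) | (bits[3] << 3)

        def __call__(self, past):
            # Predict the value observed more often under the current pattern.
            m = self.pattern(past)
            return 1 if 2 * self.ones[m] > self.total[m] else 0

        def update(self, past, actual):
            # Adapt the prediction rule to local statistics once s(n) is known.
            m = self.pattern(past)
            self.ones[m] += actual
            self.total[m] += 1

If both the encode and decode loops above call update(past, bit) after every sample, on separate but identically initialized instances, the two prediction rules evolve in lockstep on the same recovered data and no side information is needed; this is the sense in which the updated rule is conveyed automatically to the recovery side.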