
Multiplication-Free Compression of Continuous Tone Image Data

IP.com Disclosure Number: IPCOM000120985D
Original Publication Date: 1991-Jul-01
Included in the Prior Art Database: 2005-Apr-02
Document File: 1 page(s) / 35K

Publishing Venue

IBM

Related People

Feig, E: AUTHOR [+2]

Abstract

Standard compression and decompression of continuous tone image data on machines such as the IBM PC and PS/2*, which multiply much more slowly than they add, may take very long times. Compression/decompression techniques are introduced which avoid multiplications entirely and at the same time achieve high compression ratios. The number of additions is also very low. These will run between 2 and 3 times as fast as standard JPEG compression/decompression algorithms (DCT with quantization followed by Huffman or arithmetic coding) on the IBM PC, PS/2, or similar machines. These can be implemented in very low-energy-consuming hardware.


Multiplication-Free Compression of Continuous Tone Image Data

      Standard compression and decompression of continuous tone
image data on machines such as the IBM PC and PS/2*, which multiply
much more slowly than they add, may take very long times.
Compression/decompression techniques are introduced which avoid
multiplications entirely and at the same time achieve high compression
ratios.  The number of additions is also very low.  These will run
between 2 and 3 times as fast as standard JPEG
compression/decompression algorithms (DCT with quantization followed
by Huffman or arithmetic coding) on the IBM PC, PS/2, or similar
machines.  These can be implemented in very low-energy-consuming
hardware.

      The compression scheme utilizes a 2-dimensional Hadamard
transform on 8 x 8 blocks of the image, followed by scaling by
integer values which are powers of 2, followed by Huffman coding.
The 2-dimensional Hadamard transform is done in row-column fashion.
Each block is processed with 16 x 24 = 384 additions followed by
shifts corresponding to the scaling operators.
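The forward path can be sketched as follows. This is an illustrative reconstruction, not the original IBM code: each 1-D 8-point Hadamard transform costs 3 butterfly stages of 8 additions/subtractions (24 operations), and the 8 row plus 8 column transforms give the 16 x 24 = 384 additions per block. The shift table `SHIFTS` is a hypothetical example of power-of-2 scaling.

```python
def hadamard8(v):
    """1-D 8-point Hadamard transform: 3 butterfly stages of
    8 additions/subtractions each = 24 operations, no multiplies."""
    a = [v[i] + v[i + 4] for i in range(4)] + \
        [v[i] - v[i + 4] for i in range(4)]
    b = [a[0] + a[2], a[1] + a[3], a[0] - a[2], a[1] - a[3],
         a[4] + a[6], a[5] + a[7], a[4] - a[6], a[5] - a[7]]
    return [b[0] + b[1], b[0] - b[1], b[2] + b[3], b[2] - b[3],
            b[4] + b[5], b[4] - b[5], b[6] + b[7], b[6] - b[7]]

def hadamard2d(block):
    """2-D transform in row-column fashion: 8 row transforms plus
    8 column transforms = 16 x 24 = 384 additions per 8 x 8 block."""
    rows = [hadamard8(r) for r in block]
    cols = [hadamard8(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

# Hypothetical shift table: scaling by powers of 2 realized as right
# shifts, with coarser scaling for higher-frequency coefficients.
SHIFTS = [[3 + (i + j) // 4 for j in range(8)] for i in range(8)]

def scale(coeffs):
    """Quantize each coefficient with a shift instead of a division."""
    return [[c >> s for c, s in zip(row, srow)]
            for row, srow in zip(coeffs, SHIFTS)]
```

The scaled coefficients would then be Huffman coded. (Note that Python's `>>` floors toward minus infinity for negative operands; a hardware implementation would choose its own rounding rule.)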

      Decompression involves Huffman decoding, descaling with shifts,
and inverse Hadamard transforming with additions only.
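The inverse transform can reuse the same butterflies: applying the 8-point Hadamard transform twice scales a vector by 8, so the 2-D inverse is the forward 384-addition transform followed by a single right shift by 6 (a division by 64). A minimal sketch, again an illustrative reconstruction rather than the original IBM code, assuming integer coefficients:

```python
def hadamard8(v):
    """1-D 8-point Hadamard transform: 24 additions/subtractions."""
    a = [v[i] + v[i + 4] for i in range(4)] + \
        [v[i] - v[i + 4] for i in range(4)]
    b = [a[0] + a[2], a[1] + a[3], a[0] - a[2], a[1] - a[3],
         a[4] + a[6], a[5] + a[7], a[4] - a[6], a[5] - a[7]]
    return [b[0] + b[1], b[0] - b[1], b[2] + b[3], b[2] - b[3],
            b[4] + b[5], b[4] - b[5], b[6] + b[7], b[6] - b[7]]

def inverse_hadamard2d(coeffs):
    """Inverse 2-D transform: the same row-column butterflies, then one
    right shift by 6, since the transform applied twice scales by 64."""
    rows = [hadamard8(list(r)) for r in coeffs]
    cols = [hadamard8(list(c)) for c in zip(*rows)]
    return [[x >> 6 for x in row] for row in zip(*cols)]
```

For example, a coefficient block containing only the DC value 64 reconstructs to a flat 8 x 8 block of ones.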
*  Trademark of IBM Corp.