
Effective INTER-Parallel Schemes for Compression/Decompression Speed-Up

IP.com Disclosure Number: IPCOM000118636D
Original Publication Date: 1997-Apr-01
Included in the Prior Art Database: 2005-Apr-01

Publishing Venue

IBM

Related People

Cheng, J: AUTHOR [+3]

Abstract

Data Compression allows the effective use of storage capacity and communication bandwidth. REAL-TIME Compression or Decompression (at the source data rate) often limits the computational complexity of the compression algorithm. Simpler compression algorithms, e.g., run-length encoding, are often effective only for a restricted class of applications. More robust compression algorithms normally have a more effective Model unit and/or Coding unit; e.g., LZ-type compression algorithms use a history buffer as the Model unit. They often require more hardware allocation (gates) and scheduling depth (time, the minimal number of sequential execution steps).

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 22% of the total text.

Effective INTER-Parallel Schemes for Compression/Decompression Speed-Up

      Data Compression allows the effective use of storage capacity
and communication bandwidth.  REAL-TIME Compression or Decompression
(at the source data rate) often limits the computational complexity
of the compression algorithm.  Simpler compression algorithms, e.g.,
run-length encoding, are often effective only for a restricted class
of applications.  More robust compression algorithms normally have a
more effective Model unit and/or Coding unit; e.g., LZ-type
compression algorithms use a history buffer as the Model unit.
They often require more hardware allocation (gates) and scheduling
depth (time, the minimal number of sequential execution steps).
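The "simpler" end of the spectrum mentioned above can be made concrete with a minimal run-length encoder (a software sketch for illustration only; the disclosure itself concerns hardware units):

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Run-length encode: collapse each run of equal bytes
    into a (count, byte_value) pair."""
    runs: list[tuple[int, int]] = []
    for b in data:
        if runs and runs[-1][1] == b:
            runs[-1] = (runs[-1][0] + 1, b)  # extend current run
        else:
            runs.append((1, b))              # start a new run
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    """Inverse of rle_encode: expand each pair back to a run."""
    return b"".join(bytes([b]) * n for n, b in runs)
```

The scheme pays off only when long runs dominate (scanned images, sparse data), which is exactly the "restricted class of applications" noted above.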

      This disclosure addresses the speed-mismatch problem, where
the intended compression/decompression unit delivers sub-REAL-TIME
speed.  Higher throughput can be achieved by allocating parallel
hardware units.  Two basic parallel speed-up approaches are:
  INTRA-Parallelism    The parallel units are coordinated
                        internally.  The multiplicity of the
                        parallel units appears as a single
                        unit, or TRANSPARENT, from outside.
  INTER-Parallelism    All parallel units are coordinated
                        externally.  The granularity of the
                        replicated units is visible from
                        outside.
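A software analogue of the INTER-Parallel scheme (an illustrative sketch, not the disclosure's hardware design) partitions the source into fixed-size blocks and hands each block to an independent compression unit; the coordination — partitioning, dispatch, and in-order reassembly — is entirely external, and each block remains a separately decompressible stream:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def inter_parallel_compress(data: bytes, units: int = 4,
                            block_size: int = 64 * 1024) -> list[bytes]:
    """Split the source into fixed-size blocks and compress each
    block on its own 'unit'.  Each output block is an independent
    stream, so decompression can also proceed in parallel."""
    blocks = [data[i:i + block_size]
              for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor(max_workers=units) as pool:
        return list(pool.map(zlib.compress, blocks))

def inter_parallel_decompress(compressed: list[bytes],
                              units: int = 4) -> bytes:
    """Decompress all blocks in parallel and reassemble in order."""
    with ThreadPoolExecutor(max_workers=units) as pool:
        return b"".join(pool.map(zlib.decompress, compressed))
```

Note that `pool.map` preserves input order, which is what keeps the externally coordinated reassembly simple.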

      The INTER-Parallel scheme provides seemingly easier hardware
speed growth: i units of hardware are used to achieve a speed-up
factor of i, so the hardware allocation (or size) is proportional to
the speed-up factor.  For the INTRA-Parallel scheme, the hardware
growth is more moderate than for the INTER-Parallel scheme, but the
INTRA-Parallel scheme often cannot re-use a common building block as
the INTER-Parallel scheme does.  The INTRA-Parallel scheme presents
no system overhead at each increment of speed growth, since the
internal parallelism is transparent to the outside.  The
INTER-Parallel scheme, however, can have system-level impacts on
scheduling, the choice of source partitioning, compression loss,
interim stream buffering, and control overheads for compression and
decompression.  An effective INTER-Parallel scheme should minimize
these impacts.
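The compression-loss impact noted above can be measured directly: compressing each block with a fresh model (history buffer) forfeits cross-block redundancy, so the per-block outputs typically sum to more than one whole-stream compression. A small measurement sketch, with zlib standing in for any LZ-type unit:

```python
import zlib

def partition_loss(data: bytes, block_size: int) -> tuple[int, int]:
    """Compare whole-stream compression against per-block compression.
    Returns (whole_stream_size, sum_of_per_block_sizes)."""
    whole = len(zlib.compress(data))
    parts = sum(len(zlib.compress(data[i:i + block_size]))
                for i in range(0, len(data), block_size))
    return whole, parts
```

On redundant data the per-block total exceeds the whole-stream size, since each block restarts with an empty history buffer and carries its own stream header; shrinking the block size raises throughput granularity but worsens this loss.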

      Efficient INTER-Parallel schemes are proposed in this
disclosure; they provide effective REAL-TIME throughput with
sub-REAL-TIME compression/decompression units.  The EFFICIENCY of
the INTER-Parallel schemes can be evaluated along the following
aspects:
  Scheduling Factor   Since multiple compressing or decompressing
                       units are used, keeping the most units busy
                       is essential for higher throug...