
Methods for Link and Memory Compression (LMC) to Reduce Bandwidth Utilization Disclosure Number: IPCOM000196356D
Publication Date: 2010-Jun-01
Document File: 1 page(s) / 25K

Publishing Venue

The Prior Art Database


The disclosed method is a lossless data compression/decompression method for reducing bandwidth requirements on processor interconnects, in particular the links between processors as well as processor-to-memory buses. The objective of this method is to reduce bandwidth requirements using static dictionary techniques.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 51% of the total text.


Methods for Link and Memory Compression (LMC) to Reduce Bandwidth Utilization

With the increasing number of processor cores, off-chip bandwidth is becoming a scalability bottleneck. A bottleneck may exist in the processor-to-memory links, or in the NUMA links between processors (e.g., AMD, EXA, or Power chipsets). Data compression techniques remove redundancy in data and therefore reduce the bandwidth and time needed to transmit it. Dictionary-based data compression techniques remove redundancy in an input data stream by replacing repeating symbols with pointers to a previous instance of the repeating symbol. A dictionary here refers to a table, typically consisting of rows, each containing a symbol and a pointer. A symbol fetched from the input data stream is looked up in the table; if found, the symbol is replaced in the output stream with the corresponding pointer value, which points to a previous location of the symbol in the input stream. Thus, instead of the repeating symbol itself, its pointer representation is written to the output stream. By design, pointers consume less storage than the symbols they replace, so the output stream is shorter than the input stream.
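The substitution scheme above can be sketched as follows. This is an illustrative toy, not the disclosed implementation: the 4-byte symbol size and the token encoding are assumptions chosen for clarity.

```python
def compress(data: bytes, symbol_size: int = 4):
    """Replace each repeating fixed-size symbol with a pointer token
    referencing the previous occurrence of that symbol; emit a
    literal token otherwise. Symbol size of 4 bytes is illustrative."""
    table = {}   # dictionary: symbol -> offset of its last occurrence
    tokens = []  # output stream of ('lit', bytes) or ('ptr', offset)
    for i in range(0, len(data) - len(data) % symbol_size, symbol_size):
        sym = data[i:i + symbol_size]
        if sym in table:
            # pointer consumes less storage than the symbol itself
            tokens.append(('ptr', table[sym]))
        else:
            tokens.append(('lit', sym))
        table[sym] = i
    return tokens

def decompress(tokens, symbol_size: int = 4) -> bytes:
    """Rebuild the original stream by copying pointed-to symbols
    from the already-decompressed output."""
    out = bytearray()
    for kind, val in tokens:
        if kind == 'lit':
            out += val
        else:
            out += out[val:val + symbol_size]
    return bytes(out)
```

For example, compressing b"ABCDABCDXYZWABCD" yields two pointer tokens in place of the repeated "ABCD" symbols, and decompression restores the original bytes.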

Dynamic dictionary techniques build the dictionary on the fly while examining the contents of the data block to be compressed. On average, larger data blocks contain more repeating symbols than smaller ones, since the probability of finding repeats grows with the length of the stream; therefore, large data blocks compress more efficiently than small ones when dictionary-based techniques are used. Longer blocks are a disadvantage, however, when used in conjunction with a computer cache memory, as longer blocks impact cache performance. Experiments show that blocks compress better when they are 512 bytes or longer. Yet in state-of-the-art computers the unit of memory access is typically 8 or 16 bytes, so dynamic dictionary techniques cannot find sufficient repetition within such small data units.
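The small-block limitation can be observed with any off-the-shelf dynamic-dictionary compressor; the snippet below uses zlib (DEFLATE) as a stand-in, which is an assumption for illustration and not the codec discussed in the disclosure. The pattern data is likewise arbitrary.

```python
import zlib

pattern = b"the quick brown fox "        # arbitrary repetitive data
big_block = (pattern * 26)[:512]         # a 512-byte block
small_block = big_block[:8]              # an 8-byte memory access unit

big_out = zlib.compress(big_block)
small_out = zlib.compress(small_block)

# The large block shrinks substantially because the dynamic
# dictionary finds many repeats; the 8-byte block actually grows,
# since header overhead dominates and no repeats fit inside it.
print(len(big_block), "->", len(big_out))
print(len(small_block), "->", len(small_out))
```

This overhead on tiny units is precisely why the disclosure turns to static dictionary techniques for 8- and 16-byte memory accesses.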

Static dictionary techniqu...