
High Data Compression with Computational Decompression

IP.com Disclosure Number: IPCOM000242906D
Publication Date: 2015-Aug-28
Document File: 4 page(s) / 51K

Publishing Venue

The IP.com Prior Art Database

Abstract

This article outlines a future possible direction for compression and decompression algorithms.

Media is stored in a compressed form to save space on the storage platform and minimise network overhead as the media is transmitted to the end user device.

Current decompression algorithms are premised on the end user device not having strong compute power. This enables the media to be decompressed by the available compute resource on the device in a reasonably short time, but it means the compressed object is larger than it needs to be.

As the compute power on the end user device increases over time, the device will be able to decompress the media far more quickly, making it possible for the end user device to run newer classes of compression algorithms that work on media objects that are more tightly compressed and therefore smaller in size. This will further reduce the amount of storage resource required to store the media and also reduce the network overhead of transmitting the media, without impacting the end user experience.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 52% of the total text.



The volume of data being captured and stored is increasing significantly, well into the Petabyte range. The cost of storage is now becoming a significant factor in managing these large data sets. Costs include the storage platforms, the space taken up by the platforms, and the cost of maintaining the environment (heating, ventilation, air conditioning).

In addition, the transmission of large data volumes is also difficult in some cases. There is a latency delay in moving large volumes of data, and it comes at a cost of network bandwidth.

Traditional compression algorithms are premised on assumptions:

- storage is less expensive than the compute resource (or processing power) required to compress and decompress the data
- end user compute resource is not sufficiently strong to enable complex decompression algorithms to run on the device

These assumptions need to be revisited in the face of industry trends towards low cost, powerful processors and the increasing availability of end user or consumer electronic devices with substantial processing power.

The proposed invention describes a high data compression algorithm that may provide compression in excess of 90%. The compression routine comes at the cost of a computation resource intensive process to decompress the compressed data.

Hence, there is a trade-off in gaining extremely high compression and a reduction in storage requirements at the expense of having to apply a high CPU effort to recover the compressed data. Given that CPU computational power continues to increase, this proposed invention is very practical under many circumstances where the cost of data storage is offset by the cost of CPU effort for d...
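The storage-versus-CPU trade-off described above can be illustrated with off-the-shelf codecs from the Python standard library. The codec choices here (zlib at its fastest level versus LZMA at its heaviest preset) and the sample payload are illustrative stand-ins, not the disclosure's proposed algorithm; a heavier codec typically produces a smaller compressed object at the cost of more compute:

```python
import lzma
import time
import zlib

# Illustrative payload: highly redundant text, which compresses well under both codecs.
data = b"the quick brown fox jumps over the lazy dog " * 10_000

def measure(name, compress, decompress):
    """Report compressed size and decompression time for one codec."""
    blob = compress(data)
    start = time.perf_counter()
    assert decompress(blob) == data  # round trip must be lossless
    elapsed = time.perf_counter() - start
    saved = 100 * (1 - len(blob) / len(data))
    print(f"{name}: {len(blob)} bytes ({saved:.1f}% saved), "
          f"decompress {elapsed * 1000:.2f} ms")
    return len(blob)

# Light codec: cheap to decompress, but a larger compressed object.
light = measure("zlib level 1", lambda d: zlib.compress(d, 1), zlib.decompress)

# Heavy codec: smaller compressed object, more CPU-intensive processing.
heavy = measure("lzma preset 9", lambda d: lzma.compress(d, preset=9), lzma.decompress)

print("heavier codec produced the smaller object:", heavy < light)
```

On redundant data like this, the heavier codec yields a noticeably smaller object at the price of more CPU work per byte, which is the trade the disclosure proposes to exploit as end user devices gain compute power.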