
Method and System for Fast and Efficient Data Transmission Across a Network

IP.com Disclosure Number: IPCOM000250241D
Publication Date: 2017-Jun-16
Document File: 4 page(s) / 190K

Publishing Venue

The IP.com Prior Art Database

Abstract

A method and system is disclosed to speed up data transmission across a network by using on-the-fly compression and creating the compressed data units in the target object format before shipping them across the network.



Method and System for Fast and Efficient Data Transmission Across a Network

Moving large volumes of data, in the range of 100 gigabytes (GB) to hundreds of terabytes (TB), across a network to a target entity on a remote system has always been a significant challenge and a slow process. This has caused many problems when migrating data from one data center to another across the network and when adopting or migrating large volumes of data from on-premises infrastructure to the cloud.

Existing methods for transferring large volumes of data across the network compress the entire source data set on the local system using compression tools and ship the entire compressed file across the network to the target system. The target system then decompresses the data and ingests it into the target entity. Alternatively, for very large volumes in the range of tens to hundreds of TB, the disks are physically shipped to the destination data center via courier services.
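For concreteness, the conventional flow can be sketched roughly as follows. This is a minimal illustration, not part of the disclosure: gzip stands in for whatever compression tool is used, and send_file and ingest_rows are hypothetical placeholders for the network copy and the target-side loader.

```python
# Sketch of the conventional compress-everything-then-ship approach.
import gzip
import shutil

def source_side(source_path, archive_path, send_file):
    # Compress the entire source data set first; nothing is shipped until
    # this step completes, and the archive needs extra staging space.
    with open(source_path, "rb") as src, gzip.open(archive_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    send_file(archive_path)                      # ship the whole compressed file

def target_side(archive_path, staging_path, ingest_rows):
    # Decompress into staging storage, then ingest, which typically
    # recompresses the data into the target entity's native format.
    with gzip.open(archive_path, "rb") as src, open(staging_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    ingest_rows(staging_path)                    # hypothetical loader for the target entity
```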

Both approaches have several inefficiencies and disadvantages. First, the source data is compressed into a form that is not native to the target entity, so the data must be decompressed and recompressed into the native compression format before it can be ingested into the target entity. Second, the system has to wait for compression of the entire source data set to complete before the data is shipped out, which adds latency to the transfer. Finally, additional storage space is required on the source and target systems to hold the compressed file on the source and the decompressed data on the target before ingestion, which increases storage requirements and overall cost.

Thus, there exists a need for a method and system that addresses the above-mentioned disadvantages.

Disclosed is a method and system to speed up data transmission across a network by using on-the-fly compression and creating the compressed data units in the target object format before shipping them across the network.

The received data is then written to a persistent store or kept in memory and processed without any intermediate transformation.
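A hedged sketch of this target-side behavior follows. Because each incoming data unit already arrives in the target object format, it can be appended to the persistent store (or kept in memory) exactly as received; iter_compressed_units is a hypothetical helper that frames units off the connection and is not part of the disclosure.

```python
# Sketch: ingest received units with no intermediate transformation.
def receive_and_store(connection, store_path, keep_in_memory=False):
    in_memory_units = []
    with open(store_path, "ab") as store:
        for unit in iter_compressed_units(connection):   # hypothetical framing helper
            if keep_in_memory:
                in_memory_units.append(unit)     # processed later, still untransformed
            else:
                store.write(unit)                # written as-is, no decompress/recompress
    return in_memory_units
```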

The method and system decouple the load-analyze phase and dictionary creation on the client side from the actual loading of data on the server side. This process is further described in detail as follows.

FIG. 1 illustrates an architecture of the method and system in accordance with an embodiment.


Figure 1

As illustrated in FIG. 1, the source system, or client side, includes the load-analyze phase and dictionary creation. The dictionary is created at the source system (on premises or in another data center) and is shipped to a target store in the cloud; this is the dictionary for the target table on the server. Once the dictionary is created, on-the-fly compression is performed and the compressed pages are created in the target object format.
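An illustrative sketch of this client-side flow is shown below, using the zstandard package as a stand-in for a dictionary-based compressor (the disclosure does not name a codec). The page size, sampling strategy, length-prefixed page header, and send_bytes sender are assumptions made for the example, not the actual target object format.

```python
# Sketch: load-analyze, dictionary creation, and on-the-fly page compression.
import zstandard as zstd

PAGE_SIZE = 1 << 20                                  # assumed 1 MB pages

def load_analyze(sample_blocks, dict_size=112_640):
    # Load-analyze phase: train a compression dictionary from sampled source data.
    return zstd.train_dictionary(dict_size, list(sample_blocks))

def ship_dictionary(dictionary, send_bytes):
    # The dictionary for the target table is shipped to the target store first.
    send_bytes(dictionary.as_bytes())                # send_bytes is a hypothetical sender

def compressed_pages(source, dictionary):
    # On-the-fly compression: emit each page as soon as it is ready, in the
    # (assumed) target page layout, instead of compressing everything up front.
    cctx = zstd.ZstdCompressor(dict_data=dictionary)
    while chunk := source.read(PAGE_SIZE):
        body = cctx.compress(chunk)
        yield len(body).to_bytes(4, "big") + body    # simple length-prefixed page
```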

Thereafter, the compressed pages are shipped across the Wide Area Network (WAN) to the target sys...
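The shipping step itself can be sketched as follows, reusing the compressed_pages generator from the previous example; host and port are placeholders for the target system's endpoint. The point is that the WAN transfer starts after the first page is produced rather than after the whole data set has been compressed, and each page lands on the target already in its final format.

```python
# Sketch: stream compressed pages across the WAN as they are produced.
import socket

def ship_pages(pages, host, port):
    with socket.create_connection((host, port)) as conn:
        for page in pages:        # pages are already in target object format
            conn.sendall(page)    # compression and WAN transfer overlap
```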