
Method and Apparatus for smart intermediate result placement of Map Reduce system

IP.com Disclosure Number: IPCOM000246191D
Publication Date: 2016-May-16
Document File: 4 page(s) / 73K

Publishing Venue

The IP.com Prior Art Database

Abstract

This disclosure describes a hybrid system comprising a scale out cluster and a scale up cluster, which aims to accelerate computation in the MapReduce framework. An Advanced JobTracker arranges MapReduce tasks according to an estimate of their input/output data size and how critical they are. The scheduler tends to place critical tasks on the scale up system in order to reduce the latency of the whole MapReduce job execution, provided the tasks' output/input can be accommodated in the centralized storage.



Pipelined and iterative MapReduce are widely used in SQL-over-Hadoop and machine learning. For such workloads, storing/fetching intermediate data to/from the DFS (distributed file system) is so inefficient that it slows down the whole data processing.

Some prior art, such as DAG-based execution, tries to alleviate the I/O stress by storing intermediate data on local disks instead of the DFS. However, that method still cannot speed up data processing for a complex pipelined workload.

MapReduce and scale out storage are comparatively mature technologies for big data analytics. However, there is no practical way to accelerate the computation on the critical path only by scaling up a portion of the system:


1. Computation is evenly distributed across the system - one must scale up either nothing or the entire system.

2. Inter-node parallel computation on a large cluster does not work well with certain workloads. Idle nodes appear during long-running tasks, which causes long latency and resource waste.

The invention points in this disclosure are:

#1. A hybrid system to separate non-accelerable computation and accelerable computation
The hybrid system consists of two interconnected parts: a scale out system and a scale up system.

The scale out system is responsible for mass data storage and initial computation.

The scale up system is responsible for last-mile accelerable computation.

A centralized storage system provides high-speed data storing and fetching. It is connected to both the scale out and the scale up system.
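
A minimal sketch of how this hybrid topology could be modeled is given below. All class, field, and method names (HybridCluster, CentralizedStorage, canAccommodate, and so on) are illustrative assumptions for the sketch, not identifiers from the disclosure.

```java
import java.util.List;

/** Illustrative model of the hybrid cluster described above. */
public class HybridCluster {

    /** Centralized high-speed storage shared by both sub-systems. */
    public static class CentralizedStorage {
        private final long capacityBytes;
        private long usedBytes;

        public CentralizedStorage(long capacityBytes) {
            this.capacityBytes = capacityBytes;
        }

        /** True if an intermediate result of the given size still fits. */
        public boolean canAccommodate(long bytes) {
            return usedBytes + bytes <= capacityBytes;
        }

        public void reserve(long bytes) {
            usedBytes += bytes;
        }
    }

    private final List<String> scaleOutNodes;  // mass data storage + initial computation
    private final List<String> scaleUpNodes;   // last-mile accelerable computation
    private final CentralizedStorage storage;  // connected to both sides

    public HybridCluster(List<String> scaleOutNodes,
                         List<String> scaleUpNodes,
                         CentralizedStorage storage) {
        this.scaleOutNodes = scaleOutNodes;
        this.scaleUpNodes = scaleUpNodes;
        this.storage = storage;
    }

    public CentralizedStorage storage() { return storage; }
    public List<String> scaleOutNodes() { return scaleOutNodes; }
    public List<String> scaleUpNodes() { return scaleUpNodes; }
}
```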

#2. An Advanced JobTracker (AJT) to plan tasks across the two systems
A Pro Forma (PF) to estimate and report the workload output size

A Task Planner (TP) to plan computation based not only on the data source (size), but also on the destination and the critical path.

The AJT is responsible for dispatching MapReduce jobs to the scale out or the scale up system according to their priorities and output sizes. The key value of the AJT is to accelerate the execution of pipelined MapReduce jobs and to yield high throughput for the whole hybrid system through intelligent scheduling.
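
The placement rule described above might look like the following sketch, which reuses the CentralizedStorage class from the earlier sketch. The TaskInfo fields (a critical-path flag from the TP and an output-size estimate from the PF) and the plan() method are assumptions made for illustration, not the disclosed interface.

```java
/** Illustrative sketch of the Advanced JobTracker's placement decision. */
public class AdvancedJobTracker {

    public enum Target { SCALE_UP, SCALE_OUT }

    /** Minimal view of a task as seen by the Task Planner. */
    public static class TaskInfo {
        final boolean onCriticalPath;     // criticality judged by the Task Planner (TP)
        final long estimatedOutputBytes;  // output-size estimate from the Pro Forma (PF)

        public TaskInfo(boolean onCriticalPath, long estimatedOutputBytes) {
            this.onCriticalPath = onCriticalPath;
            this.estimatedOutputBytes = estimatedOutputBytes;
        }
    }

    private final HybridCluster.CentralizedStorage storage;

    public AdvancedJobTracker(HybridCluster.CentralizedStorage storage) {
        this.storage = storage;
    }

    /**
     * Critical tasks go to the scale up system when their intermediate
     * output fits in the centralized storage; everything else stays on
     * the scale out system.
     */
    public Target plan(TaskInfo task) {
        if (task.onCriticalPath && storage.canAccommodate(task.estimatedOutputBytes)) {
            storage.reserve(task.estimatedOutputBytes);
            return Target.SCALE_UP;
        }
        return Target.SCALE_OUT;
    }
}
```

Under this rule, a critical task falls back to the scale out system whenever the centralized storage cannot hold its estimated intermediate output, which matches the condition stated in the abstract.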

Detail d...