
System and Method for Scalable Face Recognition and Record Linkage Using Hadoop

IP.com Disclosure Number: IPCOM000246359D
Publication Date: 2016-Jun-02
Document File: 3 page(s) / 138K

Publishing Venue

The IP.com Prior Art Database

Abstract

Big data technologies provide the architectural foundation to address scalability, stability, and response-time requirements for growing database systems and warehouses. Face matching, meanwhile, has long been a challenge in a variety of scenarios. This system fuses biometric and biographic data in the big data landscape to de-duplicate records and identify a unique person, using the most commonly available camera devices for face matching. While many face-matching solutions exist, this implementation is the first on the big data landscape. The underlying big data handling, analysis, and machine learning concepts can also be applied in live automation and security settings to extract value from the large volume and variety of growing data.



In this article, we show how MapReduce algorithms run over HBase records in a distributed Hadoop environment can implement our own face recognition algorithms on large data sets, finding search results and linked entities efficiently and accurately, and how this system addresses problems such as scalability, reliability, heterogeneity, and time-varying data.
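Before turning to the details, the following is a minimal sketch of such a scan-and-score job, written against the standard Hadoop/HBase MapReduce APIs. The table name "persons", the "face:template" column layout, the CSV encoding of face templates, and the Euclidean distance() matcher are illustrative assumptions for this sketch; they are not specified in the original disclosure.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class FaceMatchJob {

  // Mapper: each map() call receives one HBase row (an enrolled person)
  // and emits (row key, distance to the probe template).
  public static class FaceMapper extends TableMapper<Text, DoubleWritable> {

    private float[] probe; // probe face template, parsed once per task

    @Override
    protected void setup(Context context) {
      // The probe template is passed through the job configuration as a
      // CSV of floats; a real system might use the distributed cache.
      probe = parseTemplate(context.getConfiguration().get("face.probe"));
    }

    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
        throws IOException, InterruptedException {
      // Assumed layout: column family "face", qualifier "template".
      byte[] raw = value.getValue(Bytes.toBytes("face"), Bytes.toBytes("template"));
      if (raw == null) {
        return; // this row has no enrolled face template
      }
      double score = distance(probe, parseTemplate(Bytes.toString(raw)));
      context.write(new Text(Bytes.toString(row.get())), new DoubleWritable(score));
    }

    private static float[] parseTemplate(String csv) {
      String[] parts = csv.split(",");
      float[] v = new float[parts.length];
      for (int i = 0; i < parts.length; i++) {
        v[i] = Float.parseFloat(parts[i]);
      }
      return v;
    }

    // Illustrative similarity measure: Euclidean distance between feature
    // vectors; a production system would plug in its own matcher here.
    private static double distance(float[] a, float[] b) {
      double sum = 0;
      int n = Math.min(a.length, b.length);
      for (int i = 0; i < n; i++) {
        double d = a[i] - b[i];
        sum += d * d;
      }
      return Math.sqrt(sum);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("face.probe", args[0]); // probe template as CSV of floats

    Job job = Job.getInstance(conf, "scalable-face-match");
    job.setJarByClass(FaceMatchJob.class);

    Scan scan = new Scan();
    scan.setCaching(500);       // fetch rows in batches for scan throughput
    scan.setCacheBlocks(false); // full scans should not pollute the block cache

    TableMapReduceUtil.initTableMapperJob(
        "persons", scan, FaceMapper.class, Text.class, DoubleWritable.class, job);
    job.setNumReduceTasks(0); // map-only: (row key, score) pairs go straight to HDFS
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(DoubleWritable.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Sorting the emitted scores, or adding a reduce stage that keeps only the top-k smallest distances, yields the candidate matches; the biographic columns of the same rows can then be compared to confirm a record link.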

Detailed Description

Now we will answer each of the above questions in detail. Let's consider the first question.


1. What is the underlying architecture of this system?

The system is built on a distributed computing environment that uses the MapReduce functionality of a Hadoop/HBase cluster to reduce query time over large data sets. Apache Hadoop provides the MapReduce processing framework, and HBase is the Hadoop-based database that stores data in non-relational tables distributed across nodes. In this architecture, the NameNode keeps the directory tree of all files in HDFS (the Hadoop Distributed File System) and tracks the DataNodes across the cluster, while data insertion through the DataNodes is coordinated by the resource manager. Data is replicated among DataNodes, which happens at the disk level, for reliability. The distributed nature of the system therefore allows queries to run over many nodes in parallel. The system relies on ZooKeeper and the HBase Master to load the framework metadata and its configuration; it also loads the process configuration onto the HBase Region Servers and into the JVMs used for MapReduce. After the components are configured, the application runs as an automatic background JobTracker...
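To make these component interactions concrete, the short client sketch below shows how, under this architecture, an application needs only the ZooKeeper quorum to reach the cluster: ZooKeeper points the client at the HBase Master and the Region Servers, and HDFS replication beneath the regions provides the disk-level redundancy described above. The quorum host names, the "persons" table, and the row key are illustrative assumptions, not values from the disclosure.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ClusterClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Clients discover the HBase Master and Region Servers through
    // ZooKeeper, so only the quorum needs to be configured explicitly.
    // The host names below are placeholders, not real deployment values.
    conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com");

    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("persons"))) {
      // Read one enrolled record: HBase routes the Get to the Region Server
      // that serves this row's region, while HDFS replication underneath
      // provides the disk-level redundancy described above.
      Result result = table.get(new Get(Bytes.toBytes("person-00042")));
      byte[] template = result.getValue(Bytes.toBytes("face"), Bytes.toBytes("template"));
      System.out.println("template bytes: " + (template == null ? 0 : template.length));
    }
  }
}

In a production cluster the same settings would typically come from an hbase-site.xml on the classpath rather than being set in code; HBaseConfiguration.create() picks that file up automatically.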