
A system for robust face biometric acquisition in unconstrained environments

IP.com Disclosure Number: IPCOM000200685D
Publication Date: 2010-Oct-25
Document File: 3 page(s) / 73K

Publishing Venue

The IP.com Prior Art Database


The invention uses a collection of complementary sensors to robustly acquire the facial biometric signature of a subject in unconstrained environments. The collection includes LIDAR sensors and color cameras that monitor a sensing area and are distributed to handle occlusions caused by other moving subjects. The information from each sensor is integrated into a 3D facial model, from which a best frontal face view can be extracted for processing by existing recognition systems. The proposed invention can provide robust biometric acquisition in facilities such as airports, which are busy environments with many subjects in the field of view, and mitigates many of the current problems with standoff face biometric acquisition caused by occlusions in crowded environments.



Most current biometrics-based recognition systems need high-detail imagery that is typically collected in constrained environments, with restrictions on distance, motion, illumination, pose, and angle of acquisition. Automated recognition systems that take biometric signatures as input cannot reliably operate on images acquired under unconstrained, real-world conditions.

Our invention focuses on high-resolution 3D face biometric signature acquisition for subjects at standoff distances under relaxed constraints, even in the presence of crowds. An illustration of the sensor setup is shown in Figure 1. A collection of 3D LIDAR and 2D camera sensors is distributed around the space being monitored.


Figure 1. A collection of complementary sensors monitoring a space and robustly acquiring a 3D facial biometric.
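The distributed placement matters because a subject occluded from one sensor is often still visible to another. A minimal geometric sketch of that idea, entirely illustrative and not from the disclosure (subjects are modeled as circles in the floor plane, and the function names are hypothetical):

```python
import math

def segment_blocked_by_circle(p, q, center, radius):
    """True if the segment p->q passes through a circular occluder."""
    px, py = p; qx, qy = q; cx, cy = center
    dx, dy = qx - px, qy - py
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return math.hypot(cx - px, cy - py) < radius
    # Projection of the circle center onto the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / seg_len2))
    nearest = (px + t * dx, py + t * dy)
    return math.hypot(cx - nearest[0], cy - nearest[1]) < radius

def visible_sensors(sensors, target, occluders, radius=0.3):
    """Sensors with an unoccluded line of sight to the target position."""
    clear = []
    for s in sensors:
        if not any(segment_blocked_by_circle(s, target, o, radius)
                   for o in occluders):
            clear.append(s)
    return clear

sensors = [(0.0, 0.0), (10.0, 0.0), (5.0, 10.0)]  # distributed around the space
target = (5.0, 5.0)                                # subject of interest
occluders = [(2.5, 2.5)]                           # another person in the crowd
print(visible_sensors(sensors, target, occluders))
```

Here the occluder blocks the first sensor's view, but the other two still see the subject, which is the property the distributed layout is meant to guarantee.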

The flash LIDAR sensors let the system acquire 3D information at a high refresh rate and, unlike other ranging sensors, are not affected by specular facial regions with no texture. This 3D information provides the geometric structure and reflectivity of the head and face regions from standoff distances. In addition, we can conceive a system in which the focal plane of the LIDAR sensor is mechanically shifted over time to produce a denser, higher-resolution scan of the scene even with low-resolution sensors.

The 2D cameras provide the intensity and color information from multiple viewpoints, an important attribute for face recognition systems. The color information allows the color of the eyes, skin, hair, and other facial marks to serve as part of the facial biometric, and is complementary to the LIDAR-generated biometric.

A block diagram of the proposed system is shown in Figure 2. Each of the sensing streams is processed and integrated in the multimodal face consistency and integration module. The resulting 3D model combines the surface geometry with the intensity and color information for the different subjects in the field of view. A snapshot of the best frontal view for each subject can be extracted and fed to a standard face recognition engine to complete the biometric signature acquisition and recognition process.
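One simple way to select the "best frontal view" among candidate viewpoints of the textured 3D model is to score each candidate by how far its estimated head pose deviates from frontal (yaw = pitch = 0). The candidate list, names, and weights below are assumptions for illustration, not part of the disclosure:

```python
def frontal_score(yaw_deg, pitch_deg, yaw_weight=1.0, pitch_weight=1.0):
    """Lower is better: weighted angular deviation from a frontal pose."""
    return yaw_weight * abs(yaw_deg) + pitch_weight * abs(pitch_deg)

def best_frontal_view(candidates):
    """candidates: list of (view_id, yaw_deg, pitch_deg) tuples."""
    return min(candidates, key=lambda c: frontal_score(c[1], c[2]))[0]

# Candidate views: two physical camera viewpoints plus a view rendered
# from the reconstructed 3D model (hypothetical pose estimates).
views = [("cam_a", 35.0, 5.0), ("cam_b", -8.0, 2.0), ("rendered", 0.0, 12.0)]
print(best_frontal_view(views))  # -> cam_b
```

A full system would estimate yaw and pitch from the 3D model itself; since the model carries surface geometry, a frontal view can also be rendered synthetically rather than chosen from the physical cameras.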



Figure 2. A block diagram of the proposed integration of LIDAR and vision sensors for robust face detection.

The 3D flash LIDAR data is first passed through a change detection engine that focuses on the moving objects in the scene. The different LIDAR sensors provide complementary views, and the sensor data is geo-registered so that all the 3D points corresponding to changed regions from the individual flash LIDARs can be combined to form a 3D model of the head. The 3D flash LIDAR data is then processed using fast 3D cueing algorithms to segment the data and generate cues for a classifier. A hierarchical class...
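A minimal sketch of the change-detection and combination steps, under simplifying assumptions (a static background range image per sensor, a tiny example grid, geo-registration reduced to a translation; the threshold and all names are illustrative, not specified in the disclosure):

```python
def changed_pixels(background, frame, threshold=0.5):
    """Pixels whose range differs from the background by > threshold (m)."""
    return [(r, c)
            for r, row in enumerate(frame)
            for c, depth in enumerate(row)
            if abs(depth - background[r][c]) > threshold]

def geo_register(points, translation):
    """Map sensor-local 3D points into the shared world frame.
    A real system would apply a calibrated rotation as well; a pure
    translation keeps the sketch short."""
    tx, ty, tz = translation
    return [(x + tx, y + ty, z + tz) for x, y, z in points]

# A 2x2 range image from one flash LIDAR: background vs. current frame.
background = [[10.0, 10.0], [10.0, 10.0]]
frame = [[10.0, 7.2], [10.1, 6.9]]          # a moving subject has entered
moving = changed_pixels(background, frame)
print(moving)  # -> [(0, 1), (1, 1)]
```

After change detection, the changed pixels from each sensor would be back-projected to 3D points, mapped into the common frame with a transform like `geo_register`, and merged into the head model described above.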