
Method for Extracting Facial Features by using Color Information

IP.com Disclosure Number: IPCOM000116605D
Original Publication Date: 1995-Oct-01
Included in the Prior Art Database: 2005-Mar-31
Document File: 4 page(s) / 111K

Publishing Venue

IBM

Related People

Kurokawa, M: AUTHOR [+2]

Abstract

Disclosed is a method for extracting facial features, such as the positions of the eyes, mouth, nose, chin, and top of the head, from a facial image. An image of a human face is captured by a color TV camera and transmitted to the computer as an RGB image. The positions of the facial features are extracted automatically in the following three steps: 1. The facial region is detected by using color segmentation. 2. Candidates for the eye and mouth parts are extracted by using color distance. 3. Facial parts are identified by using a combination of rules.


Method for Extracting Facial Features by using Color Information

      Disclosed is a method for extracting facial features, such as
the positions of the eyes, mouth, nose, chin, and top of the head,
from a facial image.  An image of a human face is captured by a color
TV camera and transmitted to the computer as an RGB image.  The
positions of the facial features are extracted automatically in the
following three steps:
  1.  The facial region is detected by using color segmentation.
  2.  Candidates for the eye and mouth parts are extracted by using
      color distance.
  3.  Facial parts are identified by using a combination of rules.

As a pre-process, the captured facial image is smoothed and
converted into three color-space images: L*u*v*, HSV, and YIQ.  The
following three steps are then applied to these images.
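As an illustration only (the disclosure contains no code), this
pre-process could be sketched in Python as follows.  The use of
OpenCV, the Gaussian kernel size, and the function name are
assumptions; YIQ is computed with the standard NTSC matrix because
OpenCV has no built-in YIQ conversion.

    import cv2
    import numpy as np

    def preprocess(rgb):
        """Smooth an RGB capture and derive L*u*v*, HSV and YIQ images."""
        smoothed = cv2.GaussianBlur(rgb, (5, 5), 0)   # kernel size assumed
        luv = cv2.cvtColor(smoothed, cv2.COLOR_RGB2Luv)
        hsv = cv2.cvtColor(smoothed, cv2.COLOR_RGB2HSV)
        # standard NTSC RGB-to-YIQ matrix, applied per pixel
        m = np.array([[0.299,  0.587,  0.114],
                      [0.596, -0.274, -0.322],
                      [0.211, -0.523,  0.312]], dtype=np.float32)
        yiq = smoothed.astype(np.float32) @ m.T
        return luv, hsv, yiq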
  STEP-1: Detection of the facial region

One of the skin-colored regions is determined to be the facial
region.  For this purpose, the L*u*v* image is segmented by using a
region-growing method.  After the removal of small regions, the mean
hue values of the remaining regions are calculated, and those whose
values do not lie within a previously determined interval (-10 < H <
90) are eliminated.  The remaining regions are combined to form
region pairs, and a convex hull is created for each region pair.  For
each region pair and its convex hull, the following two tests are
applied:
    (a) Sp/Sc > Th1, where Sp is the total area of the region pair
and Sc is the area of the convex hull.

(b) The correlation between the left and right halves of the hull
about their central axis exceeds Th2, where Th1 and Th2 are
predetermined threshold values and the central axis is the vertical
line that passes through the center of gravity of the region pair.
The region pair that meets the above two conditions and has the
maximum area is determined to be the facial region, and its convex
hull is used as the contour of the face.
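The two tests might be realized as in the following sketch; this is
one reading of the text, not the original implementation.  The mask
representation, the function name, and the sample threshold values
are all assumptions.

    import cv2
    import numpy as np

    def is_facial_region(pair_mask, Th1=0.6, Th2=0.5):  # thresholds assumed
        """pair_mask: 8-bit 0/255 mask holding one candidate region pair."""
        pts = cv2.findNonZero(pair_mask)
        hull = cv2.convexHull(pts)
        Sp = cv2.countNonZero(pair_mask)       # total area of the region pair
        Sc = cv2.contourArea(hull)             # area of its convex hull
        if Sc == 0 or Sp / Sc <= Th1:          # test (a): Sp/Sc > Th1
            return False
        # test (b): left-right symmetry about the vertical axis through
        # the center of gravity of the region pair
        hull_mask = np.zeros_like(pair_mask)
        cv2.fillConvexPoly(hull_mask, hull, 255)
        M = cv2.moments(pair_mask, binaryImage=True)
        cx = int(round(M['m10'] / M['m00']))
        w = min(cx, hull_mask.shape[1] - cx)
        if w == 0:
            return False
        left = hull_mask[:, cx - w:cx].astype(np.float32)
        right = np.fliplr(hull_mask[:, cx:cx + w]).astype(np.float32)
        corr = np.corrcoef(left.ravel(), right.ravel())[0, 1]
        return corr > Th2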
  STEP-2: Extraction of candidates for facial parts
  Next, the candidate regions for the facial parts are extracted.

(a) Three parameters, FHmin, FHmax, and FVmin, which denote the
minimum and maximum values of the facial hue and the minimum facial
intensity, are determined as follows:
  FHmin = μ - α * σ
  FHmax = μ + α * σ
  FVmin = β * μ
  where μ and σ denote the mean and standard deviation of the
corresponding color component of the face, and α and β are
predetermined constants.
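A small sketch of this computation, assuming the hue and intensity
values of the facial pixels are available as arrays; the values of α
and β below are placeholders, not the ones used in the disclosure.

    import numpy as np

    def facial_thresholds(face_hue, face_intensity, alpha=2.0, beta=0.5):
        """face_hue, face_intensity: 1-D arrays of pixels inside the face."""
        mu, sigma = face_hue.mean(), face_hue.std()
        FHmin = mu - alpha * sigma            # minimum facial hue
        FHmax = mu + alpha * sigma            # maximum facial hue
        FVmin = beta * face_intensity.mean()  # minimum facial intensity
        return FHmin, FHmax, FVmin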

(b) Any pixel inside the face contour whose hue does not lie between
FHmin and FHmax and whose intensity is greater than FVmin is regarded
as a candidate facial part.  The candidate pixels are gathered by
connected-component labelling to form the candidate regions of facial
parts.
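A sketch of step (b), applying the literal conditions from the text
to hue and intensity images together with the face mask from STEP-1;
the use of OpenCV's connected-component labelling and the function
name are assumptions.

    import cv2
    import numpy as np

    def candidate_regions(hue, intensity, face_mask, FHmin, FHmax, FVmin):
        """Return labelled candidate facial-part regions inside the face."""
        outside_hue = (hue < FHmin) | (hue > FHmax)  # hue outside facial range
        bright = intensity > FVmin                   # intensity above FVmin
        cand = (outside_hue & bright & (face_mask > 0)).astype(np.uint8)
        # connected-component labelling groups candidate pixels into regions
        n_labels, labels = cv2.connectedComponents(cand)
        return n_labels, labels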

(c) The edge density of each candidate region of facial parts is
calculated and regions with high edge d...
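Although the text breaks off here, the edge-density measure named in
(c) can be sketched: for each labelled region, the fraction of its
pixels whose Sobel gradient magnitude exceeds a threshold.  The
threshold value and the function name are assumptions.

    import cv2
    import numpy as np

    def edge_densities(gray, labels, n_labels, edge_thresh=50.0):
        """Edge density of each candidate region, indexed by label 1..n-1."""
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        edges = np.hypot(gx, gy) > edge_thresh       # binary edge map
        return [float(edges[labels == k].mean()) for k in range(1, n_labels)]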