BOSPHORUS DATABASE 3D FACE ANALYSIS PDF

3D face recognition: a survey

3D face recognition inherits advantages from traditional 2D face recognition, such as the natural recognition process and a wide range of applications. Moreover, 3D face recognition systems can accurately recognize human faces even under dim lighting and with varying facial positions and expressions, conditions under which 2D face recognition systems would have immense difficulty operating. This paper summarizes the history of, and the most recent progress in, the 3D face recognition research domain.

3-Dimensional facial expression recognition in human using multi-points warping

Expression in Homo sapiens plays a remarkable role when it comes to social communication. The identification of this expression by human beings is relatively easy and accurate. However, achieving the same result in 3D by machine remains a challenge in computer vision. This is due to the current challenges facing facial data acquisition in 3D, such as the lack of homology and the complex mathematical analysis required for facial point digitization.

This study proposes facial expression recognition in humans through the application of multi-points warping for 3D facial landmarks, building a template mesh as a reference object. The semi-landmarks are allowed to slide along tangents to the curves and surfaces until the bending energy between the template and a target form is minimal, and localization error is assessed using Procrustes ANOVA. The localization error is validated on the two datasets with superior performance over state-of-the-art methods, and variation in expression is visualized using principal components (PCs).
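To make the sliding criterion concrete, the sketch below computes a thin-plate-spline bending energy between a template and a target landmark configuration, the quantity that the sliding step seeks to minimize. This is an illustration only, not the authors' implementation: it assumes NumPy, uses U(r) = r as the 3D kernel (sign and scaling conventions differ between packages), and runs on random toy configurations.

```python
import numpy as np

def tps_kernel(r):
    # Radial basis kernel of the 3D thin-plate spline; U(r) = r is a common
    # 3D choice (conventions vary between implementations).
    return r

def bending_energy(template, target):
    """Approximate TPS bending energy of warping `template` onto `target`.

    template, target : (k, 3) arrays of corresponding landmarks.
    """
    k = template.shape[0]
    # Pairwise distances between template landmarks.
    d = np.linalg.norm(template[:, None, :] - template[None, :, :], axis=-1)
    K = tps_kernel(d)
    Q = np.hstack([np.ones((k, 1)), template])   # affine part, shape (k, 4)
    L = np.zeros((k + 4, k + 4))
    L[:k, :k] = K
    L[:k, k:] = Q
    L[k:, :k] = Q.T
    Linv = np.linalg.pinv(L)
    Lk = Linv[:k, :k]                            # bending-energy matrix
    # Sum of the quadratic forms over the x, y, z target coordinates.
    return float(np.trace(target.T @ Lk @ target))

# Toy usage with random configurations (illustration only).
rng = np.random.default_rng(0)
tmpl = rng.normal(size=(10, 3))
tgt = tmpl + 0.05 * rng.normal(size=(10, 3))
print(bending_energy(tmpl, tgt))
```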

The deformations show various expression regions in the faces. The results indicate that the sad expression has the lowest recognition accuracy on both datasets. The recognition accuracy achieved by the classifier demonstrates that the method is robust and in agreement with state-of-the-art results. Emotions in the human face play a remarkable role when it comes to social communication. The identification of expressions by human beings is relatively easy and accurate.

However, achieving the same result by machine remains a challenge in computer vision. The human face is the part of the body that hosts the most crucial sensory organs. It also acts as the central interface for appearance, communication, expression and identification [ 1 ].

Therefore, acquiring its information digitally is important to researchers. This makes landmark-based geometric morphometric methods for facial expression a source of new insight into patterns of biological emotion variation [ 2 ].

Many advances have been proposed in the acquisition of facial landmarks, but several challenges remain, especially for three-dimensional models. One of the challenges is the insufficient acquisition of 3D facial landmarks. Another challenge is the lack of homology due to manual annotation.

In addition, complex mathematical analysis has made many works on 3D facial landmark acquisition difficult to reproduce. The use of three-dimensional face images in morphometrics not only allows a wider area of the human facial region to be covered but also retains all the geometric information of the object descriptors [ 3 , 4 ].

In a comparison of modalities, 3D faces have a higher detection rate than 2D faces owing to the richer intensity information of the 3D modality [ 5 ]. Furthermore, in an experiment with systematically increasing pitch and yaw rotations performed in [ 6 ], expression recognition performance dropped in 2D while that of 3D remained constant.

This is a result of occlusion effects and the substantial distortion caused by out-of-plane rotations. Moreover, in feature transformation and classification, the 3D modality shows a slight improvement, with higher confidence, over 2D. In terms of depth features, both show the same performance, and the processing cost of a 3D model is higher than that of 2D [ 5 ]. We developed an approach to 3D facial landmarking using multi-points warping.

This approach extends the computational deformation processing in [ 7 ] to improve annotation performance with a less complex pipeline. Because the nose tip is easy to detect, supports pose correction [ 8 ] and is invariant to facial expression [ 9 ], the pronasale was selected as the most robust and prominent landmark point; the nose tip area can be approximated as a semi-sphere of the human face. It determines the location from which the sliding points begin to spread across the facial surface.
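As a rough illustration of why the pronasale is a convenient anchor, the sketch below picks an approximate nose tip from a roughly frontal scan as the most protruded central vertex. It is a simplified heuristic stand-in for the detectors cited above, not the method of [ 8 ] or [ 9 ]; it assumes NumPy and a vertex array whose z-axis points toward the camera.

```python
import numpy as np

def find_pronasale(vertices):
    """Pick an approximate pronasale (nose tip) index from a roughly frontal scan.

    vertices : (n, 3) array of mesh vertex coordinates with +z toward the camera.
    Returns the index of the most protruded vertex near the face centroid.
    """
    centre = vertices[:, :2].mean(axis=0)
    # Restrict to a central region to avoid chin/forehead spikes and boundary noise.
    radial = np.linalg.norm(vertices[:, :2] - centre, axis=1)
    candidates = np.where(radial < np.percentile(radial, 50))[0]
    # Most protruded central vertex along the viewing axis.
    return candidates[np.argmax(vertices[candidates, 2])]
```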

We have validated the usability of our approach through its application to soft-tissue facial expression recognition in 3D. Using PCA for feature selection, we classify six expressions on both datasets. To the best of our knowledge, the sliding semi-landmark approach to facial landmarking has not previously been applied to soft-tissue facial expression recognition in 3D.
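The sketch below shows one way such a PCA-plus-classifier pipeline can be wired together. It is illustrative only: the landmark matrix X and labels y are random stand-ins, the number of components and the choice of linear discriminant analysis are assumptions rather than the paper's exact configuration, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical data: n scans, each with k 3D landmarks flattened to 3k features,
# and one of six expression labels per scan.
rng = np.random.default_rng(0)
n, k = 120, 500
X = rng.normal(size=(n, 3 * k))        # stand-in for aligned landmark coordinates
y = rng.integers(0, 6, size=n)         # anger, disgust, fear, happy, sad, surprise

# PCA reduces the 3k coordinates to a handful of shape PCs before classification.
model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())
```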

Section one of this study is the introduction; section two discusses related studies. In section three, the implementation of the methodology is presented with supporting references, where short explanations are provided. Section four presents the results of the implementation. In section five, a more detailed discussion is given to clarify the results and compare them with state-of-the-art methods.

The last section concludes the study and presents its limitations and future directions. Morphometrics is the study of shape variation and its covariation with other variables [ 7 , 11 ].

According to DC Adams, et al., advances in morphometrics have shifted focus to the Cartesian coordinates of anatomical points that might otherwise be used to define more traditional measurements.

Morphometrics examines shape variation, group differences in shape, the central tendency of shape, and associations of shape with extrinsic factors [ 13 ]. This is based directly on the digitized (x, y, z)-coordinate positions of landmarks, points representing the spatial positions of putatively homologous structures in two or three dimensions, whereas conventional morphometric studies use distances as variables [ 7 , 11 , 14 ].
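Working directly with landmark coordinates requires removing position, scale and orientation before shapes can be compared. The following sketch of Generalized Procrustes Analysis illustrates that step; it assumes NumPy, treats each configuration as a (k, 3) array, and is a generic textbook formulation rather than the specific registration used in any of the cited studies.

```python
import numpy as np

def align(source, target):
    """Optimally rotate `source` onto `target` (both centred, unit-scaled)."""
    u, _, vt = np.linalg.svd(source.T @ target)
    r = u @ vt
    if np.linalg.det(r) < 0:                  # avoid reflections
        u[:, -1] *= -1
        r = u @ vt
    return source @ r

def gpa(configs, iters=10):
    """Generalized Procrustes Analysis of a set of (k, 3) landmark configurations.

    Removes translation, scale and rotation, returning aligned shapes and the mean.
    """
    shapes = []
    for c in configs:
        c = c - c.mean(axis=0)                # remove translation
        shapes.append(c / np.linalg.norm(c))  # remove scale (unit centroid size)
    mean = shapes[0]
    for _ in range(iters):
        shapes = [align(s, mean) for s in shapes]
        mean = np.mean(shapes, axis=0)
        mean /= np.linalg.norm(mean)
    return np.array(shapes), mean
```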

Landmarks were described by LF Marcus, et al. as a set of points, one on each form, that are operationally defined on an individual by local anatomical features and must be consistent with some hypothesis of biological homology.

Formal landmark definitions, however, were provided by the anthropometric studies in [ 16 ]. This work by LG Farkas [ 16 ] has served as the standard for head and face landmark definitions through the study of thousands of subjects from different races.

These definitions have produced a large number of anthropometric studies of the head and face regions. Warping a template onto a target form ensures that the corresponding points of the starting and target forms appear precisely in corresponding positions in relation to the transformed and untransformed grids [ 18 ].
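For illustration, the sketch below warps all vertices of a template mesh so that its landmarks land exactly on the target's landmarks, using SciPy's radial basis function interpolator with a thin-plate-spline kernel. The function name and the toy data are assumptions; this is a generic interpolation-based warp, not necessarily the exact deformation model of [ 18 ] or of the present study.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def warp_template(template_landmarks, target_landmarks, template_vertices):
    """Warp all vertices of a template mesh onto a target scan.

    The mapping is a radial basis function interpolant with a thin-plate-spline
    kernel, constrained so that each template landmark lands exactly on its
    corresponding target landmark; all other vertices follow smoothly.
    """
    warp = RBFInterpolator(template_landmarks, target_landmarks,
                           kernel='thin_plate_spline', smoothing=0.0)
    return warp(template_vertices)

# Hypothetical usage: 30 corresponding landmarks and a 5000-vertex template mesh.
rng = np.random.default_rng(1)
src = rng.normal(size=(30, 3))
dst = src + 0.1 * rng.normal(size=(30, 3))
mesh = rng.normal(size=(5000, 3))
warped = warp_template(src, dst, mesh)
```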

With the application of the Iterative Closest Point (ICP) algorithm, landmark correspondences can be registered iteratively in the vicinity of a landmark with a re-weighted error function. Morphometrically, several studies have computed localization errors of facial landmarks on the Bosphorus dataset, in which each sample contains 24 manually annotated facial landmarks.
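The following is a minimal sketch of the basic ICP loop referred to above: nearest-neighbour matching followed by a closed-form rigid update, without the re-weighted error function. It assumes NumPy and SciPy and is meant only to illustrate the idea.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Rigidly register `source` points onto `target` with basic ICP.

    Each iteration matches every source point to its nearest target point and
    solves for the best rotation/translation in closed form (Kabsch / SVD).
    """
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)               # nearest-neighbour correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        h = (src - mu_s).T @ (matched - mu_t)  # cross-covariance
        u, _, vt = np.linalg.svd(h)
        r = vt.T @ u.T
        if np.linalg.det(r) < 0:               # guard against reflections
            vt[-1, :] *= -1
            r = vt.T @ u.T
        t = mu_t - r @ mu_s
        src = src @ r.T + t                    # apply the rigid update
    return src
```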

One such model, a PDM, includes 33 landmarks, 14 of which are part of the ground-truth set tested on the Bosphorus database. An automatic method for facial landmark localization relying on geometrical properties of the 3D facial surface was proposed in [ 20 ], working on complete faces displaying different emotions and in the presence of occlusions. The method extracts the landmarks one by one. While the geometrical conditions remain unchanged, the method double-checks whether the pronasale, nasion and alare are correctly localized; otherwise the process starts afresh.

The method is deterministic and is underpinned by a thresholding technique designed by studying the behavior of each geometrical descriptor at the locus of each landmark; it was evaluated on the Bosphorus database.

The sliding semi-landmark approach generates landmarks that are spatially homologous after sliding [ 23 ], which may be optimized by minimizing bending energy [ 24 , 25 ] or Procrustes distance [ 26 , 27 ]. Since sliding semi-landmarks have not been applied to analysing soft-tissue facial expression in 3D, we investigate expression recognition using the multi-points warping approach.
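A single relaxation step of the Procrustes-distance variant can be sketched as follows: each semi-landmark moves only along its tangent direction, by the projection of its residual to the template onto that tangent. This is a simplified illustration assuming NumPy and already-aligned configurations; a bending-energy criterion would couple the points through a joint linear solve, and in practice the slid points are re-projected onto the mesh surface.

```python
import numpy as np

def slide_semilandmarks(points, tangents, template, max_step=2.0):
    """One relaxation step: slide each semi-landmark along its tangent direction.

    points, template : (k, 3) corresponding semi-landmarks on the target and the
                       template (assumed already Procrustes-aligned).
    tangents         : (k, 3) unit tangent vectors of the curve/surface at each point.
    """
    residual = template - points
    # Amount to move along each tangent = projection of the residual onto it.
    step = np.einsum('ij,ij->i', residual, tangents)
    step = np.clip(step, -max_step, max_step)   # keep the move locally valid
    return points + step[:, None] * tangents
```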

Emotion or expression recognition from facial analysis is a current trend in computer vision, but the diversity of human facial expressions makes emotion recognition difficult [ 28 ]. Moreover, aside from lighting challenges, the fairly significant differences in age, skin colour and appearance between individuals place an additional burden on machine learning. Once face subjects are transformed into feature vectors, any classifier can be used for expression recognition, such as a neural network, support vector machine, random forest or linear discriminant analysis.

What distinguishes the task is the use of facial image information [ 29 ]. Because of sensitivity to changes in head pose and illumination, static 2D images are unstable for expression recognition. The use of 3D is not only safer with respect to illumination and pose change but also enables the use of more image information.

This is because facial expressions are generated by facial muscle contractions, which result in temporary facial deformations in both texture and facial geometry that are detectable in 3D and 4D [ 30 ]. The successes achieved in 3D face recognition can naturally be adopted for expression recognition [ 31 ]. According to M Pantic and LJ Rothkrantz [ 32 ], in their work on facial expression analysers, facial expression analysis follows the general pipeline for solving computer vision problems: face detection, landmark localisation, and recognition or classification.

As 3D databases become more and more available in the computer vision community, different methods are being proposed to tackle the challenges facing facial expression recognition. Most of these studies are based on six fundamental expression classes or fewer: anger, fear, disgust, sadness, happiness, and surprise [ 33 ]. Many also focus on the use of local features, which capture the topological and geometrical properties of the facial expression [ 29 , 34 ].

Linear discriminant analysis and many other classifiers have been used for classification in facial expression recognition. A method that learns sparse features from spatio-temporal local cuboids extracted from the human face was proposed in [ 35 ], applying conditional random field classifiers to train and test the model. H Tang and TS Huang [ 36 ] explored distance features using an automatic feature selection technique, maximizing the average relative entropy of marginalized class-conditional feature distributions.

Using 83 landmarks, fewer than 30 features were selected. The distance features of the neutral scan are subtracted from those of the expressive scan, and the result is classified with Naive Bayes, a neural network and linear discriminant analysis on the BU-3DFE dataset. To approximate the continuous surface at each vertex of an input mesh, J Wang, et al. [ 6 ] proposed cubic-order polynomial functions.

The coefficients estimated at a particular vertex form the Weingarten matrix of the local surface patch, and the eigenvectors and eigenvalues of this matrix give the principal curvature directions and curvatures. The facial region was described using 64 landmarks to overcome the lack of correspondence between the meshes.
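To make the curvature computation concrete, the sketch below fits a local patch (quadratic rather than cubic, for brevity) around a vertex expressed in its local frame and reads the principal curvatures off the Weingarten matrix. It assumes NumPy and is a generic differential-geometry recipe, not the exact procedure of [ 6 ].

```python
import numpy as np

def principal_curvatures(neighbors):
    """Estimate principal curvatures at a vertex from its neighbourhood.

    neighbors : (n, 3) neighbour coordinates (n >= 5) expressed in a local frame
                whose origin is the vertex and whose z-axis is the surface normal.
    A quadratic patch z = ax^2 + bxy + cy^2 + dx + ey is fitted by least squares;
    the Weingarten matrix of that patch at the origin yields the curvatures
    through its eigenvalues.
    """
    x, y, z = neighbors[:, 0], neighbors[:, 1], neighbors[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y])
    (a, b, c, d, e), *_ = np.linalg.lstsq(A, z, rcond=None)
    fx, fy, fxx, fxy, fyy = d, e, 2 * a, b, 2 * c
    I = np.array([[1 + fx * fx, fx * fy],
                  [fx * fy, 1 + fy * fy]])                        # first fundamental form
    II = np.array([[fxx, fxy],
                   [fxy, fyy]]) / np.sqrt(1 + fx * fx + fy * fy)  # second fundamental form
    W = np.linalg.solve(I, II)                                    # Weingarten (shape) operator
    k1, k2 = np.linalg.eigvals(W).real
    return k1, k2
```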

Their best performance was obtained using LDA; no rigid transformation is required because of the geometrical invariance of curvature-based features. To deal with the deformation of facial geometry that results from expression changes, C Li and A Barreto [ 37 ] proposed a framework composed of three subsystems: an expressional face recognition system, a neutral face recognition system and an expression recognition system.

This was tested on 30 subjects and classified using LDA, but with only two expression groups. H Li, et al. used a multi-task sparse representation algorithm to account for the average reconstruction error of probe face descriptors.

The approach was evaluated on the Bosphorus database for expression recognition, pose invariance and occlusion. The method captures distinguishing traits of the face by extracting 3D key-points, and expression similarity between faces is evaluated by comparing local shape descriptors across inlier pairs of matching key-points between gallery and probe scans.

The method was evaluated on six databases, including Bosphorus, and achieved promising results on occlusions, pose variation and expressions. A 3D face augmentation technique that synthesizes a number of different facial expressions from a single 3D face scan was proposed in [ 41 ]. A novel geometric framework for analysing 3D faces, with the goals of comparing, matching and averaging face shapes, was proposed in [ 42 ].

Furthermore, W Hariri, et al. addressed the shortcomings of the 2D counterpart and the handling of the large intra-class and inter-class variability of human facial expression.

Bosphorus Database for 3D Face Analysis

A new 3D face database that includes a rich set of expressions, systematic variation of poses and different types of occlusions is presented in this paper. Hence, this new database can be a very valuable resource for the development and evaluation of algorithms for face recognition under adverse conditions and for facial expression analysis, as well as for facial expression synthesis.
