Face recognition using Multi-Dimensional Scaling (MDS) has a fairly long history. The underlying idea is that facial expressions deform the surface of the face approximately isometrically, a view also explored by others [9], so their effect can be removed by learning how they act on a flattened version of the surface. The surface of the face is mapped to a Canonical Form using MDS such that geodesic distances between surface points are preserved. This removes the influence of expressions on the surface in a different way than, for example, PCA. An extension of this work is known as Generalised Multi-Dimensional Scaling (GMDS). Bronstein et al. used variants of this non-rigid approach to tackle the face recognition problem, whereas many others rely on rigid methods that preserve the geometry of the faces. GMDS has also been applied to a wide range of other problems, including deformation-invariant comparison, partial similarity of deformable shapes, and correspondence between deformable shapes.
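To make the canonical-form step concrete, the following is a minimal sketch (in Python with NumPy, not the implementation used by Bronstein et al.) of classical MDS applied to a matrix of pairwise geodesic distances sampled on the facial surface. The function and variable names are our own, and in practice the distance matrix would come from a fast-marching computation on the facial mesh rather than the stand-in used here.
\begin{verbatim}
import numpy as np

def canonical_form(D, dim=3):
    """Classical MDS embedding of an n x n geodesic distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n             # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                     # double-centred squared distances
    eigvals, eigvecs = np.linalg.eigh(B)            # eigendecomposition (ascending)
    order = np.argsort(eigvals)[::-1][:dim]         # keep the largest eigenvalues
    scale = np.sqrt(np.maximum(eigvals[order], 0))  # guard against small negatives
    return eigvecs[:, order] * scale                # n x dim canonical coordinates

# Illustration only: D would normally come from fast marching on the mesh.
X = np.random.rand(100, 3)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = canonical_form(D)   # Euclidean distances in Y approximate those in D
\end{verbatim}
Recognition can then proceed by comparing the canonical forms rigidly, since the expression-induced (near-isometric) deformations have largely been factored out.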
In contrast to the work of Bronstein et al., Faisal R. Al-Osaimi et al. [1] extend ICP-based work to account for expressions and annul their effect. They propose ``An Expression Deformation Approach'' in which PCA learns the shape variation induced by expressions, and each 3-D face is then morphed onto an expression-neutral equivalent. Good results are demonstrated on the FRGC (v2.0) dataset, but some of the data used in these experiments is proprietary and we are not allowed access to it.
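The following is a much-simplified sketch of this idea, under our own assumptions rather than the published algorithm: PCA is fitted to expression-induced residuals (an expressive scan minus the same subject's neutral scan, both resampled to a common grid), and a probe is approximately neutralised by removing the component of its deviation that lies in that subspace. All names, array shapes and the number of components are illustrative.
\begin{verbatim}
import numpy as np

def fit_expression_subspace(residuals, n_components=20):
    # residuals: (n_samples, n_points*3) deformation vectors, e.g.
    # expressive scan minus the same subject's neutral scan.
    mean_def = residuals.mean(axis=0)
    U, S, Vt = np.linalg.svd(residuals - mean_def, full_matrices=False)
    return mean_def, Vt[:n_components]          # mean deformation + PCA basis

def neutralise(probe, neutral_reference, mean_def, basis):
    # neutral_reference could be, e.g., a mean neutral face on the same grid.
    deviation = probe - neutral_reference
    coeffs = basis @ (deviation - mean_def)     # project onto expression modes
    expression_part = mean_def + basis.T @ coeffs
    return probe - expression_part              # approximately expression-neutral
\end{verbatim}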
The figure depicts the kinds of 3-D deformations used to alter expressions in a consistent fashion. Using similar tools and methodologies, ear-based verification of identity is demonstrated in [8]. In our work we aim to study these methodologies and compare them against GMDS.