
Modelling

Generating faces with different expressions should be possible in both 3-D and 2-D, e.g. for testing the method. When defining an experiment, the group should be able to make a set of galleries available (a few subjects with multiple expressions each, for example).

We now have a working implementation of ICP. Having spent a reasonable amount of time browsing the FRGC galleries, it was hard to find any non-neutral expressions; that is probably by design. The first experiment currently being implemented requires pairs of scans of the same individual, with and without a smile (or some other expression). ICP can then be applied and, given enough shape residuals from the point clouds (PCA needs at least around 100 instances as a very low bound before the eigenvectors become meaningful), it ought to be possible to build a model, synthesise new shapes from it, and animate them (videos are fun).

However, although all the pairing code is in place, there is no suitable data yet and the implementation is too slow, especially the preprocessing. It is harder than one would imagine to reliably remove all the holes, tears, folds and spikes from every single dataset, including unseen ones (almost 5,000 of them). Once the algorithm runs sensibly, without any need to backtrack and refine, making 'offline' copies of the reduced sets should be a high priority; that way they can be pulled from file and used 'on the fly', just like models whose covariance matrix can be loaded instantaneously from file. A reasonable path forward is to look at some example data, analyse accuracy, consistency and robustness, and then prepare all the data for large experiments.
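The pairwise alignment step could be sketched roughly as below. This is a generic point-to-point ICP with brute-force nearest-neighbour matching and the Kabsch solution for the rigid transform, not the actual implementation referred to above; function and variable names are illustrative, and a real version would use a k-d tree for matching.

```python
import numpy as np

def icp(source, target, iters=20):
    """Minimal point-to-point ICP sketch: align `source` (N,3) to `target` (M,3).

    Brute-force O(N*M) matching; a spatial index would be used in practice.
    """
    src = source.copy()
    for _ in range(iters):
        # Nearest-neighbour correspondences from source to target.
        d = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d.argmin(axis=1)]
        # Optimal rigid transform for these correspondences (Kabsch/SVD).
        mu_s, mu_t = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
        R = (U @ Vt).T
        if np.linalg.det(R) < 0:  # guard against a reflection
            Vt[-1] *= -1
            R = (U @ Vt).T
        src = (src - mu_s) @ R.T + mu_t
    return src
```

After alignment, the per-point differences between the two clouds of a pair are the shape residuals that would feed the model.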
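The model-building and synthesis steps (given enough aligned instances) amount to standard PCA; the sketch below is a generic version under the assumption that each aligned point cloud is flattened into one row, with names chosen purely for illustration.

```python
import numpy as np

def build_shape_model(shapes):
    """PCA shape model sketch. `shapes` is (n_samples, n_points*3), one
    flattened, ICP-aligned point cloud per row."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # SVD of the centred data gives the eigenvectors of the covariance
    # matrix without forming it explicitly.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    eigvals = s ** 2 / (len(shapes) - 1)
    return mean, Vt, eigvals  # modes are the rows of Vt, largest first

def synthesise(mean, Vt, eigvals, b):
    """Generate a shape from the first len(b) modes; `b` holds mode
    weights in units of standard deviations."""
    k = len(b)
    return mean + (b * np.sqrt(eigvals[:k])) @ Vt[:k]
```

Sweeping a single entry of `b` between, say, -3 and +3 and rendering each synthesised shape is one way to produce the animations mentioned above.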
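The 'offline copies' idea, precompute once and reload instantly rather than re-running the slow preprocessing, could be as simple as the following; the file layout and array names here are hypothetical.

```python
import numpy as np

def save_model(path, mean, modes, eigvals):
    """Write a precomputed model to disk in one compressed archive."""
    np.savez_compressed(path, mean=mean, modes=modes, eigvals=eigvals)

def load_model(path):
    """Reload a saved model 'on the fly', avoiding any recomputation."""
    d = np.load(path)
    return d["mean"], d["modes"], d["eigvals"]
```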



Roy Schestowitz 2012-01-08