The framework for the first experiment - the one which builds the
expressions model - is now more or less complete, although some
deficiencies remain. Among the problems which still need to be overcome:
- Despite compensating for GIP/FRGC anomalies such as distance units,
scale, noise levels, etc. (support for both datasets is needed for
future experiments, where one dataset is used to build a model and
another to validate it before assessment against an unseen dataset),
the hair in the GIP dataset still poses a difficulty. Although the
vaguely-described nose tip identification method was implemented (it
locates a peak by slicing the image horizontally, then assesses each
tip candidate by measuring the intersection of a perpendicular line
with a sphere), it remains hard to find the nose consistently without
false positives. Smoothing and other filters - median-based for the
most part - are very localised, so they cannot reliably eliminate
false signals which resemble a nose; and lacking reliable nose
recognition, ICP rigidly/affinely registers non-correspondent parts.
For non-rigid methods which take the dense data into account in its
entirety see, e.g. [,,].
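The slice-and-sphere idea above could be sketched roughly as follows.
This is only a minimal illustration, not the actual implementation:
the function name, the fixed sphere radius, the window size, and the
assumption that the sphere centre sits one radius behind each
candidate peak are all mine, introduced for the sketch.

```python
import numpy as np

def nose_tip_candidates(depth, sphere_radius=8.0, window=5):
    """Find a nose-tip candidate in a depth image (rows x cols of z).

    Each horizontal slice (row) contributes its z-peak as a candidate;
    candidates are then scored by how closely the local surface matches
    a sphere of the given radius centred one radius behind the peak
    (the nose tip is roughly spherical). Returns (row, col) of the
    best-scoring candidate.
    """
    h, w = depth.shape
    best, best_err = None, np.inf
    for r in range(h):
        c = int(np.argmax(depth[r]))              # peak of this slice
        r0, r1 = max(0, r - window), min(h, r + window + 1)
        c0, c1 = max(0, c - window), min(w, c + window + 1)
        ys, xs = np.mgrid[r0:r1, c0:c1]
        cz = depth[r, c] - sphere_radius          # assumed sphere centre
        d = np.sqrt((ys - r) ** 2 + (xs - c) ** 2
                    + (depth[r0:r1, c0:c1] - cz) ** 2)
        err = np.mean((d - sphere_radius) ** 2)   # deviation from sphere
        if err < best_err:
            best, best_err = (r, c), err
    return best

# synthetic check: a hemispherical bump (a "nose") on a flat face
yy, xx = np.mgrid[0:64, 0:64]
R = 8.0
depth = np.sqrt(np.maximum(0.0, R**2 - (yy - 30)**2 - (xx - 40)**2))
print(nose_tip_candidates(depth, sphere_radius=R))  # → (30, 40)
```

On clean synthetic data this recovers the bump apex; the difficulty
described above is precisely that hair and noise produce slice peaks
whose local surface also scores well against the sphere.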
- The loaders of the GIP datasets extract what appear to be unaligned
sequences of images, where one image does not overlap its predecessor
because the stream is imperfectly sliced. To annul this effect, and to
avoid cruft which causes misclassifications, additional preparatory
steps are necessary. These steps are intended to remove distractions
and biases: a rough estimate of the scale inside the image is needed
(for parameter setting), not just the nose location (otherwise
cropping might fail). It is clear that the majority of the time so far
has been spent dealing with these issues rather than the more novel
parts, notably decomposition and expression expression (not a typo) in
a lower-dimensional space.
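The scale-before-crop point could look something like the sketch
below. Again this is a hypothetical illustration, not the project's
code: the function name, the foreground test against a flat
background, and the crop fraction are all assumptions of mine.

```python
import numpy as np

def crop_about_nose(depth, nose_rc, face_frac=0.5, background=0.0):
    """Crop a depth image around a known nose location, sizing the
    crop from a rough in-image scale estimate rather than a fixed
    pixel count.

    The scale guess is the bounding-box extent of the foreground
    (pixels above the background level); the crop is a square of
    face_frac times that extent, centred on the nose. A fixed-size
    crop would fail whenever images differ in scale.
    """
    fg = np.argwhere(depth > background)
    extent = int((fg.max(axis=0) - fg.min(axis=0)).max())  # scale guess
    half = max(1, int(face_frac * extent) // 2)
    r, c = nose_rc
    h, w = depth.shape
    return depth[max(0, r - half):min(h, r + half),
                 max(0, c - half):min(w, c + half)]

# synthetic check: a foreground blob of extent 40 on a 100x100 image
img = np.zeros((100, 100))
img[30:71, 20:61] = 1.0
patch = crop_about_nose(img, nose_rc=(50, 40), face_frac=0.5)
print(patch.shape)  # → (20, 20)
```

The crop size tracks the foreground extent, so the same parameters
can serve images of different scales, which is the motivation stated
above.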
Roy Schestowitz
2012-01-08