A Generic Method for Evaluating Appearance Models
First Author
Institution1
Institution1 address
firstauthor@i1.org
Second Author
Institution2
First line of institution2 address
http://www.author.org/~second
Abstract:
Generative models of appearance have been studied extensively as a
basis for image interpretation by synthesis. Typically, these
models are learnt from sets of training images and are statistical
in nature. Different methods of representation and training have
been proposed, but little attention has been paid to evaluating
the resulting models. We propose a method of evaluation that is
independent of the form of model, relying only on the generative
property. We define the specificity and
generalisation ability of a model in terms of distances
between synthetic images generated by the model and those in the
training set. We validate the approach, using Active Appearance
Models (AAMs) of face and brain images, and show that specificity
and generalisation degrade monotonically as the models are
progressively degraded. We compare two different inter-image
distance metrics, and show that shuffle distance performs
better than Euclidean distance. We then compare three
different automatic methods of constructing appearance models, and
show that we can detect significant differences between them.
Finally, we contend that model construction is closely related to the task of non-rigid registration: the former requires a correspondence across the training images, whereas the latter seeks to establish that correspondence. We then compare our method against an evaluation based on ground-truth correspondence, and show that the two are in close agreement.
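As a concrete sketch of the definitions above (the exact normalisation is our assumption, not fixed by the abstract): given training images I_a, a = 1, ..., N, a set of synthetic images I_mu, mu = 1, ..., M, sampled from the model, and an inter-image distance d(., .),

\[
\mathrm{Specificity} = \frac{1}{M}\sum_{\mu=1}^{M}\min_{a}\, d(I_\mu, I_a),
\qquad
\mathrm{Generalisation} = \frac{1}{N}\sum_{a=1}^{N}\min_{\mu}\, d(I_a, I_\mu).
\]

Lower values are better in both cases: a specific model generates only images that resemble some training example, while a model that generalises well can approximate every training image.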
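The inter-image distance d(., .) can be Euclidean or, as favoured above, a shuffle distance. The following Python sketch shows one common form of shuffle distance for equal-sized grayscale images; the neighbourhood radius and the averaging over pixels are illustrative assumptions rather than details taken from the paper.

import numpy as np

def shuffle_distance(a, b, radius=2):
    # For each pixel of `a`, take the minimum absolute difference to `b`
    # over a (2*radius + 1)^2 neighbourhood, then average over all pixels.
    # `radius` is an assumed parameter; radius=0 reduces to the mean
    # absolute pixel difference.
    assert a.shape == b.shape
    h, w = a.shape
    padded = np.pad(b, radius, mode='edge')  # handle image borders
    min_diff = np.full(a.shape, np.inf)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy : radius + dy + h,
                             radius + dx : radius + dx + w]
            min_diff = np.minimum(min_diff, np.abs(a - shifted))
    return float(min_diff.mean())

# Example (random data, purely illustrative):
# a = np.random.rand(64, 64); b = np.random.rand(64, 64)
# print(shuffle_distance(a, b, radius=2))

Because the minimum is taken over a small neighbourhood, the measure tolerates small misalignments between images, which is why it can behave better than a plain Euclidean distance for this purpose.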
Roy Schestowitz
2005-11-17