
Specificity and Generalisation

[Figure: evaluation_framework.eps]

Fig. 4. The framework of model evaluation: a model is constructed from the training set and images are generated from the model. Each image is vectorised and can then be visualised as a point in a cloud in hyperspace. The rationale behind Generalisation and Specificity becomes clearer in this context, where the overlap of the two clouds can be measured directly.

A valuable approach to the assessment of the combined model involves generating images from that model and then computing how well they fit the training data, and vice versa. Images are created by taking the mean image from the model and selecting random values for the parameters $\mathbf{c}$, which enables us to generate a large number of synthetic reconstructions from the model.
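As a minimal sketch, the sampling step can be written down for a linear model of the usual form $\mathbf{x} = \bar{\mathbf{x}} + \mathbf{P}\mathbf{c}$, where the mean image, modes of variation and per-mode variances are all placeholders here (the report does not specify the model's dimensions, so the numbers below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model quantities (dimensions chosen for illustration):
n_pixels, n_modes = 100, 5
mean = rng.normal(size=n_pixels)               # mean image, vectorised
modes = rng.normal(size=(n_pixels, n_modes))   # columns: modes of variation P
variances = np.linspace(5.0, 1.0, n_modes)     # variance of each parameter c_i

def sample_image(rng):
    """Draw random parameters c (here Gaussian, scaled by the per-mode
    standard deviations) and reconstruct a synthetic image from the model."""
    c = rng.normal(scale=np.sqrt(variances))
    return mean + modes @ c

# Generate a large set of synthetic reconstructions from the model.
synthetic = np.array([sample_image(rng) for _ in range(1000)])
print(synthetic.shape)  # one row per synthetic image
```

Each row of `synthetic` is one vectorised image, i.e. one point in the cloud of Fig. 4.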

As the correspondences from which the model is built degrade (equivalent to mis-registration), the resulting model becomes less capable of reconstructing images of the same type. It becomes less able to generate valid images that are absent from the training set, and it struggles to synthesise only new images similar to those in the training set. We refer to the former property as Generalisation and to the latter as Specificity; together they 'tighten' the model to the data it represents. If we embed the training images and the model-synthesised images in a very high-dimensional space, then we seek a model, represented by a cloud of synthetic images, that best overlaps the cloud formed by the model's training set (see Fig. 4).
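One common way to quantify the overlap of the two clouds, sketched here as an assumption since the report does not give the exact formulae at this point, is via nearest-neighbour distances between the synthetic and training images (both as rows of vectorised images):

```python
import numpy as np

def specificity(synthetic, training):
    """Mean distance from each synthetic image to its nearest training
    image: large values mean the model generates implausible images."""
    d = np.linalg.norm(synthetic[:, None, :] - training[None, :, :], axis=2)
    return d.min(axis=1).mean()

def generalisation(synthetic, training):
    """Mean distance from each training image to its nearest synthetic
    image: large values mean the model cannot represent valid images."""
    d = np.linalg.norm(training[:, None, :] - synthetic[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Toy 2-D illustration: identical clouds overlap perfectly.
training = np.array([[0.0, 0.0], [1.0, 1.0]])
print(specificity(training, training))     # 0.0
print(generalisation(training, training))  # 0.0
```

Both measures fall to zero when the two clouds coincide, which matches the intuition of Fig. 4 that a good model's synthetic cloud should overlap the training cloud as closely as possible.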


Roy Schestowitz, September 7, 2005