

Statistical Models of Appearance

Our approach to ground-truth-free evaluation of NRR depends on the ability, given a set of registered images, to construct a generative statistical model of appearance. We have adopted the approach of Cootes et al. [9,10], who introduced models that capture variation in both shape and texture (in the graphics sense). These models have been used extensively in medical image analysis, for example in brain morphometry and cardiac time-series analysis [11,12,13]. Other approaches to appearance modelling could also be considered, since we rely only on the generative property of such models in this application.

Figure: The effect of varying the first (top row), second, and third parameters of a brain appearance model between plus and minus a fixed number of standard deviations

The key requirement in building an appearance model from a set of images is the existence of a dense correspondence across the set. This is often defined by interpolating between the correspondences of a limited number of user-defined landmarks. Shape variation is then represented in terms of the motions of these sets of landmark points. Using the notation of Cootes et al. [9], the shape (configuration of landmark points) of a single example can be represented as a vector $\mathbf{x}$, formed by concatenating the coordinates of all the landmark points for that example. The texture is represented by a vector $\mathbf{g}$, formed by concatenating image values (texture) sampled over a regular grid on the registered image. This means that a given element of $\mathbf{g}$ is sampled from an equivalent point in each image, assuming the registration is correct.
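
As a concrete illustration of how these vectors might be assembled, the following Python sketch builds $\mathbf{x}$ by concatenating landmark coordinates and $\mathbf{g}$ by sampling a registered image over a regular grid. The function names and the nearest-pixel sampling are assumptions made for brevity, not details of the authors' implementation.

    import numpy as np

    def shape_vector(landmarks):
        """Concatenate (x, y) landmark coordinates into a single shape vector.

        landmarks : array of shape (n_points, 2) in a common reference frame.
        Returns a vector of length 2 * n_points.
        """
        return np.asarray(landmarks, dtype=float).ravel()

    def texture_vector(image, grid_points):
        """Sample intensities over a regular grid defined in the registered frame.

        image       : 2D array of intensities (one registered image).
        grid_points : array of shape (n_samples, 2) of (row, col) positions.
        Nearest-pixel sampling is used here for simplicity; interpolation
        could be substituted.
        """
        rows = np.clip(np.round(grid_points[:, 0]).astype(int), 0, image.shape[0] - 1)
        cols = np.clip(np.round(grid_points[:, 1]).astype(int), 0, image.shape[1] - 1)
        return image[rows, cols].astype(float)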

In the simplest case, we model the variation of shape and texture in terms of multivariate Gaussian distributions, using Principal Component Analysis (PCA) [15] to obtain linear statistical models of the form:

$\mathbf{x} = \bar{\mathbf{x}} + \mathbf{P}_s \mathbf{b}_s$   (2)

$\mathbf{g} = \bar{\mathbf{g}} + \mathbf{P}_g \mathbf{b}_g$   (3)

where $\mathbf{b}_s$ are the shape parameters, $\mathbf{b}_g$ are the texture parameters, $\bar{\mathbf{x}}$ and $\bar{\mathbf{g}}$ are the mean shape and texture, and $\mathbf{P}_s$ and $\mathbf{P}_g$ are the principal modes of shape and texture variation, respectively.
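
To make the construction concrete, the sketch below fits a linear model of this form to a set of training vectors via PCA (an SVD of the centred data matrix). The function name, the variance-retained threshold, and the use of SVD are illustrative assumptions rather than details of the implementation in [9].

    import numpy as np

    def build_linear_model(X, var_retained=0.98):
        """Fit a linear PCA model  v = v_bar + P b  to a set of example vectors.

        X : array of shape (n_examples, n_dims), one shape (or texture)
            vector per row.
        Returns (v_bar, P, b): the mean vector, the principal modes (columns
        of P), and the per-example model parameters b.
        """
        v_bar = X.mean(axis=0)
        D = X - v_bar
        # SVD of the centred data matrix yields the principal components.
        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        var = s ** 2 / (X.shape[0] - 1)            # variance explained per mode
        frac = np.cumsum(var) / var.sum()
        k = int(np.searchsorted(frac, var_retained)) + 1
        P = Vt[:k].T                               # modes of variation
        b = D @ P                                  # parameters of each example
        return v_bar, P, b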

In generative mode, the input shape ($\mathbf{b}_s$) and texture ($\mathbf{b}_g$) parameters can be varied continuously, allowing the generation of sets of images whose statistical distribution matches that of the training set.
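
A hedged sketch of this generative use is given below: parameters are drawn from a zero-mean Gaussian with the per-mode standard deviations observed in the training set, and new vectors are synthesised as $\bar{\mathbf{x}} + \mathbf{P}\mathbf{b}$. The sampling scheme shown is one simple choice, not necessarily the one used by the authors.

    import numpy as np

    def sample_instances(v_bar, P, b_train, n_samples=5, seed=None):
        """Draw new vectors whose parameter distribution matches the training set.

        Each mode parameter is sampled from a zero-mean Gaussian with the
        standard deviation observed in b_train (axis-aligned approximation).
        """
        rng = np.random.default_rng(seed)
        sd = b_train.std(axis=0)                   # per-mode standard deviation
        b_new = rng.normal(0.0, sd, size=(n_samples, sd.size))
        return v_bar + b_new @ P.T                 # v = v_bar + P b, one row per sample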

In many cases, the variations of shape and texture are correlated. If this correlation is taken into account, we obtain a combined statistical model of the more general form:

$\mathbf{x} = \bar{\mathbf{x}} + \mathbf{Q}_s \mathbf{c}$   (4)

$\mathbf{g} = \bar{\mathbf{g}} + \mathbf{Q}_g \mathbf{c}$   (5)

where the model parameters $\mathbf{c}$ control both shape and texture, and $\mathbf{Q}_s$, $\mathbf{Q}_g$ are matrices describing the general modes of variation derived from the training set. The effect of varying different elements of $\mathbf{c}$ for a model built from a set of 2D MR brain images is shown in the figure above. The number of modes (columns) in $\mathbf{Q}_s$ and $\mathbf{Q}_g$ is one less than the number of training images. In practice, it is often possible to approximate the images well using fewer modes.
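
The sketch below illustrates one common way of obtaining such a combined model: the per-example shape and texture parameters are concatenated (with a scalar weight balancing their variances) and a second PCA is applied. The weighting heuristic shown is an assumption; [9] should be consulted for the exact formulation.

    import numpy as np

    def build_combined_model(b_s, b_g, var_retained=0.98):
        """Second PCA over concatenated (weighted) shape and texture parameters.

        b_s : (n_examples, n_shape_modes) shape parameters.
        b_g : (n_examples, n_texture_modes) texture parameters.
        The scalar weight w balances the overall variance of the shape block
        against the texture block -- a common heuristic, assumed here.
        """
        w = np.sqrt(b_g.var(axis=0).sum() / b_s.var(axis=0).sum())
        B = np.hstack([w * b_s, b_g])              # combined parameter vectors
        D = B - B.mean(axis=0)                     # mean is ~0 by construction
        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        frac = np.cumsum(s ** 2) / np.sum(s ** 2)
        k = int(np.searchsorted(frac, var_retained)) + 1
        Q = Vt[:k].T                               # combined modes of variation
        c = D @ Q                                  # combined parameters, one row per example
        return Q, c, w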

Generally, we wish to distinguish between the meaningful shape variation of the objects under consideration and the apparent variation in shape that is due to the positioning of the object within the image (the pose of the imaged object). To achieve this, the appearance model is built from an (affinely) aligned set of images. Point positions $\mathbf{X}$ in the original image frame are then obtained by applying the relevant pose transformation $T_{\mathbf{t}}$:

$\mathbf{X} = T_{\mathbf{t}}(\mathbf{x})$   (6)

where $\mathbf{x}$ are the points in the model frame, and $\mathbf{t}$ are the pose parameters. For example, in 2D, $T_{\mathbf{t}}$ could be a similarity transform with four parameters describing the translation, rotation and scale of the object.
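
As an illustration, a 2D similarity pose transformation with the four parameters mentioned above might be applied as follows; the parameterisation (translation, isotropic scale, rotation angle) is one common convention, assumed here.

    import numpy as np

    def apply_similarity(points, tx, ty, scale, theta):
        """Map model-frame points into the image frame with a 2D similarity
        transform: rotate by theta, scale isotropically, then translate.

        points : array of shape (n_points, 2) of (x, y) coordinates.
        """
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s],
                      [s,  c]])
        return scale * (points @ R.T) + np.array([tx, ty])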

In an analogous manner, we can also normalise the image set with respect to the mean and variance of the image intensities,

$\mathbf{g}_{\mathrm{im}} = T_{\mathbf{u}}(\mathbf{g})$   (7)

where $T_{\mathbf{u}}$ consists of a shift and scaling of the image intensities. For further implementation details, see [9,10].
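
A minimal sketch of such an intensity normalisation is given below, assuming a simple zero-mean, unit-variance scheme with its inverse shift-and-scale mapping; the exact normalisation used in [9,10] may differ in detail.

    import numpy as np

    def normalise_texture(g_im):
        """Normalise sampled intensities to zero mean and unit variance.

        Returns the normalised texture together with the scaling (alpha) and
        offset (beta) needed to invert the mapping.
        """
        alpha, beta = g_im.std(), g_im.mean()
        return (g_im - beta) / alpha, alpha, beta

    def denormalise_texture(g, alpha, beta):
        """Inverse mapping (cf. Eq. 7): scale and shift a normalised texture
        back to image intensities."""
        return alpha * g + beta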

As noted above, a meaningful, dense, groupwise correspondence is required before an appearance model can be built. NRR provides a natural method of obtaining such a correspondence, as noted by Frangi and Rueckert [11,12]. It is this link that forms the basis of our new approach to NRR evaluation.

The link between registration and modelling is further exploited in the Minimum Description Length (MDL) [16] approach to groupwise NRR, where modelling becomes an integral part of the registration process. This is one of the registration strategies evaluated in this paper.

