

Statistical Models of Appearance

There are many approaches to building statistical models of object appearance that encompass the underlying variation of both shape and texture. In particular, we use the generative appearance models introduced by Cootes et al. [Cootes_ECCV_1998, Edwards]. These models have been applied extensively in medical image analysis [Frangi, Rueckert, Stegmann], among other domains, including brain morphometry and the time-series analysis of cardiac data (e.g., [Stegmann_cardiac]).

The construction of such an appearance model from a set of images depends on the existence of a dense spatial correspondence across the set. In many manual or semi-automatic methods of model building, this dense correspondence is extrapolated and interpolated from the correspondence of a set of anatomically meaningful or user-defined landmark points. In the automatic method used here, the dense correspondence is given directly as the output of the NRR algorithm; the effective landmark positions are then as dense as the pixels/voxels of the registered images.

Figure: The effect of varying the first (top row), second, and third model parameters of a brain appearance model by a fixed number of standard deviations about the mean. [Figure image not reproduced in this version.]

In either case, the shape variation is represented in terms of the motions of these sets of landmark points. Using the notation of Cootes [Cootes_ECCV_1998], the shape (configuration of landmark points) of a single example can be represented as a vector $\mathbf{x}$, formed by concatenating the coordinates of the positions of all the landmark points for that example. The texture is represented by a vector $\mathbf{g}$, formed by concatenating the image values of the shape-free texture sampled from the image.

In the simplest case, we model the variation of shape and texture in terms of multivariate Gaussian distributions, using Principal Component Analysis (PCA) [pca_joliffe]. We hence obtain linear statistical models of the form:

$\mathbf{x} = \bar{\mathbf{x}} + \mathbf{P}_s \mathbf{b}_s$
$\mathbf{g} = \bar{\mathbf{g}} + \mathbf{P}_g \mathbf{b}_g$   (2)

where $\mathbf{b}_s$ are the shape parameters, $\mathbf{b}_g$ are the texture parameters, $\bar{\mathbf{x}}$ and $\bar{\mathbf{g}}$ are the mean shape and texture, and $\mathbf{P}_s$ and $\mathbf{P}_g$ are the principal modes of shape and texture variation respectively.
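The linear models of Eq. (2) can be sketched in a few lines of numpy. This is an illustrative sketch, not the original implementation; the function names and the variance-retained threshold are assumptions made here:

```python
import numpy as np

def build_pca_model(X, var_retained=0.98):
    """Fit a linear model x ~ x_mean + P b (cf. Eq. 2) to the rows of X.

    X: (n_samples, n_dims) matrix; each row is a concatenated shape
    (or texture) vector from one training example.
    """
    x_mean = X.mean(axis=0)
    # Principal components of the mean-centred data via SVD.
    U, s, Vt = np.linalg.svd(X - x_mean, full_matrices=False)
    var = s ** 2 / (len(X) - 1)            # per-mode variances (eigenvalues)
    # Keep the smallest number of modes explaining the requested variance.
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_retained)) + 1
    P = Vt[:k].T                           # modes as columns, shape (n_dims, k)
    return x_mean, P, var[:k]

def project(x, x_mean, P):
    """Recover model parameters b for an example x."""
    return P.T @ (x - x_mean)

def reconstruct(b, x_mean, P):
    """Generate a shape (or texture) instance from parameters b (Eq. 2)."""
    return x_mean + P @ b
```

With enough modes retained, projecting an example into the model and reconstructing it recovers that example up to the discarded variance.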

In generative mode, the shape ($\mathbf{b}_s$) and texture ($\mathbf{b}_g$) parameters can be varied continuously, allowing the generation of sets of images whose statistical distribution matches that of the model we have constructed.
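As a minimal sketch of this generative use, one can draw each parameter from a zero-mean Gaussian with its mode's variance and synthesise a new instance. The model quantities below are synthetic stand-ins, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model quantities standing in for a trained shape model:
x_mean = np.zeros(6)                           # mean shape: 3 2D landmarks, flattened
P = np.linalg.qr(rng.normal(size=(6, 2)))[0]   # 2 orthonormal modes of variation
lam = np.array([4.0, 1.0])                     # per-mode variances from PCA

# Draw shape parameters b ~ N(0, diag(lam)) and generate an instance (Eq. 2).
b = rng.normal(scale=np.sqrt(lam))
x_new = x_mean + P @ b
```

Repeating the draw yields a set of shapes whose distribution matches the model's Gaussian assumption.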

In many cases, the variations of shape and texture are correlated. If this correlation is taken into account, we then obtain a combined statistical model of the more general form:

$\mathbf{x} = \bar{\mathbf{x}} + \mathbf{Q}_s \mathbf{c}$
$\mathbf{g} = \bar{\mathbf{g}} + \mathbf{Q}_g \mathbf{c}$   (3)

where the model parameters $\mathbf{c}$ control both shape and texture, and $\mathbf{Q}_s$, $\mathbf{Q}_g$ are matrices describing the general modes of variation derived from the training set. The effect of varying one element of $\mathbf{c}$ for a model built from a set of 2D MR brain images is shown in the figure above.
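One common way to obtain such a combined model is a second PCA over the concatenated per-example shape and texture parameters, with a weighting to make shape and texture units commensurate. The sketch below assumes the usual choice of a scalar weighting; names and details are illustrative:

```python
import numpy as np

def build_combined_model(Bs, Bg, lam_s, lam_g):
    """Second PCA over concatenated shape/texture parameters (cf. Eq. 3).

    Bs: (n, ks) shape parameters per training example.
    Bg: (n, kg) texture parameters per training example.
    lam_s, lam_g: per-mode variances of the shape and texture models.

    A weighting balances shape against texture units; a common choice is
    W = r * I with r^2 = (total texture variance) / (total shape variance).
    """
    r = np.sqrt(lam_g.sum() / lam_s.sum())
    C = np.hstack([r * Bs, Bg])                    # combined parameter vectors
    U, s, Vt = np.linalg.svd(C - C.mean(axis=0), full_matrices=False)
    Q = Vt.T                                       # combined modes as columns
    lam_c = s ** 2 / (len(C) - 1)                  # combined-mode variances
    return Q, lam_c, r
```

The shape and texture parts $\mathbf{Q}_s$, $\mathbf{Q}_g$ of Eq. (3) then follow by partitioning the rows of the combined modes and undoing the weighting.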

In many cases, we wish to distinguish between the meaningful shape variation of the objects under consideration and the apparent variation in shape that is due to the positioning of the object within the image (the pose of the imaged object). In that case, the appearance model is generated from an (affinely) aligned set of images. Point positions $\mathbf{X}$ in the original image frame are then obtained by applying the relevant pose transformation $T_t$:

$\mathbf{X} = T_t(\mathbf{x})$   (4)

where $\mathbf{x}$ are the points in the model frame, and $\mathbf{t}$ are the pose parameters. For example, in 2D, $T_t$ could be a similarity transform with four parameters describing the translation, rotation, and scale of the object.
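Such a four-parameter 2D similarity transform, applied to a flattened landmark vector, might look like the following sketch (the argument order and names are assumptions made here):

```python
import numpy as np

def similarity_transform(x_model, tx, ty, scale, theta):
    """Apply a 2D similarity transform T_t (cf. Eq. 4) to model-frame points.

    x_model: flattened landmark vector (x1, y1, x2, y2, ...).
    t = (tx, ty, scale, theta): the four pose parameters - translation,
    isotropic scale, and rotation angle in radians.
    """
    pts = x_model.reshape(-1, 2)
    c, n = np.cos(theta), np.sin(theta)
    R = np.array([[c, -n],
                  [n,  c]])
    # Rotate and scale about the origin, then translate.
    return (scale * pts @ R.T + np.array([tx, ty])).ravel()
```

Applying the transform with $\mathbf{t} = (0, 0, 1, 0)$ leaves the model points unchanged, as expected of the identity pose.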

In an analogous manner, we can also normalise the image set with respect to the mean image intensity and variance,

$\mathbf{g}_{\mathrm{im}} = T_u(\mathbf{g})$   (5)

where $T_u$ consists of a shift and scaling of the image intensities.
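A minimal sketch of such an intensity normalisation, assuming the shift and scale are chosen to give each texture sample zero mean and unit variance:

```python
import numpy as np

def normalise_texture(g):
    """Normalise a texture vector to zero mean and unit variance.

    This is the inverse direction of T_u in Eq. (5): it maps raw sampled
    intensities into the normalised model frame by a shift and a scaling.
    """
    g = np.asarray(g, dtype=float)
    return (g - g.mean()) / g.std()
```

The forward map $T_u$ then simply scales by the original standard deviation and adds back the original mean.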

For further details regarding the exact implementation of appearance models, see [Cootes_ECCV_1998, Edwards].

As noted above, a meaningful dense groupwise correspondence is required before an appearance model can be built. One way to obtain such a correspondence is by extrapolating from expert annotation. However, this annotation process is extremely time-consuming and subjective, particularly for 3D data.

The output of groupwise NRR is exactly such a correspondence; hence it was a natural next step to build statistical models automatically using the results of NRR algorithms [Frangi, Rueckert].

This link between registration and modelling is exploited further in the Minimum Description Length (MDL) algorithm for non-rigid registration [IPMI_2005_ISBE], where modelling becomes an integral part of the registration process. This latter work provides one of the registration strategies used later in this paper.


Roy Schestowitz 2007-03-11