Statistical models of shape and appearance (combined appearance models) were introduced by Cootes, Edwards, Lanitis and Taylor [2,3,9]. They have been applied extensively in medical image analysis [11,17,24], among other domains. Brain morphometry has been a main focus, while cardiac imaging has extended the models into a third and fourth dimension by incorporating time series [23].
The construction of an appearance model depends on establishing a dense correspondence across a training set of images. That correspondence is defined using a set of landmark points marked consistently on each training image. Landmark points are often prominent anatomical positions, which can easily be identified as they lie on strong edges. Moreover, they have meaningful properties, such as marking the boundary of an organ or, as in our case, a brain compartment or the skull.
Using the notation of Cootes [3], the shape (configuration of landmark points) can be represented as a vector $x$ and the texture (intensity values) as a vector $g$. The two vectors are formed by simple concatenation of values: the geometric positions of the landmark points for the shape, and the (usually grayscale) intensity values for the texture. Using Principal Component Analysis (PCA) [13], the variation in shape and texture can be learned and decomposed. Shape and texture are controlled by linear statistical models of the form
$x = \bar{x} + P_s b_s, \qquad g = \bar{g} + P_g b_g$ (2)
where $b_s$ are the shape parameters, $b_g$ are the texture parameters, $\bar{x}$ and $\bar{g}$ are the mean shape and texture, and $P_s$ and $P_g$ are the principal modes of shape and texture variation respectively. By varying $b_s$ and $b_g$, the image produced by the model can be altered.
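As a minimal numpy sketch (illustrative only, not the authors' implementation), the shape model of Eq. (2) can be learned by applying PCA to the aligned training shapes; the sample sizes, the random stand-in data, and the 95% variance cutoff below are assumptions.

```python
import numpy as np

# Illustrative sketch: learn a linear shape model x = x_bar + P_s b_s
# from a training set via PCA. Random data stands in for aligned shapes.
rng = np.random.default_rng(0)
n_samples, n_points = 40, 20                     # 20 landmarks -> 40-D vectors
X = rng.normal(size=(n_samples, 2 * n_points))   # stand-in training shapes

x_bar = X.mean(axis=0)                           # mean shape
U, S, Vt = np.linalg.svd(X - x_bar, full_matrices=False)
var = S**2 / (n_samples - 1)                     # variance of each mode
k = int(np.searchsorted(np.cumsum(var) / var.sum(), 0.95)) + 1
P_s = Vt[:k].T                                   # columns: principal modes

# Project a shape to its parameters and reconstruct it:
b_s = P_s.T @ (X[0] - x_bar)                     # shape parameters b_s
x_rec = x_bar + P_s @ b_s                        # approximate reconstruction
```

The same procedure, applied to the shape-normalised intensity vectors, yields the texture model $g = \bar{g} + P_g b_g$.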
Since shape and texture are often correlated, we can take this into account by applying a further PCA stage to the concatenated shape and texture parameters. We then obtain a combined statistical model (encapsulating both shape and intensity) of the form
$x = \bar{x} + Q_s c, \qquad g = \bar{g} + Q_g c$ (3)
where the model parameters $c$ control the shape and texture simultaneously, and $Q_s$, $Q_g$ are matrices describing the modes of variation derived from the training set. The effect of varying one element of $c$ for a model built from a set of 2D MR brain images is shown in Fig. 1.
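The second PCA stage can be sketched as follows (a sketch under assumptions, not the authors' code): per-image shape and texture parameters are concatenated, with a weighting commensurating the units of shape and intensity, and PCA of the result gives the combined parameters $c$. The array names, sizes, and the random stand-in data are illustrative.

```python
import numpy as np

# Sketch of the combined model (Eq. 3): a second PCA on concatenated
# shape and texture parameters b = (W_s b_s ; b_g). Random stand-in data.
rng = np.random.default_rng(1)
n, ks, kg = 40, 5, 8
B_s = rng.normal(size=(n, ks))        # per-image shape parameters
B_g = rng.normal(size=(n, kg))        # per-image texture parameters

# Scalar weight commensurating shape and intensity units (a common choice
# based on the ratio of total variances; exact scheme is an assumption).
w = np.sqrt(B_g.var(axis=0).sum() / B_s.var(axis=0).sum())
B = np.hstack([w * B_s, B_g])         # combined parameter vectors

_, S, Vt = np.linalg.svd(B - B.mean(axis=0), full_matrices=False)
Q = Vt.T                              # combined modes of variation
c = Q.T @ (B[0] - B.mean(axis=0))     # combined parameters for one example
```

Since all modes are retained here, `B.mean(axis=0) + Q @ c` recovers the concatenated parameters exactly; truncating `Q` to the leading columns gives the compact model of Eq. (3).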
To generate the positions of points in an image we use
$X = T_t(x)$ (4)
where $x$ are the points in the model frame, $X$ are the points in the image, and $T_t$ applies a global transformation with parameters $t$. For instance, in 2D, $T_t$ is commonly a similarity transform with four parameters describing the translation, rotation and scale.
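A 2D similarity transform as in Eq. (4) can be sketched as below; the particular parameterisation $t = (s, \theta, t_x, t_y)$ and the example points are assumptions for illustration.

```python
import numpy as np

def similarity_transform(x, s, theta, tx, ty):
    """Apply scale s, rotation theta, and translation (tx, ty) to an
    (n, 2) array of model-frame points: X = T_t(x). Illustrative sketch."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return (s * x) @ R.T + np.array([tx, ty])

# Map three model-frame points into the image frame:
x = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
X = similarity_transform(x, s=2.0, theta=np.pi / 2, tx=1.0, ty=0.0)
```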
The texture in the image frame is generated by applying a scaling and offset to the intensities, $g_{im} = T_u(g)$, where $u$ is the vector of transformation parameters.
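This intensity mapping can be sketched as a per-image scaling and offset with the inverse recovered by normalising the sampled texture to zero mean and unit variance, a common convention [3]; the exact parameterisation $u = (u_1, u_2)$ below is an assumption.

```python
import numpy as np

def to_image_frame(g, u1, u2):
    # Assumed parameterisation: g_im = T_u(g) = (1 + u1) * g + u2
    return (1.0 + u1) * g + u2

def normalise(g_im):
    """Recover a zero-mean, unit-variance texture and the parameters u."""
    u2 = g_im.mean()
    u1 = g_im.std() - 1.0
    return (g_im - u2) / (1.0 + u1), u1, u2

# Round trip on a toy texture vector:
g_im = np.array([10.0, 12.0, 14.0])
g, u1, u2 = normalise(g_im)
```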