
Feature-based Methods

Methods based on mutual information or MSD have become popular for image registration, but they rarely account for underlying image features. Instead of treating images as complete entities, it is possible to match salient image features such as lines, line intersections, boundaries, corners, curves, and other points of particular significance []. Feature-based methods assume that intensity alone is insufficient for matching the actual features, so one approach is to use feature images, which may also include the original intensity images. Broersen and van Liere [] adopted this approach to align data-cubes, Dare and Dowman applied it to satellite images [], and in a recent paper, Lee et al. [] proposed a Gaussian-weighted distance map for aligning features, such as those they acquire using automatic segmentation. It can also be reasonable to combine the two approaches, namely the intensity- and feature-based ones; Zhang and Rangarajan, for example, use the original image along with three directional derivative feature images [].
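To illustrate the distance-map idea, the following Python sketch turns a binary feature mask into a Gaussian-weighted distance map. It reflects the general concept only, not the exact formulation of Lee et al.; the width parameter sigma is an assumed value chosen purely for illustration.

# Sketch: Gaussian-weighted distance map from a binary feature mask.
# General idea only; not the formulation of Lee et al. `sigma` is assumed.
import numpy as np
from scipy.ndimage import distance_transform_edt

def gaussian_weighted_distance_map(feature_mask, sigma=5.0):
    """Map a binary feature mask to a smooth, intensity-like image.

    Feature pixels get weight 1; the weight decays with a Gaussian profile
    as the Euclidean distance to the nearest feature pixel grows.
    """
    # Distance from every pixel to the nearest feature pixel
    # (feature pixels themselves get distance 0).
    dist = distance_transform_edt(~feature_mask.astype(bool))
    return np.exp(-(dist ** 2) / (2.0 * sigma ** 2))

# Example: a synthetic line feature converted into a smooth map; maps built
# from two segmented images could then be compared like intensity images.
mask = np.zeros((64, 64), dtype=bool)
mask[32, 10:54] = True
weighted = gaussian_weighted_distance_map(mask, sigma=4.0)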

McLaughlin et al. have studied the difference between registration based on intensity and registration based on other image features [], and found the intensity-based approach that they tested to be more accurate. In their image registration survey, Zitova and Flusser [] described feature-based registration methods and subdivided feature matching methods further into those using spatial relations, those using invariant descriptors (the simplest descriptor being the intensity itself), those based on a relaxation approach, and those using pyramids or wavelets. Maintz and Viergever [] published a similar survey specifically for medical imaging.

Features can be extracted from the intensity images separately, e.g. using image segmentation (Section 2.6), or obtained from assumptions about the imaged object. In some cases, features can be identified by considering other modalities in which they are more pronounced. A simple sketch of such an extraction step is given below.
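Purely for illustration, and assuming nothing about the segmentation method actually used, the following Python sketch extracts edge-like features by thresholding the gradient magnitude; the threshold fraction is an assumed parameter, and in practice a dedicated segmentation step would usually supply the features instead.

# Sketch: edge-like feature extraction by thresholding gradient magnitude.
# `frac` is an assumed parameter; any segmentation could replace this step.
import numpy as np
from scipy.ndimage import sobel

def edge_features(image, frac=0.2):
    gx = sobel(image.astype(float), axis=0)
    gy = sobel(image.astype(float), axis=1)
    magnitude = np.hypot(gx, gy)
    # Keep pixels whose gradient magnitude exceeds a fraction of the maximum
    return magnitude > frac * magnitude.max()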

As a concrete example, an algorithm that attends to the relevant geometric primitives in the images can compute distances between them and use these distances to form a measure of dissimilarity, without accounting for raw image intensities at all, even though intensity and underlying features are often related. A sketch of one such measure follows.
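The Python sketch below, assuming that feature points have already been detected in both images, computes a one-sided Chamfer-style cost: the mean nearest-neighbour distance from the moving points to the fixed points. It is a minimal illustration of a geometric dissimilarity, not a complete registration method.

# Sketch: purely geometric dissimilarity between two feature point sets.
# No image intensities are used; the points are assumed to come from a
# prior feature detection or segmentation step.
import numpy as np
from scipy.spatial import cKDTree

def feature_dissimilarity(points_fixed, points_moving):
    """Mean distance from each moving point to its closest fixed point."""
    tree = cKDTree(points_fixed)
    nearest, _ = tree.query(points_moving)   # nearest-neighbour distances
    return nearest.mean()

# A registration loop would transform `points_moving` with candidate
# parameters and minimise this value.
fixed = np.array([[10.0, 12.0], [40.0, 45.0], [25.0, 30.0]])
moving = fixed + np.array([1.5, -2.0])       # small synthetic shift
print(feature_dissimilarity(fixed, moving))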

In multi-modality cases, feature-based methods can be robust and fast, whereas intensity alone makes it difficult to obtain decent results without costly algorithms that may involve mutual information. This approach has drawbacks, however: point- or curve-based NRR techniques may lack automation, sometimes requiring user input to be consistent and reliable. They can also suffer from poor detection and selection of edges, or be misled by noise, to which they are arguably less resistant than area-based registration methods. The results obtained are only as good as the initial segmentation of the features.

Feature-based methods also tend to have other advantages. With intensity information, all parts of the image contribute to the mapping, whereas in practice the mapping between actual features, regardless of their relative intensities, is sometimes what matters most.
