Various methods have been proposed for assessing the results of NRR [9,11,15,16]. Most of these require access to some form of ground truth. One approach involves constructing artificial test data, which limits its application to `off-line' evaluation. Other methods can be applied directly to real data, but require that anatomical ground truth be provided, typically through expert annotation. This makes validation expensive and prone to subjective error.
We present two methods for assessing the performance of non-rigid registration algorithms; one requires ground truth to be provided a priori, whereas the other does not. We compare the two methods by systematically varying the quality of registration of a set of MR images of the brain.
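To illustrate one way registration quality could be varied systematically, the sketch below perturbs a recovered deformation field with smooth random displacements of increasing amplitude, so that each assessment measure can be evaluated at known levels of mis-registration. This is a minimal illustration only: the functions `perturb_deformation` and `warp`, the 2-D setting, and the choice of Gaussian-smoothed noise are assumptions for the example, not the procedure used in the paper.

\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def perturb_deformation(deformation, amplitude, smoothing=8.0, seed=None):
    """Degrade a registration by adding a smooth random displacement field.

    `deformation` has shape (2, H, W): the y/x displacement (in voxels)
    mapping the reference grid into a subject image.  Larger `amplitude`
    values correspond to progressively poorer registration.
    (Illustrative scheme only; the paper does not prescribe this method.)
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(deformation.shape)
    noise = gaussian_filter(noise, sigma=(0, smoothing, smoothing))
    noise *= amplitude / (np.abs(noise).max() + 1e-12)  # scale to requested peak
    return deformation + noise

def warp(image, deformation):
    """Resample a 2-D image through a (possibly perturbed) deformation field."""
    grid = np.indices(image.shape).astype(float)
    return map_coordinates(image, grid + deformation, order=1, mode='nearest')

# Tiny synthetic demo: degrade an identity registration at several levels
# and warp a toy image.  In practice the warped images (or perturbed
# deformations) would be scored by each assessment measure.
image = np.zeros((64, 64)); image[16:48, 16:48] = 1.0
deformation = np.zeros((2, 64, 64))          # perfect (identity) registration
for amplitude in [0.0, 1.0, 2.0, 4.0]:       # increasing mis-registration
    degraded = perturb_deformation(deformation, amplitude, seed=0)
    warped = warp(image, degraded)
\end{verbatim}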