In the course of experimentation, peculiar results emerged when generalisation ability and specificity were measured. What follows is a survey of these findings.
Specificity generates a number of random examples from the model and measures their distance to the original training set; it can therefore be thought of as a measure of compactness. As the error bars in Figure suggest, the spread of these values shrinks when the model-based objective function is employed, whereas it is not clear why specificity rises when an MSD-minimising objective function is used.
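A minimal sketch of the specificity computation as described above, assuming shapes are stored as flat point vectors and the model exposes a sampling function (the names model_sample and training_shapes are hypothetical; the thesis' exact implementation may differ):

```python
import numpy as np

def specificity(model_sample, training_shapes, n_samples=1000, rng=None):
    """Estimate specificity: draw random shapes from the model and average
    each one's distance to its nearest training shape. Lower values
    indicate a more compact (more specific) model."""
    rng = np.random.default_rng(rng)
    distances = []
    for _ in range(n_samples):
        shape = model_sample(rng)  # random example generated by the model
        # distance from the sampled shape to the closest training shape
        d = min(np.linalg.norm(shape - t) for t in training_shapes)
        distances.append(d)
    return np.mean(distances)
```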
Regarding generalisability, there never appears to be a radical change in either the range or the mean of its values when the model-based objective function is applied. The same was true for all other objective functions (e.g. Figure ), so it appears that generalisability can be discarded as a measure of improvement. Only Figure suggests that generalisability measures are of some use, and even there the measure changes very slowly. A sketch of the standard form of this measure is given below.
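For reference, a sketch of the standard leave-one-out formulation of generalisation ability for a linear (PCA-based) shape model; this is the commonly used definition and may not match the exact implementation used in these experiments:

```python
import numpy as np

def generalisation_ability(shapes, n_modes):
    """Leave-one-out estimate: rebuild the model without one shape,
    reconstruct that shape from n_modes modes, and average the errors."""
    shapes = np.asarray(shapes)  # (n_shapes, n_points)
    errors = []
    for i in range(len(shapes)):
        train = np.delete(shapes, i, axis=0)
        mean = train.mean(axis=0)
        # principal modes of the reduced training set
        _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
        modes = vt[:n_modes]
        # project the left-out shape into the model and back
        coeffs = (shapes[i] - mean) @ modes.T
        recon = mean + coeffs @ modes
        errors.append(np.linalg.norm(shapes[i] - recon))
    return np.mean(errors)
```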
A curious observation is that the model-based objective function may decrease in value at the start, yet no corresponding improvement appears in specificity. Figure shows the steady value of specificity during the first 100 iterations of this model-driven algorithm: the model improves, but specificity does not. Generalisability tells a similar story (see Figure ), but given the explanation above, this is not an unexpected result.
Further figures on generalisation ability are included to illustrate that it is a poor measure in the case of model-based registration. It should therefore not be pursued further unless the algorithms change.