next up previous contents index
Next: Different Registration Approaches Up: Initial Exploration Previous: Similarity Measures   Contents   Index

Generalisation and Specificity

While experimenting, peculiar results emerged from the measurements of generalisation ability and specificity. This section surveys them in detail.

Specificity generates a number of random examples from the model and measures their distance to the original training set; it can therefore be thought of as a measure of compactness. As the error bars in Figure [*] suggest, the spread of these values improves (shrinks) when the model-based objective function is employed, whereas specificity rises, for reasons that remain unclear, when an MSD-minimising objective function is used.
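The procedure just described can be sketched as follows. This is a minimal illustration, assuming a linear PCA shape model (a mean vector, a matrix of mode vectors, and per-mode standard deviations) and taking specificity to be the mean distance from each random sample to its nearest training example; all names are illustrative, not the implementation used here.

```python
import numpy as np

def specificity(model_mean, model_modes, model_stds, training_shapes,
                n_samples=1000, seed=0):
    """Draw random shapes from a linear PCA model and report the mean and
    spread of each sample's distance to its nearest training shape.

    model_mean      : (d,)  mean shape vector
    model_modes     : (d, m) matrix whose columns are the PCA mode vectors
    model_stds      : (m,)  per-mode standard deviations
    training_shapes : (n, d) original training shape vectors
    """
    rng = np.random.default_rng(seed)
    # Sample mode weights b ~ N(0, diag(model_stds^2)) and synthesise shapes.
    b = rng.standard_normal((n_samples, model_stds.size)) * model_stds
    samples = model_mean + b @ model_modes.T                # (n_samples, d)
    # Distance from every sample to every training shape, then take the min.
    dists = np.linalg.norm(samples[:, None, :] - training_shapes[None, :, :],
                           axis=2)
    nearest = dists.min(axis=1)
    # A compact model yields samples close to the training set,
    # i.e. a small mean nearest-neighbour distance.
    return nearest.mean(), nearest.std()
```

The standard deviation of the nearest-neighbour distances corresponds to the error bars discussed above: a shrinking spread indicates the model generates more consistently plausible examples.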

Figure: Specificity rising when MSD-based registration is performed.

Figure: Specificity of model-based objective function.

Regarding generalisability, there never appears to be a radical change in either the range or the mean of its values when the model-based objective function is applied. The same was the case for all other objective functions (e.g. Figure [*]), so it appears that it can be discarded as a measure of improvement. Only Figure [*] suggests that generalisability measures are of some use; even there, the measure changes very slowly.
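For reference, generalisation ability is conventionally estimated by leave-one-out reconstruction: build the model from all training shapes but one, then measure how well it can represent the held-out shape. The sketch below assumes the same linear PCA setting as before; the function name and interface are illustrative.

```python
import numpy as np

def generalisation(training_shapes, n_modes):
    """Leave-one-out generalisation ability of a linear PCA shape model.

    For each shape, a model is built from the remaining shapes, the held-out
    shape is projected onto the retained modes, and the reconstruction error
    is recorded. The mean error over all leave-outs is returned
    (lower means the model generalises better).
    """
    shapes = np.asarray(training_shapes, dtype=float)
    errors = []
    for i in range(shapes.shape[0]):
        rest = np.delete(shapes, i, axis=0)
        mean = rest.mean(axis=0)
        # Principal modes of the remaining shapes via SVD of the centred data.
        _, _, vt = np.linalg.svd(rest - mean, full_matrices=False)
        modes = vt[:n_modes]                     # (n_modes, d)
        held = shapes[i] - mean
        # Project onto the retained modes, then reconstruct.
        recon = modes.T @ (modes @ held)
        errors.append(np.linalg.norm(held - recon))
    return float(np.mean(errors))
```

Because the error changes only as the underlying model slowly reshapes itself, this measure responds sluggishly during registration, which is consistent with the behaviour reported above.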

Figure: Generalisation ability of model-based objective function.

Figure: Specificity of the model-based objective function as registration proceeds.

A curious observation is that the model-based objective function may decrease in value at the start, yet no corresponding improvement appears in specificity. Figure [*] shows the steady value of specificity during the first 100 iterations of this model-driven algorithm: the model improves, but specificity does not. Generalisability tells a similar story (see Figure [*]), but given the explanation above, this is not an unpredictable result.

More figures on generalisation ability are included to demonstrate that it is a poor measure in the case of model-based registration. It should therefore not be pursued much further unless the algorithms change.

Figure: Generalisation ability of the model-based objective function as registration proceeds.

Figure: Specificity of the MSD objective function as registration proceeds.

Figure: Generalisation ability of the MSD objective function as registration proceeds.

Figure: Specificity shown to be less erratic as the algorithm proceeds with registration.

Figure: For MSD, generalisation slowly declines, as shown over 2,000 iterations. Measurements are made every 100 iterations.

Figure: Specificity remains essentially unchanged as registration proceeds, contrary to expectation. There is, however, a decline at the start, where changes to the data are most radical.

Figure: Generalisation ability measured every 100 warps. A total of 10,000 iterations shows no substantial change in values while registration is performed.


2004-08-02