


Measuring Sensitivity

As well as being consistent with ground truth, a good measure of registration quality should also exhibit good sensitivity (measurement accuracy). That is, it should enable us to detect small misregistrations. By evaluating sensitivity we can also assess the effect of varying the parameters of the two approaches that we investigated: the shuffle neighbourhood radius $ r$ for the model-based measures and the alternative label weighting options for the generalised Tanimoto overlap.

The size of perturbation that can be detected in the validation experiments will depend both on the change in the values of the measures as a function of misregistration and on the mean error on those values. To quantify this, we define the sensitivity of a measure as follows:

$\displaystyle D(m;d) = \frac{1}{\bar{\sigma}}\left(\frac{m(d)-m(0)}{d}\right),$ (14)

where $ m(d)$ is the value of the measure for some degree of deformation $ d$, and $ \overline{\sigma}$ is the mean error in the estimate of $ m$ over the range of deformations. The reciprocal $ 1/D(m;d)$ is the change in $ d$ required for $ m(d)$ to change by one noise standard error, and hence indicates the lower limit on the change in misregistration $ d$ that the measure can detect.
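As a sketch of the definition in (14), the sensitivity can be computed directly from the measure values at zero and non-zero deformation; the function name and all numerical values below are illustrative, not taken from the experiments.

```python
def sensitivity(m_d, m_0, d, sigma_bar):
    # Sensitivity D(m; d) of Eq. (14): the change in the measure per unit
    # deformation, expressed in units of the mean measurement error
    # sigma_bar. All numbers used below are hypothetical.
    return (m_d - m_0) / (d * sigma_bar)

# Hypothetical example: a measure that drops from 0.95 to 0.80 as the
# deformation magnitude goes from 0 to 5, with a mean standard error
# of 0.01 on the measure.
D = sensitivity(m_d=0.80, m_0=0.95, d=5.0, sigma_bar=0.01)

# 1/|D| is then the smallest change in misregistration detectable
# by the measure.
smallest_detectable = 1.0 / abs(D)
```

Note that the sign of $ D$ simply reflects whether the measure increases or decreases with misregistration; only its magnitude matters for detectability.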

Figure 9: Overlap measures (with corresponding error bars) for the brain dataset as a function of the degree of degradation of registration correspondence. The various graphs correspond to the various tissue weightings as defined in section II-B.

Figure 10: Generalisation and Specificity (with corresponding error bars) for the brain dataset as a function of the degree of degradation of the registration correspondence, and for varying definitions of image distance (varying radius of the shuffle neighbourhood). (a) Generalisation; (b) Specificity.

We computed the sensitivity for the data shown in Figures 9, 10(a) and 10(b). The sensitivity averaged over the range of deformations is plotted in Figure 11 for the various measures. The uncertainties on the measurements of sensitivity can also be derived, and are shown as error bars in Figure 11. There are two separate sources of uncertainty: i) errors associated with the finite number of deformation instantiations, and ii) errors associated with the finite number of synthetic images used in the evaluation of the figure of merit for NRR. Considering (14), we can evaluate the standard errors in the measured quantity $ m$ (for a given $ d$) and in $ \sigma_{m}$, denoted $ SE_{m}$ and $ SE_{\sigma_{m}}$, analogously to (8) and (10). Using error propagation, the uncertainty on the numerator (T) of (14) is the quadrature sum of the standard errors on the two measurements, $ \sigma_{T}^{2}=(SE_{m(d)})^{2}+(SE_{m(0)})^{2}$, while the uncertainty on the denominator (B) is simply $ \sigma_{B}^{2} = SE_{\sigma_{m}}^{2}$. Using error propagation for a ratio of variables, the uncertainty on the sensitivity becomes:

$\displaystyle \sigma_{D(m;d)}=D(m;d)\sqrt{\left(\frac{\sigma_T}{T}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 - 2\,\frac{\sigma_{TB}}{T\,B}},$ (15)
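The ratio rule above can be sketched numerically as follows; the function name, arguments, and all values are hypothetical, and the covariance term $ \sigma_{TB}$ defaults to zero for independent error sources.

```python
import math

def sensitivity_uncertainty(D, T, B, sigma_T, sigma_B, cov_TB=0.0):
    # Uncertainty on the ratio D = T/B following Eq. (15):
    # sigma_D = |D| * sqrt((sigma_T/T)^2 + (sigma_B/B)^2 - 2*cov_TB/(T*B)).
    # cov_TB is the covariance between numerator and denominator
    # (zero when the two error sources are independent).
    return abs(D) * math.sqrt(
        (sigma_T / T) ** 2 + (sigma_B / B) ** 2 - 2.0 * cov_TB / (T * B)
    )

# Hypothetical numbers: T = m(d) - m(0) and B = mean error sigma-bar,
# with their respective standard errors.
sigma_D = sensitivity_uncertainty(D=3.0, T=0.15, B=0.01,
                                  sigma_T=0.014, sigma_B=0.002)
```

When both standard errors are zero the propagated uncertainty is zero, as expected.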

Finally, when the sensitivity is aggregated across the deformation range, the total uncertainty on the sensitivity, applying the error propagation rule for a sum, becomes:

$\displaystyle \sigma_{Aggr}^{2}=\sum_{j}\left(\sigma_{D(m;d_j)}^{2}+\sigma_{D(m;d_{j+1})}^{2}-2\,\sigma_{D(m;d_j)}\,\sigma_{D(m;d_{j+1})}\right).$ (16)
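The aggregation in (16) can be sketched as below; the function name and the per-step uncertainty values are illustrative. The $ -2\,\sigma_{D(m;d_j)}\,\sigma_{D(m;d_{j+1})}$ term corresponds to treating the errors at adjacent deformation steps as fully correlated.

```python
import math

def aggregated_uncertainty(sigma_D):
    # Total uncertainty when sensitivity is aggregated over consecutive
    # deformation steps d_j, following Eq. (16). sigma_D is a sequence of
    # per-step uncertainties sigma_D(m; d_j); adjacent-step errors are
    # treated as fully correlated, so each pairwise contribution reduces
    # to (sigma_j - sigma_{j+1})^2.
    total = 0.0
    for s_j, s_next in zip(sigma_D, sigma_D[1:]):
        total += s_j ** 2 + s_next ** 2 - 2.0 * s_j * s_next
    return math.sqrt(total)

# Hypothetical per-step uncertainties on D(m; d_j):
sigma_aggr = aggregated_uncertainty([0.10, 0.12, 0.15])
```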


Roy Schestowitz 2007-03-11