
Weighting

The purpose of this work was to study the effect of choosing one particular value of $\mathbf{W}_{s}$, which was explained in Chapter 3. As a starting point, I performed 10 repeated experiments, each with a different random instantiation of 10 bump images, each image comprising 200 sample points (pixels). These were registered using the model-based objective function, and 200 iterations were run to refine the model.
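As a rough illustration of this setup (not the code actually used), the sketch below generates one random instantiation of 10 one-dimensional bump images, each sampled at 200 points. The bump position, width and height ranges are hypothetical values chosen purely for illustration.

    import numpy as np

    def make_bump_set(n_images=10, n_samples=200, seed=0):
        """Generate a set of randomly perturbed Gaussian 'bump' images.

        Positions, widths and heights are drawn from illustrative ranges;
        the bump model used in the thesis may differ.
        """
        rng = np.random.default_rng(seed)
        x = np.linspace(0.0, 1.0, n_samples)
        images = []
        for _ in range(n_images):
            centre = rng.uniform(0.35, 0.65)   # random bump position
            width = rng.uniform(0.05, 0.15)    # random bump width
            height = rng.uniform(0.8, 1.2)     # random bump height
            images.append(height * np.exp(-0.5 * ((x - centre) / width) ** 2))
        return np.stack(images)                # shape: (10, 200)

    bumps = make_bump_set()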

The goal was to demonstrate that by changing the trade-off between shape and intensity (their relative scaling) in the construction of the model, not only are different models obtained at the end, but the quality of the NRR itself is improved or degraded. In order to show that one weighting scheme outperforms another, I consider the correct solution, namely the one which results in perfect alignment. Because the bumps are synthetic, this solution is known at the start, and it is compared against the bumps obtained at the end of NRR.
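For clarity, and assuming the usual combined shape-and-intensity construction (the symbols $\mathbf{s}$ for the shape/warp parameters, $\mathbf{g}$ for the intensity samples and $\mathbf{v}$ for the combined vector are introduced here only for illustration), the weighting enters the model as

$\mathbf{v} = \left( \begin{array}{c} \mathbf{W}_{s}\,\mathbf{s} \\ \mathbf{g} \end{array} \right)$

so that increasing $\mathbf{W}_{s}$ raises the contribution of shape variation, and decreasing it raises the contribution of intensity variation, to the total variance captured by the model.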

20 modes of variation were considered when computing the determinant, and 98% of the variation was kept when applying PCA.
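The following sketch shows how such a determinant-based score could be computed from the weighted combined vectors. The exact objective function is the one defined in Chapter 3; here a log-determinant over the retained PCA eigenvalues is assumed, with 98% of the variance kept and at most 20 modes, and $\mathbf{W}_{s}$ is treated as a scalar for simplicity.

    import numpy as np

    def model_score(shape_params, intensities, w_s, n_modes=20, var_kept=0.98):
        """Illustrative determinant-style model score.

        shape_params : (N, ds) array of per-example warp parameters
        intensities  : (N, dg) array of per-example intensity samples
        w_s          : scalar shape/intensity weighting (assumed scalar here)
        """
        combined = np.hstack([w_s * shape_params, intensities])
        centred = combined - combined.mean(axis=0)
        # Eigenvalues of the sample covariance via SVD.
        _, sing_vals, _ = np.linalg.svd(centred, full_matrices=False)
        eigvals = sing_vals ** 2 / (combined.shape[0] - 1)
        # Keep enough modes for 98% of the variance, capped at 20.
        cumulative = np.cumsum(eigvals) / eigvals.sum()
        k = min(n_modes, int(np.searchsorted(cumulative, var_kept)) + 1)
        return np.sum(np.log(eigvals[:k]))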

The experiments took as their input 10 non-identical starting points, using 10 different datasets that were imported from files. I performed the experiments with these 10 datasets and, at the very end of the registration, measured the following:

The first measure is the model score, the value of the objective function at the end of NRR. The second is the variance computed for each mode of variation (20 modes in total are considered). As NRR progresses, I find that the intensity variance decreases (the bumps become better aligned), whereas the shape variance typically increases, because a different warp is applied to each bump.

These variances are a good reflection of the modes of the resultant model, broken down into shape and intensity (a combined measure would depend on the weighting).
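As a sketch of how these per-mode variances could be obtained, the function below computes the PCA eigenvalues of a data block and is applied separately to the shape (warp) parameters and to the intensity samples collected at the end of NRR. The plain mean over modes used here as a summary is an assumption; the thesis' exact summary statistic may differ.

    import numpy as np

    def mode_variances(data, n_modes=20):
        """Per-mode variances (PCA eigenvalues) of a data block."""
        centred = data - data.mean(axis=0)
        _, sing_vals, _ = np.linalg.svd(centred, full_matrices=False)
        eigvals = sing_vals ** 2 / (data.shape[0] - 1)
        return eigvals[:n_modes]

    # Illustrative usage with hypothetical arrays of warp parameters and
    # intensity samples collected at the end of NRR:
    # mean_shape_var = mode_variances(shape_params).mean()
    # mean_intensity_var = mode_variances(intensities).mean()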

Figure [*] and Figure [*] show the results from this first batch of 10 NRR experiments, which study shape and intensity weighting. The $\mathbf{x}$ axis shows the scaling, where 100% means that only intensity is accounted for and 0% means that only shape variation is accounted for.

It is possible to see that, in terms of the objective function (model score), a low value is reached when shape and intensity are considered in isolation. This can be explained by the variation of shape and intensity: when only shape is accounted for (0%), it is optimised to the point where its final variance is low, whereas the variance of the intensity component remains high. Conversely, when only intensity is optimised, the images reach good alignment, whereas the shape (warp) component becomes more irregular.

Quite interesting is the dip in the graph which shows the distance from the correct solution: for a certain weighting (around 25%), the error is smallest. Unlike the model score, this measure appears to vary quite smoothly with the weighting.
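For completeness, a minimal sketch of a distance measure of this kind is given below; a root-mean-square difference between the registered bumps and the known, perfectly aligned solution is assumed here for illustration, and the thesis' actual distance measure may be defined differently.

    import numpy as np

    def distance_from_correct(registered, ground_truth):
        """RMS difference between the registered bumps and the known
        correct (perfectly aligned) solution, taken over all sample
        points of all images. This exact form is an assumption."""
        return np.sqrt(np.mean((registered - ground_truth) ** 2))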

Figure: Top-left: final model score as a function of shape/intensity weighting (lower is better); bottom-left: mean of shape variance; right: distance from the correct solution (lower is better).
Image experiment-figs-1-4

Figure: Mean of intensity variance as a function of shape/intensity weighting (lower is better)
Image experiment-figs-5-6
