Correspondences describe the relationship between shapes. Chapter 3 showed how a shape model can be built once each shape has been annotated with a set of corresponding landmarks. Up to this point, shapes were assumed to be represented by a finite number of landmarks treated as separate points; this section presents the means by which shapes are parameterised, so that landmarks are sampled from a potentially infinite set of points on the shape. In their 1998 paper [], Kotcheff and Taylor reported encouraging results in which the pose and parameterisation of each shape were gradually adjusted so as to optimise an objective function based on the determinant of the model covariance, with the optimisation performed by a genetic algorithm. This led to models that were better than those produced by manual annotation. However, the genetic-algorithm approach of the time lacked a defensible method of re-parameterisation. This section presents a solution to that problem, which also moves on from landmark points to parameterised shapes.
To alter the correspondence between a set of shapes, one needs a method of moving the landmark points, and of moving them in such a way that they remain on the shape. The problem of sliding points was studied extensively by another group [,,]; the simplest way to achieve this is to build a parameterised representation of the shape.
Taking as an example the simplest case of open-ended aligned shapes in 2-D, let the initial parameterisation of each shape be the fractional distance along the shape (arc-length parameterisation). This gives a parameter $x$, which varies from $x = 0$ at one end of the shape to $x = 1$ at the other end. For the $i^{\mathrm{th}}$ shape in our training set, at a given parameter value $x$, one gets the position in real space of that point on the shape, denoted by $\mathbf{S}_i(x)$, where $\mathbf{S}_i$ is now the shape function for the $i^{\mathrm{th}}$ shape.
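Arc-length parameterisation of a polyline can be sketched as follows. This is a minimal illustration assuming NumPy; the function name is chosen here for clarity and is not from the original text.

```python
import numpy as np

def arc_length_parameterisation(points):
    """Assign each landmark of an open 2-D polyline a parameter in [0, 1]
    equal to its fractional distance along the shape."""
    points = np.asarray(points, dtype=float)               # shape (n, 2)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)  # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    return cum / cum[-1]                                   # normalise to [0, 1]

# An L-shaped polyline made of two unit-length segments:
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
print(arc_length_parameterisation(pts))  # → [0.  0.5 1. ]
```

The endpoints receive parameter values $0$ and $1$, and interior landmarks receive their fractional distance along the curve, as in the text above.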
Correspondence between these continuous shapes can now be defined by stating that points at the same parameter value correspond. That is:
$$\mathbf{S}_i(x) \Longleftrightarrow \mathbf{S}_j(x) \qquad (4.18)$$
This can now be used to manipulate correspondence. It is possible to re-parameterise just the $i^{\mathrm{th}}$ shape, so that:

$$x \mapsto x' = \phi_i(x) \qquad (4.19)$$
where $\phi_i$ is then the re-parameterisation function for this shape. The shape function now also changes, although the shape itself of course stays the same. A new re-parameterised shape function $\mathbf{S}'_i$ is defined where:

$$\mathbf{S}'_i(\phi_i(x)) \doteq \mathbf{S}_i(x) \qquad (4.20)$$
The physical point on the shape remains unchanged, but its parameter value changes from $x$ to $\phi_i(x)$, and the shape function changes accordingly. Since correspondence between shapes is defined for points at the same parameter value, the correspondence also changes, from $\mathbf{S}_i(x) \Longleftrightarrow \mathbf{S}_j(x)$ to $\mathbf{S}'_i(x) \Longleftrightarrow \mathbf{S}_j(x)$.
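The effect of a re-parameterisation can be made concrete with a small sketch: landmarks sampled at fixed parameter values slide along the shape when the parameterisation changes, while the underlying curve stays the same. This assumes NumPy and linear interpolation of the shape function; the example re-parameterisation $\phi(x) = x^2$ is chosen only for illustration.

```python
import numpy as np

def resample(points, params, new_params):
    """Evaluate a piecewise-linear shape function S(x) at new parameter
    values, so that landmarks slide along the (unchanged) shape."""
    points = np.asarray(points, dtype=float)
    x = np.interp(new_params, params, points[:, 0])
    y = np.interp(new_params, params, points[:, 1])
    return np.column_stack([x, y])

# A straight segment from (0,0) to (2,0), with arc-length parameters 0 and 1.
pts = np.array([[0.0, 0.0], [2.0, 0.0]])
params = np.array([0.0, 1.0])

# A simple monotonic re-parameterisation on [0,1]: phi(x) = x**2.
old = np.linspace(0.0, 1.0, 5)
new = old ** 2
moved = resample(pts, params, new)
print(moved[:, 0])  # landmarks bunch towards the start; endpoints are fixed
```

The endpoints keep parameter values $0$ and $1$ and so do not move, while interior landmarks slide along the segment, exactly the behaviour required of a valid re-parameterisation.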
The re-parameterisation function for the $i^{\mathrm{th}}$ shape, $\phi_i$, must obey certain conditions if it is to be valid. The values of the new parameter $x' = \phi_i(x)$ must still lie between $0$ and $1$, each point on the shape should have a unique parameter value, and each parameter value should correspond to a single point on the shape. In mathematical terms, $\phi_i$ has to map the interval $[0, 1]$ onto itself in a one-to-one fashion, and is hence a monotonic function on this interval.
Figure 4.3 shows an example of the construction of a monotonic function using piecewise-linear recursive re-parameterisation. Figure 4.4 shows the 2-D parameter space for shapes in 3-D with the topology of the sphere; the marked points indicate how positions in parameter space move under re-parameterisation.
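A piecewise-linear recursive re-parameterisation of the kind shown in Figure 4.3 can be sketched as follows. This is a minimal illustration assuming NumPy; the subdivision scheme and function names are an interpretation of the recursive construction, not code from the original work. At each level, the image of each interval's midpoint is placed strictly inside the image interval, which guarantees monotonicity by construction.

```python
import numpy as np

def recursive_piecewise_linear(depth, rng):
    """Build the knots of a monotonic piecewise-linear map of [0, 1] onto
    itself by recursive subdivision of the parameter interval."""
    xs = [0.0, 1.0]          # knot positions (fixed, dyadic)
    ys = [0.0, 1.0]          # knot images (randomised, but ordered)
    for _ in range(depth):
        new_xs, new_ys = [xs[0]], [ys[0]]
        for (x0, x1), (y0, y1) in zip(zip(xs, xs[1:]), zip(ys, ys[1:])):
            t = rng.uniform(0.25, 0.75)          # strictly interior placement
            new_xs += [(x0 + x1) / 2.0, x1]
            new_ys += [y0 + t * (y1 - y0), y1]
        xs, ys = new_xs, new_ys
    return np.array(xs), np.array(ys)

def phi(x, xs, ys):
    """Evaluate the piecewise-linear re-parameterisation at x."""
    return np.interp(x, xs, ys)

rng = np.random.default_rng(0)
xs, ys = recursive_piecewise_linear(depth=3, rng=rng)
assert np.all(np.diff(ys) > 0)                       # one-to-one (monotonic)
assert phi(0.0, xs, ys) == 0.0 and phi(1.0, xs, ys) == 1.0  # maps [0,1] onto itself
```

Because each inserted knot image lies strictly between its neighbours, the resulting map satisfies all the validity conditions stated above: it fixes the endpoints and is monotonic on $[0, 1]$.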
Suitable re-parameterisation functions can also be built up in ways other than by using linear interpolation. One approach is to use as the re-parameterisation function the cumulative distribution function of some normalised probability density function (pdf), which is then guaranteed to be monotonic and lie within the required range.
As in methods of kernel density estimation, a suitable pdf can be built by summing kernels, where the positions, widths, and amplitudes of the kernels are the parameters that control the re-parameterisation. A single kernel provides too little flexibility, so sums of several kernels are used.
For example, Cauchy kernels [] can be used, each kernel then having the three parameters of amplitude, width, and position. Other kernels that have been used include the von Mises distribution, cardioids, and the most obvious choice, Gaussian kernels. Such kernels lend themselves naturally to a coarse-to-fine approach, where one starts with a small number of widely-spaced kernels, and gradually adds more finely-spaced kernels as the optimisation proceeds.
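The cumulative-distribution construction with Cauchy kernels can be sketched as follows. This is an illustrative NumPy sketch, not the original implementation: the mixture of Cauchy CDFs is rescaled so that it maps $[0, 1]$ onto itself, and the particular kernel parameters are arbitrary.

```python
import numpy as np

def cauchy_cdf(x, pos, width):
    """CDF of a Cauchy distribution with location `pos` and scale `width`."""
    return 0.5 + np.arctan((x - pos) / width) / np.pi

def make_phi(positions, widths, amplitudes):
    """Return a monotonic re-parameterisation phi: [0,1] -> [0,1], built as
    the rescaled CDF of a sum of Cauchy kernels. Each kernel contributes
    three parameters: amplitude, width, and position."""
    pos = np.asarray(positions, dtype=float)
    wid = np.asarray(widths, dtype=float)
    amp = np.asarray(amplitudes, dtype=float)

    def g(x):
        # Weighted sum of kernel CDFs; strictly increasing in x.
        x = np.asarray(x, dtype=float)[..., None]
        return (amp * cauchy_cdf(x, pos, wid)).sum(axis=-1)

    g0, g1 = g(0.0), g(1.0)
    return lambda x: (g(x) - g0) / (g1 - g0)  # rescale so phi(0)=0, phi(1)=1

phi = make_phi(positions=[0.3, 0.7], widths=[0.1, 0.05], amplitudes=[1.0, 2.0])
x = np.linspace(0.0, 1.0, 101)
y = phi(x)
assert np.all(np.diff(y) > 0)     # monotonic, hence a valid re-parameterisation
```

Since each Cauchy CDF is strictly increasing and the amplitudes are positive, the rescaled sum is guaranteed to be monotonic and to lie within the required range, as stated above.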
Roy Schestowitz 2010-04-05