This first problem can be solved by learning how the model parameters
(see Equation
) affect the model and how
they should be changed in order for the model to align with the image
being interpreted. Each parameter in this set has an effect
on model synthesis (both shape and intensities vary simultaneously).
By changing the value of each such parameter and learning the difference
between the model and the target image (using pixel-based comparison),
an index can be built which indicates how the model parameters should
be varied to better fit the target. Such an index indicates which
parameters should be changed and, if so, in which direction and to what degree.
More formally, the algorithm can be described as follows. For the
model parameters $\mathbf{c}$, a parameter change $\delta\mathbf{c}$
(one or more parameters can be changed simultaneously)
is applied to generate a new model synthesis consisting of both shape and texture.
This process is repeated for each mode of displacement, where both
shape and intensity are changed, but only the intensity difference is
learned. The sum-of-squares of the pixel differences is then used as the measure of this difference.
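The perturb-and-measure step above can be sketched as follows. This is a minimal illustration, not the original implementation; `synthesize(params)` is a hypothetical rendering function standing in for the model-synthesis step.

```python
import numpy as np

def build_displacement_index(synthesize, params, delta=0.1):
    """Perturb each model parameter in turn and record the resulting
    per-pixel intensity change.  `synthesize(params)` is a hypothetical
    function returning the model image for a given parameter vector."""
    base = synthesize(params)
    index = []
    for i in range(len(params)):
        displaced = params.copy()
        displaced[i] += delta
        # Pixel-based comparison: intensity difference per unit of change
        index.append((synthesize(displaced) - base).ravel() / delta)
    return np.array(index)  # one row of intensity differences per parameter
```

Each row of the returned array records how the image intensities respond to displacing one parameter, which is exactly the kind of index described above.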
In [], a similar formulation is used. Shape transformation
parameters, $\mathbf{t}$, are used to define point positions in $\mathbf{X}$,
the image frame of the model. The pixels in the region of
this image, $\mathbf{g}$, can then be projected into the texture
frame as $\mathbf{g}_{s}$ and compared against the model texture, $\mathbf{g}_{m}$. The
image difference is

$\delta\mathbf{g} = \mathbf{g}_{s} - \mathbf{g}_{m}.$

From this,
the image residuals can be derived. This is made possible after a
Taylor expansion that yields a convenient term to optimise over. This
term is referred to as $\delta\mathbf{g}$
from this point onwards.
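The Taylor expansion mentioned above follows the standard formulation in the literature; writing $r(\mathbf{p})$ for the residual (intensity difference) at parameters $\mathbf{p}$, a sketch of the derivation is:

```latex
% First-order Taylor expansion of the residual about parameters p:
r(\mathbf{p} + \delta\mathbf{p}) \approx r(\mathbf{p})
    + \frac{\partial r}{\partial \mathbf{p}}\,\delta\mathbf{p}
% Minimising \|r(p + \delta p)\|^2 over \delta p gives the least-squares update
\delta\mathbf{p} = -\mathbf{R}\,r(\mathbf{p}),
\qquad
\mathbf{R} = \Bigl(\frac{\partial r}{\partial \mathbf{p}}^{T}
                   \frac{\partial r}{\partial \mathbf{p}}\Bigr)^{-1}
             \frac{\partial r}{\partial \mathbf{p}}^{T}
```

It is this linearisation that makes the residual a convenient term to optimise over.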
With this measure of intensity difference, the relation between the parameter change and this difference can be expressed conveniently. That information, which is merely a correlation, can be learned by using a pseudo-target image, such as the model in its `standard' form (the mean). It can be used as the basis for a comparison which facilitates learning about the model displacements and their corresponding effects in image space.
This quantitative measure of difference will approximate the ``goodness'' of a parameter change (as indicated when calculating SSD or MSD), but not the more localised effects which such a change has on the given image. This means that it will not be obvious which parts of the two entities (model and target) remain similar and which do not.
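The distinction between a scalar score and a localised measure can be illustrated with a minimal sketch (the function names are illustrative, not from the original):

```python
import numpy as np

def scalar_difference(model, target):
    """Sum-of-squares difference: a single number summarising overall fit."""
    return float(np.sum((model.astype(float) - target.astype(float)) ** 2))

def localised_difference(model, target):
    """Per-pixel difference image: shows *where* the two entities differ."""
    return model.astype(float) - target.astype(float)
```

The scalar collapses all spatial information into one value, whereas the difference image retains the location of every disagreement.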
To address the need for localised difference measures, the parameter
change, $\delta\mathbf{c}$, which is applied to the collection of
parameters $\mathbf{c}$, will be accompanied by a high-dimensional
representation of intensity differences in the image. This correlation
can be made accessible through an index whose size is proportional
to the image size. This relationship is dictated by the following:

$\delta\mathbf{g} = \mathbf{A}\,\delta\mathbf{c},$

where $\mathbf{A}$ is a matrix that encapsulates the change in intensities due to the parameter
change $\delta\mathbf{c}$. This matrix corresponds to an $n$-dimensional
vector. This vector expresses the change which was discovered `off-line',
i.e. prior to a search in an image. It linearly defines (in a potentially
high-dimensional space) the relationship between change to the parameters
and change to the intensities (the difference image). It can
be used to choose directions of change directly when performing
a search and thereby avoid unnecessary re-computation.
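The off-line learning of this linear relationship can be sketched as a least-squares regression over random parameter displacements. This is a hypothetical sketch, not the original implementation: `synthesize(params)` again stands in for the model-rendering step, and the pseudo-inverse is one standard way of inverting the learned linear map at search time.

```python
import numpy as np

def learn_update_matrix(synthesize, params, n_samples=200, scale=0.1, seed=0):
    """Learn, off-line, the linear map between parameter change and
    intensity change: delta_g ~ A @ delta_c."""
    rng = np.random.default_rng(seed)
    base = synthesize(params).ravel()
    dC, dG = [], []
    for _ in range(n_samples):
        dc = rng.normal(scale=scale, size=len(params))   # random displacement
        dg = synthesize(params + dc).ravel() - base       # resulting difference image
        dC.append(dc)
        dG.append(dg)
    dC, dG = np.array(dC), np.array(dG)
    # Least-squares fit of A in dG ~ dC @ A.T; A has shape (n_pixels, n_params)
    A = np.linalg.lstsq(dC, dG, rcond=None)[0].T
    return A

def predict_update(A, delta_g):
    """At search time, map an observed difference image directly to a
    parameter correction via the pseudo-inverse of A (no re-computation)."""
    return -np.linalg.pinv(A) @ delta_g.ravel()
```

Because the relationship is learned once, off-line, each search iteration reduces to a single matrix-vector product, which is precisely how unnecessary re-computation is avoided.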
Roy Schestowitz 2010-04-05