A technique to metamorphose one image into another, first presented by Thaddeus Beier and Shawn Neely in their paper Feature-Based Image Metamorphosis at SIGGRAPH '92.

This method requires the animator to specify control lines in the source image, and to specify the locations of these lines in the destination image. Each line has a field of influence, hence this technique is also known as field morphing.

(Labelling conventions:
UPPERCASE = pixel coordinates
pairs of UPPERCASE = lines
unprimed values = destination values
primed(') values = source values)

To calculate the destination image, we perform a reverse mapping. This is a process whereby, for each pixel in the destination image, we find the pixel in the source image that would map to it under some transformation, and copy its colour value. We employ a reverse mapping since we are trying to create the destination image from nothing and this ensures every destination pixel is assigned some value. A forward mapping may leave some destination pixels unassigned.

To calculate a morph for a single control line, for each destination pixel we find its location relative to the control line. This location consists of two values: u, the projection of the point onto the line (values 0 < u < 1 are on the line; u < 0 and u > 1 are off the ends), and v, the perpendicular distance in pixels of the point from the line.

We then find the pixel in the source image that is in the same position relative to the line and set the destination pixel with its colour value. The pseudocode for this is:

for each pixel X in the destination image
  find the corresponding u,v 
  find the X' in the source image for that u,v 
  destinationImage(X) = sourceImage(X')
The equations used to calculate these values are:
u = (X - P).(Q - P) / || Q - P ||^2

v = (X - P).Perpendicular(Q - P) / || Q - P ||

X' = P' + u.(Q' - P') + v.Perpendicular(Q' - P') / || Q' - P' ||
where Perpendicular() returns a perpendicular vector. There are two possible vectors, either can be used as long as the choice is consistent.
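These equations can be sketched in Python with NumPy (the function name and the particular choice of perpendicular are my own; the paper leaves both open):

```python
import numpy as np

def warp_point_single_line(X, P, Q, P2, Q2):
    """Map destination pixel X to a source location X' using one
    control line: P, Q in the destination; P2, Q2 (P', Q') in the source."""
    X, P, Q, P2, Q2 = (np.asarray(a, dtype=float) for a in (X, P, Q, P2, Q2))
    PQ = Q - P
    perp = np.array([-PQ[1], PQ[0]])              # one consistent perpendicular
    u = np.dot(X - P, PQ) / np.dot(PQ, PQ)        # projection along the line
    v = np.dot(X - P, perp) / np.linalg.norm(PQ)  # signed distance from the line
    SQ = Q2 - P2
    perp2 = np.array([-SQ[1], SQ[0]])
    return P2 + u * SQ + v * perp2 / np.linalg.norm(SQ)
```

With identical source and destination lines the mapping is the identity; shifting the source line shifts the recovered point by the same amount.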

Using a single control line allows only simple affine transformations. To make a more interesting transformation, we need multiple control lines. To do this, we obtain a value Xi' for each control line i, and find its displacement, Di = Xi' - X. We then take a weighted average of these displacements to get the final value.

The weight is calculated with the formula:

weight = ( length^p / (a + dist) )^b

where length is the length of the control line, dist is the shortest distance from the point to the line, and a, b and p are parameters that control the warp.
dist is abs(v) if 0 < u < 1, the distance from P if u < 0, and the distance from Q if u > 1.
If a is close to zero, points on the line map almost exactly onto the line. This gives greater control over the location of the destination pixels. Greater values reduce control but give smoother results.
b determines how the effect of a line falls off with distance.
If p is 0, all lines have the same weight; higher values give longer lines more weight.
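The weight formula, including the three-case distance rule, can be sketched as follows (the parameter defaults here are illustrative, not canonical values from the paper):

```python
import numpy as np

def line_weight(X, P, Q, a=0.1, b=2.0, p=0.5):
    """Influence of control line PQ at destination pixel X:
    weight = (length^p / (a + dist))^b, with dist as defined above."""
    X, P, Q = (np.asarray(t, dtype=float) for t in (X, P, Q))
    PQ = Q - P
    length = np.linalg.norm(PQ)
    u = np.dot(X - P, PQ) / np.dot(PQ, PQ)
    if u < 0:                                 # off the P end
        dist = np.linalg.norm(X - P)
    elif u > 1:                               # off the Q end
        dist = np.linalg.norm(X - Q)
    else:                                     # alongside the line: |v|
        dist = abs(np.dot(X - P, np.array([-PQ[1], PQ[0]])) / length)
    return (length ** p / (a + dist)) ** b
```

Points near a line receive a much larger weight than points far from it, which is what localises each line's influence.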

The pseudocode is:

for each pixel X in the destination
  DSUM = (0,0)
  weightsum = 0 
  for each line Pi Qi
    calculate u,v based on Pi Qi
    calculate X'i based on u,v and Pi'Qi'
    calculate displacement Di = Xi' - X for this line
    dist = shortest distance from X to Pi Qi                
    weight = (length^p / (a + dist))^b
    DSUM += Di *  weight 
    weightsum += weight 
  X' = X + DSUM / weightsum
  destinationImage(X) = sourceImage(X')
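A self-contained sketch of this loop in Python with NumPy (nearest-neighbour sampling and clamping at the image border are my own simplifications; parameter defaults are illustrative):

```python
import numpy as np

def field_morph(source, dst_lines, src_lines, a=0.1, b=2.0, p=0.5):
    """Reverse-map every destination pixel through the weighted average of
    per-line displacements. `source` is an H x W (x C) array; each lines
    list holds ((Px, Py), (Qx, Qy)) pairs, in corresponding order."""
    h, w = source.shape[:2]
    out = np.zeros_like(source)
    for y in range(h):
        for x in range(w):
            X = np.array([x, y], dtype=float)
            dsum = np.zeros(2)
            wsum = 0.0
            for (P, Q), (P2, Q2) in zip(dst_lines, src_lines):
                P, Q = np.asarray(P, float), np.asarray(Q, float)
                P2, Q2 = np.asarray(P2, float), np.asarray(Q2, float)
                PQ, SQ = Q - P, Q2 - P2
                perp = np.array([-PQ[1], PQ[0]])
                u = np.dot(X - P, PQ) / np.dot(PQ, PQ)
                v = np.dot(X - P, perp) / np.linalg.norm(PQ)
                perp2 = np.array([-SQ[1], SQ[0]])
                Xp = P2 + u * SQ + v * perp2 / np.linalg.norm(SQ)
                if u < 0:
                    dist = np.linalg.norm(X - P)
                elif u > 1:
                    dist = np.linalg.norm(X - Q)
                else:
                    dist = abs(v)
                wgt = (np.linalg.norm(PQ) ** p / (a + dist)) ** b
                dsum += (Xp - X) * wgt
                wsum += wgt
            Xs = X + dsum / wsum
            sx = min(max(int(round(Xs[0])), 0), w - 1)  # clamp to bounds
            sy = min(max(int(round(Xs[1])), 0), h - 1)
            out[y, x] = source[sy, sx]
    return out
```

When every destination line coincides with its source line, each displacement is zero and the output reproduces the source image unchanged.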
This technique gives the animator direct control over the location of features in each frame of an animation. It is relatively trivial to extend this to cross-dissolve between two images over a series of intermediate images to produce a seamless morph.
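The per-frame bookkeeping for such a sequence can be sketched as follows (function names are my own; the warp itself is the loop shown earlier): at time t, interpolate the control-line endpoints, warp both end images toward the interpolated lines, then blend the two warped frames.

```python
import numpy as np

def interpolate_lines(lines0, lines1, t):
    """Linearly interpolate corresponding control-line endpoints at time t."""
    return [((1 - t) * np.asarray(a0, float) + t * np.asarray(a1, float),
             (1 - t) * np.asarray(b0, float) + t * np.asarray(b1, float))
            for (a0, b0), (a1, b1) in zip(lines0, lines1)]

def cross_dissolve(warped0, warped1, t):
    """Blend two already-warped frames; t runs from 0 (first) to 1 (second)."""
    return ((1.0 - t) * warped0 + t * warped1).astype(warped0.dtype)
```

Stepping t from 0 to 1 over the intermediate frames yields the seamless morph described above.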

Original paper is at http://www.hammerhead.com/thad/morph.html
