The ability to represent a function at hierarchical levels of resolution, or detail, and then examine it at any one of those levels. The highest resolution is the complete function; lower levels carry less and less detail and are coarser. One way to do this is with wavelets. The idea has become very popular in computer graphics for representing curves and surfaces, because it lets you render them at various levels of detail.

Another interesting thing is that every MRA can be generated by recursive subdivision, which is a quite cool idea: you take a polygon, or even a polyhedron, split each edge at its midpoint, and then move the vertices around. The surprising thing is that if you do this process correctly you can get all sorts of nice (or weird) looking curves out of it :)
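As an illustration, here is a sketch of one such subdivision step in Python: insert the midpoint of every edge of a closed polygon, then smooth each vertex with a 1-4-1 average of itself and its neighbours. The function name `subdivide` and the particular averaging weights are my choices for illustration (they happen to correspond to B-spline-style subdivision), not something fixed by the text above.

```python
def subdivide(points, rounds=1):
    """Refine a closed polygon: split every edge at its midpoint,
    then smooth each vertex with 1-4-1 averaging of its neighbours."""
    for _ in range(rounds):
        n = len(points)
        # 1. split each edge at its midpoint
        refined = []
        for i in range(n):
            x0, y0 = points[i]
            x1, y1 = points[(i + 1) % n]
            refined.append((x0, y0))
            refined.append(((x0 + x1) / 2, (y0 + y1) / 2))
        # 2. move each vertex toward its neighbours (1-4-1 weights)
        m = len(refined)
        points = [
            ((refined[i - 1][0] + 2 * refined[i][0] + refined[(i + 1) % m][0]) / 4,
             (refined[i - 1][1] + 2 * refined[i][1] + refined[(i + 1) % m][1]) / 4)
            for i in range(m)
        ]
    return points

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
smooth = subdivide(square, rounds=4)  # 64 points tracing a rounded closed curve
```

Each round doubles the vertex count, and because every new point is a convex combination of old points, the curve stays inside the original polygon's convex hull.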

Kind of neat, really, perhaps even a topic for a Masters thesis.

A Multiresolution Analysis (MRA) of L2(R) is a nested family of subspaces,

... ⊆ V-1 ⊆ V0  ⊆ V1 ⊆ V2 ⊆ ...

such that

i) If f(x) is in Vn then f(2x) is in Vn+1,

ii) V0 contains integer translations of its elements (i.e. if f(x) is in V0, then so is f(x-k) for any integer k),

iii) There is a function φ(x) in V0, called the scaling function, which "generates" the space V0 - in the sense that any function in V0 can be expressed as a linear combination of the integer translates {φ(x-k)} (for integer k),

iv) The intersection of the spaces {Vi} is equal to {0}.

v) The union of the spaces {Vi} is dense in L2(R).


Because of (i) and (ii), as you go higher up in the spaces Vn, you will capture "finer detail".

As a consequence of (iii), φ itself can be expressed as a linear combination of its own compressed translates, i.e. φ(x) = Σk sk φ(2x-k) for some real (or complex) coefficients {sk}. These are called the scaling coefficients, and the identity is known as the refinement (or two-scale) equation. A function satisfying such an identity is often called self-similar. The equation is similar to, but not quite, a finite difference equation.


If the scaling function φ is orthogonal to its integer translates, meaning that the L2 inner product <φ(x-k), φ(x-j)> = ∫ φ(x-k) φ(x-j) dx = δjk, the MRA is sometimes called an orthogonal MRA. If the scaling function is compactly supported, there will be finitely many non-zero scaling coefficients, and vice versa. Often the scaling function is required to be continuous, compactly supported, or twice differentiable, in order to grant the MRA nice features.


In this orthogonal case, one can define the wavelet space Wn as the orthogonal complement of Vn within Vn+1. A wavelet is formed (as a linear combination of the {φ(2x-k)}) using the wavelet coefficients, which are obtained by reversing the order of the scaling coefficients and alternating the sign of each one (wk = (-1)^k s1-k). This wavelet, being an element of W0, is orthogonal (in the L2 sense) to all of V0, since it is orthogonal to every integer translate of the scaling function φ(x) that generates V0.
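A tiny sketch of the coefficient recipe just described - reverse the scaling coefficients and alternate the signs, which is the same rule (up to an index shift) as wk = (-1)^k s1-k. The function name `wavelet_coefficients` is made up for illustration:

```python
def wavelet_coefficients(s):
    """Reverse the scaling coefficients and alternate the signs --
    the same recipe (up to an index shift) as w_k = (-1)**k * s_{1-k}."""
    return [(-1) ** k * c for k, c in enumerate(reversed(s))]

print(wavelet_coefficients([1, 1]))  # Haar scaling coefficients -> [1, -1]
```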


A good first example of an orthogonal scaling function, despite not being continuous, is the Haar function. This function equals 1 over the interval [0, 1) and 0 elsewhere. Writing the Haar function as p(x), the scaling coefficients are {1, 1}, because p(x) = p(2x) + p(2x-1).

The Haar wavelet uses the wavelet coefficients {1, -1}: it is defined as ψ(x) = p(2x) - p(2x-1), equal to 1 over [0, 1/2) and to -1 over [1/2, 1). (You can see how integrating this times the Haar function gives zero - they are orthogonal.)
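A quick numerical sanity check of that orthogonality claim, approximating the inner product ∫ p(x) ψ(x) dx with a Riemann sum (the grid size here is an arbitrary choice):

```python
import numpy as np

# Sample the Haar scaling function p and the Haar wavelet psi on a fine
# grid covering [0, 1), where both are supported.
x = np.linspace(0.0, 1.0, 1000, endpoint=False)
p = np.ones_like(x)                     # p(x) = 1 on [0, 1)
psi = np.where(x < 0.5, 1.0, -1.0)      # psi(x) = 1 on [0, 1/2), -1 on [1/2, 1)

dx = x[1] - x[0]
inner = float(np.sum(p * psi) * dx)     # Riemann sum for the inner product
print(inner)  # 0.0 -- the wavelet is orthogonal to the scaling function
```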

A more interesting example, though not orthogonal, is the "tent" function T(x), the piecewise linear function increasing from 0 to 1 over the interval [-1, 0], then decreasing from 1 to 0 over [0, 1]. The scaling coefficients in this case are {1/2, 1, 1/2}, since T(x) = 1/2 T(2x+1) + T(2x) + 1/2 T(2x-1).
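A small sketch verifying that refinement identity for the tent function at a grid of sample points (the grid spacing is arbitrary):

```python
def T(x):
    """The tent function: 1 - |x| on [-1, 1], zero elsewhere."""
    return max(0.0, 1.0 - abs(x))

# Spot-check the refinement identity
#   T(x) = 1/2 T(2x+1) + T(2x) + 1/2 T(2x-1)
# at sample points across (and beyond) the support [-1, 1].
for i in range(-20, 21):
    x = i / 10.0
    rhs = 0.5 * T(2 * x + 1) + T(2 * x) + 0.5 * T(2 * x - 1)
    assert abs(T(x) - rhs) < 1e-12
```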
