
Tuesday, January 02, 2007

Nothing short of a revolution. Part deux: Pursuing a dream


With the wavelet bang came another realization, and then something odd happened. In 1994, Donoho and Chen published a paper that summarized the state of affairs in approximation:

- The realization that now, with wavelets of all kinds being added daily on top of the old approximating polynomials, there are too many functions available to decompose a signal, and there is no good way of figuring out which one is best to use. The situation is so bad that you could write an entire dictionary of different wavelet names.

- Some signals are a composition of various functions and distributions (Diracs, ...), so it does not make sense to decompose them with respect to only one family of functions. If you use only one type of decomposing function, you will not get a sparse decomposition. Take for instance the Gibbs phenomenon, where one tries to approximate a Dirac with a bunch of sines and cosines (see the Fourier sketch after this list).

- Most decomposition algorithms are based on a least squares approach. In other words, the criterion by which one decides whether a series of approximations has converged is based on the $L_2$ distance, i.e. the Euclidean distance. The definition of a scalar product is therefore essential, yet no one knows why some scalar products work better than others. In the end, many formulations of a variety of engineering problems rely on so-called weak formulations, such as the finite element method, but sometimes no one really knows why these methods should be used. Case in point, albeit a non-traditional one: neutron and radiation transport. There are many weak formulations of the linear transport equation, even though we know it has distributions as eigenfunctions. Yet neither the scalar product of the weak formulation ($L_2$) nor the one induced by the eigendistributions can constrain the solution to be positive (an initial requirement) or to have a direct physical meaning.

- If the scalar product is induced by the decomposing family of functions, then how can one decompose a function simultaneously along different sets of functions?

- The $L_2$ distance criterion never converges toward a local (sparse) approximation. The approximation will have many coefficients that are very close to each other in magnitude. There is no way to apply a blind threshold to these coefficients, because each and every one of them counts in the minimization of the criterion (see the least squares sketch after this list).

- The problem you have always wanted to solve is an $L_0$ problem, not an $L_2$ one. The $L_0$ criterion is really about the sparsity of the approximation.

- The $L_0$ problem is NP-hard, so there is no way you can figure out a solution in your lifetime on average.
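
To make the Gibbs/Dirac bullet concrete, here is a minimal sketch in Python with NumPy (neither appears in the original work; this is just an illustration): a Dirac-like spike is as non-sparse as it gets in a Fourier dictionary, since every one of its Fourier coefficients has exactly the same magnitude, and keeping only a few sines and cosines barely reduces the error.

```python
import numpy as np

# A discrete "Dirac": a single spike in a signal of length n.
n = 128
spike = np.zeros(n)
spike[n // 2] = 1.0

# Fourier coefficients of the spike: the spectrum is flat, i.e. every
# sine/cosine carries exactly the same weight -- nothing is negligible.
coeffs = np.fft.fft(spike) / np.sqrt(n)
print("max |c_k| =", np.abs(coeffs).max())
print("min |c_k| =", np.abs(coeffs).min())   # identical to the max

# Keeping only the K largest Fourier coefficients therefore helps very little.
K = 10
idx = np.argsort(np.abs(coeffs))[::-1][:K]
truncated = np.zeros_like(coeffs)
truncated[idx] = coeffs[idx]
reconstruction = np.real(np.fft.ifft(truncated) * np.sqrt(n))
print("relative error with", K, "terms:",
      np.linalg.norm(spike - reconstruction) / np.linalg.norm(spike))
```

In the canonical (spike) basis the same signal is represented exactly by a single coefficient, which is the whole point about picking the right family of functions.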
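
Similarly, for the bullet about the $L_2$ criterion spreading energy over many comparable coefficients, here is another small sketch on made-up random data: the minimum-$L_2$-norm solution of an underdetermined system is dense even when a perfectly sparse solution exists, so no blind threshold will recover the sparse one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined system A x = b with a planted 3-sparse solution x0.
m, n, k = 20, 80, 3
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x0

# The least squares / minimum-L2-norm answer: x = pinv(A) @ b.
x_l2 = np.linalg.pinv(A) @ b

# Energy is spread over (almost) all 80 coefficients, with comparable sizes,
# so thresholding cannot reveal the 3 true nonzeros.
print("nonzeros in x0:", np.count_nonzero(x0))
print("coefficients of x_l2 above 1% of its max:",
      int(np.sum(np.abs(x_l2) > 0.01 * np.abs(x_l2).max())))
```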


Donoho and Chen showed that if you were to solve an $L_1$ problem instead of an $L_0$ one, the solution was likely to be very close to the $L_0$ solution. Solving an $L_1$ problem amounts to linear programming, i.e. an optimization. Because of its over-reliance on the least squares approach, the whole engineering field could be shattered as a result. Yet the methods by which one could obtain a solution were so time consuming at that time that they were not likely to make a dent in the weak formulation business anytime soon. The lingering thought is still that it will eventually prevail and change our views on how to solve engineering problems in the future. At some point, even Google was using a code word related to the $L_1$ approach to find future employees :-). There need to be other stepping stones to reach the final climax of this story, though.
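
For the curious, here is a rough sketch of what that $L_1$ substitution (basis pursuit) looks like in practice: minimize the $L_1$ norm of $x$ subject to $Ax = b$, recast as a linear program through the standard split $x = u - v$ with $u, v \ge 0$. The Python/SciPy toy below is mine, not the authors' code, and the data are made up; it just shows the $L_1$ solution landing on the sparse answer that least squares misses.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Same kind of toy underdetermined system with a planted sparse solution.
m, n, k = 20, 80, 3
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x0

# Basis pursuit:  min ||x||_1  subject to  A x = b.
# With x = u - v and u, v >= 0, this becomes the linear program
#   min  1'u + 1'v   subject to  [A, -A] [u; v] = b,  u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))

x_l1 = res.x[:n] - res.x[n:]
# For this toy setup the L1 solution typically matches the planted sparse x0.
print("nonzeros above 1e-6:", int(np.sum(np.abs(x_l1) > 1e-6)))
print("max |x_l1 - x0|    :", np.abs(x_l1 - x0).max())
```

That it is a linear program with twice as many variables as unknowns is also part of why these solves were so time consuming on the machines of that era.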
