
Friday, October 17, 2008

CS: Euclidean sections of l^N_1 and EC over the reals, Post-doc, CCA, CC, Renyi Entropy, IHT, l_1 norm sparse Bayesian learning

Terry Tao mentions Compressed Sensing in his latest entry on Small samples, and the margin of error. I guess it must have to do with the concept of concentration of measure.

Venkatesan Guruswami, James R. Lee, and Avi Wigderson just released Euclidean sections of l^N_1 with sublinear randomness and error-correction over the reals. The abstract reads:

It is well known that R^N has subspaces of dimension proportional to N on which the l_1 and l_2 norms are uniformly equivalent, but it is unknown how to construct them explicitly. We show that, for any $\delta > 0$, such a subspace can be generated using only $N^{\delta}$ random bits. This improves over previous constructions of Artstein-Avidan and Milman, and of Lovett and Sodin, which require O(N log N) and O(N) random bits, respectively. Such subspaces are known to also yield error-correcting codes over the reals and compressed sensing matrices. Our subspaces are defined by the kernel of a relatively sparse matrix (with at most $N^{\delta}$ non-zero entries per row), and thus enable compressed sensing in near-linear $O(N^{1+\delta})$ time. As in the work of Guruswami, Lee, and Razborov, our construction is the continuous analog of a Tanner code, and makes use of expander graphs to impose a collection of local linear constraints on vectors in the subspace. Our analysis is able to achieve uniform equivalence of the l_1 and l_2 norms (independent of the dimension). It has parallels to iterative decoding of Tanner codes, and leads to an analogous near-linear time algorithm for error-correction over the reals.
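As a small numerical illustration of the l_1/l_2 equivalence phenomenon, here is a sketch in Python. It uses a plain sparse random constraint matrix rather than the authors' expander-based Tanner-code construction, and all sizes are arbitrary choices; it simply spot-checks the l_1 norm of random unit vectors drawn from the kernel of a sparse matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, d = 400, 200, 8        # ambient dimension, number of constraints, nonzeros per row

# Plain sparse random constraint matrix with d random +/-1 entries per row
# (an arbitrary stand-in for the paper's expander-based construction).
A = np.zeros((M, N))
for i in range(M):
    cols = rng.choice(N, size=d, replace=False)
    A[i, cols] = rng.choice([-1.0, 1.0], size=d)

# The subspace of interest is the kernel of A; an orthonormal basis of it
# is read off from the SVD.
_, sv, Vt = np.linalg.svd(A)
rank = int(np.sum(sv > 1e-10))
K = Vt[rank:].T              # N x (N - rank), columns span ker(A)

# Uniform l1/l2 equivalence means every unit vector in the subspace has
# l1 norm within a constant factor of sqrt(N); here we only check random ones.
ratios = []
for _ in range(1000):
    x = K @ rng.standard_normal(K.shape[1])
    x /= np.linalg.norm(x)
    ratios.append(np.linalg.norm(x, 1))
print(f"smallest l1 norm over sampled unit vectors: {min(ratios):.1f}"
      f"  (sqrt(N) = {np.sqrt(N):.1f})")
```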


The SISYPH (Signals, Systems and Physics) research group, at the Physics Department of Ecole Normale Superieure de Lyon, offers a 12-month postdoctoral position, starting any time after December 1st, 2008. One of the subjects of interest of the group is Compressed Sensing. More information can be found here. This job was added to the CS Job list.

Yves Lucet just released the Computational Convex Analysis library. The succinct description reads:

The Computational Convex Analysis numerical library contains efficient algorithms to manipulate convex functions, and to compute fundamental convex analysis transforms. The library is coded using Scilab version 4.1.2 (available here), a freely available numerical computing package.

The list of available functions can be found here. Let us note that Scilab syntax maps fairly directly onto Matlab.
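To give a flavor of the kind of transform the library computes, here is a brute-force discrete Legendre-Fenchel conjugate in Python. This is only an illustrative sketch, not the library's code: the CCA library is written in Scilab and implements much faster (linear-time) algorithms for the same transforms.

```python
import numpy as np

def discrete_conjugate(x, fx, s):
    """Brute-force discrete Legendre-Fenchel conjugate:
    f*(s) = max_x (s*x - f(x)) over the sample grid x."""
    # O(len(s) * len(x)); the CCA library uses faster algorithms.
    return np.max(np.outer(s, x) - fx[None, :], axis=1)

# Sanity check: f(x) = x^2/2 is its own conjugate, so f*(s) should equal s^2/2.
x = np.linspace(-5.0, 5.0, 2001)
s = np.linspace(-2.0, 2.0, 9)
print(np.round(discrete_conjugate(x, x**2 / 2, s), 3))   # ~ s**2 / 2
```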

On top of the abstract, Ping Li also provided a summary for the SODA conference explaining how his latest Compressed Counting paper brings new insight:

Our contributions include:

1) Compressed Counting (CC) is the first proposal of using skewed projections in data stream computations. We also show that maximally-skewed projections can achieve the best performance.

2) CC is the first algorithm which captures the intuition that, when $\alpha =1$ a simple counter suffices, and when $\alpha = 1\pm\Delta$ with small $\Delta$, the sample complexity should be low. We show the sample complexity of a proposed estimator $= \frac{G}{\epsilon^2}\log\left(\frac{2}{\delta}\right)$, where $G= O\left(\epsilon\right)$ as $\Delta\rightarrow 0$. In other words, for small $\Delta$, CC has the complexity $=O\left({1}/{\epsilon}\right)$. Previous results only achieved $O\left(1/\epsilon^2\right)$ (even for $\alpha =1$), which in general is expected and can not be improved, according to the central limit theorem.

3) We show, when $\alpha = 1\pm\Delta$, the asymptotic variance of a proposed estimator is $\propto \Delta$. As $\alpha \rightarrow 1$, CC achieves ``an infinite improvement'' over the previous results, in terms of the asymptotic variances.

4) Our results (especially when $\alpha=1\pm\Delta\approx 1$) have important applications. In some situations, $\Delta$ might be interpreted as the ``decay rate'' or ``interest rate,'' which is usually small. The method of moments is popular for statistical inference, which requires computing frequency moments. Also, some important summary statistics such as Renyi entropy and Tsallis entropy are functions of frequency moments, and hence CC naturally provides an efficient algorithm to compute them. Very importantly, CC may serve as a basic building element for developing other algorithms, for example, approximating the Shannon entropy using Renyi entropy and Tsallis entropy with $\alpha\approx 1$ [a small numerical illustration of this entropy connection follows the list].

5) Our results enrich the fundamental theory of skewed stable distributions.
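To illustrate the entropy connection in item 4), here is a small Python sketch that computes Renyi and Tsallis entropies from exact frequencies and checks that both approach the Shannon entropy as $\alpha \rightarrow 1$. It does not implement the streaming, skewed-projection estimator of the paper; the toy counts below are arbitrary.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha)."""
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def tsallis_entropy(p, alpha):
    """Tsallis entropy S_alpha(p) = (1 - sum_i p_i^alpha) / (alpha - 1)."""
    return (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

# Toy frequency vector standing in for exact stream counts.
rng = np.random.default_rng(1)
counts = rng.integers(1, 1000, size=50)
p = counts / counts.sum()
shannon = -np.sum(p * np.log(p))

# Both entropies are functions of the frequency moment sum_i p_i^alpha,
# and both approach the Shannon entropy as alpha -> 1, which is the
# regime (alpha = 1 +/- Delta) that Compressed Counting targets.
for alpha in (1.1, 1.01, 1.001):
    print(alpha, renyi_entropy(p, alpha), tsallis_entropy(p, alpha), shannon)
```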
Talking about Renyi entropy, Jean-Francois Bercher provides us with some understanding of it in 'Possible rationales for Renyi-Tsallis entropy maximization'. The abstract of the paper reads:

Distributions derived from the maximization of Renyi-Tsallis entropy are often called Tsallis' distributions. We first indicate that these distributions can arise as mixtures, and can be interpreted as the solution of a standard maximum entropy problem with fluctuating constraints. Considering that Tsallis' distributions appear for systems with displaced or fluctuating equilibriums, we show that they can be derived in a standard maximum entropy setting, taking into account a constraint that displaces the standard equilibrium and introduces a balance between several distributions. In this setting, the Renyi entropy arises as the underlying entropy.
Another interest of Tsallis distributions, in many physical systems, is that they can exhibit heavy tails and model power-law phenomena. We note that Tsallis' distributions are similar to Generalized Pareto distributions, which are widely used for modeling the tails of distributions, and appear as the limit distribution of excesses over a threshold. This suggests that they can arise in many contexts if the system at hand or the measurement device introduces some threshold. We draw a possible asymptotic connection with the solution of maximum entropy. This view gives a possible interpretation for the ubiquity of Tsallis' (GPD) distributions in applications and an argument in support of the use of Renyi-Tsallis entropies.
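To make the similarity noted in the abstract a little more concrete, here is a rough sketch in one common parameterization (conventions vary, so take this as an assumption rather than the paper's own derivation): the Tsallis (q-exponential) density has the form $f_q(x) \propto [1-(1-q)\lambda x]^{1/(1-q)}$, while the Generalized Pareto density is $f_{GPD}(x) = \frac{1}{\sigma}(1+\xi x/\sigma)^{-1/\xi-1}$. Matching exponents and scales gives $\xi = (q-1)/(2-q)$ and $\sigma = 1/((2-q)\lambda)$, so for $1 < q < 2$ both describe the same power-law tail.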

Thomas Blumensath and Mike Davies have just released their ICASSP '09 talk featuring their Iterative Hard Thresholding solver in A Simple, Efficient and Near Optimal Algorithm for Compressed Sensing. The abstract reads:
When sampling signals below the Nyquist rate, efficient and accurate reconstruction is nevertheless possible, whenever the sampling system is well behaved and the signal is well approximated by a sparse vector. This statement has been formalised in the recently developed theory of compressed sensing, which developed conditions on the sampling system and proved the performance of several efficient algorithms for signal reconstruction under these conditions. In this paper, we prove that a very simple and efficient algorithm, known as Iterative Hard Thresholding, has near optimal performance guarantees rivalling those derived for other state of the art approaches.
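The basic Iterative Hard Thresholding iteration is simple enough to sketch in a few lines of Python. This is an illustrative toy version (unit step size, exact sparse signal, arbitrary problem sizes), not the authors' code, and it assumes the sampling matrix is suitably normalized as in the theory.

```python
import numpy as np

def iht(A, y, s, iters=300):
    """Iterative Hard Thresholding sketch: x <- H_s(x + A^T (y - A x)),
    where H_s keeps only the s largest-magnitude entries.
    Assumes a suitably normalized sampling matrix A."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + A.T @ (y - A @ x)
        small = np.argsort(np.abs(x))[:-s]   # indices of all but the s largest entries
        x[small] = 0.0
    return x

# Toy demo (arbitrary sizes): recover an 8-sparse vector from 128 Gaussian measurements.
rng = np.random.default_rng(0)
n, m, s = 256, 128, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)   # columns have unit norm in expectation
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true
x_hat = iht(A, y, s)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```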

Finally, the Ph.D. thesis of Yuanqing Lin entitled l_1-norm sparse Bayesian learning: Theory and applications is now available.

Image Credit: NASA/JPL/Space Science Institute, view of Enceladus from the Cassini spacecraft, photo taken on October 9th, 2008.
