Edo Liberty has submitted his Ph.D. thesis entitled: Accelerated Dense Random Projections. The slides of the talk are here. When we met at Texas A&M, I asked Justin Romberg about the relationship between the random projections used in CS and those appearing in the Johnson-Lindenstrauss Lemma. He proceeded to give an explanation similar to the one he wrote in one of his papers, entitled Compressive Sensing by Random Convolution, where on page 7 we can read:
That cleared it up, but I think I needed to see it in writing :).
In , Ailon and Chazelle propose the idea of a randomized Fourier transform followed by a random projection as a "fast Johnson-Lindenstrauss transform" (FJLT). The transform is decomposed as QF\Sigma, where Q is a sparse matrix whose non-zero entries have locations and values chosen at random. They show that this matrix QF\Sigma behaves like a random waveform matrix in that with extremely high probability, it will not change the norm of an arbitrary vector too much. However, this construction requires that the number of non-zero entries in each row of Q is commensurate with the number of rows m of Q. Although ideal for dimensionality reduction of small point sets, this type of subsampling does not translate well to compressive sampling, as it would require us to randomly combine on the order of m samples of Hx_0 from arbitrary locations to form a single measurement -- taking m measurements would require on the order of m^2 samples. We show here that more structured projections, consisting either of one randomly chosen sample per row or a random combination of consecutive samples, are adequate for CS. This is in spite of the fact that our construction results in much weaker concentration than the FJLT.
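The QF\Sigma pipeline in the quote is easy to sketch numerically. Below is a minimal toy version: \Sigma is a random diagonal sign flip, F is a unitary FFT, and Q is a sparse matrix with a few random Gaussian entries per row. The dimensions, the per-row sparsity s, and the variance scaling are my own assumptions, chosen only so that the norm is preserved in expectation; this is an illustration, not the authors' construction verbatim.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 1024, 64, 32   # ambient dim, target dim, nonzeros per row of Q (all assumed)

# Sigma: random diagonal sign flips
signs = rng.choice([-1.0, 1.0], size=n)

# Q: sparse random matrix, s nonzeros per row, with variance n/(m*s)
# so that E||Q F Sigma x||^2 = ||x||^2
Q = np.zeros((m, n))
for i in range(m):
    idx = rng.choice(n, size=s, replace=False)
    Q[i, idx] = rng.normal(0.0, np.sqrt(n / (m * s)), size=s)

def fjlt(x):
    """Apply the Q F Sigma pipeline: sign flips, unitary FFT, sparse projection."""
    z = np.fft.fft(signs * x, norm="ortho")   # F Sigma x, norm-preserving
    return Q @ z                              # sparse random combinations

x = rng.normal(size=n)
y = fjlt(x)
ratio = np.linalg.norm(y) / np.linalg.norm(x)
print(f"norm ratio ||QFSx|| / ||x|| = {ratio:.3f}")  # concentrates near 1
```

The variance n/(m*s) is what makes the expected squared norm come out right: each row of Q sees a fraction s/n of the energy of z, and there are m rows.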
There is also a larger version of Compressive Structured Light for Recovering Inhomogeneous Participating Media, by Jinwei Gu, Shree Nayar, Eitan Grinspun, Peter Belhumeur, and Ravi Ramamoorthi.
As some of you know, I am interested in policy learning; it so happens that Umar Syed, Robert Schapire, and Michael Bowling have just released Apprenticeship Learning Using Linear Programming (web page). The abstract reads:
In apprenticeship learning, the goal is to learn a policy in a Markov decision process that is at least as good as a policy demonstrated by an expert. The difficulty arises in that the MDP's true reward function is assumed to be unknown. We show how to frame apprenticeship learning as a linear programming problem, and show that using an off-the-shelf LP solver to solve this problem results in a substantial improvement in running time over existing methods -- up to two orders of magnitude faster in our experiments. Additionally, our approach produces stationary policies, while all existing methods for apprenticeship learning output policies that are "mixed", i.e. randomized combinations of stationary policies. The technique used is general enough to convert any mixed policy to a stationary policy.
The video of the ICML presentation on the same subject is here.
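To see why an LP formulation yields stationary policies directly, here is a minimal sketch of the underlying LP primitive: the occupancy-measure formulation of a toy MDP, solved with scipy's off-the-shelf linprog. This is not the authors' apprenticeship-learning algorithm (which treats the reward as unknown); the MDP, reward, and discount below are all stand-in assumptions, used only to show that normalizing the LP's occupancy-measure solution gives a stationary policy.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2-state, 2-action MDP (all numbers are illustrative assumptions)
S, A, gamma = 2, 2, 0.9
P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # P[s, a, s']: transition probabilities
              [[0.7, 0.3], [0.05, 0.95]]])
r = np.array([[1.0, 0.0],                    # r[s, a]: stand-in for the true reward
              [0.0, 2.0]])
mu0 = np.array([0.5, 0.5])                   # initial state distribution

# Bellman flow constraints on the occupancy measure x[s, a]: for each state s,
#   sum_a x[s,a] - gamma * sum_{s',a} P[s',a,s] x[s',a] = (1 - gamma) * mu0[s]
A_eq = np.zeros((S, S * A))
for s in range(S):
    for sp in range(S):
        for a in range(A):
            A_eq[s, sp * A + a] = (1.0 if sp == s else 0.0) - gamma * P[sp, a, s]
b_eq = (1 - gamma) * mu0

# Maximize expected discounted reward <r, x>  (linprog minimizes, so negate)
res = linprog(c=-r.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(S, A)

# The LP solution is an occupancy measure; normalizing each row of it
# gives a *stationary* policy directly -- no mixing needed.
policy = x / x.sum(axis=1, keepdims=True)
print(policy)
```

Each row sum of x is at least (1 - gamma) * mu0[s] by the flow constraint, so the normalization is always well defined when mu0 is strictly positive.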
Found on arxiv, two papers:
"Real" Slepian-Wolf Codes by Bikash Kumar Dey, Sidharth Jaggi, Michael Langberg. The abstract reads:
We provide a novel achievability proof of the Slepian-Wolf theorem for i.i.d. sources over finite alphabets. We demonstrate that random codes that are linear over the real field achieve the classical Slepian-Wolf rate-region. For finite alphabets we show that typicality decoding is equivalent to solving an integer program. Minimum entropy decoding is also shown to achieve exponentially small probability of error. The techniques used may be of independent interest for code design for a wide class of information theory problems, and for the field of compressed sensing.
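The "typicality decoding is equivalent to solving an integer program" point can be illustrated with a toy experiment: take a biased binary source, compress it with a few real-valued random linear measurements, and decode by exhaustively searching the binary words consistent with the measurements for the one of minimum empirical entropy. The sizes, the Gaussian code, and the brute-force search are purely illustrative assumptions (a real integer-programming decoder would of course not enumerate).

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 4                               # source length, real measurements (assumed)
x = (rng.random(n) < 0.2).astype(float)    # biased binary source, p = 0.2

A = rng.normal(size=(m, n))                # random code, linear over the real field
y = A @ x                                  # m real numbers instead of bits

def empirical_entropy(z):
    """Binary empirical entropy of a 0/1 word, in bits."""
    p = z.mean()
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Minimum-entropy decoding as a (brute-force) integer program: among all
# binary words consistent with the measurements, pick the lowest-entropy one.
best = None
for bits in itertools.product([0.0, 1.0], repeat=n):
    z = np.array(bits)
    if np.allclose(A @ z, y):
        if best is None or empirical_entropy(z) < empirical_entropy(best):
            best = z

print("exact recovery:", np.array_equal(best, x))
```

With a continuous random matrix, a wrong binary word matches all m real measurements only on a measure-zero event, which is why so few real measurements suffice here; this is also the flavor of the connection to compressed sensing the abstract alludes to.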
Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models by Matthias W. Seeger and Hannes Nickisch. The abstract reads:
We provide novel approximate Bayesian inference algorithms for sparse generalized linear models that can be used with hundreds of thousands of variables, and run orders of magnitude faster than previous algorithms in domains where either applies. By analyzing our methods and establishing some novel convexity results, we settle a long-standing open question about variational Bayesian inference for continuous variable models: the Gaussian lower bound relaxation, which has been used previously for a range of models, is proved to be a convex optimization problem, if and only if the posterior mode is found by convex programming. Our algorithms reduce to the same computational primitives as commonly used sparse estimation methods do, but require Gaussian marginal variance estimation as well.
We are interested in Bayesian experimental design here (which is mainly driven by efficient approximate inference), a powerful framework for optimizing measurement architectures of complex signals, such as natural images. Designs optimized by our Bayesian framework strongly outperform choices advocated by compressed sensing theory, and with our novel algorithms, we can scale it up to full-size images. Immediate applications of our method lie in digital photography and medical imaging.
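The "posterior mode found by convex programming" condition in the abstract has a familiar special case: for a linear model with a Laplace (sparsity-inducing) prior, the posterior mode is exactly the Lasso solution, a convex problem. Here is a minimal sketch computing that mode with plain ISTA (proximal gradient). This is not the authors' variational algorithm, and the problem sizes, noise level, and prior strength lam are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_feat = 50, 100                  # toy sizes (assumed)
X = rng.normal(size=(n_obs, n_feat))
w_true = np.zeros(n_feat)
w_true[:5] = [3.0, -2.0, 1.5, -1.0, 2.5] # a 5-sparse ground truth
y = X @ w_true + 0.01 * rng.normal(size=n_obs)

lam = 0.1                                # Laplace-prior strength (assumed)
step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient

# ISTA: proximal gradient descent on the convex MAP objective
#   0.5 * ||y - X w||^2 + lam * ||w||_1   (posterior mode under a Laplace prior)
w = np.zeros(n_feat)
for _ in range(2000):
    grad = X.T @ (X @ w - y)
    z = w - step * grad
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

# the recovered support should match the 5 planted coefficients
print("support:", np.flatnonzero(np.abs(w) > 0.5))
```

The soft-threshold step is the proximal operator of the L1 penalty, which is what makes this a correct convex solver for the MAP problem rather than a heuristic.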
Photo Credit: EUMETSAT. Meteosat-8 Rapid Scan captures asteroid impact. On 7 October, asteroid 2008 TC3 hit Earth and exploded in the atmosphere over northern Sudan. Amazingly, the Meteosat-8 Rapid Scanning Service managed to capture the impact.