
Saturday, April 30, 2011

A billion smartphones



Back in 2008, the beginning of this video at ECTV'08 (Computational Photography: Epsilon to Coded Imaging) showed Ramesh Raskar highlighting that we had reached 1 billion cameras sold every year, up from near zero fifteen years earlier. He also mentioned that, back then, the highest-growth market was that of lower-resolution chips used in your computer mouse. That was 2008. Three years later, Vladimir Koifman reminds us that more than 1 billion cameras will be sold in phones this year alone. Out of that market, one wonders how many are iPhones (which all use roughly the same imaging chip). I found the answer on the interwebs:



This total includes all iPhone models (original, 3G, and 3GS), is based on Apple's announcements, and the numbers are approximate. I'll update this figure whenever Apple reveals new numbers.
Total iPhones Sold Worldwide
March 2011: 108 million
January 2011: 90 million
April 2010: 50 million
January 2010: 42.4 million
January 2009: 17.3 million
January 2008: 3.7 million




So the question we should be asking ourselves is: what can we do with 1 billion GPS-internet-enabled cameras?



I am not the only one asking; Robert Pless has the same nagging feeling. From his research page:

Questions that have recently kept me awake at night include:
  • Re-purposing sensors: "What if we could use all the world's webcams as a coherent imaging sensor?"
  • Collaborative Imaging: "What if everyone's camera phone served as an environmental imaging resource?"
  • Health@Home: "What if we can measure (indicators of) health from sensors already in the home?"

Could there be a new business model where you'd be paid for allowing some company to mine your family or Flickr photos (with GPS and time information) for weather-related information? What else can you do with a billion GPS- and internet-enabled cameras?

Of related interest: Image Based Geolocalization and Sensor Network

Friday, April 29, 2011

MathBlogging.org


For those of you who are interested in Math, here is MathBlogging.org. I go there every so often to be surprised and I usually am.

The beauty of a blog

Here is Bob Sturm's thoughtful take on it in Why Blog Research? Go read it, I'll wait... Using the blog as a personal database is the reason I link directly to the papers and the people writing them, and it is also the reason I share my real-time experiments. A side effect of this is that Bob's and David's names now show up on the first page of Google when searching for them. In the end, it is a huge time saver for all of us (notwithstanding the inept search capability Google provides for the blog). However, I recently read another good summary of a reason to blog in Social media in science by John Hawks:

Science is a realm in which many highly motivated and smart people are competing for a limited number of jobs. There are many ways to put your work forward, and blogging can be one of them. I never discount that the biggest factor is luck. But 90 percent of luck is standing in the right place at the right time. The beauty of a blog is that it's standing there waiting all the time for the right person to look.
I see this all the time in the counter logs, and I am always astonished at how specific threads are being revisited many years after they were written. Unlike most other science blogs, Nuit Blanche features unpublished works from a highly competitive community. The peer review shows up in the comment section or by email directly to me or the authors. Yes, forget the social media blah-blah... what you are slowly witnessing is Science in the making. The blog is a sure way to show that your work will stand the test of time.

If like others you are considering blogging, you might want to read "You aren't blogging yet?" by Bora Zivkovic and Jonathan Eisen.

CS: Concentration Week on "Greedy Algorithms in Banach spaces and Compressed Sensing" at Texas A&M University

Forget about what Donoho said ("You’ve got Terry Tao talking to geoscientists, what do you want?") to the NSF folks about compressive sensing crossing over and being a bridge between many different worlds. The following meeting is hosted at Texas A&M University, but its web site is hosted at the University of Texas. WOW, forget about today's wedding or the last flight of Endeavour, this is news.

The concentration week on "Greedy Algorithms in Banach spaces and Compressed Sensing" at Texas A&M University is part of the Workshop in Analysis and Probability at Texas A&M, which runs from July 5 to August 5. The concentration week itself will take place July 18-22, 2011.

From the description:


When encoding or reconstructing a vector using an iterative algorithm, a natural approach is to take the best or biggest approximation at each iteration. Such techniques are referred to as greedy algorithms. The theory of greedy algorithms, developed largely over the past 20 years, has had significant connections with Banach spaces, approximation theory, and signal processing. Of particular note are the applications of greedy algorithms to compressed sensing. The theory of compressed sensing is concerned with encoding or reconstructing vectors which are sparsely represented with respect to a given basis. In other words, given a basis for a high dimensional Hilbert space, how does one identify a vector which is a linear combination of only a small number of basis elements? This concentration week will hold introductory lectures on these topics, as well as presentations of new results at the frontiers of research. Some topics will be on greedy algorithms, some on compressed sensing, and some on the interplay between the two areas.
The following invited speakers will present a series of 3 talks:
Steven Dilworth (3 introductory talks on greedy algorithms and bases)
Kevin Ford (1 introductory talk and 2 advanced talks on explicit construction of RIP matrices)
Przemyslaw Wojtaszczyk (3 talks on compressed sensing)
Vladimir Temlyakov (3 talks on greedy algorithms)

The following invited speakers will each present a talk:
Fernando Albiac
Smbat Gogyan
Eugenio Hernandez
Anna Kamont
Boris Kashin
Denka Kutzarova
Evgenii Livshits
Maria de Navitade
Tanmoy Paul
Krzysztof Smela
Rachel Ward
Andras Zsak
More speakers will be listed once they are confirmed. There will be several time slots available for talks. If you would like to give a talk please contact the organizers.
This Concentration week is part of the Workshop in Analysis and Probability at Texas A&M July 5 to August 5. Participants are strongly encouraged to attend other workshop events. In particular the Summer Informal Regional Functional Analysis Seminar (SUMIRFAS) will be held July 29-July 31, and the Concentration Week on "Non-Linear Geometry of Banach Spaces, Geometric Group Theory, and Differentiability" will be held August 1-5. There will also be talks scheduled for the workshop in the days between concentration weeks.
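Since the description above centers on greedy algorithms, taking the best or biggest contribution at each iteration, here is a minimal sketch of one such method, orthogonal matching pursuit, in Python (my toy illustration, unrelated to the program of the meeting):

import numpy as np

def omp(A, y, k):
    # Greedily pick the column most correlated with the residual,
    # then re-fit by least squares on the selected support.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Usage: recover a 5-sparse vector from 60 Gaussian measurements.
rng = np.random.default_rng(0)
m, n, k = 60, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # columns roughly unit-norm
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
print("max error:", np.abs(omp(A, A @ x0, k) - x0).max())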

Can Compressive Sensing Help Us Understand the Mass of the Universe?

The week-end is approaching, so it's time for thought-provoking ideas. Watching the following movie from PHD Comics reminded me of the issue of gravitational lensing and how it fits with compressive sensing. If you want to do something in that direction, you may want to compete in the Star Challenge of the GIANT10 challenge (by the way, I am trying to put all competitions and challenges in one place). In the movie, it is said that dark matter accounts for about 20% of the mass in the universe and normal matter for only 5%, while the rest is unknown. Since the only way to visualize dark matter is through its lensing effect on light shone by normal matter, and since what we see of matter comes through these random assemblies of dark matter, how can the general principles of compressive sensing help us understand the distribution of mass in the universe? The idea is that dark matter acts as a measurement matrix and we only get to see measurements. This is a typical blind deconvolution problem, except that one is making inferences on the solution, which at 5% of the mass looks to me like a small (sparse) quantity.

One more thing: today the last flight of Endeavour (STS-134) will put the Alpha Magnetic Spectrometer (AMS) on the truss of the International Space Station. The AMS will be the first device outside Earth's atmosphere dedicated to detecting dark matter. This is bittersweet to me personally, since for a while we tried to get projects running on the truss of the ISS next to the AMS on the now defunct Express Pallets. Oh well.


Dark Matters from PHD Comics on Vimeo.

CS: Sparse Recovery for Earth Mover Distance, High-Dimensional Statistical Recovery, Compressive Network Analysis, MRA With Distributed CS, Weighted l_p Constraints in Noisy CS, Primal-Dual TV Reconstruction in Refractive Deflectometry

Here are a few papers for the week-end, enjoy!


We initiate the study of sparse recovery problems under the Earth-Mover Distance (EMD). Specifically, we design a distribution over m x n matrices A such that for any x, given Ax, we can recover a k-sparse approximation to x under the EMD distance. One construction yields m = O(k log(n/k)) and a 1 + epsilon approximation factor, which matches the best achievable bound for other error measures, such as the L_1 norm. Our algorithms are obtained by exploiting novel connections to other problems and areas, such as streaming algorithms for k-median clustering and model-based compressive sensing. We also provide novel algorithms and results for the latter problems.

Many statistical M-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient methods for solving such problems, working within a high-dimensional framework that allows the data dimension d to grow with (and possibly exceed) the sample size n. This high-dimensional structure precludes the usual global assumptions---namely, strong convexity and smoothness conditions---that underlie much of classical optimization analysis. We define appropriately restricted versions of these conditions, and show that they are satisfied with high probability for various statistical models. Under these conditions, our theory guarantees that projected gradient descent has a globally geometric rate of convergence up to the \emph{statistical precision} of the model, meaning the typical distance between the true unknown parameter $\theta^*$ and an optimal solution $\hat{\theta}$. This result is substantially sharper than previous convergence results, which yielded sublinear convergence, or linear convergence only up to the noise level. Our analysis applies to a wide range of $M$-estimators and statistical models, including sparse linear regression using Lasso ($\ell_1$-regularized regression); group Lasso for block sparsity; log-linear models with regularization; low-rank matrix recovery using nuclear norm regularization; and matrix decomposition. Overall, our analysis reveals interesting connections between statistical precision and computational efficiency in high-dimensional estimation.
Modern data acquisition routinely produces massive amounts of network data. Though many methods and models have been proposed to analyze such data, the research of network data is largely disconnected from the classical theory of statistical learning and signal processing. In this paper, we present a new framework for modeling network data, which connects two seemingly different areas: network data analysis and compressed sensing. From a nonparametric perspective, we model an observed network using a large dictionary. In particular, we consider the network clique detection problem and show connections between our formulation and a new algebraic tool, namely Radon basis pursuit in homogeneous spaces. Such a connection allows us to identify rigorous recovery conditions for clique detection problems. Though this paper is mainly conceptual, we also develop practical approximation algorithms for solving empirical problems and demonstrate their usefulness on real-world datasets.

Accelerated Noncontrast-Enhanced Pulmonary Vein MRA With Distributed Compressed Sensing by Mehmet Akcakaya, Peng Hu, Michael L. Chuang, Thomas H. Hauser, Long H. Ngo, Warren J. Manning, Vahid Tarokh, and Reza Nezafat. The introduction reads:
Purpose: To investigate the efficacy of distributed compressed sensing (CS) to accelerate free-breathing, electrocardiogram (ECG)-triggered noncontrast pulmonary vein (PV) magnetic resonance angiography (MRA).


Weighted l_p Constraints in Noisy Compressed Sensing by Laurent Jacques, David Kenrik Hammond and Jalal Fadili.


Primal-Dual TV Reconstruction in Refractive Deflectometry by Adriana Gonzalez, Laurent Jacques, Emmanuel Foumouo and Philippe Antoine. The introduction reads:

Refractive deflectometry is a tomographic modality that measures light ray deflection when passing through transparent objects [1]. Combining multiple parallel light rays under various incident angles allows one to image the internal refractive-index distribution (or map) of complex materials (like optical fibers) while escaping from some limitations of interferometric systems (e.g., instability to object vibrations, thickness measurement range).


Here are some presentations that took place a while back:
General Constructions of Sampling Matrices for Compressed Sensing
April 15, 2011, 12-1 PM
Arya Mazumdar
Affiliation: University of Maryland

Underwater Sensor Networks: Random Access and Compressive Sensing
Professor Milica Stojanovic
Northeastern University
Date/Time: Wednesday, April 27, 2011 - 10:00 - 12:00
Credit photo: ESA/NASA, Einstein Rings

PhD studentship: The learning speech interface

Jort Gemmeke, a reader and contributor to Nuit Blanche, sent me the following:

Hi Igor,
I wondered if you could add the following PhD position to your website for me? The research will focus on creating algorithms for self-learning & adaptive speech recognition, featuring a healthy dose of sparse representations, non-negative matrix factorization and related methods.
http://phd.kuleuven.be/set/voorstellen_departement?departement=50000516#detail
Thanks!
Jort
Sure Jort, here is the announcement:


Junior researcher - The learning speech interface

Promoter: Hugo Van hamme

Description: A funded PhD position is vacant at the Centre for Processing of Speech and Images of K.U.Leuven, Belgium in the context of the ALADIN project (http://www.esat.kuleuven.be/psi/spraak/projects/?proj=ALADIN).
The aim of ALADIN is to build a learning and adapting vocal human-machine interface for controlling home appliances, games and personal assistants for users with a physical impairment. The interface should learn what the vocal characteristics of the user are, which words he/she uses and what he/she means with the spoken commands. Users can formulate commands in any way they like, using the words they like and only addressing the functionality they are interested in. Learning takes place by using the device, i.e., by mining the vocal commands and the change they provoke in the device. You will design adaptive learning strategies for acquiring and maintaining the user's vocabulary and their association to machine actions. To this end, you will refine modern machine learning techniques such as sparse coding, non-negative matrix factorization and spectral clustering. You will work in a multidisciplinary team of junior and senior researchers in machine learning, speech processing, signal processing, user interface design and interest groups for physically impaired users.
Qualifications
Candidates should ideally have a Master or equivalent degree in engineering or computer science. Candidates with a math or physics degree and excellent programming skills may apply as well. Previous experience in speech recognition is not required, but knowledge of or experience in any of the following areas is an asset:
* speech recognition and speech modelling
* programming experience in Matlab and Python
* strong mathematical and statistical background

Key words: machine learning, speech recognition, assistive technology
Latest application date: 2011-07-19
Financing: available
Type of Position: scholarship
Source of Funding: IWT
Duration of the Project : 4 years
Research group: Department of Electrical Engineering (ESAT)

Thursday, April 28, 2011

CS: How well can we estimate a sparse vector?, A view on MMV solvers.

Mark is at it again; here is his new paper: How well can we estimate a sparse vector? by Emmanuel J. Candes and Mark A. Davenport. The abstract reads:
The estimation of a sparse vector in the linear model is a fundamental problem in signal processing, statistics, and compressive sensing. This paper establishes a lower bound on the mean-squared error, which holds regardless of the sensing/design matrix being used and regardless of the estimation procedure. This lower bound very nearly matches the known upper bound one gets by taking a random projection of the sparse vector followed by an l_1 estimation procedure such as the Dantzig selector. In this sense, compressive sensing techniques cannot essentially be improved.


On the LinkedIn Compressive Sensing forum there was a question about trying to avoid the vectorization of a 2-D image to solve a CS problem. Among the answers, was that of Zhilin Zhang, here it is:

Your problem is the MMV model [1] if the support of each column in X is the same. There are many algorithms for the MMV model. Basically, these existing algorithms largely ignore the correlation among the columns of X (inter-vector correlation), and thus have poor performance when such correlation is high. Recently, I have developed several algorithms in the framework of Sparse Bayesian Learning (SBL) [2], which effectively exploit such correlation information and achieve excellent performance. I compared almost all the MMV algorithms in terms of recovery performance, and none was better than mine (including the compressive MUSIC suggested by Igor). You can download my manuscript here: http://arxiv.org/abs/1102.3949.

But admittedly, Bayesian algorithms are much slower than non-Bayesian algorithms. If you seek fast speed, you may consider non-Bayesian algorithms, such as the compressed MUSIC.

Note that algorithms for the basic MMV model basically use the summarized row norms of X as the penalty (except mine). This is due to the common sparsity assumption [1]. However, if the support of each column in X is slowly changing, you may want to consider the dynamic compressed sensing model [3]. There are some algorithms for it. However, even for this model, you can apply the MMV algorithms, as shown in my blog:


If the supports of the columns of X are largely different (the overlap between the supports of neighboring columns is very small), while X is really sparse (very few nonzero elements), the above two models may not be useful. You may look at other extended MMV models, such as ones using the trace of X, a hierarchy of norms, or something else (depending on the structured sparsity in X) as penalties.

[1] SF Cotter, BD Rao, Sparse solutions to linear inverse problems with multiple measurement vectors, IEEE Trans. on Signal Processing, 2005

[2] Z. Zhang, B. D. Rao, Sparse Signal Recovery with Temporally Correlated Source Vectors Using Sparse Bayesian Learning, submitted to IEEE Journal of Selected Topics in Signal Processing, [arXiv:1102.3949]

[3] Namrata Vaswani, "LS-CS-residual (LS-CS): Compressive Sensing on the Least Squares Residual", IEEE Trans. Signal Processing, August 2010
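To make the basic MMV model in this discussion concrete, here is a small Python sketch of the simplest approach mentioned above: a simultaneous iterative hard thresholding that keeps the rows of X with the largest l2 norms, in the spirit of the "summarized row norms" penalty (my toy illustration under simple assumptions, not Zhilin's SBL code):

import numpy as np

def row_hard_threshold(X, K):
    # Keep the K rows of X with the largest l2 norm; zero out the others.
    out = np.zeros_like(X)
    rows = np.argsort(np.linalg.norm(X, axis=1))[-K:]
    out[rows] = X[rows]
    return out

def simultaneous_iht(Y, A, K, iters=300):
    # MMV model Y = A X with a common row support across the columns of X.
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(iters):
        X = row_hard_threshold(X + step * A.T @ (Y - A @ X), K)
    return X

# Usage: L = 4 jointly 6-row-sparse signals from 40 measurements each.
rng = np.random.default_rng(2)
N, M, K, L = 100, 40, 6, 4
A = rng.standard_normal((M, N)) / np.sqrt(M)
X0 = np.zeros((N, L))
X0[rng.choice(N, K, replace=False)] = rng.standard_normal((K, L))
X_hat = simultaneous_iht(A @ X0, A, K)
print("relative error:", np.linalg.norm(X_hat - X0) / np.linalg.norm(X0))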

Wednesday, April 27, 2011

CS: The Pros and Cons of Compressive Sensing for Wideband Signal Acquisition: Noise Folding vs. Dynamic Range

Mark Davenport tweeted about his new paper on Saturday. It is entitled "The Pros and Cons of Compressive Sensing for Wideband Signal Acquisition: Noise Folding vs. Dynamic Range" and the abstract reads:

Compressive sensing (CS) exploits the sparsity present in many common signals to reduce the number of measurements needed for digital acquisition. With this reduction would come, in theory, commensurate reductions in the size, weight, power consumption, and/or monetary cost of both signal sensors and any associated communication links. This paper examines the use of CS in the design of a wideband radio receiver in a noisy environment. We formulate the problem statement for such a receiver and establish a reasonable set of requirements that a receiver should meet to be practically useful. We then evaluate the performance of a CS-based receiver in two ways: via a theoretical analysis of the expected performance, with a particular emphasis on noise and dynamic range, and via simulations that compare the CS receiver against the performance expected from a conventional implementation. On the one hand, we show that CS-based systems that aim to reduce the number of acquired measurements are somewhat sensitive to signal noise, exhibiting a 3dB SNR loss per octave of subsampling, which parallels the classic noise-folding phenomenon. On the other hand, we demonstrate that since they sample at a lower rate, CS-based systems can potentially attain a significantly larger dynamic range. Hence, we conclude that while a CS-based system has inherent limitations that do impose some restrictions on its potential applications, it also has attributes that make it highly desirable in a number of important practical settings.
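To unpack the "3dB per octave" figure with a back-of-the-envelope argument (my reading of the abstract, not a result quoted from the paper's proofs): if noise folding scales the effective noise power by the subsampling ratio N/M, then

SNR loss = 10 log10(N/M) dB,

so halving M (one octave of subsampling) costs 10 log10(2) ≈ 3 dB.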


Rich summarizes the key findings on his blog. Check it out.

Shouldn't We Call It "Sparse Sensing" or "Sparsity Sensing" instead ?

Here are four items that make me think that, maybe, the issue of compression should not be the main concern:

Sparse Spectral Factorization: Unicity and Reconstruction Algorithms by Yue Lu and Martin Vetterli

CS: Sparse Spectral Factorization for FROG ?

Sparsity based sub-wavelength imaging with partially incoherent light via quadratic compressed sensing by Yoav Shechtman, Yonina Eldar, Alexander Szameit, Mordechai Segev
Above a number of measurements comparable to the Nyquist rate (~25 measurements), the  reconstruction does not improve dramatically.  The problem of bandwidth extrapolation is not solved by adding more samples, since the physical cutoff frequency of the optical system obviously does not depend on this quantity.
CS: Multiplicative Noise Effect on the Donoho-Tanner Phase Transition (Part 2)


The wording "Sparse Sensing" has not been overused and the few first hits on the Google show application of SLAM in Robotics...

Tuesday, April 26, 2011

CS: Compressive Sensing for Array Processing Internship at MERL

Petros just sent me the following:

Igor, we have a new internship opening at MERL which we would like to fill very soon. I would appreciate it if you post it on Nuit Blanche.
Thanks!
Petros
Thanks Petros ! Here is the announcement:

Compressive Sensing for Array Processing.
MERL is seeking a highly motivated qualified intern to conduct research in the area of Compressive Processing for array processing applications. The ideal candidate would have background in Compressive Sensing, signals with finite rate of innovation, or similar sub-Nyquist signal acquisition techniques. Some experience with array processing is desirable. Internships last 3-6 months, depending on availability and qualification of the candidate, and typically result in one or more publications. For information and to apply, contact Petros Boufounos (petrosb@merl.com). Please include position ID: MM529 in the e-mail subject.

CS: Sparsity based sub-wavelength imaging with partially incoherent light via quadratic compressed sensing , Convex Approaches to Model Wavelet Sparsity Patterns

The first paper demonstrates the performance gains of a group penalty method compared to the lasso, while the other describes in detail an algorithm for finding a sparse solution to a system of quadratic equations for a superresolution imaging system. Wow.



Statistical dependencies among wavelet coefficients are commonly represented by graphical models such as hidden Markov trees(HMTs). However, in linear inverse problems such as deconvolution, tomography, and compressed sensing, the presence of a sensing or observation matrix produces a linear mixing of the simple Markovian dependency structure. This leads to reconstruction problems that are non-convex optimizations. Past work has dealt with this issue by resorting to greedy or suboptimal iterative reconstruction methods. In this paper, we propose new modeling approaches based on group-sparsity penalties that leads to convex optimizations that can be solved exactly and efficiently. We show that the methods we develop perform significantly better in deconvolution and compressed sensing applications, while being as computationally efficient as standard coefficient-wise approaches such as lasso.

We demonstrate that sub-wavelength optical images borne on partially-spatially-incoherent light can be recovered, from their far-field or from the blurred image, given the prior knowledge that the image is sparse, and only that. The reconstruction method relies on the recently demonstrated sparsity-based sub-wavelength imaging. However, for partially-spatially-incoherent light, the relation between the measurements and the image is quadratic, yielding non-convex measurement equations that do not conform to previously used techniques. Consequently, we demonstrate new algorithmic methodology, referred to as quadratic compressed sensing, which can be applied to a range of other problems involving information recovery from partial correlation measurements, including when the correlation function has local dependencies. Specifically for microscopy, this method can be readily extended to white light microscopes with the additional knowledge of the light source spectrum.

Monday, April 25, 2011

CS: Sparse Spectral Factorization for FROG ?


If you recall, when Rick Trebino set out to determine the width of the shortest time-scale signal possible, he made the case that the autocorrelation is not a good transform for deconvolving such a signal. He then went on to devise a hardware setup called FROG that allows him to perform this operation. A while ago, I asked whether FROG was connected to compressed sensing in some sense (Is FROG an instance of Nonlinear Compressed Sensing?, FROG, Nonlinear Compressive Sensing? continued, A small Q&A with Rick Trebino, the inventor of FROG). Well, it looks like we have the beginning of an answer for the autocorrelation business: if the signal is sparse, then recovery from the autocorrelation seems to be possible. What about FROG? Here is the paper for the autocorrelation case: Sparse Spectral Factorization: Unicity and Reconstruction Algorithms by Yue M. Lu and Martin Vetterli. The abstract reads:
Spectral factorization is a classical tool in signal processing and communications. It also plays a critical role in X-ray crystallography, in the context of phase retrieval. In this work, we study the problem of sparse spectral factorization, aiming to recover a one-dimensional sparse signal from its autocorrelation. We present a sufficient condition for the recovery to be unique, and propose an iterative algorithm that can obtain the original signal (up to a sign change, time-shift and time-reversal). Numerical simulations verify the effectiveness of the proposed algorithm.
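The recovery ambiguity "up to a sign change, time-shift and time-reversal" is easy to check numerically: a signal and its negated, time-reversed copy have identical autocorrelations. A tiny Python sketch (mine, not the authors' code):

import numpy as np

x = np.zeros(32)
x[[3, 7, 20]] = [1.0, -2.0, 0.5]           # a 3-sparse signal
x_flip = -x[::-1]                          # negate and time-reverse

def acf(s):
    # Full (deterministic) autocorrelation of a real signal.
    return np.correlate(s, s, mode='full')

print(np.allclose(acf(x), acf(x_flip)))    # True: the two are indistinguishable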

Sunday, April 24, 2011

Can One Hear the Shape of a Room ?


Here is another thought-provoking item along the lines of the recent entry Can Compressive Sensing Help the Mapping of Brain Networks? A while back, I made the case that in the audio business there was probably no good reason for anybody to change the recording hardware, as you can already get very high quality and even expensive microphones. I also wondered what sort of niche market could enable a different kind of microphone, one that does not just record voice but does something else. Here is the beginning of an answer: a microphone that also does imaging. Here is a first attempt at that question: Can One Hear the Shape of a Room: The 2-D Polygonal Case by Ivan Dokmanic, Yue M. Lu and Martin Vetterli. The abstract reads:
We consider the problem of estimating room geometry from the acoustic room impulse response (RIR). Existing attacks on this problem exploit the knowledge of multiple RIRs. In contrast, we are interested in reconstructing the room geometry from a single RIR — a 1–D function of time. We discuss the unicity of the mapping between the geometry of a planar polygonal room and a single RIR. In addition to this theoretical analysis, we also propose an algorithm that performs the “blindfolded” room estimation. Furthermore, the derived results are used to construct an algorithm for localization in a known room using only a single RIR. Verification of the theoretical developments with numerical simulations is given before concluding the paper.

Can Compressive Sensing Help the Mapping of Brain Networks ?

I was told by some of you that you dreaded reading Nuit Blanche because, since all y'all are a competitive bunch, you were afraid of finding the subject you were working on featured in somebody else's paper. Right then, with a simple read, your three months' time investment goes to waste. It's too bad. So instead of a bunch of CS papers to read over the week-end, here is an open-ended problem with open-ended questions:



This video shows Ed Boyden explaining a technique currently under investigation for stimulating brain cells through some sort of structured lighting. Here is the publication page of his lab. This is a fascinating subject at the crossroads of technology, physics and biology. The patient (right now they are at the animal trial stage) is injected with a biomarker that produces an electric field when illuminated by light at a specified wavelength. However, since there are many places in the brain where one would want to have an (electrical) effect, a single optical fiber cannot really do the job. Ed and his team have devised a way to deliver multiple rays of light along the fiber [1].


I wonder how finely tuned their light multiplexing system is. So here is my first open-ended question: how can you improve the technology to get more than one structured lighting pattern in? Right now the configuration does not seem to allow for modulation. Lines of thought might include Opaque Lenses.

Once you have delivered all these light beams throughout the brain, you also want to check the brain's response. Ed and his team used fMRI to see how the cells eventually react [2].


Other open ended questions include:

  • One structured lighting pattern yields one response; if one allows for some light multiplexing, one could consider compressed sensing as a technique to be used, but is the brain response linear?
  • If so, are there good reasons to expect a sparse solution to this system of linear equations involving light inputs and MRI outputs?
  • .....



Reference:
[1] Multiwaveguide implantable probe for light delivery to sets of distributed brain targets, Anthony N. Zorzos, Edward S. Boyden, and Clifton G. Fonstad
[2] Mapping Brain Networks in Awake Mice Using Combined Optical Neural Control and fMRI , Desai M., Kahn I., Knoblich U., Bernstein J., Atallah H., Yang A., Kopell, N., Buckner R.L., Graybiel A. M., Moore C. I., & Boyden E. S.

CS: Robust 1-Bit Compressive Sensing Demo

Woohoo! Jason Laska, Laurent Jacques, Petros Boufounos and Richard Baraniuk have just released a demo for the decoding of 1-bit compressed sensing measurements, as featured in Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors. For those of you interested in 1-bit compressed sensing, most entries on the subject are here. These entries include Buzz's punch. The demo is here:



Binary Iterative Hard Thresholding (BIHT) Demo
% This small matlab demo tests the Binary Iterative Hard Thresholding algorithm
% developed in:
%
% "Robust 1-bit CS via binary stable embeddings"
% L. Jacques, J. Laska, P. Boufounos, and R. Baraniuk
%
% More precisely, using paper notations, two versions of BIHT are tested
% here on sparse signal reconstruction:
%
% * the standard BIHT associated to the (LASSO like) minimization of
%
% min || [ y o A(u) ]_- ||_1 s.t. ||u||_0 \leq K (1)
%
% * the (less efficient) BIHT-L2 related to
%
% min || [ y o A(u) ]_- ||^2_2 s.t. ||u||_0 \leq K (2)
%
% where y = A(x) := sign(Phi*x) are the 1-bit CS measurements of an initial
% K-sparse signal x in R^N; Phi is an MxN Gaussian random matrix with entries
% iid drawn as N(0,1); [s]_-, equal to s if s < 0 and 0 otherwise, is applied
% component wise on vectors; "o" is the Hadamard product such that
% (u o v)_i = u_i*v_i for two vectors u and v.
%
% Considering the (sub) gradient of the minimized energy in (1) and (2),
% BIHT is solved through the iteration:
%
% x^(n+1) = H_K( x^(n) - (1/M)*Phi'*(A(x^(n)) - y) )
%
% while BIHT-L2 is solved through:
%
% x^(n+1) = H_K( x^(n) - (Y*Phi)' * [Y*Phi*x^(n)]_- )
%
% with Y = diag(y), H_K(u) the K-term thresholding operator keeping the K
% highest-amplitude entries of u and zeroing the others.
%
% Authors: J. Laska, L. Jacques, P. Boufounos, R. Baraniuk
% April, 2011
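For readers who would rather tinker in Python, here is a minimal transcription of the BIHT iteration sketched in the comments above (a hedged toy version, not the authors' released code):

import numpy as np

def hard_threshold(u, K):
    # Keep the K largest-magnitude entries of u; zero out the others.
    v = np.zeros_like(u)
    idx = np.argsort(np.abs(u))[-K:]
    v[idx] = u[idx]
    return v

def biht(y, Phi, K, iters=200):
    # x^(n+1) = H_K( x^(n) + (1/M) * Phi' * (y - sign(Phi x^(n))) )
    M, N = Phi.shape
    x = np.zeros(N)
    for _ in range(iters):
        x = hard_threshold(x + Phi.T @ (y - np.sign(Phi @ x)) / M, K)
    return x / (np.linalg.norm(x) + 1e-12)  # 1-bit CS recovers direction only

# Usage: M = 500 one-bit measurements of a K-sparse unit-norm vector in R^N.
rng = np.random.default_rng(1)
N, M, K = 256, 500, 8
x0 = np.zeros(N)
x0[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x0 /= np.linalg.norm(x0)
Phi = rng.standard_normal((M, N))
x_hat = biht(np.sign(Phi @ x0), Phi, K)
print("angular error (rad):", np.arccos(np.clip(x_hat @ x0, -1, 1)))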


Credit photo: @Ryan_Hoke, Funnel over Starkville near the Mississippi State University campus.

CS: Noise Folding in Compressed Sensing, nonzero entry in the optimizer of L1 norm penalized least-square problem, Random lasso, Improved variable selection with Forward-Lasso adaptive shrinkage

With all the noise about noise, we were bound to have more noise :-) Today we have a new kind of noise featured in Noise Folding in Compressed Sensing by Ery Arias-Castro, Yonina Eldar. The abstract reads:
The literature on compressed sensing has focused almost entirely on settings where the signal is noiseless and the measurements are contaminated by noise. In practice, however, the signal itself is often subject to random noise prior to measurement. We briefly study this setting and show that, for the vast majority of measurement schemes employed in compressed sensing, the two models are equivalent with the important difference that the signal-to-noise ratio is divided by a factor proportional to p/n, where p is the dimension of the signal and n is the number of observations. Since p/n is often large, this leads to noise folding which can have a severe impact on the SNR.
Also relevant is this study of the $\ell_1$ solution path (the paper on the number of nonzero entries in the optimizer mentioned in the title above); the abstract reads:

The $\ell$-1 norm based optimization is widely used in signal processing, especially in recent compressed sensing theory. This paper studies the solution path of the $\ell$-1 norm penalized least-square problem, whose constrained form is known as Least Absolute Shrinkage and Selection Operator (LASSO). A solution path is the set of all the optimizers with respect to the evolution of the hyperparameter (Lagrange multiplier). The study of the solution path is of great significance in viewing and understanding the profile of the tradeoff between the approximation and regularization terms. If the solution path of a given problem is known, it can help us to find the optimal hyperparameter under a given criterion such as the Akaike Information Criterion. In this paper we present a sufficient condition on $\ell$-1 norm penalized least-square problem. Under this sufficient condition, the number of nonzero entries in the optimizer or solution vector increases monotonically when the hyperparameter decreases. We also generalize the result to the often used total variation case, where the $\ell$-1 norm is taken over the first order derivative of the solution vector. We prove that the proposed condition has intrinsic connections with the condition given by Donoho, et al \cite{Donoho08} and the positive cone condition by Efron {\it el al} \cite{Efron04}. However, the proposed condition does not need to assume the sparsity level of the signal as required by Donoho et al's condition, and is easier to verify than Efron, et al's positive cone condition when being used for practical applications.
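A quick way to look at such a solution path numerically is to trace the lasso path and count nonzeros as the hyperparameter decreases; a short sketch using scikit-learn's LARS implementation (my illustration, not the paper's code; the count is not guaranteed to be monotone, which is precisely what the paper's condition addresses):

import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
n, p, k = 50, 100, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = rng.standard_normal(k)
y = X @ beta + 0.01 * rng.standard_normal(n)

# alphas decrease along the path; coefs has shape (p, n_alphas).
alphas, _, coefs = lars_path(X, y, method='lasso')
for a, c in zip(alphas, coefs.T):
    print(f"alpha={a:8.4f}  nonzeros={np.count_nonzero(c)}")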
Following up on yesterday's connection between statistics and CS, here are two Lasso papers: Random lasso by Sijian Wang, Bin Nan, Saharon Rosset, Ji Zhu. The abstract reads:
We propose a computationally intensive method, the random lasso method, for variable selection in linear models. The method consists of two major steps. In step 1, the lasso method is applied to many bootstrap samples, each using a set of randomly selected covariates. A measure of importance is yielded from this step for each covariate. In step 2, a similar procedure to the first step is implemented with the exception that for each bootstrap sample, a subset of covariates is randomly selected with unequal selection probabilities determined by the covariates' importance. Adaptive lasso may be used in the second step with weights determined by the importance measures. The final set of covariates and their coefficients are determined by averaging bootstrap results obtained from step 2. The proposed method alleviates some of the limitations of lasso, elastic-net and related methods noted especially in the context of microarray data analysis: it tends to remove highly correlated variables altogether or select them all, and maintains maximal flexibility in estimating their coefficients, particularly with different signs; the number of selected variables is no longer limited by the sample size; and the resulting prediction accuracy is competitive or superior compared to the alternatives. We illustrate the proposed method by extensive simulation studies. The proposed method is also applied to a Glioblastoma microarray data analysis.

The second one is Improved variable selection with Forward-Lasso adaptive shrinkage; the abstract reads:

Recently, considerable interest has focused on variable selection methods in regression situations where the number of predictors, $p$, is large relative to the number of observations, $n$. Two commonly applied variable selection approaches are the Lasso, which computes highly shrunk regression coefficients, and Forward Selection, which uses no shrinkage. We propose a new approach, "Forward-Lasso Adaptive SHrinkage" (FLASH), which includes the Lasso and Forward Selection as special cases, and can be used in both the linear regression and the Generalized Linear Model domains. As with the Lasso and Forward Selection, FLASH iteratively adds one variable to the model in a hierarchical fashion but, unlike these methods, at each step adjusts the level of shrinkage so as to optimize the selection of the next variable. We first present FLASH in the linear regression setting and show that it can be fitted using a variant of the computationally efficient LARS algorithm. Then, we extend FLASH to the GLM domain and demonstrate, through numerous simulations and real world data sets, as well as some theoretical analysis, that FLASH generally outperforms many competing approaches.

Image Credit: NASA/JPL/Space Science Institute
N00171130.jpg was taken on April 14, 2011 and received on Earth April 15, 2011. The camera was pointing toward DIONE at approximately 1,745,487 kilometers away, and the image was taken using the CL1 and CL2 filters

Wednesday, April 20, 2011

CS: A Rosetta table between Statistics/Machine Learning and Compressed Sensing. (Part deux)


As I mentioned to Olivier Grisel, maybe we ought to split the first column into two, one for statistics and the other one for machine learning ?

CS: L1 Minimization in Python, A Rosetta table between Statistics/Machine Learning and Compressed Sensing.

If you know any datasets that are relevant to some of the topics discussed here but are not well known, please let me know and I will feature them in the data section page of this blog. In the spirit of putting all my real time computational experiments in one place, I just created a place where they are all listed. It is here.

Mohammadreza Mahmudimanesh just sent me the following:

Hi Igor

I have recently developed a general purpose Python module for CS recovery for both complex and real bases/signals. It is just a direct implementation of the l1 minimization algorithm using the open-source package CVXOPT. It is not fast software, but it can be used by CS learners or people who want to use l1 minimization within a scripting language like Python. It is also useful for people who cannot access MATLAB.

The code can be found on my homepage under Software section:


Regards

Thanks Mohammadreza! With regard to speed, SL0 might do the trick, and it should be easy to implement in Python.
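For reference, here is what such a direct l1 minimization looks like with CVXOPT, cast as a linear program (a minimal sketch of the standard basis-pursuit reduction for real signals, not Mohammadreza's module):

import numpy as np
from cvxopt import matrix, solvers

def l1_min(Phi, y):
    # Solve min ||x||_1 s.t. Phi x = y via the LP in variables [x; t]:
    # minimize sum(t) subject to -t <= x <= t and Phi x = y.
    m, n = Phi.shape
    c = matrix(np.hstack([np.zeros(n), np.ones(n)]))
    I = np.eye(n)
    G = matrix(np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])]))
    h = matrix(np.zeros(2 * n))
    A = matrix(np.hstack([Phi, np.zeros((m, n))]))
    b = matrix(y.astype(float))
    sol = solvers.lp(c, G, h, A, b)
    return np.array(sol['x'][:n]).ravel()

# Usage: recover a 5-sparse vector from 40 Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 128, 40, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = l1_min(Phi, Phi @ x0)
print("recovery error:", np.linalg.norm(x_hat - x0))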

Since LASSO solvers are potentially becoming important with respect to their ability to deal with multiplicative noise, and since most of the literature on the subject comes from statistics, I drew this table of correspondence between the terms and variable names generally used in Machine Learning/Statistics and in Compressed Sensing, so as to enable a faster reading of these papers. Let me know if I am missing a term.

So when they talk about p much larger than n, they really mean N much larger than m :-)
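Since the table itself was posted as an image, here is a rough plain-text reconstruction of the usual correspondence (my hedged summary, not the original figure):

Statistics / Machine Learning            Compressed Sensing
-----------------------------            ------------------
design matrix X (n x p)                  measurement matrix A or Phi (m x N)
response vector y (n x 1)                measurement vector y (m x 1)
coefficient vector beta (p x 1)          signal x (N x 1)
number of observations n                 number of measurements m
number of predictors p                   ambient signal dimension N
sparsity of beta                         sparsity level k of x
Lasso (l1-penalized least squares)       Basis Pursuit Denoising (BPDN)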

Tuesday, April 19, 2011

CS: Postdoctoral Fellow - Distributed Sparse Approximation in Cyber Physical Systems

Wen Hu sent me the following and I missed including it in the 1000th entry, so it gets its own entry. Thanks Wen. The announcement is also here and will shortly be on the CS jobs page:

Positions Details - 2011/183 - OCE Postdoctoral Fellow - Distributed Sparse Approximation in Cyber Physical Systems

Job Profile

Reference Number: 2011/183
Position Title: OCE Postdoctoral Fellow - Distributed Sparse Approximation in Cyber Physical Systems
Division: CSIRO ICT Centre
Location: Pullenvale, QLD
Classification: CSOF4
Salary Range: $73K - $80K per annum plus up to 15.4% superannuation
Tenure: 3 years
Applicants: International Applicants Welcome
Applications Close: 31 May 2011
Job Category:
Post Doctoral/Fellowship
Scientific Research

The position:

The OCE Postdoctoral Fellowship is a prestigious early career fellowship funded through the Office of the Chief Executive Science Team of CSIRO and is available for 3 years.

We are seeking a talented and dedicated recent or near-PhD graduate to be part of the rapidly growing research program being undertaken by the CSIRO ICT Centre in Pervasive Computing.

Specific duties will include:

Develop and evaluate novel, innovative solutions to key research problems in the area of compressive sensing/sparse approximations.
Conduct experiments to verify and demonstrate research outcomes.
Interact and collaborate with domain scientists and end users to solve cross-disciplinary problems and guide future research directions.
Write high quality journal and conference papers on key research outcomes.
Location: Brisbane, Australia

Salary: AU$73K - $80K per annum plus up to 15.4% superannuation
Ref No: 2011/183

To be successful you'll have:

A PhD (or near completion of) in computer science, electrical engineering, applied statistics/mathematics or relevant discipline.
Expertise in one or more areas of compressive sensing, sparse approximations, statistical analysis and distributed signal processing.
Demonstrated ability to program in MATLAB (or similar) for data analysis.
About CSIRO:

Australia is founding its future on science and innovation. Its national science agency, Commonwealth Scientific and Industrial Research Organisation (CSIRO), is a powerhouse of ideas, technologies and skills for building prosperity, growth, health and sustainability. It serves governments, industries, business and communities.

CSIRO ICT Centre is building a role for Australia as a global ICT innovator by delivering leading-edge Information and Communication Technology solutions for industry and society. The Centre has over 250 researchers across Australia working on a wide range of ICT technologies and application areas.

Aboriginal and Torres Strait Islanders are encouraged to apply for all CSIRO positions.

Position Description


The role of OCE Postdoctoral Fellow - Distributed Sparse Approximation in Cyber Physical Systems will involve the following key duties:

Develop and evaluate novel, innovative solutions to key research problems in the area of compressive sensing/sparse approximations.
Conduct experiments to verify and demonstrate research outcomes.
Interact and collaborate with domain scientists and end users to solve cross-disciplinary problems and guide future research directions.
Write high quality journal and conference papers on key research outcomes.
Prepare comprehensive and accurate documentation for patents and industrial reports.
Other duties as required.
Key capabilities:

The Postdoctoral Fellow we are recruiting will be based in the Autonomous Systems Laboratory, located in Brisbane, and the general research area is pervasive computing. Specific topics include managing the energy/information tradeoff (such as for localization and sensor sampling), in-network processing of information-rich data sources, such as audio and video sensors, data fusion and backend matching of sensed data to users’ interests. We are seeking a talented and highly motivated researcher with a track record in compressive sensing/sparse approximations (or related areas) along with a passion for developing innovative research solutions to real problems in a multi-disciplinary environment. The successful candidate will have skills in one or more of the following areas:

Distributed compressive sensing and sparse approximation
Context modeling and context-aware computing
Distributed signal processing, audio and video processing, data fusion
Statistical analysis of large scale datasets
You will become a valuable member of our research team, responsible for driving forward key areas of our growing research program.

Selection Criteria


**Note that to be eligible for consideration for appointment to a Postdoctoral fellowship within CSIRO an applicant’s total relevant (Postdoctoral Fellow and Research Scientist) work experience, on completion of the Postdoctoral fellowship at CSIRO, must not exceed six full-time equivalent years since confirmation of their doctorate.

Essential:

A PhD (or near completion of) in computer science, electrical engineering, applied statistics/mathematics or relevant discipline.
Expertise in one or more areas of compressive sensing, sparse approximations, statistical analysis and distributed signal processing.
Demonstrated ability to program in MATLAB (or similar) for data analysis.
Analytical skills and ability to solve complex conceptual problems through the application of scientific and engineering principles.
Excellent verbal and written communication skills.
Demonstrated ability to publish in leading scientific journals or conferences.
Ability to foster positive working relationships and contribute effectively as part of a multi-disciplinary team.
Sound judgment and the ability to act independently.
Desirable:

Experience with languages such as Python or C.
Knowledge of computer vision and tracking DSP application domains.
CSIRO is a values based organisation. In your application and at interview you will need to demonstrate behaviours aligned to our values of:

Integrity of excellent science
Building trust and respect each day
Initiative to explore new horizons
Delivering on commitments
Safety and sustainability

How to Apply:

You are required to include two (2) documents:
(1) a document addressing the Selection Criteria; and
(2) a "Resume or CV" including the names of at least two professional referees.

To address the Selection Criteria please write a paragraph or two using each selection criterion as a heading and outline how you are able to meet that criterion. Applicants who do not address the selection criteria may not be considered. For assistance when preparing your application please read the information available at "Guidelines for Applicants"

Applications:

Before you apply please ensure that your documents are in Text, MS Word or PDF format. Please also ensure your file is not larger than 1MB in PDF format, or 2MB for all other formats. Your documents will be converted into PDF format. To view them once converted you will need to download Adobe Reader.

CSIRO prefers applications be lodged online via this careers site.

If you experience difficulties applying online call 1300 301 509 and someone will be able to assist you. Outside business hours please email: csiro-careers@csiro.au.

If you are unable to lodge your application online you can send your completed application and copies of any supporting documentation quoting reference number 2011/183 to:

CSIRO Careers Online
PO Box 225
DICKSON ACT 2602

Contact: If after reading the selection documentation you require further information please contact Dr Wen Hu via email Wen.Hu@csiro.au or phone +61 7 3327 4043.

Please do not email your application directly to Wen. Applications received via this method will not be considered. You must use the 'Apply Now' link, or an alternative method of submission as outlined above.

CSIRO Vision, Purpose & Beliefs

CSIRO's purpose is to deliver great science and innovative solutions for industry, society and the environment by igniting the creative spirit of our people. People are at the centre of everything we do. We work to create the right environment to amplify our talent. It is not enough just to have a great idea; we must have impact, solve problems and make a difference. We take a triple bottom line focus in our activities, balancing between commerce and the public good. Great science is our foundation. Getting it out there is our aim.

CSIRO ICT Centre is building a role for Australia as a global ICT innovator by delivering leading-edge Information and Communication Technology solutions for industry and society. The Centre has over 250 researchers across Australia working on a wide range of ICT technologies and application areas. The Centre forms the hub of the cutting-edge ICT research undertaken by the CSIRO.

The ICT Centre is currently playing a leading role in CSIRO’s creation of a large, cross-cutting program into wireless sensor networks and pervasive computing with approximately 40 staff which is tackling system components including sensor transduction, energy harvesting, wireless communications, privacy and security, programming, querying, maintenance, planning, web service integration and data management. For further information on the CSIRO ICT Centre please visit www.ict.csiro.au

Five continents

Muahahahah, world domination at last! I was wondering when that would happen, Nuit Blanche is being read on five continents at the same time. Thank you all, you made my day. 

CS: Replica Method for Sparse Representation of White Gaussian Noise slides, Spherical polar Fourier EAP and ODF reconstruction Via Compressed Sensing in Diffusion MRI and a studentship




In diffusion magnetic resonance imaging (dMRI), the Ensemble Average Propagator (EAP), also known as the propagator, describes completely the water molecule diffusion in the brain white matter without any prior knowledge about the tissue shape. In this paper, we describe a new and efficient method to accurately reconstruct the EAP in terms of the Spherical Polar Fourier (SPF) basis from very few diffusion weighted magnetic resonance images (DW-MRI). This approach nicely exploits the duality between SPF and a closely related basis in which one can respectively represent the EAP and the diffusion signal using the same coefficients, and efficiently combines it to the recent acquisition and reconstruction technique called Compressed Sensing (CS). Our work provides an efficient analytical solution to estimate, from few measurements, the diffusion propagator at any radius. We also provide a new analytical solution to extract an important feature characterising the tissue microstructure: the Orientation Distribution Function (ODF). We illustrate and prove the effectiveness of our method in reconstructing the propagator and the ODF on both noisy multiple q-shell synthetic and phantom data.

And here is another abstract: Predicting Catastrophes in Nonlinear Dynamical Systems by Compressive Sensing by Wen-Xu Wang, Rui Yang, Ying-Cheng Lai, Vassilios Kovanis, and Celso Grebogi. The abstract reads:
An extremely challenging problem of significant interest is to predict catastrophes in advance of their occurrences. We present a general approach to predicting catastrophes in nonlinear dynamical systems under the assumption that the system equations are completely unknown and only time series reflecting the evolution of the dynamical variables of the system are available. Our idea is to expand the vector field or map of the underlying system into a suitable function series and then to use the compressive-sensing technique to accurately estimate the various terms in the expansion. Examples using paradigmatic chaotic systems are provided to demonstrate our idea.
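The expansion idea in this abstract is easy to try on a toy system: write the unknown map in a monomial dictionary and estimate its few nonzero coefficients from a time series. A small Python sketch (my illustration, not the paper's method; with clean data and a small dictionary plain least squares already succeeds, and the compressive-sensing machinery matters when the dictionary is much larger than the number of samples):

import numpy as np

# Time series from the logistic map x' = r x (1 - x), i.e. r*x - r*x^2.
r = 3.9
x = [0.3]
for _ in range(30):
    x.append(r * x[-1] * (1 - x[-1]))
x = np.array(x)

# Monomial dictionary 1, x, x^2, ..., x^5 evaluated along the trajectory.
D = np.vstack([x[:-1] ** d for d in range(6)]).T
coef, *_ = np.linalg.lstsq(D, x[1:], rcond=None)
print(np.round(coef, 6))   # ~ [0, 3.9, -3.9, 0, 0, 0]: the sparse expansion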
Finally, here is a PhD studentship with Toshiba and the University of Bristol:

DHPA PhD Vacancy Compressive Sensing

Industrial DHPA PhD Studentship Vacancy

An Industrial PhD Studentship funded by the ESPRC through the 2011 Dorothy Hodgkin Postgraduate Award Scheme (DHPA) is available in the Centre for Communications Research (CCR) at the University of Bristol in collaboration with Toshiba Research Europe Ltd (TREL).

The project is entitled “Compressive sensing for M2M applications including healthcare and smart energy”. Compressive sensing is the exploitation of certain characteristics of a signal such that less sampled data (less than Nyquist rate) can be used to reconstruct the signal. In the context of this proposal, compressive sensing is especially useful in reducing the required amount of sensor data to be acquired, such that these can be wirelessly transmitted at a low rate. As the data cannot be reconstructed in the normal way, it will also provide a means of protecting the privacy of the user. In this project, the student will look at ways of designing compressive sensing algorithm to exploit sparsity in M2M data such as biomedical signals or smart energy sensor data. However, the design will have to take special care such that irregularity in the signals which could be vital, will not be lost in the sensing process. Additionally, the student will look at algorithms to best reconstruct the compressed data.
In a nutshell, the project considers the following problems:
• Compressive sensing of M2M sensor data to achieve low rate sampling while capturing all vital irregularities in the signals,
• Design compressive sensing algorithms which are able to protect M2M sensor data from eavesdroppers or wireless sniffers,
• Optimising the reconstruction of compressed M2M data to remove noise and recover the original signal,
• Study implications of applying compressed sensing in a practical M2M environment such as healthcare and smart energy networks.

During the PhD programme, the student may undertake a placement at TREL’s Telecommunications Research Laboratory based in Bristol; while the student would benefit from the interactions with industrial researchers as well as the experience of working in an industrial laboratory, this is not mandated and is open to discussion.

This studentship is available from October 2011 for a period of up to 4 years and is funded jointly by the EPSRC through the 2011 Dorothy Hodgkin Postgraduate Award Scheme (DHPA) and by Toshiba Research Europe Ltd. The award provides an annual stipend ca. £13,590 per annum plus an enhancement of £1,430 per annum and full payment of Overseas Tuition Fees. Due to funding restrictions this studentship is only open to students from the countries listed at the OECD website - http://www.oecd.org/dataoecd/62/48/41655745.pdf - plus Russia and Hong Kong.

We are seeking students with equivalent of a UK first class honours degree, from a reputable academic institution. We welcome candidates who have strong backgrounds in one of the following areas: electronics and electrical engineering, computer science, applied mathematics, system control, or related disciplines

The project will be supervised by Dr. Rob Piechocki of CCR, and Drs Woon Hau Chin and Zhong Fan at Toshiba Research Europe Ltd. Further details of the CCR’s work are at: www.bristol.ac.uk/ccr.

Further details of the studentship can be found at: 
http://www.rcuk.ac.uk/ResearchCareers/dhpa
Please e-mail your application to 
http://www.bristol.ac.uk/prospectus/postgraduate/2011/intro/apply.html and use this same address if you have questions or would like to have an informal discussion about the post and the research project.

The start date is as soon as possible after the 1 October 2011 and no later than 1 October 2012. This studentship is open until filled.

Monday, April 18, 2011

CS: The 1000th entry, Robust 1-Bit Compressive Sensing, Compressive Sampling of Multiple Sparse Signals Using Finite Rate of Innovation Principles, Sparse Spectral Factorization, PRISM: A Divide-and-Conquer Low-Rank and Sparse Decomposition Model for Dynamic MRI

Today's blog entry is a special edition in many respects. It draws most of its information from a rich and wide stream of sources (e-mail conversations, Twitter, webpages featuring papers, courses, jobs). It is also the 1000th entry on compressive sensing on Nuit Blanche. This week-end, the number of people reading this blog through an RSS feed crossed the 1,100 threshold, while the LinkedIn Compressive Sensing Group now gathers 830 members.

Following up on the recent entries about the "compressibility" of realizations of Gaussian noise (see also here), Ori Shental, who got me started writing about this with his paper, sent me the following:
Dear Igor, 
Following your recent post about the WGN simulation:
Please note that according to Claim #3, stating the marginal achieving distribution (and its illustration in Fig. 2), the SPARSEST representation of WGN (i.e., on the threshold curve itself) consists of *infinite* spikes (namely a uniform distribution over +/- infinity) for the non-zero entries. According to the derived marginal distribution of the non-zero entries of the sparse WGN representation there is a *gap*, or a dead-zone, in the values taken by those non-zero entries, which grows to infinity as you get closer to the threshold curve (please look at the example Gaussian tails in Figure 2).
Hence, in the limit of large systems, the sparsest representation is composed of infinite values, a picture which is difficult to reproduce in a practical simulation. However, Claim #3 gives us a hint on how to threshold the simulation results in order to grasp the entire "compressibility" (i.e., get the full "belly" beyond the trivial \alpha=\kappa line): according to eq. (23), any non-zero value *below* (in absolute value) the standard deviation of the Gaussian described in eq. (23) (multiplied by some auxiliary scalar \xi) should be discarded and set to 0. This STD can be inferred from the histogram of the results obtained by SL0, IRLS or any other approximation to L0. This thresholding should allow us to imitate the workings in the limit of large systems, based on finite-size simulation data.
Cheers,
Ori

Thanks Ori !
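
If I read Ori's recipe correctly, the post-processing amounts to something like the short function below; mind that the MAD-based estimate of the Gaussian STD and the default value of \xi are my own guesses for illustration, not eq. (23) itself:

import numpy as np

def deadzone_threshold(x_hat, xi=1.0):
    # x_hat: approximate-L0 solution returned by SL0, IRLS, etc.
    # Robustly estimate the STD of the Gaussian bulk of small entries from
    # the histogram (median absolute deviation), then set to 0 every entry
    # smaller in magnitude than xi times that STD.
    sigma = np.median(np.abs(x_hat)) / 0.6745
    x_thr = x_hat.copy()
    x_thr[np.abs(x_thr) < xi * sigma] = 0.0
    return x_thr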

I also got an e-mail from Greg Charvat. If you remember him, he is the one who got his students to write a blog about their class assignment: building a SAR radar out of aluminum (Pringles?) cans.

Hi Igor,

Thought your readers might find this interesting:

I am collaborating with Prof. Staelin at MIT to offer a 1-week MIT Professional Education course on radar systems, where each student actually builds their own radar set then takes it into the field to perform experiments, including SAR imaging of urban terrain. The course is open to any professional in the field, details here:

This course will be based on our IAP radar course taught at MIT this past January:

Cheers,
Greg

PS. I am collaborating with Prof. Mark Richards from Georgia Tech to post some of my rail SAR phase data on the web so that students can use it. Mark tells me it is tough to get raw data, or pretty much impossible without lots of restrictions. So I figured it would be good to let the compressed sensing community know when this happens, because you guys might want to play around with actual radar phase data. Hope this helps.
Thanks Greg !

Dick Gordon just wrote an entry on Breast Cancer Wars: Delaying The TKO?, while Zhilin Zhang compared two reconstruction algorithms and wrote about it on his blog in Is MMV more suitable for dynamic environment (time-varying sparsity) than KF-CS and LS-CS?

Of note, Rich Baraniuk has turned his webpage into a blog. One of his recent entries is an abstract of a recent Nature article entitled Data Deluge in Science. The positions for his lab are here.

Even though it is also on Rich's blog, Laurent Jacques was the first to send something about this new 1-bit sensing paper entitled Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors by Laurent Jacques, Jason N. Laska, Petros T. Boufounos, and Richard Baraniuk. The abstract reads:
The Compressive Sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by reducing the sampling rate required to acquire and stably recover sparse signals. Practical ADCs not only sample but also quantize each measurement to a finite number of bits; moreover, there is an inverse relationship between the achievable sampling rate and the bit depth. In this paper, we investigate an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement. In particular, we explore the extreme case of 1-bit CS measurements, which capture just their sign. Our results come in two flavors. First, we consider ideal reconstruction from noiseless 1-bit measurements and provide a lower bound on the best achievable reconstruction error. We also demonstrate that a large class of measurement mappings achieve this optimal bound. Second, we consider reconstruction robustness to measurement errors and noise and introduce the Binary ε-Stable Embedding (BεSE) property, which characterizes the robustness of the measurement process to sign changes. We show that the same class of matrices that provide optimal noiseless performance also enable such a robust mapping. On the practical side, we introduce the Binary Iterative Hard Thresholding (BIHT) algorithm for signal reconstruction from 1-bit measurements that offers state-of-the-art performance.
From an e-mail conversation I had with Laurent, Jason, Rich and Gabriel Peyré, it looks like a demo will shortly be available on the interwebs. Thanks guys! I know somebody who's looking at the Earth Hum who wants to test this thing.
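
While we wait for that demo, here is my back-of-the-envelope transcription of the BIHT iteration from the paper's description; the step size, iteration count and zero initialization are arbitrary choices on my part:

import numpy as np

def biht(y, A, K, iters=100, tau=None):
    # y: 1-bit measurements, i.e. y = np.sign(A @ x_true)
    # Gradient step on the sign inconsistencies, then hard thresholding to
    # the K largest entries. One-bit data carries no amplitude information,
    # so the estimate is returned on the unit sphere.
    m, n = A.shape
    tau = 1.0 / m if tau is None else tau
    x = np.zeros(n)
    for _ in range(iters):
        a = x + tau * (A.T @ (y - np.sign(A @ x)))
        x = np.zeros(n)
        keep = np.argsort(np.abs(a))[-K:]
        x[keep] = a[keep]
    return x / max(np.linalg.norm(x), 1e-12)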

Through Mark Davenport's Twitter stream, I learned about an extensive course entitled "An Introduction to Compressive Sensing", written by Richard Baraniuk, Mark Davenport, Marco Duarte and Chinmay Hegde, now available from cnx.org. Wow. I'll add it shortly to both the big picture and the Teaching Compressive Sensing page.

Finally, I found the following papers on the interwebs:

Compressive Sampling of Multiple Sparse Signals Using Finite Rate of Innovation Principles. The abstract reads:
In sensor networks and communication systems, one is often confronted with sampling multiple sparse signals having a common support set. Multipath channels in a multiple-input multiple-output (MIMO) wireless communication setting are an interesting example, where one generally needs to perform channel estimation for each transmit-receive antenna pair. MIMO multipath channels are usually (approximately) sparse and satisfy the common-support property whenever the distances between the antennas are small compared to the distance the electromagnetic wave can travel in the time corresponding to the inverse bandwidth of the communication system. This assumption is satisfied by small and medium bandwidth communication systems like OFDM and CDMA. This leads us to extend the finite rate of innovation sampling and reconstruction scheme to the sparse common-support scenario (SCS-FRI), in which input signals contain Diracs with common locations but arbitrary weights. The goal is to efficiently reconstruct the input signals from a set of uniform samples, making use of the common-support property to improve robustness. We first find the best theoretical performance for the SCS-FRI setup by deriving the Cramér-Rao lower bound. Our results show that for a set of well-separated Diracs, it is the total energy of the Diracs at each common position which determines the bound. We then propose a multi-channel reconstruction algorithm and compare its performance with the Cramér-Rao lower bound. Numerical results clearly demonstrate the effectiveness of our proposed sampling and reconstruction scheme in low SNR regimes.
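
For readers new to FRI, the classical single-channel ingredient that SCS-FRI builds on is the annihilating filter: from 2K+1 consecutive Fourier coefficients of a stream of K Diracs, a Toeplitz null-space problem yields a filter whose roots encode the Dirac locations. Here is a minimal numpy sketch of that step (single channel, no noise handling, so emphatically not the authors' multi-channel algorithm):

import numpy as np

def dirac_locations(X, K):
    # X: the 2K+1 Fourier coefficients X[m] = sum_k a_k exp(-2j*pi*m*t_k),
    # stored as X[0..2K] for m = -K..K. Build the Toeplitz system whose
    # null space gives the annihilating filter, then read the locations
    # t_k in [0,1) off the roots of that filter.
    T = np.array([[X[K + i - j] for j in range(K + 1)] for i in range(K)])
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()                     # filter coefficients, length K+1
    roots = np.roots(h)
    return np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))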

SPARSE SPECTRAL FACTORIZATION: UNICITY AND RECONSTRUCTION ALGORITHMS by Yue M. Lu and Martin Vetterli. The abstract reads:
Spectral factorization is a classical tool in signal processing and communications. It also plays a critical role in X-ray crystallography, in the context of phase retrieval. In this work, we study the problem of sparse spectral factorization, aiming to recover a one-dimensional sparse signal from its autocorrelation. We present a sufficient condition for the recovery to be unique, and propose an iterative algorithm that can obtain the original signal (up to a sign change, time-shift and time-reversal). Numerical simulations verify the effectiveness of the proposed algorithm.

ESTIMATING SPARSE MIMO CHANNELS HAVING COMMON SUPPORT by Yann Barbotin, Ali Hormati, Sundeep Rangan and Martin Vetterli. The abstract reads:
We propose an algorithm (SCS-FRI) to estimate multipath channels with Sparse Common Support (SCS) based on Finite Rate of Innovation (FRI) sampling. In this setup, theoretical lower-bounds are derived, and simulation in a Rayleigh fading environment shows that SCS-FRI gets very close to these bounds. We show how to apply SCS-FRI to OFDM and CDMA downlinks. Recovery of a sparse common support is, among other, especially relevant for channel estimation in a multiple output system or beam-forming from multiple input. The present algorithm is based on a multi-output extension of the Cadzow denoising/annihilating filter method [1, 2].

SAMPLING OF SPARSE CHANNELS WITH COMMON SUPPORT by Yann Barbotin, Ali Hormati, Sundeep Rangan and Martin Vetterli. The abstract reads:
The present paper proposes and studies an algorithm to estimate channels with a sparse common support (SCS). It is a generalization of the classical sampling of signals with Finite Rate of Innovation (FRI) [1] and thus called SCS-FRI. It is applicable to OFDM and Walsh-Hadamard coded (CDMA downlink) communications since SCS-FRI is shown to work not only on contiguous DFT pilots but also on uniformly scattered ones. The support estimation performance compares favorably to theoretical lower bounds, and importantly this translates into a substantial equalization gain at the receiver compared to the widely used lowpass spectrum interpolation method.

PRISM: A Divide-and-Conquer Low-Rank and Sparse Decomposition Model for Dynamic MRI by Hao Gao, Yuting Lin, Chang Beom Ahn, and Orhan Nalcioglu. The abstract reads:
Image frames from dynamic MRI can be abstracted as the superposition of the background component, which is temporally coherent, and the motion component, which is spatially sparse, up to the proper basis. Inspired by their distinct characterizations, we introduce the divide-and-conquer spatiotemporal matrix decomposition approach, namely the Prior Rank, Intensity and Sparsity Model (PRISM). Here the temporal coherence of the background component is enforced by the rank regularization and the spatial sparsity of the motion component is promoted by the sparsity regularization in framelets. The validation with both synthetic and experimental cardiac data and the comparison with state-of-the-art reconstruction methods show that PRISM is feasible for maintaining the reconstruction quality with much fewer samples. The consequent reduction of data acquisition time can relax the breath-holding constraint with possibly less respiratory motion artifact and more regular heart rate in cardiac cine imaging, and enable the imaging of patients with faster heart rate in real-time cardiac cine imaging.
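
The core low-rank plus sparse idea is easy to play with on a (pixels x frames) data matrix: the toy alternating shrinkage below computes the generic RPCA-style split that PRISM builds on, but it is not PRISM itself (no framelet sparsity, no undersampled k-space data term, and the regularization weights are arbitrary choices of mine):

import numpy as np

def svt(M, tau):
    # Singular value thresholding: the proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def low_rank_plus_sparse(D, lam=None, iters=50):
    # Alternately minimize ||L||_* + lam*||S||_1 + 0.5*||D - L - S||_F^2:
    # L captures the temporally coherent background (low rank across frames),
    # S the spatially sparse motion component.
    lam = 1.0 / np.sqrt(max(D.shape)) if lam is None else lam
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S, tau=1.0)                                  # nuclear-norm prox
        S = np.sign(D - L) * np.maximum(np.abs(D - L) - lam, 0)  # soft threshold
    return L, S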

Image Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington, Mercury, as Seen in High Resolution. As the MESSENGER spacecraft sped over Mercury's north polar region, the NAC captured this image in very high resolution. This area is located north of Hokusai.