Sunday, October 31, 2010

Of M&M's, Gravitational Lenses, the Hubble and the Last Space Shuttle Mission.

In The Accidental Astrophysicists, Dick Lipton reminded us how a result found in math put an end to a conjecture devised in astrophysics. It is a very compelling story that involves none other than a serendipitous Math department copier. The result bears on gravitational lenses, which happened to be discovered using the WFPC2 camera on board the Hubble telescope (before WFPC2 was installed, we had many blurred images of the sky). It so happens that this camera was installed by the crew of Servicing Mission 1 on STS-61 in 1993, which included Tom Akers and Katy Thornton. Both were flying with us on the KC-135 a few months before that flight, and I believe we shared M&Ms on that flight.

During breaks, we would throw M&Ms at each other for fun. In retrospect, I can only imagine if one of those M&Ms had flown right into the eyeball of one of these two; the mission would have been delayed and then... who knows. So yes, I feel connected to gravitational lenses in a unique way that involves M&Ms. More seriously, in two days, barring any additional funding, the Space Shuttle will fly for the last time. The last servicing mission for the Hubble took place a year ago. The Hubble will never be serviced again and will eventually crash into the ocean.
Credit: NASA, HST, ESA

Friday, October 29, 2010

CS: CS-ECG, ParallelCamp, Une Competition, UPS Delivers Optimal Phase Diagram in High Dimensional Variable Selection, Diffuse Optical Tomography, Exploiting Statistical Dependencies

I was at another meeting last week, the Parallel Camp, a specific instance of a Barcamp, a networking event of sorts on a specific subject. Here the meeting was about parallelism and mostly GPUs. To get some conversation going, I made this presentation entitled: Parallel Computing for Compressive Sensing: What is it Good For? The audience was pretty sparse but focused, as I was competing with a much larger session. One of the fascinating things was that I kept on talking about sensors, how CS could be affecting them, and how, I believe, parallel computing fits right into this new paradigm. Yet some of the people in the audience eventually asked what a sensor was. My take from this is that sensors are always taken for granted: i.e., nobody wants to be Superman and have X-ray vision anymore :-) because everybody is convinced that the people working for CSI can do anything. On the other hand, the power level of GPUs sure makes the issue of reducing power at the sensor level look a little quaint. Of note, Eric Mahé of MassiveRand, one of the organizers of the meeting, pitched a GPU-based solution that spits out random numbers that can never be reproduced. Other discussions included some of the folks involved in finance and the need to get good random numbers out of current technology (an issue Christian Robert just talked about recently on his blog). Most of the answers on the subject were pretty much summarized in the comments of that blog entry. Also, I did not realize this, but a GPU can currently spit out 6 Gbit of random numbers per second for a memory on the card reaching 6 GB. The PCI Express link between the CPU and the GPU seems to really be the main bottleneck as far as I could tell. The viewpoint of compressive sensing tells me that in order to use GPUs to their full extent, maybe we ought to be looking at how the 6 Gb/s of RNG can be used on the GPU or on peripherals like the CRT/LCDs that have direct access to this flood of data.
On a different note, you may notice in the presentation one of my new subjects of interest...

Emily Allstot sent me the following:

I am an undergraduate Electrical Engineering student at the University of Washington, doing research involving compressed sensing. I have also been a fan of your blog for a while.

For once, I actually have some interesting things to share!

Our department has invited Emmanuel Candes to give a colloquium on December 7, 2010, and anyone in the Seattle area who is interested can come. Videos will also be posted afterward, at the link.

Also, next week is the BioCAS 2010 conference in Paphos, Cyprus. I will be presenting my very first publication during the BioSignal Processing lecture session. The paper is titled, "Compressive Sampling of ECG Bio-Signals: Quantization Noise and Sparsity Considerations" by Emily Allstot, Andrew Y. Chen, Anna M.R. Dixon, Daibashish Gangopadhyay, and David J. Allstot (Dept. of Electrical Engineering, Univ. of Washington). This paper is part of the groundwork for integrated circuit implementations of CS that are being designed by the PhD students in our group.

This is the abstract:

"Compressed sensing (CS) is an emerging signal processing paradigm that enables the sub-Nyquist processing of sparse signals; i.e., signals with significant redundancy. Electrocardiogram (ECG) signals show significant time-domain sparsity that can be exploited using CS techniques to reduce energy consumption in an adaptive data acquisition scheme. A measurement matrix of random values is central to CS computation. Signal-to-quantization noise ratio (SQNR) results with ECG signals show that 5- and 6-bit Gaussian random coefficients are sufficient for compression factors up to 6X and from 8X-16X, respectively, whereas 6-bit uniform random coefficients are needed for 2X-16X compression ratios. "
Thanks Emily. I look forward to seeing how some of these encodings (quantized Gaussian random) perform compared to the expander graph approach of the EPFL folks.
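Out of curiosity, here is a small numpy sketch of the quantization question the paper looks at: how much do the measurements of a time-sparse signal change when a Gaussian measurement matrix is uniformly quantized to 6 bits? All the parameters (signal length, sparsity, bit depth) are my own choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8           # signal length, measurements, sparsity

# Synthetic time-sparse "ECG-like" signal: a few spikes.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Full-precision Gaussian measurement matrix...
phi = rng.standard_normal((m, n)) / np.sqrt(m)

# ...and a 6-bit uniformly quantized version of the same matrix.
bits = 6
step = (phi.max() - phi.min()) / (2**bits - 1)
phi_q = np.round((phi - phi.min()) / step) * step + phi.min()

y = phi @ x
y_q = phi_q @ x
rel_err = np.linalg.norm(y - y_q) / np.linalg.norm(y)
print(f"relative measurement error from 6-bit quantization: {rel_err:.4f}")
```

The quantization perturbation is tiny relative to the measurements themselves, which is consistent with the abstract's finding that a few bits per coefficient suffice.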

In a different direction, Matthijs den Besten is organizing a new "Compressive Sensing" competition not unlike the one MATLAB ran a while back. It's in French and called "Le grand concours de programmation MATLAB". There are prizes for the overall winner and for the intermediate milestones. Bonne Chance!

Here is a new paper investigating the Donoho-Tanner phase transition from a statistics viewpoint with a new algorithm: UPS Delivers Optimal Phase Diagram in High Dimensional Variable Selection by Pengsheng Ji, Jiashun Jin. The abstract reads:
Consider linear regression in the so-called regime of p much larger than n. We propose the UPS as a new variable selection method. This is a Screen and Clean procedure [Wasserman and Roeder (2009)], in which we screen with the Univariate thresholding, and clean with the Penalized MLE. In many situations, the UPS possesses two important properties: Sure Screening and Separable After Screening (SAS). These properties enable us to reduce the original regression problem to many small-size regression problems that can be fitted separately. We measure the performance of variable selection procedure by the Hamming distance. In many situations, we find that the UPS achieves the optimal rate of convergence, and also yields an optimal partition of the so-called phase diagram. In the two-dimensional phase space calibrated by the signal sparsity and signal strength, there is a three-phase diagram shared by many choices of design matrices. In the first phase, it is possible to recover all signals. In the second phase, exact recovery is impossible, but it is possible to recover most of the signals. In the third phase, successful variable selection is impossible. The UPS is able to recover most of the signals as long as successful variable selection is possible. The lasso and the subset selection are well-known approaches to variable selection. However, somewhat surprisingly, there are regions in the phase space where neither of them is rate optimal, even in very simple cases. The lasso is non-optimal for it is too loose on false positives, and the subset selection is non-optimal for it is too harsh on true positive clusters.
The figures use a different set of variables than the ones we are accustomed to in the Donoho-Tanner phase transition, but I found the following Rosetta Stone description enlightening (yes, statisticians speak Greek to me, not that there is anything wrong with speaking Greek!). From the text:
....The lasso and the subset selection (also known as the L1- and L0-penalization methods, respectively) are well-known approaches to variable selection. However, somewhat surprisingly, there are regions in the phase space where neither the lasso nor the subset selection is rate optimal, even in very simple cases. The lasso is non-optimal because it is too loose in filtering out fake signals (i.e. noise that is highly correlated with a signal), and the subset selection is non-optimal because it tends to kill one or more signals when signals appear in pairs, triplets, etc.
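For the curious, here is a toy numpy sketch of the Screen and Clean idea behind the UPS: screen with univariate thresholding on the marginal correlations, then clean by refitting on the screened set. I use a plain least-squares refit followed by a hard threshold as a crude stand-in for the paper's penalized MLE, and all parameter values are mine:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 200, 400, 5          # samples, variables (p >> n), true signals

X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
support = rng.choice(p, k, replace=False)
beta[support] = 6.0            # strong signals
y = X @ beta + 0.1 * rng.standard_normal(n)

# Screen: univariate thresholding on the marginal correlations X^T y.
t_screen = 3.0
screened = np.flatnonzero(np.abs(X.T @ y) > t_screen)

# Clean: least-squares refit on the (much smaller) screened set,
# then a hard threshold standing in for the penalized MLE.
coef = np.linalg.lstsq(X[:, screened], y, rcond=None)[0]
beta_hat = np.zeros(p)
beta_hat[screened] = np.where(np.abs(coef) > 3.0, coef, 0.0)

# The paper measures performance by the Hamming distance on supports.
hamming = int(np.sum((beta_hat != 0) != (beta != 0)))
print("screened:", screened.size, "Hamming distance:", hamming)
```

The screening step reduces a p = 400 regression to a handful of candidate variables that can then be fitted separately, which is exactly the Sure Screening / Separable After Screening intuition of the abstract.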
Similarly, Sergey points me to this arxiv preprint which made a passing reference to CS: Theory and Applications of Robust Optimization by Dimitris Bertsimas, David B. Brown, Constantine Caramanis. The abstract reads:
In this paper we survey the primary research, both theoretical and applied, in the area of Robust Optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multi-stage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
Reading the paper leads to this other paper I had mentioned back in April:
which makes a statement about robust linear regression, which in our world translates into multiplicative noise. More Rosetta Stone moments... In the meantime, you might also be interested in the NIPS 2010 workshop entitled Robust Statistical Learning (robustml):

Checking the Google and arXiv, I finally got the following two papers:

Diffuse optical tomography (DOT) allows tomographic (3D), non-invasive reconstructions of tissue optical properties for biomedical applications. Severe under-sampling is a common problem in DOT which leads to image artifacts. A large number of measurements is needed in order to minimize these artifacts. In this work, we introduce a compressed sensing (CS) framework for DOT which enables improved reconstructions with under-sampled data. The CS framework uses a sparsifying basis, ℓ1-regularization and random sampling to reduce the number of measurements that are needed to achieve a certain accuracy. We demonstrate the utility of the CS framework using numerical simulations. The CS results show improved DOT results in comparison to “traditional” linear reconstruction methods based on singular-value decomposition (SVD) with ℓ2-regularization and with regular and random sampling. Furthermore, CS is shown to be more robust against the reduction of measurements in comparison to the other methods. Potential benefits and shortcomings of the CS approach in the context of DOT are discussed.
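To illustrate the comparison made in the abstract, here is a small numpy sketch contrasting an ℓ2 reconstruction (minimum-norm solution via the SVD/pseudoinverse) with an ℓ1 reconstruction (plain ISTA) on an undersampled synthetic sparse vector. This is a generic CS toy of my own making, not the DOT forward model of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 200, 80, 10          # unknowns, measurements, sparsity

# Sparse vector in an (assumed) sparsifying basis.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sampling operator
y = A @ x

# l2 baseline: minimum-norm solution via the pseudoinverse (SVD).
x_l2 = np.linalg.pinv(A) @ y

# l1 reconstruction via ISTA (iterative soft thresholding).
lam = 0.01
L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
x_l1 = np.zeros(n)
for _ in range(1000):
    g = x_l1 - A.T @ (A @ x_l1 - y) / L        # gradient step
    x_l1 = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink

err_l2 = np.linalg.norm(x_l2 - x) / np.linalg.norm(x)
err_l1 = np.linalg.norm(x_l1 - x) / np.linalg.norm(x)
print(f"l2 error: {err_l2:.3f}  l1 error: {err_l1:.3f}")
```

The ℓ2 solution smears energy over the null space of the sampling operator while the ℓ1 solution recovers the sparse vector nearly exactly, which is the gap the DOT paper exploits.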

Signal modeling lies at the core of numerous signal and image processing applications. A recent approach that has drawn considerable attention is sparse representation modeling, in which the signal is assumed to be generated as a combination of a few atoms from a given dictionary. In this work we consider a Bayesian setting and go beyond the classic assumption of independence between the atoms. The main goal of this paper is to introduce a statistical model that takes such dependencies into account and show how this model can be used for sparse signal recovery. We follow the suggestion of two recent works and assume that the sparsity pattern is modeled by a Boltzmann machine, a commonly used graphical model. We show that for general dependency models, exact MAP estimation of the sparse representation becomes computationally complex. To simplify the computations, we propose a greedy approximation for the MAP estimator. We then consider a special case where exact MAP is feasible, by assuming that the dictionary is unitary and the dependency model corresponds to a certain sparse graph. Exploiting this structure, we develop an efficient message-passing algorithm that recovers the underlying signal. The effectiveness of our developed pursuit methods is demonstrated on synthetic signals, where we compare the denoising performance to that of previous recovery methods that do not exploit the statistical dependencies. Finally, when the model parameters defining the underlying graph are unknown, we suggest an algorithm that learns these parameters directly from the data, leading to an iterative scheme for adaptive sparse signal recovery.
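As a toy illustration of the modeling assumption (not the authors' pursuit algorithms), here is a numpy sketch that Gibbs-samples a sparsity pattern from a small Boltzmann machine; the negative biases favor sparsity while the positive couplings make neighboring atoms co-activate, i.e. the atoms are dependent rather than independent:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16                          # number of dictionary atoms

# Boltzmann machine over support variables s in {-1, +1}^n:
# p(s) proportional to exp(b.s + s.W.s / 2). Negative biases favor
# sparsity; a positive W[i, i+1] makes neighboring atoms co-activate.
b = -1.5 * np.ones(n)
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

# Gibbs sampling of the support pattern.
s = -np.ones(n)
for _ in range(200):
    for i in range(n):
        field = b[i] + W[i] @ s - W[i, i] * s[i]   # exclude self-term
        p_on = 1.0 / (1.0 + np.exp(-2.0 * field))  # conditional prob.
        s[i] = 1.0 if rng.random() < p_on else -1.0

support = np.flatnonzero(s > 0)
print("sampled support:", support)
```

A sparse signal would then be drawn by placing Gaussian coefficients on the sampled support; the point of the paper is to exploit exactly this kind of dependency structure at recovery time.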

Thursday, October 28, 2010

CS: A video, some stats, An old CS system? Very High Resolution SAR Tomography via Compressive Sensing, Around the blogs in 80 hours, a PhD position and more.

While using the Google, I was just reminded of the following video of Emmanuel Candes' presentation in EE380 at Stanford. I like the first slide.

Blogger, the blogging platform of this site, now provides public stats on pageviews. Since July 1st, 2010, we've had more than 100,000 pageviews. It looks staggering at first sight....

...and it is still staggering at second sight :-). If you come to the site, you'll also notice the posts that have been most popular in the past seven days.

Add to that more than 50,000 views of entries as read through feed readers. I may expand on that later, but the types of subjects and entries most viewed on the blog and through the feed are markedly different.

Mahesh Shastry, one of the authors of one of the first papers using the Donoho-Tanner transition for checking on a potential hardware set-up, just sent me the following:
I am a grad student at Penn State. This email concerns a historic fact, which I was recently made aware of, that has possible connections to compressive sensing. My thesis adviser (Ram M. Narayanan) pointed me to a patent (by Alfred A. Wolf) from 1974 which proposes using a type of random projections for speech signal processing. Please find it attached. It makes for an interesting read!
It is here: a speech compression patent by Alfred A. Wolf. The encoding scheme is shown here:

The reconstruction schematics can be found in patent 3,746,791 by the same author, Speech Synthesizer Utilizing White Noise.

Indeed it looks similar, at the very least to the 'crazy' random filters in Random Filters for Compressive Sampling and Reconstruction. However, there are several differences:
  • the scheme described in the patent does not assume sparsity of the signal (maybe the signal is sparse by default for it to work, so this is not an exclusion property).
  • the complexity of the reconstruction does not seem high (which doesn't preclude it from being part of the club), but more importantly
  • the random noise that encodes the signal does not seem to be stored for use in the decoding stage.
And that, my friend, is a killer... I think. Because of this last feature of the system, I would not call this system an instance of compressive sensing. Would you? You can answer the poll below.

Thanks Mahesh.
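To see why that last point is a killer, here is a small numpy sketch (my own toy, with OMP as the decoder): a sparse signal encoded with a random ±1 sequence is recovered only by a decoder that stored the very same sequence; a decoder drawing a fresh random sequence gets garbage:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 128, 48, 4

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Encoder: random +/-1 modulation, as if driven by a noise generator.
phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = phi @ x

def omp(A, y, k):
    """Orthogonal matching pursuit, k iterations."""
    idx, r = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))     # best new atom
        sol = np.linalg.lstsq(A[:, idx], y, rcond=None)[0]
        r = y - A[:, idx] @ sol                          # new residual
    out = np.zeros(A.shape[1])
    out[idx] = sol
    return out

# A decoder that stored the same random sequence succeeds...
x_good = omp(phi, y, k)
# ...a decoder with a fresh, unrelated random sequence does not.
phi_bad = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
x_bad = omp(phi_bad, y, k)

err_good = np.linalg.norm(x_good - x) / np.linalg.norm(x)
err_bad = np.linalg.norm(x_bad - x) / np.linalg.norm(x)
print(f"known sequence: {err_good:.3f}  unknown sequence: {err_bad:.3f}")
```

In CS the measurement matrix is part of the decoder's knowledge; a system that throws its random sequence away cannot run any of the standard reconstructions.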

While doing a search I found the following presentation by Zhu Xiaoxiang on Very High Resolution SAR Tomography via Compressive Sensing.

In the past weeks, here are some of the blog entries that got my interest:
Laurent Jacques sent me the following:

Opening of a PhD position:

"Optical Inverse Problem Solving in 3-D Deflectometry"


(starting date: January 1st, 2011, application deadline: November 19th, 2010. )


   * Dr Laurent Jacques (ICTEAM/ELEN, UCL)
   * Prof. Philippe Antoine (IMCN/NAPS, UCL)
   * Prof. Benoît Macq (ICTEAM/ELEN, UCL)

Optical Framework:

Improved functional performance is a general trend in ocular surgery today. As an example, a multifocal intraocular lens (IOL) achieves different optical powers, so as to enable good near and distant vision. There are two types of multifocal lenses. A refractive multifocal lens is made of concentric rings whose refractive powers alternate from centre to periphery. A diffractive multifocal lens uses light diffraction at an interference grid made of micrometric steps. Such complex surfaces are a real challenge both for manufacturing and for characterization.

This position opening takes place in a 3-year regional project (DETROIT), funded by the Belgian Walloon Region. This project aims at characterizing surfaces by optical deflectometry. The principle is to measure the deviation of the light reflected by each point of the surface. This technique is an interesting alternative to interferometry for estimating the surface topography. Indeed, measuring the deviation angle instead of the height has several advantages. It is insensitive to vibrations as it is not based on interference. It is more effective in detecting local details and object contours than height measurement. In deflectometry, the shape of an object is numerically reconstructed from the gradient data with high accuracy. As an example, a 10 nm flatness deviation over a 50 mm window glass can be observed with a high-accuracy instrument.

Experimentally, the very short radius of curvature of the IOLs requires the use of wide-acceptance optics so as to collect light that is reflected over a large range of angles. The drawback is the very narrow field of view. In order to reduce the acquisition time, a device that images the whole lens is preferred, but the inevitable distortion of the image will have to be numerically corrected based on knowledge of the instrument response. This solution is challenging but very attractive from an industrial perspective.

Numerical Methods:

Nowadays, assuming that a signal (e.g., a 1-D signal, an image or a volume of data) has a sparse representation, namely that this signal is linearly described with few elements taken from a suitable basis, is a ubiquitous hypothesis validated in many different scientific domains. Interestingly, this sparsity assumption is at the heart of methods solving inverse problems, namely those estimating a signal from some linear, distorted observations. Sparsity stabilizes (or regularizes) these signal estimation techniques, which are often based on L1-norm (or Total Variation norm) minimization and greedy methods.

This PhD position concerns the application of the sparsity principle for modeling and solving the optical inverse problem described in the previous section, that is, the reconstruction of the undistorted image of the IOL from the experimental measurements. This research will be carried out in collaboration with a postdoctoral researcher hired for one year on the same project.

Job description:

The student will develop mathematical methods and algorithms for reconstructing the undistorted IOL image from the experimental measurements, taking into account (and modeling) the particularities of the sensing systems. Calibration of the response of the instrument will be carried out by another partner of the project. However, a close collaboration between the two teams is necessary. Moreover, the PhD student will be co-supervised by a postdoctoral researcher hired on the same topic and funded by the same project.

The optical development takes place in the 3-year project DETROIT. It involves two industrial partners, Physiol and Lambda-X, and three university partners. Physiol is well known for its development of IOLs. Lambda-X has extensive experience in the optical characterization of optical components by means of deflectometry. The leading academic partner is the Atomic, Molecular and Optical Physics Laboratory (IMCN/NAPS) of the University of Louvain (UCL, Louvain-la-Neuve, Belgium), helped by two other Belgian university partners: the Active Structures Laboratory of the University of Brussels (ASL, ULB), in charge of the development of fast adaptive optics, and the Communications and Remote Sensing Laboratory TELE (ICTEAM/ELEN, UCL), which is responsible for the IOL image reconstruction and post-processing algorithms.

Research activity will be carried out in the TELE Laboratory, and partly at PAMO and at Lambda-X offices in Nivelles, Belgium.

Candidate's Profile:

* M.Sc. in Applied Mathematics, Physics, Electrical Engineering, or Computer Science;
* Knowledge (even partial) in the following topics constitutes assets:
 o Convex Optimization methods,
 o Signal/Image Processing,
 o Classical Optics,
 o Compressed Sensing and inverse problems.
* Experience with Matlab, C and/or C++.
* Good communication skills, both written and oral;
* Fluency in spoken English or French is required. Writing in English is mandatory.

We offer:

* A research position in a dynamic and advanced high-tech environment, working on leading-edge technologies and having many international contacts.

* Funding for 2-3 years, with the possibility to extend it by applying for a Belgian NSF grant.


Applications should include a detailed resume and a copy of grade sheets for the B.Sc. and M.Sc. Names and complete addresses of referees are welcome.

Please send applications by email to:

Questions about the subject or the position should be addressed to the same email addresses.

Finally, while most imager designers pay specific attention to getting the right light into lenses and then onto the imaging die, stray light always remains an issue. Here is an Edmund Optics entry on the subject of reducing stray light. I look at it from a different point of view: maybe painting the edge of the lens a color other than black would provide additional "scrambled" information about the scene. I wonder how that could be used for some compressive sensing work (like the weak compressed sensing concept). By the way, if you think that stray light is just a problem we have when designing star trackers, think again; it looks like this is the reason you will not see a white iPhone anytime soon.

Wednesday, October 27, 2010

When Modeling Reality Is Not An Option (The Robust Mathematical Modeling Blog)

Some of you know this but others may not. I sometimes blog about robust mathematical modeling ("when modeling reality is not an option" is the motto). The reason I mention this is that I just wrote an entry there about a workshop I attended last week on issues related to modeling and its pitfalls. One of the issues discussed (the MOX talk) used to be an issue we struggled with in the Excess-Weapons Plutonium disposition program back in the late nineties. The entry SCM Talks: Electricity Production Management and the MOX Computational Chain features the fascinating problem of planning electricity production for France and the computational difficulties stemming from performing experiments and computational calibration exercises outside the real region of operation of a nuclear reactor.

Tuesday, October 26, 2010

CS: Recovering Compressively Sampled Signals Using Partial Support Information, MDL Sparse coding and dictionary learning, Robust PCA via Outlier Pursuit and more

Today we have a series of papers from arxiv and more:

In this paper we study recovery conditions of weighted $\ell_1$ minimization for signal reconstruction from compressed sensing measurements when partial support information is available. We show that if at least $50\%$ of the (partial) support information is accurate, then weighted $\ell_1$ minimization is stable and robust under weaker conditions than the analogous conditions for standard $\ell_1$ minimization. Moreover, weighted $\ell_1$ minimization provides better bounds on the reconstruction error in terms of the measurement noise and the compressibility of the signal to be recovered. We illustrate our results with extensive numerical experiments on synthetic data and real audio and video signals.
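Here is a toy numpy sketch of the idea (mine, not the authors' code): a weighted ISTA where coordinates in the partially known support get a lighter ℓ1 penalty, compared against standard ISTA at an aggressive undersampling ratio. The parameters, weights and the accuracy of the support estimate are arbitrary choices on my part:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, k = 256, 70, 30

x = np.zeros(n)
true_support = rng.choice(n, k, replace=False)
x[true_support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# Partial support information: 25 true locations plus 5 wrong ones,
# i.e. more than 50% accurate, as the theorem requires.
wrong = rng.choice(np.setdiff1d(np.arange(n), true_support), 5,
                   replace=False)
est = np.concatenate([true_support[:25], wrong])

w = np.ones(n)
w[est] = 0.1                    # lighter penalty where we trust the info

def ista(A, y, w, lam=0.05, iters=2000):
    """ISTA with per-coordinate l1 weights w."""
    L = np.linalg.norm(A, 2) ** 2
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        g = z - A.T @ (A @ z - y) / L
        z = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)
    return z

err_std = np.linalg.norm(ista(A, y, np.ones(n)) - x) / np.linalg.norm(x)
err_wtd = np.linalg.norm(ista(A, y, w) - x) / np.linalg.norm(x)
print(f"standard l1: {err_std:.3f}  weighted l1: {err_wtd:.3f}")
```

At m = 70 measurements for k = 30 nonzeros, standard ℓ1 is well past its phase transition while the weighted variant, helped by the mostly accurate support estimate, does noticeably better.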

Statistical Compressive Sensing of Gaussian Mixture Models by Guoshen Yu, Guillermo Sapiro. The abstract reads:
A new framework of compressive sensing (CS), namely statistical compressive sensing (SCS), that aims at efficiently sampling a collection of signals that follow a statistical distribution and achieving accurate reconstruction on average, is introduced. For signals following a Gaussian distribution, with Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably smaller than the O(k log(N/k)) required by conventional CS, where N is the signal dimension, and with an optimal decoder implemented with linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown tightly upper bounded by a constant times the k-best term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is upper bounded by a constant times the k-best term approximation with probability one, and the bound constant can be efficiently calculated. For signals following Gaussian mixture models, SCS with a piecewise linear decoder is introduced and shown to produce for real images better results than conventional CS based on sparse models.
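The "optimal decoder implemented with linear filtering" is easy to illustrate: for a Gaussian prior x ~ N(0, Σ) and measurements y = Ax, the MMSE estimate is the linear filter ΣAᵀ(AΣAᵀ)⁻¹y, with no pursuit algorithm anywhere in sight. A toy numpy sketch, with a decaying spectrum of my own choosing standing in for a compressible Gaussian model:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 128, 32

# Gaussian signal model with a fast-decaying spectrum, so the signal
# is "compressible" rather than exactly sparse.
d = 1.0 / (np.arange(1, n + 1) ** 2)
cov = np.diag(d)
x = np.sqrt(d) * rng.standard_normal(n)     # one draw from N(0, cov)

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# Optimal decoder for a Gaussian prior: a single linear filter.
x_hat = cov @ A.T @ np.linalg.solve(A @ cov @ A.T, y)

err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"relative error with m = n/4 measurements: {err:.3f}")
```

One matrix solve replaces the iterative decoders of conventional CS, which is exactly the speed argument made in the abstract.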

The power of sparse signal coding with learned dictionaries has been demonstrated in a variety of applications and fields, from signal processing to statistical inference and machine learning. However, the statistical properties of these models, such as underfitting or overfitting given sets of data, are still not well characterized in the literature. This work aims at filling this gap by means of the Minimum Description Length (MDL) principle -- a well established information-theoretic approach to statistical inference. The resulting framework derives a family of efficient sparse coding and modeling (dictionary learning) algorithms, which by virtue of the MDL principle, are completely parameter free. Furthermore, such framework allows to incorporate additional prior information in the model, such as Markovian dependencies, in a natural way. We demonstrate the performance of the proposed framework with results for image denoising and classification tasks.
Robust PCA via Outlier Pursuit by: Huan Xu, Constantine Caramanis, Sujay Sanghavi. The abstract reads:
Singular Value Decomposition (and Principal Component Analysis) is one of the most widely used techniques for dimensionality reduction: successful and efficiently computable, it is nevertheless plagued by a well-known, well-documented sensitivity to outliers. Recent work has considered the setting where each point has a few arbitrarily corrupted components. Yet, in applications of SVD or PCA such as robust collaborative filtering or bioinformatics, malicious agents, defective genes, or simply corrupted or contaminated experiments may effectively yield entire points that are completely corrupted.
We present an efficient convex optimization-based algorithm we call Outlier Pursuit, that under some mild assumptions on the uncorrupted points (satisfied, e.g., by the standard generative assumption in PCA problems) recovers the exact optimal low-dimensional subspace, and identifies the corrupted points. Such identification of corrupted points that do not conform to the low-dimensional approximation, is of paramount interest in bioinformatics and financial applications, and beyond. Our techniques involve matrix decomposition using nuclear norm minimization, however, our results, setup, and approach, necessarily differ considerably from the existing line of work in matrix completion and matrix decomposition, since we develop an approach to recover the correct column space of the uncorrupted matrix, rather than the exact matrix itself.
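Since the paper's program is a convex decomposition, here is a rough ADMM sketch of min ‖L‖* + λ Σⱼ ‖Cⱼ‖₂ s.t. L + C = M, used to flag entirely corrupted columns of a low-rank matrix. This is my own toy implementation with arbitrary λ and ρ, not the authors' algorithm or analysis:

```python
import numpy as np

rng = np.random.default_rng(6)
d, n, r = 40, 40, 2             # rows, columns, rank

# Low-rank data matrix plus a few completely corrupted columns.
L0 = rng.standard_normal((d, r)) @ rng.standard_normal((r, n))
M = L0.copy()
corrupt = rng.choice(n, 4, replace=False)
M[:, corrupt] = 5.0 * rng.standard_normal((d, 4))

lam, rho = 0.4, 1.0
L = np.zeros((d, n)); C = np.zeros((d, n)); Y = np.zeros((d, n))
for _ in range(300):
    # Singular value thresholding step for the nuclear norm.
    U, s, Vt = np.linalg.svd(M - C + Y / rho, full_matrices=False)
    L = U @ np.diag(np.maximum(s - 1.0 / rho, 0.0)) @ Vt
    # Column-wise shrinkage step for the l_{1,2} penalty.
    R = M - L + Y / rho
    norms = np.linalg.norm(R, axis=0)
    scale = np.maximum(1.0 - (lam / rho) / np.maximum(norms, 1e-12), 0.0)
    C = R * scale
    # Dual update on the constraint L + C = M.
    Y += rho * (M - L - C)

flagged = np.argsort(np.linalg.norm(C, axis=0))[-4:]
print("corrupted columns:", sorted(corrupt.tolist()))
print("flagged columns:  ", sorted(flagged.tolist()))
```

The columns that cannot be explained by the low-rank part end up with the largest column norms in C, which is the identification property the abstract emphasizes.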
Found thanks to Google:
Feasibility of high temporal resolution breast DCE-MRI using compressed sensing theory. by Wang H, Miao Y, Zhou K, Yu Y, Bao S, He Q, Dai Y, Xuan SY, Tarabishy B, Ye Y, Hu J.
PURPOSE: To investigate the feasibility of high temporal resolution breast DCE-MRI using compressed sensing theory.

METHODS: Two experiments were designed to investigate the feasibility of using reference image based compressed sensing (RICS) technique in DCE-MRI of the breast. The first experiment examined the capability of RICS to faithfully reconstruct uptake curves using undersampled data sets extracted from fully sampled clinical breast DCE-MRI data. An average approach and an approach using motion estimation and motion compensation (ME/MC) were implemented to obtain reference images and to evaluate their efficacy in reducing motion related effects. The second experiment, an in vitro phantom study, tested the feasibility of RICS for improving temporal resolution without degrading the spatial resolution.

RESULTS: For the uptake-curve reconstruction experiment, there was a high correlation between uptake curves reconstructed from fully sampled data by Fourier transform and from undersampled data by RICS, indicating high similarity between them. The mean Pearson correlation coefficients for RICS with the ME/MC approach and RICS with the average approach were 0.977 +/- 0.023 and 0.953 +/- 0.031, respectively. The comparisons of final reconstruction results between RICS with the average approach and RICS with the ME/MC approach suggested that the latter was superior to the former in reducing motion related effects. For the in vitro experiment, compared to the fully sampled method, RICS improved the temporal resolution by an acceleration factor of 10 without degrading the spatial resolution.

CONCLUSIONS: The preliminary study demonstrates the feasibility of RICS for faithfully reconstructing uptake curves and improving temporal resolution of breast DCE-MRI without degrading the spatial resolution.

Behind a paywall: Compressive sensing applied to audio signals using higher-order projections and the spatial domain. by Elizabeth Hoppe and Michael Roan. The abstract reads:
Compressive sensing (CS) is a new approach to data acquisition that is receiving much attention in the current literature. The main goal in CS is to accurately reconstruct a signal from a small number of randomly projected basis functions. The CS reconstruction accuracy hinges on the assumption that the signal can be projected into a sparse domain. A majority of the CS research to date has been focused on the application of CS to image processing. There has, however, been very limited research on the application of CS techniques to audio signals focused in two main areas: the use of the frequency and wavelet domains as the sparse signal domain, and the use of the spatial domain to perform compressive beamforming. In this work, two new approaches are examined. The first is the use of the spatial domain to reconstruct signals instead of simply identifying their direction of arrival. The second is the use of higher-order projection techniques (such as principal component analysis) as a sparse domain. This work will apply these techniques to speech signals to examine the ability of CS to reconstruct wide-band audio signals. In addition, the effect of additive noise on the reconstruction accuracy will be examined.

Finally, at Tsinghua, there was a talk last week:

Wireless Sensor Network Group

Title: Compressive Sensing in Wireless Sensor Networks, Surface Coverage in Wireless Sensor Networks

Speakers: Liwen Xu, Xiaohong Hao

Date: 10:00 -- 11:30 Oct 18, 2010

Venue: FIT 1-203


Compressive Sensing in Wireless Sensor Networks

Compressive Sensing (CS) is an emerging theory based on the fact that a relatively small number of random projections of a signal can contain most of its salient information. CS is now successfully applied in the field of image and video compression. It is obvious that CS is also attractive for Wireless Sensor Networks (WSN). In this talk, several schemes in which CS is applied will be introduced, and we will talk about the future of CS in WSN.

Surface Coverage in Wireless Sensor Networks

Coverage is a fundamental problem in Wireless Sensor Networks (WSNs). Existing studies on this topic focus on 2D ideal plane coverage and 3D full space coverage. The 3D surface of a targeted Field of Interest is complex in many real-world applications; and yet, existing studies on coverage do not produce practical results. In this paper, we propose a new coverage model called surface coverage. In surface coverage, the targeted Field of Interest is a complex surface in 3D space and sensors can be deployed only on the surface. We show that existing 2D plane coverage is merely a special case of surface coverage. Simulations point out that existing sensor deployment schemes for a 2D plane cannot be directly applied to surface coverage cases. In this paper, we target two problems under the surface coverage model. One, under stochastic deployment, how many sensors are needed to reach a certain expected coverage ratio? Two, if sensor deployment can be planned, what is the optimal deployment strategy that guarantees full coverage with the least number of sensors? We show that the latter problem is NP-complete and propose three approximation algorithms. We further prove that these algorithms have a provable approximation ratio. We also conduct comprehensive simulations to evaluate the performance of the proposed algorithms.

Image Credit: NASA/JPL/Space Science Institute
W00065872.jpg was taken on October 23, 2010 and received on Earth October 25, 2010. The camera was pointing toward SATURN at approximately 2,451,860 kilometers away, and the image was taken using the CL1 and RED filters.

Sunday, October 24, 2010

CS: The Compressive Multiplexer for Multi-Channel Compressive Sensing, Limited Feedback For Cognitive Radio Networks, Phase retrieval and a job

When I read this presentation entitled Compressed sensing with coherent and redundant dictionaries by Deanna Needell, I nearly spilled my coffee when I got to the verbose slide 21....

and the more to the point slide 22.

I know... I am a nerd. Another example of an overcomplete dictionary is voice, as illustrated in a recent entry on Bob Sturm's blog (and its comments). The audio folks are not prepared to use CS hardware because they are already getting good data from their sometimes expensive microphones. I am of the view that CS-based hardware would produce other kinds of data than just quaint sound recordings.

The recently developed compressive sensing (CS) framework enables the design of sub-Nyquist analog-to-digital converters. Several architectures have been proposed for the acquisition of sparse signals in large swaths of bandwidth. In this paper we consider a more flexible multi-channel signal model consisting of several discontiguous channels where the occupancy of the combined bandwidth of the channels is sparse. We introduce a new compressive acquisition architecture, the compressive multiplexer (CMUX), to sample such signals. We demonstrate that our architecture is CS-feasible and suggest a simple implementation with numerous practical advantages.
In this paper, we consider the downlink of a cognitive radio network where a cognitive base station serves multiple cognitive users on the same frequency band as a group of primary transceivers. The cognitive base station uses an orthogonal scheduling scheme (TDMA/FDMA) to serve its users. For this purpose, the base station is interested in acquiring an estimate of the interference (from the primary network) power at each of its cognitive receivers as a measure of channel quality. This can be surely achieved if we allow for the feedback (from the cognitive receivers to the cognitive base station) bandwidth to scale linearly in the number of cognitive receivers, but in densely populated networks, the cost of such an acquisition might be too high. This leads us to the question of whether we can do better in terms of bandwidth efficiency. We observe that in many scenarios – that are common in practice – where the primary network exhibits sparse changes in transmit powers from one scheduling instant to the next, it is possible to acquire this interference state with only a logarithmic scaling in feedback bandwidth. More specifically, in cognitive networks where the channels are solely determined by the positions of nodes, we can use compressed sensing to recover the interference state. In addition to being a first application of compressed sensing in the domain of limited feedback, to the best of our knowledge, this paper makes a key mathematical contribution concerning the favourable sensing properties of path-loss matrices that are composed of nonzero mean, dependent random entries. Finally, we numerically study the robustness properties of the least absolute shrinkage and selection operator (LASSO), a popular recovery algorithm, under two error models through simulations. The first model considers a varying amount of error added to all entries of the sensing matrix. The second one, a more adversarial model, considers a large amount of error added to only a fraction of the entries of the sensing matrix that are chosen uniformly at random. Simulation results establish that the LASSO recovery algorithm is robust to imperfect channel knowledge.

Behind a paywall:

Phase retrieval of diffraction from highly strained crystals by Marcus C. Newton, Ross Harder, Xiaojing Huang, Gang Xiong, and Ian K. Robinson. The abstract reads:
An important application of phase retrieval methods is to invert coherent x-ray diffraction measurements to obtain real-space images of nanoscale crystals. The phase information is currently recovered from reciprocal-space amplitude measurements by the application of iterative projective algorithms that solve the nonlinear and nonconvex optimization problem. Various algorithms have been developed each of which apply constraints in real and reciprocal space on the reconstructed object. In general, these methods rely on experimental data that is oversampled above the Nyquist frequency. To date, support-based methods have worked well, but are less successful for highly strained structures, defined as those which contain (real-space) phase information outside the range of ±π/2. As a direct result the acquired experimental data is, in general, inadvertently subsampled below the Nyquist frequency. In recent years, a new theory of “compressive sensing” has emerged, which dictates that an appropriately subsampled (or compressed) signal can be recovered exactly through iterative reconstruction and various routes to minimizing the ℓ1 norm or total variation in that signal. This has proven effective in solving several classes of convex optimization problems. Here we report on a “density-modification” phase reconstruction algorithm that applies the principles of compressive sensing to solve the nonconvex phase retrieval problem for highly strained crystalline materials. The application of a nonlinear operator in real-space minimizes the ℓ1 norm of the amplitude by a promotion-penalization (or “propenal”) operation that confines the density bandwidth. This was found to significantly aid in the reconstruction of highly strained nanocrystals. We show how this method is able to successfully reconstruct phase information that otherwise could not be recovered.

Finally, here is a job announcement at King's College, London:

Research Associate, Division of Imaging Sciences & Biomedical Engineering (reference G6/MRE/530/10-HH, 21/11/2010)
Summary: We are seeking one post-doctoral Research Associate for 2 years to participate in a large multi-centre EPSRC-funded programme grant (Intelligent Imaging: Motion, Form and Function Across Scale, EPSRC EP/H046410/1), a collaboration between KCL, UCL, Imperial College and The Institute of Cancer Research. The programme focuses on a multidisciplinary medical imaging approach, with the aim of providing clinicians with the information they really need. It involves modelling, image processing and registration, and medical imaging physics. This post is part of a work package on improved image reconstruction, in particular MRI reconstruction including motion correction. It will be based in the Division of Imaging Sciences and Biomedical Engineering, a newly created structure in the School of Medicine at King's College London, part of King's Health Partners. It is situated on the 4th floor, Lambeth Wing, St Thomas' Hospital, one of the leading research hospitals in London. The Division is a unique group made of clinicians, physicists, mathematicians, chemists, biologists, and engineers working on translational research, and it has strong links with industrial partners such as Philips.

The aim of this post is to develop compressed sensing techniques for motion correction in MRI, but with a view to widening applications within the context of intelligent imaging as described by the grant proposal. More generally, the project might involve techniques beyond standard compressed sensing that enable additional prior information to be used. The compressed sensing might be used to measure motion and provide input for other motion correction methods developed in the lab, or perhaps even be used directly in a generalised motion and undersampling reconstruction framework. In addition, collaborations with novel acquisition and sampling developments, for example using parallel transmit techniques, are likely.
Details: Some experience in compressed sensing and MRI is required. The ideal candidate would have good knowledge of cardiac MRI, in particular aspects of acquisition and reconstruction, as well as general image reconstruction and processing techniques. Some experience of parallel MRI and parallel transmit MRI is desirable. This post is part of a large programme grant involving multiple imaging modalities, disciplines (maths, physics, medicine, biology) and institutions (UCL, KCL, Imperial College, Oxford), thus the candidate should also have the personal qualities required to work in such an environment.
Salary: The salary will be on grade 6: from £30,747 to £33,599 plus £2,323 London allowance per annum, depending on experience.
Post duration: two years
Contact: For an information pack, please see the further details. Please quote reference G6/MRE/530/10-HH when applying and in all correspondence. Closing date: 17 October 2010. Creating a world-leading Academic Health Sciences Centre. Equality of Opportunity is College Policy.
Further details: Please see the related Word document.

Saturday, October 23, 2010

The Nuit Blanche Effect: A Click-Through Rate Analysis

Every so often I wonder what the true impact of this blog is. One of the variables I care about most is whether papers featured here get read and whether links get followed. Until recently it was difficult to make a good assessment of this traffic, but with the new service offered by Google through its URL shortener, we can get statistics on the links that have been clicked. On the other hand, I still don't know if this is one person doing all the clicking :-) but here are the numbers for this entry on the important Xampling paper,

If some of you feel the numbers are small, you have to ask yourself: when was the last time 45 people read your paper, after having read the abstract, a day after it was released? Additionally, my estimate of the number of people reading the abstracts is about 1,200, which puts the click-through rate (CTR) at about 3.75%, not bad at all! For comparison, a Google AdSense CTR revolves around 0.1%, or about 37 times less. More specifically, about 6 people clicked to download this article in the same hour they were sent an e-mail about it. There are about 350 people on that e-mail list, so that's about 1.7% for direct mailing, with people taking action to download the document after having read the abstract. This is not bad at all.
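For the curious, the arithmetic above is easy to check:

```python
# Quick check of the click-through numbers quoted above.
clicks, readers = 45, 1200
ctr = clicks / readers                  # blog click-through rate
email_clicks, email_list = 6, 350
email_ctr = email_clicks / email_list   # direct-mailing rate
print(f"{ctr:.2%}, {email_ctr:.2%}")    # 3.75%, 1.71%
print(ctr / 0.001)                      # ~37.5x a typical 0.1% AdSense CTR
```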

Please note also that readers click on authors' names and web pages, a clear reason why students ought to invest 15 minutes of their time setting up a webpage (as opposed to updating their Facebooks), as it really counts as a living resume.

The subject of a paper obviously constrains the number of people willing to read it. How about a subject like the web pages of the classes on compressive sensing offered in different places? Nothing Earth-shattering, but obviously this is education-related and everybody wants to see how others are treating the subject. How many of the 1,200 people reading a blog entry, either from their RSS feed or by coming directly here, are willing to click on a link with no new information? Quite a few, it looks like: about 200, for a giant CTR of 16.7%.

If you think this blog provides a service, please support it by ordering through the Amazon - Nuit Blanche Reference Store

Friday, October 22, 2010

CS: The other long post of the week and the CMS meeting abstracts

Today we have a long and incomplete list of new papers to read over the weekend. Enjoy.

Zooming In on Microscopic Flow by Remotely Detected MRI by Vikram S. Bajaj, Jeffrey Paulsen, Elad Harel, Alexander Pines. The abstract reads:
Magnetic Resonance Imaging (MRI) can elucidate the interior structure of an optically opaque object in unparalleled detail but is ultimately limited by the need to enclose the object within a detection coil; acquiring the image with increasingly smaller pixels reduces the sensitivity because each pixel occupies a proportionately smaller fraction of the detector’s volume. Here, we overcome this limitation using remotely detected MRI: images of fluids flowing in channel assemblies are encoded into the phase and intensity of the constituent molecules’ nuclear magnetic resonance signals and then decoded by a volume-matched detector after the fluids flow out of the sample. We thus accelerate zoomed in MRI acquisition in microfluidic devices by 106, obtaining microscopic images of flow and velocity distributions. Our results illustrate the facile integration of MRI with microfluidic assays and suggest generalizations to other systems involving microscopic flow.

Robust PCA via Outlier Pursuit by Huan Xu, Constantine Caramanis, Sujay Sanghavi. The abstract reads:
Singular Value Decomposition (and Principal Component Analysis) is one of the most widely used techniques for dimensionality reduction: successful and efficiently computable, it is nevertheless plagued by a well-known, well-documented sensitivity to outliers. Recent work has considered the setting where each point has a few arbitrarily corrupted components. Yet, in applications of SVD or PCA such as robust collaborative filtering or bioinformatics, malicious agents, defective genes, or simply corrupted or contaminated experiments may effectively yield entire points that are completely corrupted.
We present an efficient convex optimization-based algorithm we call Outlier Pursuit, that under some mild assumptions on the uncorrupted points (satisfied, e.g., by the standard generative assumption in PCA problems) recovers the exact optimal low-dimensional subspace, and identifies the corrupted points. Such identification of corrupted points that do not conform to the low-dimensional approximation, is of paramount interest in bioinformatics and financial applications, and beyond. Our techniques involve matrix decomposition using nuclear norm minimization, however, our results, setup, and approach, necessarily differ considerably from the existing line of work in matrix completion and matrix decomposition, since we develop an approach to recover the correct column space of the uncorrupted matrix, rather than the exact matrix itself.

A commenter in the previous entry thought I had missed this one, but fear not, I was running an experiment in the past two days. Stay tuned for the results. Here is the paper:

Sparse and silent coding in neural circuits by András Lőrincz, Zsolt Palotai, Gábor Szirtes. The abstract reads:
Sparse coding algorithms are about finding a linear basis in which signals can be represented by a small number of active (non-zero) coefficients. Such coding has many applications in science and engineering and is believed to play an important role in neural information processing. However, due to the computational complexity of the task, only approximate solutions provide the required efficiency (in terms of time). As new results show, under particular conditions there exist efficient solutions by minimizing the magnitude of the coefficients (`$l_1$-norm') instead of minimizing the size of the active subset of features (`$l_0$-norm'). Straightforward neural implementation of these solutions is not likely, as they require \emph{a priori} knowledge of the number of active features. Furthermore, these methods utilize iterative re-evaluation of the reconstruction error, which in turn implies that final sparse forms (featuring `population sparseness') can only be reached through the formation of a series of non-sparse representations, which is in contrast with the overall sparse functioning of the neural systems (`lifetime sparseness'). In this article we present a novel algorithm which integrates our previous `$l_0$-norm' model on spike based probabilistic optimization for sparse coding with ideas coming from novel `$l_1$-norm' solutions.
The resulting algorithm allows neurally plausible implementation and does not require an exactly defined sparseness level thus it is suitable for representing natural stimuli with a varying number of features. We also demonstrate that the combined method significantly extends the domain where optimal solutions can be found by `$l_1$-norm' based algorithms.
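The ℓ1 route that the abstract contrasts with the ℓ0 one can be sketched with plain iterative soft thresholding (ISTA); everything below (dictionary sizes, λ, iteration count) is an illustrative toy, not the authors' spike-based algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# ISTA: find coefficients a minimizing 0.5*||s - D a||^2 + lam*||a||_1
# for a signal s over an overcomplete dictionary D.
def ista(D, s, lam, n_iter=500):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - s)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

n, p, k = 64, 128, 3
D = rng.normal(size=(n, p))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
a_true = np.zeros(p)
a_true[rng.choice(p, k, replace=False)] = 2.0 + rng.random(k)
s = D @ a_true

a_hat = ista(D, s, lam=0.05)
print(np.count_nonzero(np.abs(a_hat) > 1e-3))  # only a few active coefficients
```

Note how the iterates pass through dense intermediate representations before becoming sparse, which is exactly the neural-plausibility objection the abstract raises against ℓ1 schemes.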

At some point, we will figure out that some neural diseases arise because some neural structures are just not competent enough to solve problems the l_1 way but are stuck with an l_0 approach.....The day results like this are published, I'll buy lunch for the authors :-)

Following on a similar thread here is:

Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition by Koray Kavukcuoglu, Marc'Aurelio Ranzato, Yann LeCun. The abstract reads:
Adaptive sparse coding methods learn a possibly overcomplete set of basis functions, such that natural image patches can be reconstructed by linearly combining a small subset of these bases. The applicability of these methods to visual object recognition tasks has been limited because of the prohibitive cost of the optimization algorithms required to compute the sparse representation. In this work we propose a simple and efficient algorithm to learn basis functions. After training, this model also provides a fast and smooth approximator to the optimal representation, achieving even better accuracy than exact sparse coding algorithms on visual object recognition tasks.

Efficient Matrix Completion with Gaussian Models by Flavien Léger, Guoshen Yu, Guillermo Sapiro. The abstract reads:
A general framework based on Gaussian models and a MAP-EM algorithm is introduced in this paper for solving matrix/table completion problems. The numerical experiments with the standard and challenging movie ratings data show that the proposed approach, based on probably one of the simplest probabilistic models, leads to the results in the same ballpark as the state-of-the-art, at a lower computational cost.
Adaptive Compressed Image Sensing Using Dictionaries by Amir Averbuch, Shai Dekel and Shay Deutsch. The abstract reads:
In recent years, the theory of Compressed Sensing has emerged as an alternative for the Shannon sampling theorem, suggesting that compressible signals can be reconstructed from far fewer samples than required by the Shannon sampling theorem. In fact the theory advocates that non-adaptive, ‘random’ functionals are in some sense optimal for this task. However, in practice Compressed Sensing is very difficult to implement for large data sets, since the algorithms are exceptionally slow and have high memory consumption. In this work, we present a new alternative method for simultaneous image acquisition and compression called Adaptive Compressed Sampling. Our approach departs fundamentally from the (non adaptive) Compressed Sensing mathematical framework by replacing the ‘universal’ acquisition of incoherent measurements with a direct and fast method for adaptive transform coefficient acquisition. The main advantages of this direct approach are that no complex recovery algorithm is in fact needed and that it allows more control over the compressed image quality, in particular, the sharpness of edges. Our experimental results show that our adaptive algorithms perform better than existing non-adaptive methods in terms of image quality and speed.
Random Access Compressed Sensing in Underwater Sensor Networks by Fatemeh Fazel, Maryam Fazel, Milica Stojanovic. The abstract reads:
In this paper, we propose a power-efficient underwater sensor network scheme employing compressed sensing and random channel access. The proposed scheme is suitable for applications where a large number of sensor nodes are deployed uniformly over a certain area to measure a physical phenomenon. The underlying assumption is that most physical phenomena have sparse representations in the frequency domain. The network is assumed to have a Fusion Center (FC) that collects the observations of sensor nodes and reconstructs the measured field based on the obtained measurements. The proposed method is completely decentralized, i.e., sensor nodes act independently without the need for coordination with each other or with the FC. During each frame, a Bernoulli random generator at each node determines whether the node participates in sampling or stays inactive during that sampling period. If selected, it measures the physical quantity of interest, e.g. temperature. A second random generator with a uniform distribution then picks a (random) delay for the node to send its data to the FC. The proposed network scheme, referred to as Random Access Compressed Sensing (RACS), results in a simple power-efficient design, for: a) it eliminates the need for duplexing, which requires coordination from the FC; b) there is no need for acknowledgment packets and retransmissions in case packets collide; and moreover, c) it is efficient in terms of the communication resources used (only a small fraction of nodes sample and transmit in each sampling period).
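A toy simulation conveys the flavor of the scheme: Bernoulli participation, then a random transmission time, with colliding packets simply lost. I discretize the random delay into slots for simplicity, and the node count, participation probability and slot count are invented for the example, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# One RACS-style frame: each of n_nodes independently decides (Bernoulli p)
# to sample and transmit, then picks a uniform random slot.  Packets that
# share a slot collide and are lost; there are no retransmissions.
def racs_frame(n_nodes=1000, p=0.05, n_slots=200):
    active = rng.random(n_nodes) < p
    slots = rng.integers(0, n_slots, size=active.sum())
    counts = np.bincount(slots, minlength=n_slots)
    received = int((counts == 1).sum())   # collision-free packets reach the FC
    return int(active.sum()), received

sent, received = racs_frame()
print(sent, received)
```

The CS angle is that the Fusion Center only needs the random subset that survives collisions, since the field is sparse in the frequency domain, so lost packets cost accuracy rather than requiring coordination or retransmission.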

Static feature-specific imaging (SFSI), where the measurement basis remains fixed/static during the data measurement process, has been shown to be superior to conventional imaging for reconstruction tasks. Here, we describe an adaptive approach that utilizes past measurements to inform the choice of measurement basis for future measurements in an FSI system, with the goal of maximizing the reconstruction fidelity while employing the fewest measurements. An algorithm to implement this adaptive approach is developed for FSI systems, and the resulting systems are referred to as adaptive FSI (AFSI) systems. A simulation study is used to analyze the performance of the AFSI system for two choices of measurement basis: principal component (PC) and Hadamard. Here, the root mean squared error (RMSE) metric is employed to quantify the reconstruction fidelity. We observe that an AFSI system achieves as much as 30% lower RMSE compared to an SFSI system. The performance improvement of the AFSI systems is verified using an experimental setup employed using a digital micromirror device (DMD) array.
If you recall Mark Neifeld made a presentation on the same subject at the Duke-AFRL Meeting a year and a half ago entitled Adaptation for Task-Specific Compressive Imaging that featured this slide:

The attendant video of this presentation is here.

In sensor networks, energy-efficient data manipulation and transmission are very important for data gathering, due to significant power constraints on the sensors. As a potential solution, Compressed Sensing (CS) has been proposed, because it requires capturing a smaller number of samples for successful reconstruction of sparse data. Traditional CS does not explicitly take into consideration the cost of each measurement (it simply tries to minimize the number of measurements), and this ignores the need to transport measurements over the sensor network. In this paper, we study CS approaches for sensor networks that are spatially-localized, thus reducing the cost of data gathering. In particular, we study the reconstruction accuracy properties of a new distributed measurement system that constructs measurements within spatially-localized clusters. We first introduce the concept of maximum energy overlap between clusters and basis functions, and show that it can be used to estimate the minimum number of measurements required for accurate reconstruction. Based on this metric, we propose a centralized iterative algorithm for joint optimization of the energy overlap and distance between sensors in each cluster. Our simulation results show that we can achieve significant savings in transport cost with small reconstruction error.
This one is a little old but here it is: Sequential Compressed Sensing by Dmitry M. Malioutov, Sujay R. Sanghavi, and Alan S. Willsky. The abstract reads:
Compressed sensing allows perfect recovery of sparse signals (or signals sparse in some basis) using only a small number of random measurements. Existing results in compressed sensing literature have focused on characterizing the achievable performance by bounding the number of samples required for a given level of signal sparsity. However, using these bounds to minimize the number of samples requires a-priori knowledge of the sparsity of the unknown signal, or the decay structure for near-sparse signals. Furthermore, there are some popular recovery methods for which no such bounds are known. In this paper, we investigate an alternative scenario where observations are available in sequence. For any recovery method, this means that there is now a sequence of candidate reconstructions. We propose a method to estimate the reconstruction error directly from the samples themselves, for every candidate in this sequence. This estimate is universal in the sense that it is based only on the measurement ensemble, and not on the recovery method or any assumed level of sparsity of the unknown signal. With these estimates, one can now stop observations as soon as there is reasonable certainty of either exact or sufficiently accurate reconstruction. They also provide a way to obtain “run-time” guarantees for recovery methods that otherwise lack a-priori performance bounds. We investigate both continuous (e.g. Gaussian) and discrete (e.g. Bernoulli or Fourier) random measurement ensembles, both for exactly sparse and general near-sparse signals, and with both noisy and noiseless measurements.
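The core idea — estimating reconstruction error directly from measurements and stopping as soon as it is small — can be sketched as follows. This is a toy: a crude greedy recovery stands in for any method, and hold-out measurements play the role of the paper's universal estimator (the paper's estimate is more general than plain cross-validation):

```python
import numpy as np

rng = np.random.default_rng(4)

n, k = 100, 3
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 1.0 + rng.random(k)

def recover(Phi, y, k):
    # crude OMP-style greedy recovery, purely for illustration
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        c, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ c
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = c
    return x_hat

Phi_cv = rng.normal(size=(10, n)) / np.sqrt(10)   # held-out measurements
y_cv = Phi_cv @ x

m = 0
Phi, y = np.empty((0, n)), np.empty(0)
while True:
    new = rng.normal(size=(5, n)) / np.sqrt(5)    # next batch of 5 samples
    Phi = np.vstack([Phi, new]); y = np.append(y, new @ x); m += 5
    x_hat = recover(Phi, y, k)
    cv_err = np.linalg.norm(y_cv - Phi_cv @ x_hat)  # error estimate from data
    if cv_err < 1e-6 or m >= 60:
        break
print(m, cv_err)
```

The stopping rule never looks at x or at any assumed sparsity level; it only compares predictions against measurements, which is the sequential stance the abstract advocates.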

The google found the following two presentations:

Finally, Michael Friedlander, Felix Herrmann and Ozgur Yilmaz have organized the following session at the upcoming 2010 CMS meeting in Vancouver: Compressed Sensing: Theory, Algorithms and Application. Their abstracts are listed below:
  • Lorne Applebaum - Multiuser Detection in Asynchronous On--Off Random Access Channels Using Lasso : We consider on--off random access channels where users transmit either a one or a zero to a base station. Such channels represent an abstraction of control channels used for scheduling requests in third-generation cellular systems and uplinks in wireless sensor networks deployed for target detection. A key characteristic of these systems is their asynchronous nature. We will introduce a Lasso-based scheme for multiuser detection in asynchronous on--off random access channels that does not require knowledge of the delays or the instantaneous received signal-to-noise ratios of the individual users at the base station. For any fixed maximum delay in the system, the proposed scheme allows an exponential number of total users with respect to code length---achieving almost the same problem dimension scaling relationships as that required in the ideal case of fully synchronous channels. Further, the computational complexity of the proposed scheme differs from that of a similar oracle-based scheme with perfect knowledge of the user delays by at most a log factor. The results presented here are non-asymptotic, in contrast to previous work for synchronous channels that only guarantees that the probability of error at the base station goes to zero asymptotically with the number of users. Finally, we give a deterministic code construction suitable for the delay agnostic system. The code construction is based on a cyclic code in which equivalence classes are assigned to users. The code's low coherence permits recovery guarantees with Lasso.  
  • Stephen Becker - First-order methods for constrained linear inverse problems : Many algorithms have recently been proposed to solve the unconstrained forms of linear inverse problems, but few algorithms solve the constrained form. We show a general framework for solving constrained problems that applies to all problems of interest to compressed sensing. The technique is based on smoothing and solving the dual formulation. Using this method, it is possible to solve problems such as the Dantzig Selector, or composite problems such as minimizing a combination of the TV norm and weighted l1 norm. Additionally, we discuss recent results about exact regularization and about accelerated continuation.
  • Jim Burke - Sparsity and Nonconvex Nonsmooth Optimization : Sparsity (or parsimonious) optimization is a framework for examining the trade-off between optimality and the number of independent variables to optimize over. Much of this framework was first developed for application to a range of problems in statistics where the goal is to explain as much of the data as possible with the fewest number of explanatory variables. Within the past few years, the general methodology of sparsity optimization has found its way into a wide range of applications. In this talk we consider a general sparsity optimization framework for arbitrary extended real-valued objective functions that are continuous on their essential domains. General approximation properties for the associated sparsity optimization problem are given, and a fixed point iteration for the subdifferential inclusion identifying approximate stationarity is developed. Convergence results and numerical experiments are presented.  
  • Mike Davies - Rank Aware Algorithms for Joint Sparse Recovery : This talk will revisit the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem has received increasing interest as an extension of single channel sparse recovery, which lies at the heart of the emerging field of compressed sensing. However, MMV approximation also has links with work on direction of arrival estimation in the field of array signal processing. Inspired by these links, we consider a new family of MMV algorithms based on the well-known MUSIC method in array processing, highlighting the role of the rank of the unknown signal matrix in determining the difficulty of the recovery problem. We will show that the recovery conditions for our new algorithms take into account the observed rank of the signal matrix. This is in contrast to previously proposed techniques such as Simultaneous Orthogonal Matching Pursuit (SOMP) and mixed norm minimization techniques which we show to be effectively blind to the rank information. Numerical simulations demonstrate that our new rank aware techniques are significantly better than existing methods in dealing with multiple measurements.  
  • Laurent Demanet - Fitting matrices from applications to random vectors : What can be determined about the inverse A^-1 of a matrix A from one application of A to a vector of random entries? If the n-by-n inverse A^-1 belongs to a specified linear subspace of dimension p, then come to the talk to hear which assumptions on this subspace, p, and n, guarantee an accurate recovery of A^-1 with high probability. This randomized fitting method provides a compelling preconditioner for the wave-equation Hessian (normal operator) in seismic imaging. Joint work with Pierre-David Letourneau (Stanford) and Jiawei Chiu (MIT).  
  • Yonina Eldar - Xampling: Analog-to-digital at Sub-Nyquist rates : Signal processing methods have changed substantially over the last several decades. The number of operations that are shifted from analog to digital is constantly increasing. While technology advances enable mass processing of huge data streams, the acquisition capabilities do not scale sufficiently fast so that the conversion to digital has become a serious bottleneck. For some applications, the maximal frequency of the input signals, which dictates the Nyquist rate, already exceeds the possible rates achievable with existing devices. In this talk, we present a new framework for sampling wideband analog signals at rates far below that dictated by the Nyquist rate. We refer to this methodology as Xampling: A combination of compression and sampling, performed simultaneously. Xampling merges results from standard sampling theory with recent developments in the field of compressed sensing in order to directly sample a wide class of analog signals at very low rates using existing hardware devices.
    We begin by introducing the Xampling methodology. We then consider some specific examples including low rate sampling of multiband signals with applications to cognitive radio, recovery of time delays from low rate samples with applications to ultrasound, and high resolution radar.  
  • Gitta Kutyniok - Geometric Separation by Single-Pass Alternating Thresholding : Modern data is customarily of multimodal nature, and analysis tasks typically require separation into the single components such as, for instance, in neurobiological imaging a separation into spines (pointlike structures) and dendrites (curvilinear structures). Although a highly ill-posed problem, inspiring empirical results show that the morphological difference of these components sometimes allows a very precise separation. In this talk we will present a theoretical study of the separation of a distributional model situation of point- and curvilinear singularities exploiting a surprisingly simple single-pass alternating thresholding method applied to wavelets and shearlets as two complementary frames. Utilizing the fact that the coefficients are clustered geometrically in the chosen frames, we prove that at sufficiently fine scales arbitrarily precise separation is possible. Surprisingly, it turns out that the thresholding index sets even converge to the wavefront sets of the point- and curvilinear singularities in phase space and that those wavefront sets are perfectly separated by the thresholding procedure. Main ingredients of our analysis are the novel notion of cluster coherence and a microlocal analysis viewpoint.
    This is joint work with David Donoho (Stanford U.).  
  • Zhaosong Lu - Penalty Decomposition Methods for Rank and ℓ0-Norm Minimization : In the first part of this talk, we consider general rank minimization problems. We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems have closed form solutions. Using this result, we then propose penalty decomposition (PD) methods for general rank minimization problems in which each subproblem is solved by a block coordinate descent method. Under some suitable assumptions, we show that any accumulation point of the sequence generated by our method when applied to the rank constrained minimization problem is a stationary point of a nonlinear reformulation of the problem. In the second part, we consider general ℓ0-norm minimization problems. We first reformulate the ℓ0-norm constrained problem as an equivalent rank minimization problem and then apply the above PD method to solve the latter problem. Further, by utilizing the special structures, we obtain a PD method that only involves vector operations. Under some suitable assumptions, we establish that any accumulation point of the sequence generated by the PD method satisfies a first-order optimality condition that is generally stronger than one natural optimality condition. Finally, we test the performance of our PD methods on matrix completion, nearest low-rank correlation matrix, sparse logistic regression and sparse inverse covariance selection problems. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed.  
  • Hassan Mansour - Recovery of Compressively Sampled Signals Using Partial Support Information : In this talk, we discuss the recovery conditions of weighted ℓ1 minimization for signal reconstruction from compressed sensing measurements when partial support information is available. We show that if at least 50% of the (partial) support information is accurate, then weighted ℓ1 minimization is stable and robust under weaker conditions than the analogous conditions for standard ℓ1 minimization. Moreover, weighted ℓ1 minimization provides better bounds on the reconstruction error in terms of the measurement noise and the compressibility of the signal to be recovered. We illustrate our results with extensive numerical experiments on synthetic as well as audio and video signals.
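To make the role of the weights concrete, here is a hedged sketch using iterative soft-thresholding as a stand-in for a weighted ℓ1 solver (the talk does not specify an algorithm; ISTA, the weight values, and all problem sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def weighted_ista(A, y, weights, lam=0.05, iters=800):
    # Iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * sum w_i |x_i|;
    # entries with a small weight w_i are shrunk less (trusted support info).
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * weights / L, 0.0)
    return x

n, m, k = 200, 80, 10
x0 = np.zeros(n)
support = rng.choice(n, k, replace=False)
x0[support] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0

w = np.ones(n)
w[support[: k // 2]] = 0.1                 # trust half of the true support
err_weighted = np.linalg.norm(weighted_ista(A, y, w) - x0)
err_plain = np.linalg.norm(weighted_ista(A, y, np.ones(n)) - x0)
print(err_weighted, err_plain)
```

Lowering the weights on trusted entries reduces the shrinkage bias there, which is the mechanism behind the improved error bounds in the abstract.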
  • Ali Pezeshki - Sense and Sensitivity: Model Mismatch in Compressed Sensing : We study the sensitivity of compressed sensing to mismatch between the assumed and the actual models for sparsity. We start by analyzing the effect of model mismatch on the best k-term approximation error, which is central to providing exact sparse recovery guarantees. We establish achievable bounds for the ℓ1 error of the best k-term approximation and show that these bounds grow linearly with the signal dimension and the mismatch level between the assumed and actual models for sparsity. We then derive bounds, with similar growth behavior, for the basis pursuit ℓ1 recovery error, indicating that the sparse recovery may suffer large errors in the presence of model mismatch. Although we present our results in the context of basis pursuit, our analysis applies to any sparse recovery principle that relies on the accuracy of best k-term approximations for its performance guarantees. We particularly highlight the problematic nature of model mismatch in Fourier imaging, where spillage from off-grid DFT components turns a sparse representation into an incompressible one. We substantiate our mathematical analysis by numerical examples that demonstrate a considerable performance degradation for image inversion from compressed sensing measurements in the presence of model mismatch. This is joint work with Yuejie Chi, Louis Scharf, and Robert Calderbank.  
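The off-grid spillage effect mentioned in this abstract is easy to reproduce numerically; a small illustrative sketch (parameters are arbitrary):

```python
import numpy as np

n = 256
t = np.arange(n)

# On-grid tone: the frequency is an exact DFT bin -> exactly 1-sparse spectrum.
on_grid = np.exp(2j * np.pi * 10 * t / n)
# Off-grid tone: frequency midway between bins 10 and 11 -> the energy
# spills across many DFT coefficients (Dirichlet-kernel leakage).
off_grid = np.exp(2j * np.pi * 10.5 * t / n)

def k_term_error(x, k):
    # Relative energy left outside the k largest-magnitude DFT coefficients.
    F = np.fft.fft(x) / np.sqrt(n)
    mags = np.sort(np.abs(F))[::-1]
    return np.sqrt(np.sum(mags[k:] ** 2)) / np.linalg.norm(F)

print(k_term_error(on_grid, 1))   # essentially 0
print(k_term_error(off_grid, 1))  # a large fraction of the energy remains
```

A half-bin frequency offset turns an exactly 1-sparse signal into one whose best 1-term approximation misses most of the energy, which is the incompressibility the talk warns about.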
  • Holger Rauhut - Recovery of functions in high dimensions via compressive sensing : Compressive sensing predicts that sparse vectors can be recovered efficiently from highly undersampled measurements. It is known in particular that multivariate sparse trigonometric polynomials can be recovered from a small number of random samples. Classical methods for recovering functions in high spatial dimensions usually suffer from the curse of dimensionality, that is, the number of samples scales exponentially in the dimension (the number of variables of the function). We introduce a new model of functions in high dimensions that uses "sparsity with respect to dimensions". More precisely, we assume that the function is very smooth in most of the variables, and is allowed to be rather rough in only a small but unknown set of variables. This translates into a certain sparsity model on the Fourier coefficients. Using techniques from compressive sensing, we are able to recover functions in this model class efficiently from a small number of samples. In particular, this number scales only logarithmically in the spatial dimension - in contrast to the exponential scaling in classical methods.  
  • Benjamin Recht - The Convex Geometry of Inverse Problems : Building on the success of generalizing compressed sensing to matrix completion, this talk discusses progress on further extending the catalog of objects and structures that can be recovered from partial information. I will focus on a suite of data analysis algorithms designed to decompose signals into sums of atomic signals from a simple but not necessarily discrete set. These algorithms are derived in a convex optimization framework that encompasses previous methods based on ℓ1-norm minimization and nuclear norm minimization for recovering sparse vectors and low-rank matrices. I will discuss general recovery guarantees and implementation schemes for this suite of algorithms and will describe several example classes of atoms and applications.  
  • Justin Romberg - Random coding for forward modeling : Compressed sensing has shown us that sparse signal acquisition can be made efficient by injecting randomness into the measurement process. In this talk, we will show how these same ideas can be used to dramatically reduce the computation required for two types of simulation problems in acoustics. In the first, we show how all of the channels in a multiple-input multiple-output system can be acquired jointly by simultaneously exciting all of the inputs with different random waveforms, and give an immediate application to seismic forward modeling. In the second, we consider the problem of acoustic source localization in a complicated channel. We show that the amount of computation to perform "matched field processing" (matched filtering) can be reduced by precomputing the response of the channel to a small number of dense configurations of random sources.  
  • Rayan Saab - Sobolev Duals of Random Frames and Sigma-Delta Quantization for Compressed Sensing : Compressed sensing, as a signal acquisition technique, has been shown to be highly effective for dimensionality reduction. On the other hand, quantization of compressed sensing measurements has been a relatively under-addressed topic. In particular, the results of Candes, Romberg and Tao, and of Donoho guarantee that if a uniform quantizer of step size δ is used to quantize m measurements y = Φx of a k-sparse signal x ∈ R^N, where Φ satisfies the restricted isometry property, then the reconstruction error via ℓ1-minimization is O(δ). This is the simplest and most commonly assumed approach for quantization of compressed sensing measurements. On the other hand, in this talk we show that if instead of uniform quantization, an rth-order ΣΔ quantization scheme with the same output alphabet is used to quantize y, then there is an alternative recovery method via Sobolev dual frames which guarantees a reduction of the approximation error by a factor of (m/k)^{α(r−1/2)} for any 0 < α < 1, provided m ≳ k(log N)^{1/(1−α)} (with an implicit constant depending on r). The result holds with high probability on the initial draw of the measurement matrix Φ from the Gaussian distribution, and uniformly for all k-sparse signals x that satisfy a mild size condition on their supports.  
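A toy sketch of the first-order (r = 1) ΣΔ loop versus plain memoryless uniform quantization, in the oversampled setting where the noise-shaping benefit is easiest to see (all parameters are illustrative; this is not the compressed-sensing setting of the talk):

```python
import numpy as np

def sigma_delta(y, delta):
    # First-order ΣΔ: quantize with error feedback; the state u stays
    # bounded (|u| <= delta/2), so the quantization noise telescopes away
    # under any averaging (low-pass) reconstruction.
    q = np.empty_like(y)
    u = 0.0
    for i, yi in enumerate(y):
        q[i] = delta * np.round((yi + u) / delta)
        u += yi - q[i]
    return q

m, delta = 2000, 0.25
y = np.sin(2 * np.pi * np.arange(m) / m)   # oversampled, slowly varying
q_sd = sigma_delta(y, delta)
q_pcm = delta * np.round(y / delta)        # memoryless uniform quantizer

# Reconstruction by a length-50 moving average; compare worst-case error.
kernel = np.ones(50) / 50
err_sd = np.max(np.abs(np.convolve(q_sd - y, kernel, mode="valid")))
err_pcm = np.max(np.abs(np.convolve(q_pcm - y, kernel, mode="valid")))
print(err_sd, err_pcm)
```

The averaged ΣΔ error telescopes to at most δ per window (here δ/50 after averaging), while the memoryless quantizer's error does not average out; the Sobolev-dual reconstruction in the talk plays the role of this low-pass filter in the compressed-sensing setting.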
  • Thomas Strohmer - How sparsity can enrich wireless communications : We demonstrate how concepts from compressive sensing and random matrix theory can combat some challenging problems of next-generation wireless communication systems.  
  • Ewout van den Berg - Spot - A linear operator toolbox for Matlab : Spot is a Matlab toolbox for the construction, application, and manipulation of linear operators. One of the main achievements of the package is that it provides operators that are as easy to work with as explicit matrices, while being represented implicitly. This combines the benefits of explicit matrices with the scalability of implicit representations, thus clearing the way for fast prototyping with complex and potentially large linear operators. (This is joint work with Michael Friedlander.)  
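Spot itself is a Matlab package; as a rough illustration of the implicit-operator idea, here is a toy Python analogue (the class and its interface are invented for illustration and are not Spot's API):

```python
import numpy as np

class LinOp:
    # Minimal implicit linear operator: stores callables for A @ x and
    # A.T @ y instead of an explicit matrix (a toy analogue of Spot's idea).
    def __init__(self, shape, matvec, rmatvec):
        self.shape = shape
        self._matvec, self._rmatvec = matvec, rmatvec

    def __matmul__(self, x):
        return self._matvec(x)

    @property
    def T(self):
        m, n = self.shape
        return LinOp((n, m), self._rmatvec, self._matvec)

# The n-point unnormalized DFT as an implicit operator: no n-by-n matrix
# is ever formed, yet F behaves like one under @ and .T.
n = 1024
F = LinOp((n, n),
          lambda x: np.fft.fft(x),
          lambda y: np.fft.ifft(y) * n)   # adjoint of the unnormalized DFT

x = np.random.default_rng(1).standard_normal(n)
print(np.allclose(F.T @ (F @ x) / n, x))  # F^H F = n I
```

The appeal is exactly what the abstract describes: algorithms written against matrix syntax run unchanged on fast implicit transforms.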
  • Rachel Ward - New and improved Johnson-Lindenstrauss embeddings via the Restricted Isometry Property : The Johnson-Lindenstrauss (JL) Lemma states that any set of p points in high dimensional Euclidean space can be embedded into O(δ^{-2} log(p)) dimensions, without distorting the distance between any two points by more than a factor between 1 − δ and 1 + δ. We establish a "near-equivalence" between the JL Lemma and the Restricted Isometry Property (RIP), a well-known concept in the theory of sparse recovery often used for showing the success of ℓ1-minimization. In particular, we show that matrices satisfying the Restricted Isometry Property of optimal order, with randomized column signs, provide optimal Johnson-Lindenstrauss embeddings up to a logarithmic factor in N. Our results have implications for dimensionality reduction and sparse recovery: on the one hand, we arrive at the best known bounds on the necessary JL embedding dimension for a wide class of structured random matrices; on the other hand, our results expose several new families of universal encoding strategies in compressed sensing. This is joint work with Felix Krahmer.  
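A quick numerical illustration of the JL statement, using a plain Gaussian matrix as the embedding (the talk's construction instead uses an RIP matrix with randomized column signs; the Gaussian map appears here only because it is the simplest JL example, and all sizes are arbitrary):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# p points in dimension n, embedded into d ~ O(delta^{-2} log p) dimensions.
n, p, d = 2000, 50, 600
X = rng.standard_normal((n, p))

# Simplest JL map: i.i.d. Gaussian matrix scaled by 1/sqrt(d).
Phi = rng.standard_normal((d, n)) / np.sqrt(d)
Y = Phi @ X

# Worst-case distortion over all pairwise distances.
ratios = [np.linalg.norm(Y[:, i] - Y[:, j]) / np.linalg.norm(X[:, i] - X[:, j])
          for i, j in combinations(range(p), 2)]
print(min(ratios), max(ratios))   # both close to 1
```

Every pairwise distance among the 50 points survives the 2000 → 600 projection up to a small multiplicative distortion, as the lemma predicts.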
  • Rebecca Willet - Compressed Sensing with Poisson Noise : Compressed sensing has profound implications for the design of new imaging and network systems, particularly when physical and economic limitations require that these systems be as small and inexpensive as possible. However, several aspects of compressed sensing theory are inapplicable to real-world systems in which noise is signal-dependent and unbounded. In this work we discuss some of the key theoretical challenges associated with the application of compressed sensing to practical hardware systems and develop performance bounds for compressed sensing in the presence of Poisson noise. We develop two novel sensing paradigms, based on either pseudo-random dense sensing matrices or expander graphs, which satisfy physical feasibility constraints. In these settings, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but for a fixed signal intensity, the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition.
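The "error grows with the number of measurements" phenomenon in this abstract has a simple physical caricature: with a fixed photon budget spread over more measurements, the per-measurement Poisson noise grows relative to the signal. A toy sketch (the numbers are illustrative, and this ignores the reconstruction step entirely):

```python
import numpy as np

rng = np.random.default_rng(2)
photon_budget = 1e4   # fixed total signal intensity

errs = []
for m in [10, 100, 1000]:
    # A flux-preserving (physically feasible) sensing scheme splits the
    # fixed budget over m measurements, so each mean count shrinks as
    # budget / m while Poisson noise scales like its square root.
    mean = np.full(m, photon_budget / m)
    obs = rng.poisson(mean).astype(float)
    errs.append(np.linalg.norm(obs - mean) / np.linalg.norm(mean))
print(errs)   # relative noise grows roughly like sqrt(m / budget)
```

The relative measurement noise increases with m at fixed intensity, matching the intuition behind the growing error bound proved in the talk.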