
Thursday, May 31, 2012

FREAK implementation on Github (1-bit Quantized Difference of Gaussians Descriptor)



A while back, Pierre let us know about FREAK, Fast Retina Keypoint, a competitor to SIFT. I mentioned that "As soon as the code is there, I'll make a longer mention of it". Today is the day: the paper is FREAK: Fast Retina Keypoint by Alexandre Alahi, Raphael Ortiz, Pierre Vandergheynst. The abstract reads:

A large number of vision applications rely on matching keypoints across images. The last decade featured an arms-race towards faster and more robust keypoints and association algorithms: Scale Invariant Feature Transform (SIFT), Speed-up Robust Feature (SURF), and more recently Binary Robust Invariant Scalable Keypoints (BRISK) to name a few. These days, the deployment of vision algorithms on smart phones and embedded devices with low memory and computation complexity has even upped the ante: the goal is to make descriptors faster to compute, more compact while remaining robust to scale, rotation and noise. To best address the current requirements, we propose a novel keypoint descriptor inspired by the human visual system and more precisely the retina, coined Fast Retina Keypoint (FREAK). A cascade of binary strings is computed by efficiently comparing image intensities over a retinal sampling pattern. Our experiments show that FREAKs are in general faster to compute with lower memory load and also more robust than SIFT, SURF or BRISK. They are thus competitive alternatives to existing keypoints in particular for embedded applications.
From the conclusions:


We have presented a retina-inspired keypoint descriptor to enhance the performance of current image descriptors. It outperforms recent state-of-the-art keypoint descriptors while remaining simple (faster with lower memory load). We do not claim any biological significance but find it remarkable that the used learning stage to identify the most relevant Difference of Gaussians could match one possible understanding of the resource optimization of the human visual system. In fact, as a future work, we want to investigate more on the selection of such relevant pairs for high level applications such as object recognition.



The attendant algorithm is on GitHub.


Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

I had to laugh


The partying, the Ferraris, the sex-symbol status, the need to change the world, the unbearable lightness of being are not there, but everything else is. Is this not good enough ?




Apparently not.





Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Faster Block Sparse Bayesian Learning Implementations



Zhilin sent me the following:

Dear Igor, 
Thank you for introducing my BSBL work in your blog. I have updated the BSBL codes 

The package includes BSBL-EM, BSBL-BO and EBSBL-BO. The codes are much faster than the old versions. Further, the package contains a folder with demo files to perform the whole experiment in my telemonitoring post, i.e. using BSBL-BO to recover the fetal ECG recordings and then using FastICA to do the decomposition. This may be convenient to those people who want to try the experiment but are not familiar with ICA. 
Best, Zhilin

The posts Zhilin refers to are:





Thanks Zhilin


Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, May 30, 2012

Tuesday, May 29, 2012

Bound-Optimization based Block Sparse Bayesian Learning - implementation -



Zhilin just wrote the provocative Is compressed sensing really useful for wireless telemonitoring of non-sparse physiological signals (Part 1)? where he makes the case that in most physiological signals the small components are sometimes as important as the large ones, and that in effect, under most conditions, these signals are not sparse or barely compressible. This is exactly what Yonina Eldar was saying at the MIA 2012 meeting when she was talking about compressed beamforming [1]; additional work on sub-Nyquist sampling can be found in [2] and [3]. Zhilin however takes a different route it seems, i.e. one more in line with structured sparsity. Going back to Zhilin's argument:


.....Clearly, we can see that if a compressed sensing algorithm is to be widely used in wireless telemonitoring, it must have the ability to recover non-sparse physiological signals, which completely contradicts the sparsity assumption in all the compressed sensing algorithms.
One may ask, there is another way for compressed sensing of non-sparse signals: suppose a non-sparse signal can be represented in a transform domain (e.g. wavelet domains, DCT domains, etc), such that the representation coefficients are sparse, i.e., x = Bz,
where x is the signal, B is the basis of the transform domain, and z is the representation coefficients. Then, the signal x is compressed by, y = Ax,
where y is the compressed signal, and A is the sensing matrix. In the remote terminal, the compressed data y and the matrix product AB are used to recover the sparse coefficients z according to the relation: y = (AB) z. And then the original signal x is recovered by the relation x = Bz.
However, the success of this method relies on the sparsity of the representation coefficients z. Unfortunately, for most physiological signals, z is not sparse enough, and completely recovering z is still a challenge for most compressed sensing algorithms (especially when the sensing matrix A is a binary sparse matrix, a requirement of telemonitoring with low energy consumption)....
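To make the quoted pipeline concrete, here is a minimal NumPy sketch of transform-domain recovery: a sparse binary A (the kind of sensing matrix favored for low-energy telemonitoring), an orthonormal DCT basis for B, and plain ISTA standing in for the recovery algorithm (this is not Zhilin's BSBL; all sizes and parameters are made up for illustration):

```python
import numpy as np
from scipy.fftpack import idct

rng = np.random.default_rng(0)
n, m = 256, 100

# B: orthonormal inverse-DCT basis, so that x = B z
B = idct(np.eye(n), axis=0, norm='ortho')

# A: sparse binary sensing matrix (cheap to apply on a sensor node)
A = (rng.random((m, n)) < 0.05).astype(float)

# a toy signal that IS sparse in the DCT domain
z_true = np.zeros(n)
z_true[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
x_true = B @ z_true
y = A @ x_true                       # compressed measurements

# recover z from y = (A B) z with ISTA, then rebuild x = B z
Phi = A @ B
L = np.linalg.norm(Phi, 2) ** 2      # Lipschitz constant of the gradient
lam, z = 1e-3, np.zeros(n)
for _ in range(1000):
    z -= Phi.T @ (Phi @ z - y) / L                         # gradient step
    z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
x_hat = B @ z
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Zhilin's point is precisely that for real physiological signals the coefficient vector z is nowhere near this sparse, which is where the block-sparse Bayesian machinery comes in.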
The paper introducing the Bound-Optimization based Block Sparse Bayesian Learning is: Low Energy Wireless Body-Area Networks for Fetal ECG Telemonitoring via the Framework of Block Sparse Bayesian Learning by Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Bhaskar D. Rao. The abstract reads:
Fetal ECG (FECG) telemonitoring is an important branch in telemedicine. The design of a telemonitoring system via a low-power wireless body-area network for ambulatory use is highly desirable. As an emerging technique, compressed sensing (CS) shows great promise in compressing data with low power consumption. However, due to some specific characteristics of FECG recordings such as non-sparsity and strong noise contamination, current CS algorithms generally fail in this application. In this work we utilize the block sparse Bayesian learning (bSBL) framework, a recently developed framework solving the CS problems. To illustrate the ability of the bSBL methods, we apply it to two representative FECG datasets. In one dataset the fetal heartbeat signals are visible, while in the other dataset are barely visible. The experiment results show that the bSBL framework is capable of compressing FECG raw recordings and successfully reconstructing them. These successes rely on two unique features of the bSBL framework; one is the ability to reconstruct less-sparse but structured signals, and the other is the ability to learn and exploit correlation structure of signals to improve performance. These two abilities of the framework greatly enhance the potential use of bSBL in telemonitoring of other physiological signals.


The BSBL-BO algorithm to break through the bottleneck in wireless telemonitoring using compressed sensing is here. Stay tuned for part 2 of this argument.




[1] Compressed Beamforming in Ultrasound Imaging by Noam Wagner, Yonina C. Eldar, Arie Feuer, Zvi Friedman.

Monday, May 28, 2012

Around the blogs in 80 hours: Compressive Sensing and Advanced Matrix Factorization Edition




Since the last "Around the blogs in 80 hours" entry, I had to answer a pretty common beginner's question in The answer is still No. We then had a large Compressive Sensing This Week entry and eventually a stream of papers and implementations on analysis operators that started with an implementation featured in Noise Aware Analysis Operator Learning for Approximately Cosparse Signals. Then the subject slid into inpainting with 90% missing pixels and you reconstructed that ?! Analysis Operator Learning and Its Application to Image Reconstruction and Inpainting Algorithm on GitHub (TV-L2 denoising and inpainting). I ended up having to comment on Learning Analysis Operators. In the meantime, there is an implementation available for the Incremented Rank PowerFactorization algorithm. Other entries/tweets that caught my attention included the following:

And then some comments:

One of Vladimir Ignatovich's lines of work in ultra-cold neutrons has been trying to figure out why some neutrons at very low energy get absorbed. He has done some analysis on the cross section needed to account for this loss but, to my knowledge, has never been able to figure it out experimentally. It remains an open problem, it seems. The arxiv blog points to a paper that computes the probability that these neutrons go into other worlds. Sigh.... How is this Science ? OK, let's imagine they go to other worlds: how do you prove that experimentally ? Is this reproducible ? After they have disappeared, do you wait for the neutrons to come back ? But if they leak out, there should be some leaking in ? I am shaking my head as I am writing this.

What happened to shoddy journalism ? You don't have to look too far this week.

Finally, the astronauts on the ISS caught a Dragon.



Photo credit: ESA/NASA. Opening the SpaceX Dragon's hatch.

Saturday, May 26, 2012

A Comment on Learning Analysis Operators




Hi everyone, "The first approach may yield better results in the inpainting stage but requires a learning phase." This sentence suggests that an operator has to be learned for every individual inpainting task. However, the operator is learned offline and universal, a replacement for the TV, so to speak. It would be interesting to see how a combination of both approaches would perform.

I wholeheartedly agree with Martin: if I gave the impression it was an insurmountable issue, then clearly it is not (if you have the time and inclination to perform this calibration). I also agree with him that, given the learned analysis operator, one can use verbatim the methods used by Emmanuel to get similar results, and it would be great to eventually compare these two approaches. But, to me, one of the most important aspects of Analysis Operator Learning and Its Application to Image Reconstruction by Simon Hawe, Martin Kleinsteuber, Klaus Diepold is the following interesting line of thought triggered by this:

"...We want to point out that this choice of parameters leads to a suitable general analysis operator, i.e. an operator with sparsifying property for the class of all natural images. It is thinkable that for more specific classes of images, for example medical imagery, other parameters may lead to a more suitable operator. .."
And indeed, where would the learning of the analysis operator be of further interest ? How is the analysis operator different from, say, the TV operator if it is learned from samples taken
  • from the same camera with the same lens ? 
  • The same cameras with different lenses ? Or different settings for those lenses ?
  • with widely differing specific natural images ?
Most synthetic experiments perform a calibration of the analysis operator with stock (2D) images; how is the analysis operator different
  • with a 3D physical natural scene ? 
  • with scenes taken with different kinds of sensors like an SEM or a random lens imager ? 

In the end, how do we compare those different analysis operators ? What sort of information does it bring to the hardware maker ? To the sensor designer ? To the calibration process ?



Friday, May 25, 2012

Inpainting Algorithm on GitHub (TV-L2 denoising and inpainting)




In light of the recent entry showing the results of an inpainting algorithm within an Analysis Operator Learning approach, Emmanuel d'Angelo let me know that he made available his TV-L2 denoising and inpainting code on GitHub. The photo above represents another 90% missing pixel reconstruction of Lena. The two approaches are very different in that the analysis operator is first learned in the first paper, whereas in the case of Emmanuel's code, the analysis operator is given (Finite Difference/TV). The first approach may yield better results in the inpainting stage but requires a learning phase. 

Emmanuel is still writing up his thesis so he is pretty busy for the moment, hence only the inpainting component of the more general algorithm is available. The general algorithm will include a fast version that can deal with the speed of video streams. According to him:

" The TV-L2 denoising is exactly what is described in the Chambolle-Pock [1], Alg. 1 of the primal-dual paper (no acceleration). The inpainting procedure is not described as is in the paper, but can be considered as a straightforward deduction from this same paper"
On his blog he describes the algorithm in more detail; it is in French, however, so let me translate the remarks:


  • The functional used here is nothing remarkable: it is the famous ROF model, namely $\arg\min_x \frac{1}{2}\|Ax-y\|_2^2 + \lambda\|x\|_{TV}$, where $A$ is a random mask and TV is the Total Variation.
  • The implementation is slightly modified to simulate an infinite parameter λ where the mask is nonzero, and 0 otherwise.
  • The algorithm used is called primal-dual and is described in this paper [1] (a minimal sketch follows this list).
  • For the Open Lab days at EPFL, I wrote a demo that used the video stream from a webcam on the Mac, which made quite an impression (and not just among the general public). It is used by my boss regularly when he wants to impress visitors. The algorithm was FISTA this time (Ref. 2), which converges fast.
  • The demo was initially written in the days of Mac OS X 10.5 Leopard with OpenCV version 1.0. Changes to QuickTime and OpenCV made that demo almost obsolete, so I invite you instead to go to the version of the code on GitHub.
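For readers who want the gist without digging into the C++, here is a minimal NumPy sketch of the same idea: Chambolle-Pock Algorithm 1 applied to TV inpainting, with the observed pixels enforced exactly (the "infinite λ where the mask is nonzero" trick mentioned above). This is a toy rendition under my own assumptions, not Emmanuel's code; step sizes and iteration counts are illustrative:

```python
import numpy as np

def grad(u):
    # forward differences with Neumann boundary conditions
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # divergence, the negative adjoint of grad above
    d = np.zeros_like(px)
    d[:, 0] += px[:, 0]
    d[:, 1:-1] += px[:, 1:-1] - px[:, :-2]
    d[:, -1] -= px[:, -2]
    d[0, :] += py[0, :]
    d[1:-1, :] += py[1:-1, :] - py[:-2, :]
    d[-1, :] -= py[-2, :]
    return d

def tv_inpaint(y, mask, lam=0.1, n_iter=500):
    """Chambolle-Pock Alg. 1 (theta = 1, no acceleration) for TV inpainting.
    mask is boolean: True where a pixel was actually observed."""
    tau = 0.25
    sigma = 1.0 / (8.0 * tau)         # tau*sigma*L^2 <= 1, L^2 = 8 for grad
    x = y * mask; x_bar = x.copy()
    px = np.zeros_like(x); py = np.zeros_like(x)
    for _ in range(n_iter):
        gx, gy = grad(x_bar)
        px += sigma * gx; py += sigma * gy
        s = np.maximum(1.0, np.hypot(px, py) / lam)  # project dual variable
        px /= s; py /= s
        x_old = x.copy()
        x = x + tau * div(px, py)
        x[mask] = y[mask]             # infinite fidelity on observed pixels
        x_bar = 2 * x - x_old
    return x

# usage: keep 10% of the pixels of a float image img, then reconstruct
# mask = np.random.default_rng(0).random(img.shape) < 0.1
# rec = tv_inpaint(img * mask, mask)
```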


He added, in a follow-up e-mail on the state of the current code on GitHub:

The code is C++ only, and targeted to work with OpenCV 2.4 and later. My optical flow code will be both OpenCV/C++ and GPGPU/OpenCL (it will be a bug free version of the one I presented at ICIP). I also want to add multithreading with Grand Central Dispatch (aka libdispatch), which is an Apple equivalent of Intel TBB. GCD is open sourced and available for Mac OS, iOS, and FreeBSD.

If I have the time to implement a "good" wavelet transform, I will submit it to the main OpenCV module.
Let us stay tuned for the optical flow algorithm on the GitHub repository that will run at 30 fps. In the meantime, Emmanuel's publications are listed here.


[1] A first-order primal-dual algorithm for convex problems with applications to imaging, Antonin Chambolle and Thomas Pock, Journal of Mathematical Imaging and Vision, 2011, Volume 40, Number 1, Pages 120-145.
[2] A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems, Amir Beck and Marc Teboulle, SIAM J. Imaging Sci. 2, 183 (2009), DOI:10.1137/080716542.



Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

"Open the pod bay doors, Hal."


Dragon being in the shadow of the ISS reminded me of a study we did for the Space Shuttle approach using CRTech's SINDA/Thermal Desktop application. It's on CRTech's site (click on the image below for the animation):


The Station crew is now go for approach. All of it can be watched on NASA TV.




Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

SpaceX Dragon as viewed from the ISS: Approach in 45 minutes.


It looks like SpaceX Dragon is switching from its primary to its secondary mission, i.e. berthing with the International Space Station. More news and updates on the SpaceX website (and webcast in 45 minutes). I am sure there will be a similar feed on NASA TV but from the station's perspective. Woohoo.




Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, May 24, 2012

Incremented Rank PowerFactorization algorithm - implementation -




Justin Haldar sent me the following:

Hi Igor, 
I just came across your Matrix Factorization Jungle website, and thought you might like to list the Incremented Rank PowerFactorization algorithm that Diego Hernando and I published about a while back (Rank-Constrained Solutions to Linear Matrix Equations Using PowerFactorization).
It's a fast greedy algorithm that solves a rank-constrained version of the matrix recovery inverse problem, and empirically seems to have advantages over convex-relaxation approaches in a number of settings (previously mentioned on Nuit Blanche). It's also been used for sparsely sampled medical imaging reconstruction problems. See:
.....

Here is my response

Justin,

I am torn and you might help me in figuring this thing out.

You may have noticed that the Matrix Factorization Jungle only features implementations that people can download. It is just too difficult to keep track of any and all implementations that end up never being available (for whatever reason). I would love to feature your solver. Is there any way you could make something available, even if it is at a prototype level (for a small dataset) ? I could then point to you for people who would want something with more strength.


Cheers,

Igor.
If you recall, I had a small rant about this a while back in Nobody Cares About You and Your Algorithm, and the argument still stands. Help me help you become a rock star. Some people might not take the message kindly, as it goes against some old habits of academia, but Justin was not one of those people; he wrote back the next day:

Hi Igor,
With your encouragement, I've created a simple stripped-down Matlab implementation that people can download from here:
Cheers,
--
Justin
Thanks, Justin, it is now on the Matrix Factorization page! The IRPF page starts with:





Incremented Rank PowerFactorization: Sample Code 
This page provides sample code for a basic Matlab implementation of the Incremented Rank PowerFactorization (IRPF) algorithm.  IRPF is a fast greedy algorithm that solves a rank-constrained version of the matrix recovery problem: minimize $\|\mathcal{A}(X) - b\|_2$ over matrices $X$ with $\mathrm{rank}(X) \le r$, for a linear operator $\mathcal{A}$ and measured data $b$.
The algorithm was originally described in:
J. P. Haldar, D. Hernando.  “Rank-Constrained Solutions to Linear Matrix Equations using PowerFactorization.” IEEE Signal Processing Letters 16:584-587, 2009.
Slightly more detail about the algorithm can be found in Chapter 5 of:
J. P. Haldar.  “Constrained Imaging: Denoising and Sparse Sampling.” Ph.D. Dissertation, University of Illinois at Urbana-Champaign, May 2011.
[link]
The dissertation also includes application examples and additional references involving the use of IRPF in the context of spatiotemporal magnetic resonance image reconstruction.  The use of IRPF and related methods in medical imaging applications continues to grow (so keep watching the literature!).
A sample Matlab implementation of IRPF is available at the bottom of the page.  There are many ways to implement IRPF, and different implementations can be more or less efficient depending on the problem context.  This particular version uses the iterative conjugate-gradient algorithm for matrix inversion (rather than direct matrix inversion as was described in the original paper).  Some features of this iterative approach are briefly mentioned in a [conference paper] and in the Ph.D. dissertation referenced above. Other versions are available on request.
Sample results are provided below, which can be recreated using sample code provided alongside the IRPF implementation.
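As a rough illustration of the PowerFactorization idea behind IRPF (alternating least squares over a fixed-rank factorization $X = UV^T$), here is a NumPy sketch for the matrix completion special case. It omits the rank-increment outer loop and the conjugate-gradient solves of the actual package, so treat it as a cartoon of the method, not a substitute for Justin's Matlab code:

```python
import numpy as np

def power_factorization(Y, mask, r, n_iter=50):
    """Fit X = U @ V.T of rank r to the entries of Y observed where mask
    is True, by alternating exact least-squares updates of U and V.
    Assumes every row and column has at least r observed entries."""
    m, n = Y.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(n_iter):
        for i in range(m):                      # row-wise update of U
            obs = mask[i]
            U[i] = np.linalg.lstsq(V[obs], Y[i, obs], rcond=None)[0]
        for j in range(n):                      # column-wise update of V
            obs = mask[:, j]
            V[j] = np.linalg.lstsq(U[obs], Y[obs, j], rcond=None)[0]
    return U @ V.T

# toy check: recover a random rank-2 matrix from ~50% of its entries
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(X.shape) < 0.5
X_hat = power_factorization(X * mask, mask, r=2)
print(np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```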





90% missing pixels and you reconstructed that ?! Analysis Operator Learning and Its Application to Image Reconstruction

I missed the following paper in yesterday's Compressive Sensing This Week review, but one of the authors, Simon Hawe, kindly mentioned it to me:



Analysis Operator Learning and Its Application to Image Reconstruction by Simon Hawe, Martin Kleinsteuber, Klaus Diepold. The abstract reads:
Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well established tool for the design of image reconstruction algorithms. An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model severely depends on the right choice of the suitable operator.
In this work, we present an algorithm for learning an analysis operator from training images. Our method is based on an $\ell_p$-norm minimization on the set of full rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds, and explain the underlying geometry of the constraints. Moreover, we compare our approach to state-of-the-art methods for image denoising, inpainting, and single image super-resolution. Our numerical results show competitive performance of our general approach in all presented applications compared to the specialized state-of-the-art techniques.

Simon also showed me what the Omega operators looked like after the learning process as I wanted to know how far they were from a simple derivative-TV operator. Here is what he had to say:

....On Page 12, Figure 3, you see the atoms of the operator we have learned within all our experiments [see above]. They look more like higher order derivatives than first order derivatives as in the TV case. We played around a little bit with the optimization, and if we introduce a prior which enforces the atoms themselves to be sparse, then the output does look very much like TV, see the attached image [see below].





After the Noise Aware Analysis Operator Learning for Approximately Cosparse Signals featured earlier this week, the collection of approaches to learning these analysis operators is growing. Simon tells me their implementation should be out soon. A new era, where the fields of signal processing and mathematical physics are colliding, is born.


and yes, 90 percent missing pixels!

Thanks Simon



Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, May 23, 2012

Compressive Sensing This Week

We started the week with a tour of some of the blogs in Around the blogs in 80 hours: Compressive Sensing Edition, then we discovered an implementation of Noise Aware Analysis Operator Learning for Approximately Cosparse Signals and finally answered some questions on why sparsifying a known signal (adding zeros to it) is a bad idea for sampling. Today we have a slew of papers related to compressive sensing, including two hardware instances. Enjoy!

A host of problems involve the recovery of structured signals from a dimensionality reduced representation such as a random projection; examples include sparse signals (compressive sensing) and low-rank matrices (matrix completion). Given the wide range of different recovery algorithms developed to date, it is natural to ask whether there exist "universal" algorithms for recovering "structured" signals from their linear projections. We recently answered this question in the affirmative in the noise-free setting. In this paper, we extend our results to the case of noisy measurements.

Fast Correlation Computation Method for Matching Pursuit Algorithms in Compressed Sensing by Kee-Hoon Kim, Hosung Park, Seokbeom Hong, Jong-Seon No, Habong Chung. The abstract reads: 

There have been many matching pursuit algorithms (MPAs) which handle the sparse signal recovery problem a.k.a. compressed sensing (CS). In the MPAs, the correlation computation step has a dominant computational complexity. In this letter, we propose a new fast correlation computation method when we use some classes of partial unitary matrices as the sensing matrix. Those partial unitary matrices include partial Fourier matrices and partial Hadamard matrices which are popular sensing matrices. The proposed correlation computation method can be applied to almost all MPAs without causing any degradation of their recovery performance. And, for most practical parameters, the proposed method can reduce the computational complexity of the MPAs substantially.
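The speed-up is easy to see in a few lines of NumPy: when the sensing matrix is a row-subsampled DFT, the correlation step $A^H r$ of a matching pursuit iteration collapses to a single inverse FFT instead of an $O(mn)$ matrix product. This toy check is my own illustration of the general principle, not the authors' specific method:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 128
omega = rng.choice(n, m, replace=False)       # retained DFT rows

# explicit partial Fourier sensing matrix (for reference only)
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A = F[omega]

r = rng.standard_normal(m) + 1j * rng.standard_normal(m)  # residual

c_naive = A.conj().T @ r                      # O(m n) correlation step

z = np.zeros(n, dtype=complex)
z[omega] = r                                  # embed residual at kept rows
c_fast = np.sqrt(n) * np.fft.ifft(z)          # O(n log n), same result

print(np.allclose(c_naive, c_fast))           # True
```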


We consider a linear stochastic bandit problem where the dimension $K$ of the unknown parameter $\theta$ is larger than the sampling budget $n$. In such cases, it is in general impossible to derive sub-linear regret bounds since usual linear bandit algorithms have a regret in $O(K\sqrt{n})$. In this paper we assume that $\theta$ is $S-$sparse, i.e. has at most $S-$non-zero components, and that the space of arms is the unit ball for the $||.||_2$ norm. We combine ideas from Compressed Sensing and Bandit Theory and derive algorithms with regret bounds in $O(S\sqrt{n})$.

Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing by Justin Ziniel, Philip Schniter. The abstract reads:
In this work the dynamic compressive sensing (CS) problem of recovering sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear measurements is explored from a Bayesian perspective. While there has been a handful of previously proposed Bayesian dynamic CS algorithms in the literature, the ability to perform inference on high-dimensional problems in a computationally efficient manner remains elusive. In response, we propose a probabilistic dynamic CS signal model that captures both amplitude and support correlation structure, and describe an approximate message passing algorithm that performs soft signal estimation and support detection with a computational complexity that is linear in all problem dimensions. The algorithm, DCS-AMP, can perform either causal filtering or non-causal smoothing, and is capable of learning model parameters adaptively from the data through an expectation-maximization learning procedure. We provide numerical evidence that DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety of operating conditions. We further describe the result of applying DCS-AMP to two real dynamic CS datasets, as well as a frequency estimation task, to bolster our claim that DCS-AMP is capable of offering state-of-the-art performance and speed on real-world high-dimensional problems.






In this work we propose a unique sampling scheme of Radon Projections and a non-linear reconstruction algorithm based on compressive sensing (CS) theory to implement a progressive compressive sampling imaging system. The progressive sampling scheme offers online control of the tradeoff between the compression and the quality of reconstruction. It avoids the need of a priori knowledge of the object sparsity that is usually required for CS design. In addition, the progressive data acquisition enables straightforward application of ordered-subsets algorithms which overcome computational constraints associated with the reconstruction of very large images. We present, to the best of our knowledge for the first time, a compressive imaging implementation of megapixel size images with a compression ratio of 20:1.

Recovery of partially occluded objects by applying compressive Fresnel holography by Yair Rivenson, Alon Rot, Sergey Balber, Adrian Stern, and Joseph Rosen. The abstract reads:
A compressive Fresnel holography approach is suggested for the recovery of partially occluded objects. Reconstruction guarantees are analyzed and the effectiveness of the method is demonstrated using simulations and an experimental result showing the reconstruction of a partially occluded resolution chart.

Localization information of moving and changing objects, as commonly extracted from video sequences, is typically very sparse with respect to the full data frames, thus fulfilling one of the basic conditions of compressive sensing theory. Motivated by this observation, we developed an optical compressive change and motion-sensing technique that detects the location of moving objects by using a significantly fewer samples than conventionally taken. We present examples of motion detection and motion tracking with over two orders of magnitude fewer samples than required with conventional systems.

Optimal compressed sensing reconstructions of fMRI using deterministic and stochastic sampling geometries by Oliver Jeromin, Marios S Pattichis and Vince D Calhoun. The abstract reads:

Background
Compressive sensing can provide a promising framework for accelerating fMRI image acquisition by allowing reconstructions from a limited number of frequency-domain samples. Unfortunately, the majority of compressive sensing studies are based on stochastic sampling geometries that cannot guarantee fast acquisitions that are needed for fMRI. The purpose of this study is to provide a comprehensive optimization framework that can be used to determine the optimal stochastic or deterministic sampling geometry, as well as to provide optimal reconstruction parameter values for guaranteeing image quality in the reconstructed images.

I just came across the following project page on Graph Spectral Compressed Sensing:

Project Overview:
In this project, we consider a signal whose entries are supported on the nodes of a graph. We study the metric to measure the smoothness of signals supported on graphs and provide theoretical explanations for when and why the Laplacian eigenbasis can be regarded as a meaningful "Fourier" transform of such signals. Moreover, we characterize the desired properties of the underlying graph for better compressibility of the signals. For a smooth signal with respect to the graph topology, our work proves that we can gather measurements from a random subset of nodes and then obtain a stable recovery with respect to the graph Laplacian eigenbasis, leveraging ideas from compressed sensing. We also show how such techniques can be used for both temporally and spatially correlated signals sampled by wireless sensor networks. Significant savings are made in terms of energy resources, bandwidth, and query latency by using this approach. All the theoretical analysis and the performance of proposed algorithms are verified using both synthesized data and real world data.
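Here is a self-contained NumPy sketch of the core idea, under my own toy assumptions (a ring graph, a signal living on the k lowest-frequency Laplacian eigenvectors, least-squares recovery from a random subset of nodes); the project's actual algorithms and guarantees are in the papers below:

```python
import numpy as np

# hypothetical small graph: a ring of n nodes
n, k, m = 100, 5, 30
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian

# "graph Fourier" basis: Laplacian eigenvectors, ascending frequency
_, U = np.linalg.eigh(L)

# a smooth signal: combination of the k lowest-frequency eigenvectors
rng = np.random.default_rng(0)
x = U[:, :k] @ rng.standard_normal(k)

# sample a random subset of m nodes, recover by least squares on U[:, :k]
S = rng.choice(n, m, replace=False)
coef = np.linalg.lstsq(U[S, :k], x[S], rcond=None)[0]
x_hat = U[:, :k] @ coef
print(np.linalg.norm(x - x_hat))               # ~0 for a bandlimited signal
```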




Two publications so far:
 Approximating signals supported on graphs by Xiaofan Zhu and Michael Rabbat. The abstract reads:
In this paper, we investigate the notion of smoothness for signals supported on the vertices of a graph. We provide theoretical explanations when and why the Laplacian eigenbasis can be regarded as a meaningful “Fourier” transform of such signals. Moreover, we analyze the desired properties of the underlying graphs for better compressibility of the signals. We verify our theoretical work by experiments on real world data.
Consider a wireless sensor network with N sensor nodes measuring data which are correlated temporally or spatially. We consider the problem of reconstructing the original data by only transmitting $M \ll N$ sensor readings while guaranteeing that the reconstruction error is small. Assuming the original signal is “smooth” with respect to the network topology, our approach is to gather measurements from a random subset of nodes and then interpolate with respect to the graph Laplacian eigenbasis, leveraging ideas from compressed sensing. We propose algorithms for both temporally and spatially correlated signals, and the performance of these algorithms is verified using both synthesized data and real world data. Significant savings are made in terms of energy resources, bandwidth, and query latency.


Density-based group testing by Dániel Gerbner, Balázs Keszegh, Dömötör Pálvölgyi, Gábor Wiener. The abstract reads:
In this paper we study a new, generalized version of the well-known group testing problem. In the classical model of group testing we are given n objects, some of which are considered to be defective. We can test certain subsets of the objects whether they contain at least one defective element. The goal is usually to find all defectives using as few tests as possible. In our model the presence of defective elements in a test set Q can be recognized if and only if their number is large enough compared to the size of Q. More precisely, for a test Q the answer is 'yes' if and only if there are at least $\alpha |Q|$ defective elements in Q, for some fixed $\alpha$.
of note the following excerpt:
....The concept of group testing was developed in the middle of the previous century. Dorfman, a Swiss physician intended to test blood samples of millions of soldiers during World War II in order to find those who were infected by syphilis. His key idea was to test more blood samples at the same time and learn whether at least one of them are infected [3]. Some fifteen years later Rényi developed a theory of search in order to find which electrical part of his car went wrong. In his model – contrary to Dorfman’s one – not all of the subsets of the possible defectives (electric parts) could be tested [6]. Group testing has now a wide variety of applications in areas like DNA screening, mobile networks, software and hardware testing....
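In code, the test model of this abstract is a one-liner; the substance of the paper is how few such queries suffice. A toy rendition of the threshold test (all names hypothetical):

```python
import numpy as np

def density_test(defective, Q, alpha):
    """One query in the density-based model: answer 'yes' iff the test
    set Q contains at least alpha * |Q| defective elements."""
    return defective[Q].sum() >= alpha * len(Q)

rng = np.random.default_rng(0)
defective = np.zeros(20, dtype=bool)
defective[rng.choice(20, 3, replace=False)] = True  # 3 defectives among 20
Q = rng.choice(20, 6, replace=False)                # one test set of size 6
print(density_test(defective, Q, alpha=0.5))
```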



This paper presents a significant modification to the Random Demodulator (RD) of Tropp et al. for sub-Nyquist sampling of frequency-sparse signals. The modification, termed constrained random demodulator, involves replacing the random waveform integral to the operation of the RD with a constrained random waveform that has limits on its switching rate because fast switching waveforms may be hard to generate cleanly. The result is a relaxation on the hardware requirements with a slight, but manageable, decrease in the recovery guarantees of the RD. The paper also establishes the importance of properly choosing the statistics of the constrained random waveform. If the power spectrum of the random waveform matches the distribution on the tones of the input signal (i.e., the distribution is proportional to the power spectrum), then recovery of the input signal tones is improved. The theoretical guarantees provided in the paper are validated through extensive numerical simulations and phase transition plots.

Compressed Sensing for Denoising in Adaptive System Identification by Seyed Hossein Hosseini, Mahrokh G. Shayesteh. The abstract reads:
We propose a new technique for adaptive identification of sparse systems based on the compressed sensing (CS) theory. We manipulate the transmitted pilot (input signal) and the received signal such that the weights of adaptive filter approach the compressed version of the sparse system instead of the original system. To this end, we use random filter structure at the transmitter to form the measurement matrix according to the CS framework. The original sparse system can be reconstructed by the conventional recovery algorithms. As a result, the denoising property of CS can be deployed in the proposed method at the recovery stage. The experiments indicate significant performance improvement of proposed method compared to the conventional LMS method which directly identifies the sparse system. Furthermore, at low levels of sparsity, our method outperforms a specialized identification algorithm that promotes sparsity.


Gradually Atom Pruning for Sparse Reconstruction and Extension to Correlated Sparsity by Seyed Hossein Hosseini, Mahrokh G. Shayesteh. The abstract reads:

We propose a new algorithm for recovery of sparse signals from their compressively sensed samples. The proposed algorithm benefits from the strategy of gradual movement to estimate the positions of non-zero samples of sparse signal. We decompose each sample of signal into two variables, namely "value" and "detector", by a weighted exponential function. We update these new variables using gradient descent method. Like the traditional compressed sensing algorithms, the first variable is used to solve the Least Absolute Shrinkage and Selection Operator (Lasso) problem. As a new strategy, the second variable participates in the regularization term of the Lasso (l1 norm) that gradually detects the non-zero elements. The presence of the second variable enables us to extend the corresponding vector of the first variable to matrix form. This makes possible use of the correlation matrix for a heuristic search in the case that there are correlations among the samples of signal. We compare the performance of the new algorithm with various algorithms for uncorrelated and correlated sparsity. The results indicate the efficiency of the proposed methods.

An Inversion Formula for Orlicz Norms and Sequences of Random Variables by Soeren Christensen, Joscha Prochno, Stiene Riemer. The abstract reads:
Given an Orlicz function $M$, we show which random variables $\xi_i$, $i=1,...,n$, generate the associated Orlicz norm, i.e., which random variables yield $\mathbb{E} \max_{1\leq i \leq n}|x_i\xi_i| \sim \|(x_i)_{i=1}^n\|_M$. As a corollary we obtain a representation for the distribution function in terms of $M$ and $M'$ which can be easily applied to many examples of interest.
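For readers unfamiliar with the notation, the Orlicz norm in question is the standard Luxemburg-style norm; in LaTeX:

```latex
% Orlicz norm of x = (x_1, ..., x_n) for an Orlicz function M
\[
  \|x\|_M \;=\; \inf\Big\{ \rho > 0 \;:\; \sum_{i=1}^{n} M\big(|x_i|/\rho\big) \le 1 \Big\}
\]
```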

Fast projections onto mixed-norm balls with applications by Suvrit Sra. The abstract reads:
Joint sparsity offers powerful structural cues for feature selection, especially for variables that are expected to demonstrate a "grouped" behavior. Such behavior is commonly modeled via group-lasso, multitask lasso, and related methods where feature selection is effected via mixed-norms. Several mixed-norm based sparse models have received substantial attention, and for some cases efficient algorithms are also available. Surprisingly, several constrained sparse models seem to be lacking scalable algorithms. We address this deficiency by presenting batch and online (stochastic-gradient) optimization methods, both of which rely on efficient projections onto mixed-norm balls. We illustrate our methods by applying them to the multitask lasso. We conclude by mentioning some open problems.
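The abstract's key primitive, projection onto a mixed-norm ball, can be stated compactly; here is a NumPy sketch of the classical reduction for the $\ell_1/\ell_2$ ball (project the vector of group norms onto an $\ell_1$ ball, then rescale each group). This is the textbook construction, not necessarily the batch/online algorithms of the paper:

```python
import numpy as np

def project_l1(v, tau):
    """Euclidean projection of a nonnegative vector v onto
    {u >= 0 : sum(u) <= tau}, via the standard sort-based rule."""
    if v.sum() <= tau:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def project_mixed_norm_ball(X, tau):
    """Project the rows of X (one group per row) onto the l1/l2 ball
    {X : sum_g ||x_g||_2 <= tau}: project the vector of group norms
    onto the l1 ball, then rescale each group accordingly."""
    norms = np.linalg.norm(X, axis=1)
    new_norms = project_l1(norms, tau)
    scale = np.where(norms > 0, new_norms / np.maximum(norms, 1e-12), 0.0)
    return X * scale[:, None]

# usage: shrink a matrix of 10 groups of dimension 5 onto the unit ball
X = np.random.default_rng(0).standard_normal((10, 5))
X_proj = project_mixed_norm_ball(X, tau=1.0)
print(np.linalg.norm(X_proj, axis=1).sum())    # <= 1.0
```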

Video rate Atomic Force Microscopy (AFM) imaging using compressive sensing by Ning Xi, Ruiguo Yang, K.W.C. Lai, Chengeng Qu. The abstract reads:
Atomic Force Microscopy (AFM) is a powerful tool for nano-size imaging. The advantage of AFM is that it can get extraordinarily high resolution images at the atom level. However, AFM obtains the sample topography image by scanning on top of the sample line by line, therefore it takes a couple of minutes to get an image, and this makes it difficult to continuously observe surface changes during manipulation. In this paper, a novel approach for a compressive sensing based video rate AFM imaging system is proposed. In this method, compressive sensing is used for efficiently sampling topography information of the sample surface. Compressive sensing can use fewer measurements for data sensing and recover the image through an image reconstruction algorithm. This technique decreases the scanning time for the AFM scanner because fewer measurements are needed. The video rate for this new approach could reach as high as 1.75 seconds per frame.

Since the paper is behind a paywall, I gathered this other excerpt on the interweb:

Atomic Force Microscopy (AFM) is a useful tool for nano scale imaging in both air and liquid. However, an AFM usually takes several minutes to get an image due to the limitation of the scan rate. Therefore it is difficult to use an AFM to observe dynamic changes in real-time. There is an increasing demand for fast imaging AFM systems. Hardware improvement might be one of the solutions, but it needs a much faster and more stable AFM scanner and controller to achieve fast scans. In reality, it is still difficult to obtain high resolution and stable video-rate imaging. In this presentation, a methodology to achieve video-rate AFM imaging by smart scan will be introduced. Based on the theory of compressive sensing, new scan patterns have been developed. Instead of scanning line by line over the imaging area, a special pattern will be developed based on the characteristics of the image. As a result, it is no longer necessary to scan everywhere in a sample to get an image. The selected scanning will provide sufficient information to recover the entire image. This will significantly reduce the amount of scanning time and achieve video-rate AFM imaging. Furthermore, the video-rate AFM images can also provide accurate position information of an AFM probe. It can be used to replace the position sensor that usually introduces measurement noises and to enable high-precision motion control. A new non-vector space motion controller for nano-scale manipulation has been developed using the video-rate AFM images. These new imaging and motion control methods have been successfully applied to many applications such as observing chemical synthesis in real time, nano robotic manipulation, and nano assembly.

I am still not quite sure if this is an incoherent measurement scheme or some inpainting.

From some of the same authors, we also have: Compressive Feedback based Non-vector Space Control by Jianguo Zhao, Bo Song, Ning Xi, King Wai Chiu Lai, Hongzhi Chen, and Chengeng Qu.
Abstract—A non-vector space control method based on compressive feedback is presented in this paper. Non-vector space means the control is not performed in a traditional vector space but in the space of sets. Considering an initial set and a goal set, a stabilizing controller is designed to make the set dynamics converge to the goal set. Compressive feedback means the controller works even when only partial elements of the feedback set are available; that is, the same controller can still stabilize the set dynamics around the goal set with the compressive feedback. The controller is applied to visual servoing by considering images as sets. In this way, image processing for feature extraction is not required, which is an essential process in conventional servo methods. Moreover, the compressive feedback can reduce the size of the feedback image. This is important when the sampling is time consuming, such as imaging using atomic force microscopy (AFM). The visual servoing formulation of the controller is further applied to motion control in nanomanipulations. Simulation results suggest good performance of the controller. The framework proposed in this paper can be extended to other systems where the signals can be represented as sets.
We give a short proof of a result of G. Paouris on the tail behaviour of the Euclidean norm $|X|$ of an isotropic log-concave random vector $X\in\mathbb{R}^n$, stating that for every $t\geq 1$, $P(|X|\geq ct\sqrt n)\leq \exp(-t\sqrt n)$. More precisely, we show that for any log-concave random vector $X$ and any $p\geq 1$, $(\mathbb{E}|X|^p)^{1/p}\sim \mathbb{E}|X| + \sup_{z\in S^{n-1}}(\mathbb{E}|\langle z,X\rangle|^p)^{1/p}$.

You might be wondering why this isotropic log-concave random vector is of importance to compressive sensing ? Reading Geometric properties of random matrices with independent log-concave rows/columns by Radosław Adamczak might help.


Finally, we have a presentation: Acceleration of High Angular and Spatial Resolution Diffusion Imaging Using Compressed Sensing by Merry Mani, Mathews Jacob, Arnaud Guidon, Chunlei Liu, Allen Song, Vincent Magnotta, Jianhui Zhong.



Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.