We started the week with a tour of some of the blogs in
Around the blogs in 80 hours: Compressive Sensing Edition, then we discovered an implementation of Noise Aware Analysis Operator Learning for Approximately Cosparse Signals, and finally answered some questions on why sparsifying a known signal (adding zeros to it) is a bad idea for sampling. Today we have a slew of papers related to compressive sensing, including two hardware instances. Enjoy!

Minimum Complexity Pursuit: Stability Analysis by Shirin Jalali, Arian Maleki, Richard Baraniuk. The abstract reads:

A host of problems involve the recovery of structured signals from a dimensionality reduced representation such as a random projection; examples include sparse signals (compressive sensing) and low-rank matrices (matrix completion). Given the wide range of different recovery algorithms developed to date, it is natural to ask whether there exist "universal" algorithms for recovering "structured" signals from their linear projections. We recently answered this question in the affirmative in the noise-free setting. In this paper, we extend our results to the case of noisy measurements.

Fast Correlation Computation Method for Matching Pursuit Algorithms in Compressed Sensing by Kee-Hoon Kim, Hosung Park, Seokbeom Hong, Jong-Seon No, Habong Chung. The abstract reads:

There have been many matching pursuit algorithms (MPAs) which handle the sparse signal recovery problem a.k.a. compressed sensing (CS). In the MPAs, the correlation computation step has a dominant computational complexity. In this letter, we propose a new fast correlation computation method when we use some classes of partial unitary matrices as the sensing matrix. Those partial unitary matrices include partial Fourier matrices and partial Hadamard matrices which are popular sensing matrices. The proposed correlation computation method can be applied to almost all MPAs without causing any degradation of their recovery performance. And, for most practical parameters, the proposed method can reduce the computational complexity of the MPAs substantially.
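To see why partial unitary matrices make this step cheap, here is a small sketch (my own illustration with made-up sizes, not the authors' code): for a partial Fourier sensing matrix, the matching-pursuit correlation A^H r is a single inverse FFT of the zero-padded residual, O(n log n) instead of the O(mn) explicit product.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64
rows = rng.choice(n, size=m, replace=False)   # measurement rows we keep

# Explicit partial unitary DFT sensing matrix (built only to check against).
F = np.fft.fft(np.eye(n), norm="ortho")       # unitary DFT matrix
A = F[rows, :]

r = rng.standard_normal(m) + 1j * rng.standard_normal(m)  # residual

# Naive correlation step: O(m n) matrix-vector product.
corr_naive = A.conj().T @ r

# Fast version: zero-pad the residual into the kept rows, then one
# inverse unitary FFT computes A^H r = F^H S^T r in O(n log n).
padded = np.zeros(n, dtype=complex)
padded[rows] = r
corr_fast = np.fft.ifft(padded, norm="ortho")

print(np.allclose(corr_naive, corr_fast))  # → True
```

The same trick works with a fast Walsh–Hadamard transform for partial Hadamard matrices, which is why the method covers both families mentioned in the abstract.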

Bandit Theory meets Compressed Sensing for high dimensional Stochastic Linear Bandit by Alexandra Carpentier, Rémi Munos. The abstract reads:

We consider a linear stochastic bandit problem where the dimension $K$ of the unknown parameter $\theta$ is larger than the sampling budget $n$. In such cases, it is in general impossible to derive sub-linear regret bounds since usual linear bandit algorithms have a regret in $O(K\sqrt{n})$. In this paper we assume that $\theta$ is $S$-sparse, i.e. has at most $S$ non-zero components, and that the space of arms is the unit ball for the $\|.\|_2$ norm. We combine ideas from Compressed Sensing and Bandit Theory and derive algorithms with regret bounds in $O(S\sqrt{n})$.

Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing by Justin Ziniel, Philip Schniter. The abstract reads:

In this work the dynamic compressive sensing (CS) problem of recovering sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear measurements is explored from a Bayesian perspective. While there has been a handful of previously proposed Bayesian dynamic CS algorithms in the literature, the ability to perform inference on high-dimensional problems in a computationally efficient manner remains elusive. In response, we propose a probabilistic dynamic CS signal model that captures both amplitude and support correlation structure, and describe an approximate message passing algorithm that performs soft signal estimation and support detection with a computational complexity that is linear in all problem dimensions. The algorithm, DCS-AMP, can perform either causal filtering or non-causal smoothing, and is capable of learning model parameters adaptively from the data through an expectation-maximization learning procedure. We provide numerical evidence that DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety of operating conditions. We further describe the result of applying DCS-AMP to two real dynamic CS datasets, as well as a frequency estimation task, to bolster our claim that DCS-AMP is capable of offering state-of-the-art performance and speed on real-world high-dimensional problems.

Progressive compressive imaging from Radon projections by Sergei Evladov, Ofer Levi, and Adrian Stern. The abstract reads:

In this work we propose a unique sampling scheme of Radon Projections and a non-linear reconstruction algorithm based on compressive sensing (CS) theory to implement a progressive compressive sampling imaging system. The progressive sampling scheme offers online control of the tradeoff between the compression and the quality of reconstruction. It avoids the need of a priori knowledge of the object sparsity that is usually required for CS design. In addition, the progressive data acquisition enables straightforward application of ordered-subsets algorithms which overcome computational constraints associated with the reconstruction of very large images. We present, to the best of our knowledge for the first time, a compressive imaging implementation of megapixel size images with a compression ratio of 20:1.

Recovery of partially occluded objects by applying compressive Fresnel holography by Yair Rivenson, Alon Rot, Sergey Balber, Adrian Stern, and Joseph Rosen. The abstract reads:

A compressive Fresnel holography approach is suggested for the recovery of partially occluded objects. Reconstruction guarantees are analyzed and the effectiveness of the method is demonstrated using simulations and an experimental result showing the reconstruction of a partially occluded resolution chart.

Optical compressive change and motion detection by Yuval Kashter, Ofer Levi, and Adrian Stern. The abstract reads:

Localization information of moving and changing objects, as commonly extracted from video sequences, is typically very sparse with respect to the full data frames, thus fulfilling one of the basic conditions of compressive sensing theory. Motivated by this observation, we developed an optical compressive change and motion-sensing technique that detects the location of moving objects by using a significantly fewer samples than conventionally taken. We present examples of motion detection and motion tracking with over two orders of magnitude fewer samples than required with conventional systems.
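A toy numerical illustration of the observation driving this (mine, not the authors' optics): the frame-to-frame change is far sparser than the frames themselves, and because the measurements are linear they commute with the difference, so a few measurements of each frame already carry the sparse change signal, which any standard CS solver can then recover.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 40                       # scene size, measurements per frame

x1 = rng.standard_normal(n)          # dense background frame
change = np.zeros(n)
change[[10, 57, 123]] = [3.0, -2.0, 4.0]   # a 3-sparse change (moving object)
x2 = x1 + change

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random linear sampler
y1, y2 = Phi @ x1, Phi @ x2

# The difference of the few measurements equals measurements of the
# sparse change, so m << n samples suffice to locate the motion.
print(np.allclose(y2 - y1, Phi @ change))  # → True
```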

Optimal compressed sensing reconstructions of fMRI using deterministic and stochastic sampling geometries by Oliver Jeromin, Marios S Pattichis and Vince D Calhoun. The abstract reads:

Background: Compressive sensing can provide a promising framework for accelerating fMRI image acquisition by allowing reconstructions from a limited number of frequency-domain samples. Unfortunately, the majority of compressive sensing studies are based on stochastic sampling geometries that cannot guarantee fast acquisitions that are needed for fMRI. The purpose of this study is to provide a comprehensive optimization framework that can be used to determine the optimal stochastic or deterministic sampling geometry, as well as to provide optimal reconstruction parameter values for guaranteeing image quality in the reconstructed images.
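For intuition, here is a minimal sketch (illustration only, not the paper's geometries) contrasting the two kinds of sampling geometry being compared: a deterministic low-frequency band of k-space lines versus a stochastic variable-density pattern that favors low frequencies.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64                # phase-encode lines in k-space
budget = 16           # lines we can afford to acquire
center = n // 2

# Deterministic geometry: the `budget` lowest frequencies around the center.
det_mask = np.zeros(n, dtype=bool)
det_mask[center - budget // 2 : center + budget // 2] = True

# Stochastic geometry: sampling probability decays with |frequency|,
# scaled so roughly `budget` lines are taken in expectation (clipped at 1).
dist = np.abs(np.arange(n) - center)
p = 1.0 / (1.0 + dist)
p = np.minimum(p * budget / p.sum(), 1.0)
sto_mask = rng.random(n) < p

print(det_mask.sum())   # exactly 16 deterministic lines
```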

Project Overview: In this project, we consider a signal whose entries are supported on the nodes of a graph. We study the metric to measure the smoothness of signals supported on graphs and provide theoretical explanations for when and why the Laplacian eigenbasis can be regarded as a meaningful "Fourier" transform of such signals. Moreover, we characterize the desired properties of the underlying graph for better compressibility of the signals. For a smooth signal with respect to the graph topology, our work proves that we can gather measurements from a random subset of nodes and then obtain a stable recovery with respect to the graph Laplacian eigenbasis, leveraging ideas from compressed sensing. We also show how such techniques can be used for both temporally and spatially correlated signals sampled by wireless sensor networks. Significant savings are made in terms of energy resources, bandwidth, and query latency by using this approach. All the theoretical analysis and the performance of proposed algorithms are verified using both synthesized data and real world data.

Two publications so far:

Approximating signals supported on graphs by Xiaofan Zhu and Michael Rabbat. The abstract reads:

In this paper, we investigate the notion of smoothness for signals supported on the vertices of a graph. We provide theoretical explanations of when and why the Laplacian eigenbasis can be regarded as a meaningful “Fourier” transform of such signals. Moreover, we analyze the desired properties of the underlying graphs for better compressibility of the signals. We verify our theoretical work by experiments on real world data.

Graph spectral compressed sensing for sensor networks by Xiaofan Zhu and Michael Rabbat. The abstract reads:

Consider a wireless sensor network with N sensor nodes measuring data which are correlated temporally or spatially. We consider the problem of reconstructing the original data by only transmitting M ≪ N sensor readings while guaranteeing that the reconstruction error is small. Assuming the original signal is “smooth” with respect to the network topology, our approach is to gather measurements from a random subset of nodes and then interpolate with respect to the graph Laplacian eigenbasis, leveraging ideas from compressed sensing. We propose algorithms for both temporally and spatially correlated signals, and the performance of these algorithms is verified using both synthesized data and real world data. Significant savings are made in terms of energy resources, bandwidth, and query latency.
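The recovery idea can be sketched in a few lines (my notation and toy graph, not the authors' code): a signal that is smooth with respect to the graph lives in the span of the first few Laplacian eigenvectors, so readings from a random subset of nodes can be interpolated by least squares in that eigenbasis.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, M = 100, 5, 30      # nodes, signal bandwidth, readings (M << N)

# Laplacian of a path graph on N nodes (a chain of sensors).
L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
L[0, 0] = L[-1, -1] = 1
_, U = np.linalg.eigh(L)  # columns of U = graph "Fourier" eigenbasis

# A smooth (K-bandlimited) signal: combination of the K smoothest eigenvectors.
x = U[:, :K] @ rng.standard_normal(K)

# Query only a random subset of M nodes...
nodes = rng.choice(N, size=M, replace=False)
y = x[nodes]

# ...and interpolate against the sampled rows of the eigenbasis.
coef, *_ = np.linalg.lstsq(U[nodes, :K], y, rcond=None)
x_hat = U[:, :K] @ coef

print(np.allclose(x_hat, x))  # → True: all N readings from M << N queries
```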

Density-based group testing by Dániel Gerbner, Balázs Keszegh, Dömötör Pálvölgyi, Gábor Wiener. The abstract reads:

In this paper we study a new, generalized version of the well-known group testing problem. In the classical model of group testing we are given n objects, some of which are considered to be defective. We can test certain subsets of the objects whether they contain at least one defective element. The goal is usually to find all defectives using as few tests as possible. In our model the presence of defective elements in a test set Q can be recognized if and only if their number is large enough compared to the size of Q. More precisely for a test Q the answer is 'yes' if and only if there are at least \alpha |Q| defective elements in Q for some fixed \alpha.

Of note, the following excerpt:
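A tiny simulation of this query model (my own illustration): the answer to a test set Q is 'yes' exactly when the defectives make up at least an \alpha fraction of Q.

```python
def density_test(defectives, Q, alpha):
    """Answer of one density-based group test: True iff Q contains
    at least alpha * |Q| defective elements."""
    hits = len(set(Q) & set(defectives))
    return hits >= alpha * len(Q)

defectives = {2, 5, 11}
alpha = 0.5

print(density_test(defectives, [2, 5, 7, 9], alpha))   # 2 of 4 defective → True
print(density_test(defectives, [1, 3, 4, 11], alpha))  # 1 of 4 defective → False
```

Note that unlike classical group testing (where a single defective triggers a 'yes'), a sparse defective set can hide inside a large Q here, which is what makes the search question interesting.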

....The concept of group testing was developed in the middle of the previous century. Dorfman, a Swiss physician, intended to test blood samples of millions of soldiers during World War II in order to find those who were infected by syphilis. His key idea was to test more blood samples at the same time and learn whether at least one of them was infected [3]. Some fifteen years later Rényi developed a theory of search in order to find which electrical part of his car went wrong. In his model – contrary to Dorfman’s one – not all of the subsets of the possible defectives (electric parts) could be tested [6]. Group testing has now a wide variety of applications in areas like DNA screening, mobile networks, software and hardware testing....

Faster than Nyquist but Slower than Tropp et al by Andrew Harms, Waheed U. Bajwa, Robert Calderbank. The abstract reads:

This paper presents a significant modification to the Random Demodulator (RD) of Tropp et al. for sub-Nyquist sampling of frequency-sparse signals. The modification, termed constrained random demodulator, involves replacing the random waveform integral to the operation of the RD with a constrained random waveform that has limits on its switching rate because fast switching waveforms may be hard to generate cleanly. The result is a relaxation on the hardware requirements with a slight, but manageable, decrease in the recovery guarantees of the RD. The paper also establishes the importance of properly choosing the statistics of the constrained random waveform. If the power spectrum of the random waveform matches the distribution on the tones of the input signal (i.e., the distribution is proportional to the power spectrum), then recovery of the input signal tones is improved. The theoretical guarantees provided in the paper are validated through extensive numerical simulations and phase transition plots.
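One way to picture the constrained waveform (my reading of the abstract, not the authors' exact construction): hold each random ±1 chip for R Nyquist samples instead of drawing a fresh sign every sample, which caps the switching rate of the mixing waveform.

```python
import numpy as np

rng = np.random.default_rng(4)
n, R = 120, 4                        # waveform length, minimum run length

# Unconstrained RD: a fresh Rademacher sign at every sample (up to n-1 switches).
unconstrained = rng.choice([-1.0, 1.0], size=n)

# Constrained RD: each sign is held for R samples, so the waveform can
# switch only at chip boundaries -- at most n/R - 1 times.
chips = rng.choice([-1.0, 1.0], size=n // R)
constrained = np.repeat(chips, R)

switches = np.count_nonzero(np.diff(constrained))
print(switches <= n // R - 1)  # → True
```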

Compressed Sensing for Denoising in Adaptive System Identification by Seyed Hossein Hosseini, Mahrokh G. Shayesteh. The abstract reads:

We propose a new technique for adaptive identification of sparse systems based on the compressed sensing (CS) theory. We manipulate the transmitted pilot (input signal) and the received signal such that the weights of adaptive filter approach the compressed version of the sparse system instead of the original system. To this end, we use random filter structure at the transmitter to form the measurement matrix according to the CS framework. The original sparse system can be reconstructed by the conventional recovery algorithms. As a result, the denoising property of CS can be deployed in the proposed method at the recovery stage. The experiments indicate significant performance improvement of proposed method compared to the conventional LMS method which directly identifies the sparse system. Furthermore, at low levels of sparsity, our method outperforms a specialized identification algorithm that promotes sparsity.

Gradually Atom Pruning for Sparse Reconstruction and Extension to Correlated Sparsity by Seyed Hossein Hosseini, Mahrokh G. Shayesteh. The abstract reads:

We propose a new algorithm for recovery of sparse signals from their compressively sensed samples. The proposed algorithm benefits from the strategy of gradual movement to estimate the positions of non-zero samples of sparse signal. We decompose each sample of signal into two variables, namely "value" and "detector", by a weighted exponential function. We update these new variables using gradient descent method. Like the traditional compressed sensing algorithms, the first variable is used to solve the Least Absolute Shrinkage and Selection Operator (Lasso) problem. As a new strategy, the second variable participates in the regularization term of the Lasso (l1 norm) that gradually detects the non-zero elements. The presence of the second variable enables us to extend the corresponding vector of the first variable to matrix form. This makes possible use of the correlation matrix for a heuristic search in the case that there are correlations among the samples of signal. We compare the performance of the new algorithm with various algorithms for uncorrelated and correlated sparsity. The results indicate the efficiency of the proposed methods.

An Inversion Formula for Orlicz Norms and Sequences of Random Variables by Soeren Christensen, Joscha Prochno, Stiene Riemer. The abstract reads:

Given an Orlicz function $M$, we show which random variables $\xi_i$, $i=1,...,n$ generate the associated Orlicz norm, i.e., which random variables yield $\mathbb{E} \max\limits_{1\leq i \leq n}|x_i\xi_i| \sim \|(x_i)_{i=1}^n\|_M$. As a corollary we obtain a representation for the distribution function in terms of $M$ and $M'$ which can be easily applied to many examples of interest.

Joint sparsity offers powerful structural cues for feature selection, especially for variables that are expected to demonstrate a "grouped" behavior. Such behavior is commonly modeled via group-lasso, multitask lasso, and related methods where feature selection is effected via mixed-norms. Several mixed-norm based sparse models have received substantial attention, and for some cases efficient algorithms are also available. Surprisingly, several constrained sparse models seem to be lacking scalable algorithms. We address this deficiency by presenting batch and online (stochastic-gradient) optimization methods, both of which rely on efficient projections onto mixed-norm balls. We illustrate our methods by applying them to the multitask lasso. We conclude by mentioning some open problems.
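The projections onto mixed-norm balls mentioned above can be sketched concretely (my own illustration, with rows of a matrix as the assumed groups): projecting onto the l1,2 ball reduces to an l1 projection of the per-group norms followed by a rescaling of each group.

```python
import numpy as np

def project_l1(w, tau):
    """Euclidean projection of a nonnegative vector w onto the l1 ball of
    radius tau (standard sort-and-threshold algorithm)."""
    if w.sum() <= tau:
        return w.copy()
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - tau) / (np.arange(len(u)) + 1.0) > 0)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.maximum(w - theta, 0.0)

def project_l12_ball(X, tau):
    """Projection onto {X : sum of row-wise l2 norms <= tau}: project the
    vector of group norms onto the l1 ball, then rescale each row."""
    norms = np.linalg.norm(X, axis=1)
    new_norms = project_l1(norms, tau)
    scale = np.where(norms > 0, new_norms / np.maximum(norms, 1e-12), 0.0)
    return X * scale[:, None]

rng = np.random.default_rng(5)
X = rng.standard_normal((5, 3))
P = project_l12_ball(X, tau=2.0)

# The projection lands on the ball: the group norms now sum to tau.
print(np.isclose(np.linalg.norm(P, axis=1).sum(), 2.0))  # → True
```

If rather than projecting you need the prox of the l1,2 penalty (as in the group lasso itself), the analogous operation is group soft-thresholding of the row norms.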

Video rate Atomic Force Microscopy (AFM) imaging using compressive sensing by Ning Xi, Ruiguo Yang, King Wai Chiu Lai, Chengeng Qu. The abstract reads:

Atomic Force Microscopy (AFM) is a powerful tool for nano-size imaging. The advantage of AFM is that it can get extraordinarily high resolution images at the atom level. However, AFM obtains the sample topography image by scanning over the top of the sample line by line, therefore it takes a couple of minutes to get an image, and this drawback makes it difficult to continuously observe surface changes during manipulation. In this paper, a novel approach for a compressive sensing based video rate AFM imaging system is proposed. In this method, compressive sensing is used for sampling topography information of the sample surface efficiently. Compressive sensing can use fewer measurements for data sensing and recover the image through an image reconstruction algorithm. This technique decreases the scanning time for the AFM scanner because fewer measurements are needed. The video rate for this new approach could reach as high as 1.75 seconds per frame.

Since the paper is behind a paywall, I gathered this other excerpt on the interweb

Atomic Force Microscopy (AFM) is a useful tool for nano scale imaging in both air and liquid. However, an AFM usually takes several minutes to get an image due to the limitation of the scan rate. Therefore it is difficult to use an AFM to observe dynamic changes in real-time. There is an increasing demand for fast imaging AFM systems. Hardware improvement might be one of the solutions, but it needs a much faster and more stable AFM scanner and controller to achieve fast scan. In reality, it is still difficult to obtain high resolution and stable video-rate imaging. In this presentation, a methodology to achieve video-rate AFM imaging by smart scan will be introduced. Based on the theory of compressive sensing, new scan patterns have been developed. Instead of scanning line by line over the imaging area, a special pattern will be developed based on the characteristics of the image. As a result, it is no longer necessary to scan everywhere in a sample to get an image. The selected scanning will provide sufficient information to recover the entire image. This will significantly reduce the scanning time and achieve video-rate AFM imaging. Furthermore, the video-rate AFM images can also provide accurate position information of an AFM probe. It can be used to replace the position sensor that usually introduces measurement noises and to enable high-precision motion control. A new non-vector space motion controller for nano-scale manipulation has been developed using the video-rate AFM images. These new imaging and motion control methods have been successfully applied to many applications such as observing chemical synthesis in real time, nano robotic manipulation, and nano assembly.

I am still not quite sure if this is an incoherent measurement or some inpainting.

From some of the same authors, we also have: Compressive Feedback based Non-vector Space Control by Jianguo Zhao, Bo Song, Ning Xi, King Wai Chiu Lai, Hongzhi Chen, and Chengeng Qu.

Abstract—A non-vector space control method based on compressive feedback is presented in this paper. The non-vector space means the control is not performed in traditional vector space but in the space of sets. Given an initial set and a goal set, a stabilizing controller is designed to make the set dynamics converge to the goal set. The compressive feedback means the controller works even when only partial elements of the feedback set are available; that is, the same controller can still stabilize the set dynamics around the goal set with the compressive feedback. The controller is applied to visual servoing by considering images as sets. In this way, image processing for feature extraction is not required, which is an essential process in conventional servo methods. Moreover, the compressive feedback can reduce the size of the feedback image. It is important when the sampling is time consuming, such as imaging using atomic force microscopy (AFM). The visual servoing formulation of the controller is further applied to the motion control in nanomanipulations. Simulation results suggest good performance of the controller. The framework proposed in this paper can be extended to other systems where the signals can be represented as sets.

We give a short proof of a result of G. Paouris on the tail behaviour of the Euclidean norm $|X|$ of an isotropic log-concave random vector $X\in\mathbb{R}^n$, stating that for every $t\geq 1$, $P(|X|\geq ct\sqrt n)\leq \exp(-t\sqrt n)$. More precisely we show that for any log-concave random vector $X$ and any $p\geq 1$, $(\mathbb{E}|X|^p)^{1/p}\sim \mathbb{E}|X| + \sup_{z\in S^{n-1}}(\mathbb{E}|\langle z,X\rangle|^p)^{1/p}$.

You might be wondering why this isotropic log-concave random vector is of importance to compressive sensing? Reading Geometric properties of random matrices with independent log-concave rows/columns by Radosław Adamczak might help.

Finally, we have a presentation: ACCELERATION OF HIGH ANGULAR AND SPATIAL RESOLUTION DIFFUSION IMAGING USING COMPRESSED SENSING by Merry Mani, Mathews Jacob, Arnaud Guidon, Chunlei Liu, Allen Song, Vincent Magnotta, Jianhui Zhong.

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
