How do you know compressed sensing is becoming mainstream? Just check the comments made about this video.
When I look at the log of people coming to the site, I am always amazed at the variety of locations from which visitors clicked. Here is the one for last year.
More than 89,000 visits were made to the site this past year. That is about half of the total number of hits recorded since a counter was installed on April 4th, 2007.
Hosein Mohimani just let me know that the SL0 code now includes the ability to deal with complex numbers. In a recent problem, I had to modify the older version so that it would handle complex numbers, so I am glad that there is now an official version of this outstanding solver that can do this. Let us recall that, unlike other reconstruction solvers, the Smoothed L0 (SL0) solver converges to the l_0 solution. Thanks Hosein!
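To see where the complex-valued generalization enters, here is a minimal numpy sketch of the SL0 iteration. This is not the official code; parameter values (sigma schedule, step size mu, inner iterations L) are arbitrary choices in the usual ranges.

```python
import numpy as np

def sl0(A, y, sigma_min=1e-4, sigma_decrease=0.6, mu=2.0, L=5):
    """Sketch of the Smoothed-L0 (SL0) iteration for complex-valued data.

    The l0 "norm" is approximated by n - sum_i exp(-|s_i|^2 / (2 sigma^2)),
    and sigma is decreased gradually (graduated non-convexity)."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ y                       # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(L):
            # Descent step on the smoothed surrogate; using |s_i|^2 rather
            # than s_i^2 is what makes the step valid for complex entries.
            delta = s * np.exp(-np.abs(s) ** 2 / (2 * sigma ** 2))
            s = s - mu * delta
            # Project back onto the feasible set {s : A s = y}.
            s = s - A_pinv @ (A @ s - y)
        sigma *= sigma_decrease
    return s

# Demo: recover a complex 4-sparse vector from 32 random complex measurements.
rng = np.random.default_rng(0)
n, m, k = 64, 32, 4
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
x = np.zeros(n, dtype=complex)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
x_hat = sl0(A, A @ x)
```

The only changes relative to a real-valued implementation are the conjugate-aware pseudoinverse and the use of |s_i|^2 in the Gaussian weight.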
I found the following papers relating to rank minimization:
Reweighted Nuclear Norm Minimization with Application to System Identification by Karthik Mohan and Maryam Fazel. The abstract reads:
The matrix rank minimization problem consists of finding a matrix of minimum rank that satisfies given convex constraints. It is NP-hard in general and has applications in control, system identification, and machine learning. Reweighted trace minimization has been considered as an iterative heuristic for this problem. In this paper, we analyze the convergence of this iterative heuristic, showing that the difference between successive iterates tends to zero. Then, after reformulating the heuristic as reweighted nuclear norm minimization, we propose an efficient gradient-based implementation that takes advantage of the new formulation and opens the way to solving large-scale problems. We apply this algorithm to the problem of low-order system identification from input-output data. Numerical examples demonstrate that the reweighted nuclear norm minimization makes model order selection easier and results in lower order models compared to nuclear norm minimization without weights.
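The reweighting idea is easiest to see in the simplest proximal setting. The sketch below is not the authors' gradient-based system-identification algorithm: it just denoises a matrix by repeatedly soft-thresholding singular values with weights 1/(sigma + eps), so large singular values are penalized less on each pass. With singular values sorted in decreasing order these weights are non-decreasing, which keeps each weighted shrinkage an exact proximal step.

```python
import numpy as np

def weighted_svt(M, thresholds):
    """Shrink the singular values of M by per-value thresholds, given in the
    same decreasing-sigma order that the SVD returns."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - thresholds, 0.0)
    return U @ (s_shrunk[:, None] * Vt), s_shrunk

def reweighted_nuclear_denoise(M, lam=1.0, eps=1e-2, n_iters=5):
    # First pass: ordinary nuclear-norm shrinkage (all weights equal to 1).
    X, s = weighted_svt(M, lam * np.ones(min(M.shape)))
    for _ in range(n_iters):
        w = 1.0 / (s + eps)      # small singular values -> large penalty
        X, s = weighted_svt(M, lam * w)
    return X

# Demo: a rank-2 matrix observed under small dense noise.
rng = np.random.default_rng(0)
n = 50
L0 = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))
M = L0 + 0.1 * rng.standard_normal((n, n))
X = reweighted_nuclear_denoise(M)
```

After one or two reweighted passes the small (noise) singular values are driven to zero while the dominant ones are barely shrunk, which is exactly the model-order-selection benefit the abstract describes.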
A Nullspace Analysis of the Nuclear Norm Heuristic for Rank Minimization by Krishnamurthy Dvijotham, Maryam Fazel. The abstract reads:
The problem of minimizing the rank of a matrix subject to linear equality constraints arises in applications in machine learning, dimensionality reduction, and control theory, and is known to be NP-hard. A popular heuristic minimizes the nuclear norm (sum of the singular values) of the matrix instead of the rank, and was recently shown to give an exact solution in several scenarios. In this paper, we present a new analysis for this heuristic based on a property of the nullspace of the operator defining the constraints, called the spherical section property. We give conditions for the exact recovery of all matrices up to a certain rank, and show that these conditions hold with high probability for operators generated from random Gaussian ensembles. Our analysis provides simpler proofs than existing isometry-based methods, as well as robust recovery results when the matrix is not exactly low-rank.
I was not much aware of the Rank Minimization Approach to Inpainting work, but this paper provided some background: A Rank Minimization Approach to Video Inpainting by Tao Ding, Mario Sznaier and Octavia Camps. The abstract reads:
This paper addresses the problem of video inpainting, that is, seamlessly reconstructing missing portions in a set of video frames. We propose to solve this problem proceeding as follows: (i) finding a set of descriptors that encapsulate the information necessary to reconstruct a frame, (ii) finding an optimal estimate of the value of these descriptors for the missing/corrupted frames, and (iii) using the estimated values to reconstruct the frames. The main result of the paper shows that the optimal descriptor estimates can be efficiently obtained by minimizing the rank of a matrix directly constructed from the available data, leading to a simple, computationally attractive, dynamic inpainting algorithm that optimizes the use of spatio-temporal information. Moreover, contrary to most currently available techniques, the method can handle non-periodic target motions, non-stationary backgrounds and moving cameras. These results are illustrated with several examples, including reconstructing dynamic textures and object disocclusion in cases involving both moving targets and cameras.
Still in the rank minimization business, we have: Stable Principal Component Pursuit by Zihan Zhou, Xiaodong Li, John Wright, Emmanuel Candes, and Yi Ma. The abstract reads:
In this paper, we study the problem of recovering a low-rank matrix (the principal components) from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors. Recently, it has been shown that a convex program, named Principal Component Pursuit (PCP), can recover the low-rank matrix when the data matrix is corrupted by gross sparse errors. We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entry-wise noise and robust to gross sparse errors. More precisely, our result shows that the proposed convex program recovers the low-rank matrix even though a positive fraction of its entries are arbitrarily corrupted, with an error bound proportional to the noise level. We present simulation results to support our result and demonstrate that the new convex program accurately recovers the principal components (the low-rank matrix) under quite broad conditions. To our knowledge, this is the first result that shows the classical Principal Component Analysis (PCA), optimal for small i.i.d. noise, can be made robust to gross sparse errors; or the first that shows the newly proposed PCP can be made stable to small entry-wise perturbations.

and Dense Error Correction for Low-Rank Matrices via Principal Component Pursuit by Arvind Ganesh, John Wright, Xiaodong Li, Emmanuel Candes, and Yi Ma. The abstract reads:
We consider the problem of recovering a low-rank matrix when some of its entries, whose locations are not known a priori, are corrupted by errors of arbitrarily large magnitude. It has recently been shown that this problem can be solved efficiently and effectively by a convex program named Principal Component Pursuit (PCP), provided that the fraction of corrupted entries and the rank of the matrix are both sufficiently small. In this paper, we extend that result to show that the same convex program, with a slightly improved weighting parameter, exactly recovers the low-rank matrix even if "almost all" of its entries are arbitrarily corrupted, provided the signs of the errors are random. We corroborate our result with simulations on randomly generated matrices and errors.
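For intuition, here is a minimal numpy sketch of Principal Component Pursuit solved with the standard inexact augmented-Lagrangian iteration: alternating singular-value thresholding for the low-rank part and entry-wise soft thresholding for the sparse part. The parameter choices (lambda = 1/sqrt(max(m, n)), the mu heuristic) follow common practice; this is an illustration, not the authors' code.

```python
import numpy as np

def soft(X, tau):
    """Entry-wise soft thresholding (prox of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def pcp(M, lam=None, tol=1e-7, max_iter=500):
    """Split M into low-rank L plus sparse S via inexact ALM."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(M).sum())   # common step-size heuristic
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                   # dual variable
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft(M - L + Y / mu, lam / mu)
        R = M - L - S
        Y = Y + mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S

# Demo: rank-2 matrix with roughly 5% of its entries grossly corrupted.
rng = np.random.default_rng(0)
n = 80
L0 = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))
S0 = np.zeros((n, n))
mask = rng.random((n, n)) < 0.05
S0[mask] = 10.0 * rng.standard_normal(int(mask.sum()))
L_hat, S_hat = pcp(L0 + S0)
```

Even though the corrupted entries are an order of magnitude larger than the typical matrix entry, the split recovers both components, which is the "robust PCA" behavior the two abstracts above analyze.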
RASL: Robust Alignment by Sparse and Low-Rank Decomposition for Linearly Correlated Images, by Yigang Peng, Arvind Ganesh, John Wright, and Yi Ma,
This paper studies the problem of simultaneously aligning a batch of linearly correlated images despite gross corruption (such as occlusion). Our method seeks an optimal set of image domain transformations such that the matrix of transformed images can be decomposed as the sum of a sparse matrix of errors and a low-rank matrix of recovered aligned images. We reduce this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of ℓ1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence. We verify the efficacy of the proposed robust alignment algorithm with extensive experiments with both controlled and uncontrolled real data, demonstrating higher accuracy and efficiency than existing methods over a wide range of realistic misalignments and corruptions.The project website is here.
Along these pages, I found that another course features this blog as a source. Woohoo!

- EE546/STAT593C - Sparse Representations: Theory, Algorithms, and Applications by Maryam Fazel and Marina Meila
Coming back to compressed sensing, here is an interesting experiment: Coherent Single-Detector Imaging System by Maytee Zambrano-Nunez, Edwin A. Marengo, and Jonathan M. Fisher. The abstract reads:
A hybrid opto-microelectromechanical, coherent laser light, single-detector imaging system is demonstrated that applies compressive sensing algorithms for computational imaging of wavefield intensity from a small number of projective measurements of the field. The projective measurements are implemented using spatial light modulators of the digital micromirror device (DMD) family, followed by a geometrical optics-based image casting system to capture the data using a single photodetector. The reconstruction process is based on the new field of compressive sensing which allows the imaging of the main features of the objects under illumination with much less data than a typical CCD camera. The present system expands the scope of single-detector imaging systems based on compressive sensing from the incoherent light regime, which has been the past focus, to the coherent light regime which is key in many biomedical and Homeland security applications.
What is interesting is that while the authors acknowledge that they are dealing with coherent light, they nonetheless rely on the incoherent framework to assume a linear relationship between the input and output intensities of the set-up. The results seem to show that this is OK.
Here is a paper that mirrors another one featured earlier: Compressive Sampling for Streaming Signals with Sparse Frequency Content by Petros Boufounos and M. Salman Asif. The abstract reads:
Compressive sampling (CS) has emerged as a significant signal processing framework to acquire and reconstruct sparse signals at rates significantly below the Nyquist rate. However, most of the CS development to date has focused on finite-length signals and representations. In this paper we discuss a streaming CS framework and greedy reconstruction algorithm, the Streaming Greedy Pursuit (SGP), to reconstruct signals with sparse frequency content. Our proposed sampling framework and the SGP are explicitly intended for streaming applications and signals of unknown length. The measurement framework we propose is designed to be causal and implementable using existing hardware architectures. Furthermore, our reconstruction algorithm provides specific computational guarantees, which makes it appropriate for real-time system implementations. Our experimental results on very long signals demonstrate the good performance of the SGP and validate our approach.
Finally, you may recall the Cross-and-Bouquet approach, where one uses [A I] to decode a sparse signal subject to some dense error. Well, it now looks like encoding the signal with [A I] is also becoming useful for handling gross errors, as shown in:
On the systematic measurement matrix for compressed sensing in the presence of gross errors by Zhi Li, Feng Wu and John Wright. The abstract reads:
Inspired by syndrome source coding using linear error-correcting codes, we explore a new form of measurement matrix for compressed sensing. The proposed matrix is constructed in the systematic form [A I], where A is a randomly generated submatrix with i.i.d. Gaussian entries, and I is the identity matrix. In the noiseless setting, this systematic construction retains recovery properties similar to those of the conventional Gaussian ensemble. However, in the noisy setting with gross errors of arbitrary magnitude, where the Gaussian ensemble fails catastrophically, the systematic construction displays strong stability. In this paper, we prove its stable reconstruction property. We further show its l_1-norm sparsity recovery property by proving its restricted isometry property (RIP). We also demonstrate how the systematic matrix can be used to design a family of lossy-to-lossless compressed sensing schemes where the number of measurements trades off the reconstruction distortions.
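To see the systematic construction in action, here is a small sketch, assuming scipy is available: l1-norm recovery with the measurement matrix [A I], cast as a linear program via the standard split s = u - v with u, v >= 0. The dimensions and sparsity level are arbitrary choices for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """min ||s||_1 subject to Phi s = y, as an LP with s = u - v, u, v >= 0."""
    m, n = Phi.shape
    c = np.ones(2 * n)                  # minimize sum(u) + sum(v) = ||s||_1
    A_eq = np.hstack([Phi, -Phi])       # Phi u - Phi v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

# Systematic measurement matrix [A I]: a Gaussian block next to the identity.
rng = np.random.default_rng(0)
m, n = 40, 80
Phi = np.hstack([rng.standard_normal((m, n)) / np.sqrt(m), np.eye(m)])

# A sparse vector whose support may straddle both blocks.
s = np.zeros(n + m)
s[rng.choice(n + m, size=4, replace=False)] = rng.standard_normal(4)
s_hat = basis_pursuit(Phi, Phi @ s)
```

In this noiseless toy case the systematic matrix behaves like a plain Gaussian ensemble, consistent with the abstract's claim; the stability advantage shows up when gross errors are added to the measurements.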
The proof is here and the attendant slides are here. The next question now is: how do we design hardware that performs this type of encoding, when one of the points of compressive sensing was to remove the I part in the first place! :)
The following paper appeared on my radar screen but there is no link to the original paper:
Perfusion measurements by micro-CT using prior image constrained compressed sensing (PICCS): initial phantom results by Nett BE, Brauweiler R, Kalender W, Rowley H, and Chen GH. The abstract reads:
Micro-CT scanning has become an accepted standard for anatomical imaging in small animal disease and genome mutation models. Concurrently, perfusion imaging via tracking contrast dynamics after injection of an iodinated contrast agent is a well-established tool for clinical CT scanners. However, perfusion imaging is not yet commercially available on the micro-CT platform due to limitations in both radiation dose and temporal resolution. Recent hardware developments in micro-CT scanners enable continuous imaging of a given volume through the use of a slip-ring gantry. Now that dynamic CT imaging is feasible, data may be acquired to measure tissue perfusion using a micro-CT scanner (CT Imaging, Erlangen, Germany). However, rapid imaging using micro-CT scanners leads to high image noise in individual time frames. Using the standard filtered backprojection (FBP) image reconstruction, images are prohibitively noisy for calculation of voxel-by-voxel perfusion maps. In this study, we apply prior image constrained compressed sensing (PICCS) to reconstruct images with significantly lower noise variance. In perfusion phantom experiments performed on a micro-CT scanner, the PICCS reconstruction enabled a reduction to 1/16 of the noise variance of standard FBP reconstruction, without compromising the spatial or temporal resolution. This enables a significant increase in dose efficiency, and thus, significantly less exposure time is needed to acquire images amenable to perfusion processing. This reduction in required irradiation time enables voxel-by-voxel perfusion maps to be generated on micro-CT scanners. Sample perfusion maps using a deconvolution-based perfusion analysis are included to demonstrate the improvement in image quality using the PICCS algorithm.
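The PICCS objective itself is easy to write down: it augments the usual sparsity penalty with an l1 penalty on the difference from a prior image. Below is a deliberately simple 1D sketch, not the authors' reconstruction code: a smoothed version of a PICCS-style objective minimized by plain gradient descent, with the weighting alpha, the data-fit weight lam and the smoothing eps all chosen arbitrarily for illustration.

```python
import numpy as np

def piccs_toy(A, y, x_prior, alpha=0.5, lam=50.0, eps=1e-4,
              step=1e-3, n_iters=3000):
    """Gradient descent on a smoothed PICCS-style objective:
        alpha * ||x - x_prior||_1 + (1 - alpha) * ||D x||_1
        + (lam / 2) * ||A x - y||^2,
    with |t| smoothed as sqrt(t^2 + eps). Illustration only."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)           # finite-difference (TV) operator

    def objective(x):
        return (alpha * np.sum(np.sqrt((x - x_prior) ** 2 + eps))
                + (1 - alpha) * np.sum(np.sqrt((D @ x) ** 2 + eps))
                + 0.5 * lam * np.sum((A @ x - y) ** 2))

    def grad(x):
        d = x - x_prior
        t = D @ x
        return (alpha * d / np.sqrt(d ** 2 + eps)
                + (1 - alpha) * D.T @ (t / np.sqrt(t ** 2 + eps))
                + lam * A.T @ (A @ x - y))

    x = x_prior.copy()
    for _ in range(n_iters):
        x = x - step * grad(x)
    return x, objective

# Demo: a piecewise-constant signal, a slightly biased prior image, and far
# fewer random measurements than unknowns.
rng = np.random.default_rng(0)
n, m = 64, 20
x_true = np.concatenate([np.zeros(32), np.ones(32)])
x_prior = x_true + 0.3                       # prior image with a constant bias
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_rec, f = piccs_toy(A, y, x_prior)
```

The point of the prior term is visible even in this toy: the iteration starts from the prior image, and the few measurements only have to correct its bias rather than rebuild the signal from scratch, which is why the paper can trade so much noise variance (or dose) away.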