Damaris tells us that the tiles underneath STS-125 are OK. This gives NASA the signal that the Shuttle can go do some good work on the Hubble Space Telescope. Today, another telescope, Herschel, launches with a camera that will implement a Compressive Sensing encoding. Godspeed Herschel!
In other news, we have a presentation with some potential significance: QUAPO: Quantitative Analysis of Pooling in High-Throughput Drug Screening, a presentation by Raghu Kainkaryam (joint work with Anna Gilbert, Paul Shearer and Peter Woolf). For information, Raghu also has a review paper, Pooling in High-Throughput Drug Screening, with Peter Woolf, which you can get by kindly requesting a copy from him.
Finally, I spoke too fast yesterday, but this is now ready to go: Yin Zhang just released a reconstruction solver called YALL1, or Your ALgorithms for L1.
This Matlab code can currently be applied to the following six (6) L1-minimization problems:

(BP) min ||x||_1 s.t. Ax = b
(L1/L1) min ||x||_1 + (1/ν)||Ax - b||_1
(L1/L2) min ||x||_1 + (1/(2ρ))||Ax - b||_2^2
(BP+) min ||x||_1 s.t. Ax = b, x ≥ 0
(L1/L1+) min ||x||_1 + (1/ν)||Ax - b||_1 s.t. x ≥ 0
(L1/L2+) min ||x||_1 + (1/(2ρ))||Ax - b||_2^2 s.t. x ≥ 0
where A is m by n with m less than n, and the solution x is supposed to be (approximately) sparse. The data and signal, (A, b, x), can be real or complex. Further restrictions on A may apply.
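To make the basic (BP) model concrete, here is a minimal sketch in Python using CVXPY as a generic stand-in solver; YALL1 itself is a Matlab package and its calling interface is not reproduced here, and the sizes and data below are illustrative only:

```python
import numpy as np
import cvxpy as cp

# Illustrative sizes: A is m x n with m < n, x_true is k-sparse.
rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# (BP): minimize ||x||_1 subject to Ax = b.
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == b]).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```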
As stated in the user's guide,
YALL1 version beta-3 utilizes a single and simple algorithm to solve all six models (1)-(6), hence a one-for-six algorithm. It is derived from the classic approach of Alternating Direction Method, or ADM, that minimizes augmented Lagrangian functions through an alternating minimization scheme and updates the multiplier after each sweep. The convergence of such algorithms has been well analyzed in the literature (see [2], for example, and the references thereof). Such algorithms can also be characterized as first-order primal-dual algorithms because they explicitly update both primal and dual variables at every iteration.
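For readers who want to see what such an alternating direction scheme looks like, here is a textbook-style ADM sketch for the (BP) model alone, in Python/NumPy. To be clear, this is not YALL1's one-for-six algorithm, just the generic pattern the guide describes: alternate two easy minimizations, then update the multiplier; the parameter rho and iteration count are illustrative assumptions.

```python
import numpy as np

def admm_bp(A, b, rho=1.0, iters=500):
    """Alternating direction sketch for (BP): min ||x||_1 s.t. Ax = b."""
    m, n = A.shape
    proj = A.T @ np.linalg.inv(A @ A.T)   # assumes m < n and A has full row rank
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        v = z - u
        x = v - proj @ (A @ v - b)        # x-step: project onto {x : Ax = b}
        w = x + u                         # z-step: soft-thresholding,
        z = np.sign(w) * np.maximum(np.abs(w) - 1.0 / rho, 0.0)  # prox of ||.||_1
        u += x - z                        # multiplier update after each sweep
    return z
```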
For those of you whose first language is not English, nor Texan for that matter, "y'all" is a typical Southern contraction of "you all". For instance, instead of saying "let's get in the truck, we're going fishing", some people might say "y'all get in the truck, we're going fishing", but it can be made more ambiguous in "All y'all get in the truck, we're going fishing". It takes a while to get used to that one because, as any reasonable person would argue, there is absolutely no good reason to put an "all" in front of "y'all" since there is already an "all" in "y'all".
Giuseppe seems to have reached the "I-can't-keep-up" stage when it comes to following news in Compressive Sensing. What about you? Here is the poll:
For those of you interested in answering but who cannot see the poll, you may want to check the site directly.
Oopsie, among the presentations at the Sparsity in Machine Learning meeting I forgot to mention the online lecture on Latent Variable Sparse Bayesian Models by David Wipf.
ADMiRA: Atomic Decomposition for Minimum Rank Approximation by Kiryung Lee and Yoram Bresler. The abstract reads:
Recht, Fazel, and Parrilo provided an analogy between rank minimization and ℓ0-norm minimization. Subject to the rank restricted isometry property, nuclear norm minimization is a guaranteed algorithm for rank minimization. The resulting semidefinite formulation is a convex problem but in practice the algorithms for it do not scale well to large instances. Instead, we explore missing terms in the analogy and propose a new algorithm which is computationally efficient and also has a performance guarantee. The algorithm is based on the atomic decomposition of the matrix variable and extends the idea in the CoSaMP algorithm for ℓ0-norm minimization. Combined with the recent fast low rank approximation of matrices based on randomization, the proposed algorithm can efficiently handle large scale rank minimization problems.
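ADMiRA's actual atom-selection steps are in the paper; as a much simpler illustration of the same "hard-thresholding on rank" idea, here is a projected-gradient sketch for affine rank minimization in Python. The measurement representation, step size, and iteration count are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def rank_k_project(X, k):
    # Truncated SVD: the best rank-k approximation of X.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

def low_rank_recover(A_list, b, shape, k, iters=300):
    # Projected gradient on ||A(X) - b||^2 with a rank-k projection each step,
    # where A(X)_i = <A_i, X> for measurement matrices A_i.
    X = np.zeros(shape)
    eta = 1.0 / len(A_list)                               # illustrative step size
    for _ in range(iters):
        r = np.array([np.vdot(Ai, X) for Ai in A_list]) - b   # residual A(X) - b
        G = sum(ri * Ai for ri, Ai in zip(r, A_list))         # adjoint A*(r)
        X = rank_k_project(X - eta * G, k)
    return X
```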
Wiring Diagnostics via ℓ1-Regularized Least Squares by Stefan Schuet. The abstract reads:
A new method for detecting and locating wiring damage using time domain reflectometry is presented. This method employs existing ℓ1 regularization techniques from convex optimization and compressed sensing to exploit sparsity in the distribution of faults along the length of a wire, while further generalizing commonly used fault detection techniques based on correlation and peak detection. The method's effectiveness is demonstrated using a simulated example, and it is shown how Monte Carlo techniques are used to tune it to achieve specific detection goals, like a certain false positive error rate. In addition, this method is easily implemented by adapting readily available optimization algorithms to quickly solve large, high-resolution versions of this estimation problem. Finally, the technique is applied to a real data set, which reveals its impressive ability to identify a subtle type of chafing damage on real wire.
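The core estimator here is an ℓ1-regularized least-squares (lasso-type) fit. A minimal Python/CVXPY sketch of that estimator follows; A (the model mapping sparse fault reflection coefficients along the wire to the measured response), b, and the weight lam are placeholders rather than the paper's actual TDR model:

```python
import cvxpy as cp

def sparse_fault_estimate(A, b, lam):
    # l1-regularized least squares: sparse fault profile x along the wire.
    x = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b) + lam * cp.norm1(x))).solve()
    return x.value
```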
Empirical Evaluation of Two Deterministic Sparse Fourier Transforms by Mark Iwen. The abstract reads:
This paper empirically evaluates a recently proposed Deterministic Sparse Fourier Transform algorithm (hereafter called DSFT) for the first time. Our experiments indicate that DSFT is capable of guaranteed general frequency-sparse signal recovery using sub-Nyquist sampling for realistic bandwidth values. Furthermore, we show that both variants of DSFT have fast reconstruction runtimes. In fact, the sublinear-time DSFT variant is shown to be faster than a traditional Fast Fourier Transform (FFT) for highly-sparse wideband signals.
I believe that the sublinear DSFT is the one featured in the Ann Arbor FFT.
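The DSFT implementations themselves are in the paper; the sketch below only builds the kind of frequency-sparse test signal such evaluations use and runs the full-FFT baseline that the DSFT variants are timed against (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 2**16, 8                            # bandwidth and sparsity (illustrative)
freqs = rng.choice(N, k, replace=False)    # k active frequencies
coefs = rng.standard_normal(k) + 1j * rng.standard_normal(k)
t = np.arange(N)
x = sum(c * np.exp(2j * np.pi * f * t / N) for c, f in zip(coefs, freqs))

X = np.fft.fft(x) / N                      # full-FFT baseline
found = np.argsort(np.abs(X))[-k:]         # k dominant frequency bins
print(sorted(found) == sorted(freqs))      # exact recovery for this toy signal
```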
Joint-sparse recovery from multiple measurements by Ewout van den Berg and Michael Friedlander. The abstract reads:
The joint-sparse recovery problem aims to recover, from sets of compressed measurements, unknown sparse matrices with nonzero entries restricted to a subset of rows. This is an extension of the single-measurement-vector (SMV) problem widely studied in compressed sensing. We analyze the recovery properties for two types of recovery algorithms. First, we show that recovery using sum-of-norm minimization cannot exceed the uniform recovery rate of sequential SMV using ℓ1 minimization, and that there are problems that can be solved with one approach but not with the other. Second, we analyze the performance of the ReMBo algorithm [M. Mishali and Y. Eldar, IEEE Trans. Sig. Proc., 56 (2008)] in combination with ℓ1 minimization, and show how recovery improves as more measurements are taken. From this analysis it follows that having more measurements than number of nonzero rows does not improve the potential theoretical recovery rate.
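Here is a minimal Python/CVXPY sketch of the sum-of-norms recovery the paper analyzes: minimize the sum of row ℓ2-norms of X subject to AX = B, which favors solutions whose nonzeros share a common row support. All sizes and data are illustrative:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, r, k = 30, 80, 4, 6                  # r measurement vectors, k nonzero rows
A = rng.standard_normal((m, n))
X_true = np.zeros((n, r))
X_true[rng.choice(n, k, replace=False)] = rng.standard_normal((k, r))
B = A @ X_true                             # multiple measurement vectors

X = cp.Variable((n, r))
cp.Problem(cp.Minimize(cp.sum(cp.norm(X, 2, axis=1))), [A @ X == B]).solve()
print("recovery error:", np.linalg.norm(X.value - X_true))
```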
Matt Herman and Thomas Strohmer produced an important revision to Theorem 2 in General Deviants: An Analysis of Perturbations in Compressed Sensing.
Stefano Marchesini let me know about this meeting on the island of Porquerolles (the program is here). What's up with organizing workshops on islands in France? Are they scared the folks will run away once they get to the meeting? :-)
There is a PhD position at Aalborg University in Denmark that requires some knowledge of compressive sensing. The title of the research is "Sparse Sampling Techniques for Multi-Mode Receivers".
Finally, Yi Ma will give a talk at Tsinghua today and the same one in Paris on the 20th (thanks, Laurent!).
Credit: NASA. The external tank of the Space Shuttle Atlantis on STS-125 slowly reentering the atmosphere.
A follow-up on Southern (not necessarily Texan) grammar:
ReplyDelete"y'all" is the singular
"all y'all" is the plural (obviously!)
Yes, I forgot that one :-)
Thanks James,
Igor.