Thursday, December 31, 2009
Thank you !
You know what they say about traditions in Aggieland: "Do something twice and it's a tradition." Mirroring last year's Thank You post, here is this year's. First, the readership increased in 2009 and mostly shifted from direct hits on the website to feedreaders.
The tone of the blog has probably changed a little as I mostly feature new preprints without discussing them. As it stands, I try to balance the need for a place where the latest and greatest is shared rapidly throughout the community against my own need to digest all this information by figuring out what is new and what is not. In that digestion process, y'all have been amazingly kind in answering my dumb questions. I framed these into Q&As and posted them on the blog; here is the list of the people who kindly obliged in this exercise:
- Laurent Jacques, Yves Wiaux and Pierre Vandergheynst, in CS: SparSA, Q/A, CS and Random Arrays, KF for reconstruction, Inferring Rankings Under Constrained Sensing, Post-silicon timing, a course and SSP09
- Jared Tanner and Remi Gribonval in CS: RIP vs NSP, Clarifications by Jared Tanner and Remi Gribonval
- Dror Baron in CS: A small discussion with Dror Baron
- Andres Asensio Ramos in CS: Compressive Sensing for Spectroscopy and Polarimetry, Q&A with Andres Asensio Ramos, Some news from HERSCHEL/PACS
- Dror Baron and Dongning Guo in CS: LP Decoding meets LP Decoding: A Connection between Channel Coding and Compressed Sensing, A Q&A with Dror Baron and Dongning Guo/ Two jobs
- Michael Lustig in CS: Combining Parallel Imaging and Compressed Sensing, A short discussion with Michael Lustig, and CS: Q/A, Accelerating SENSE Using CS, Regularized SENSE thru Bregman Iterations, SparseSENSE, Bregmanized Nonlocal Regularization for Deconvolution
- Waheed Bajwa in CS: Answering a DSN question, Distilled Sensing, Modified-CS: Modifying CS for Problems with Partially Known Support, L1 and Hamilton-Jacobi
Something new and unexpected happened this past year as well, and no, it wasn't the list of people interested in compressive sensing on Twitter, but rather the blog's ability to spark new lines of thought, as witnessed in the Ghost Imaging experiment of Ori Katz and Yaron Bromberg leading to the Compressive Ghost Imaging paper, or in the inclusion of some of these thoughts in the review entitled Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction? by Xiaochuan Pan, Emil Sidky and Michael Vannier. As it happens, Richard Gordon, the inventor of ART for CT scanners, responded to that review in this entry: MIA'09 and Richard Gordon's ART CT algorithm. I am sure this is the beginning of a long discussion. Thank you Xiaochuan Pan, Emil Sidky and Michael Vannier for getting the ball rolling. I hope the entries featured in the "These technologies do not exist" series will provide similar impetus.
Others have contributed to the blog and made it better as a result, thank you to them. Here is a list (in no particular order):
- Laurent Jacques in CS: Inline hologram reconstruction with sparsity constraints, Reading the Book of Memory, CS: LP Decoding meets LP Decoding: A Connection between Channel Coding and Compressed Sensing, A Q&A with Dror Baron and Dongning Guo/ Two jobs, CS: Wavefront Coding for Random Lens Imagers ?, CS: Sparse Reconstruction of Complex Signals in Compressed Sensing Terahertz Imaging, Compressive Sensing for Sparsely Excited Speech Signals
- Mário Figueiredo in CS: SparSA, Q/A, CS and Random Arrays, KF for reconstruction, Inferring Rankings Under Constrained Sensing, Post-silicon timing, a course and SSP09, CS: Robust PCA, C-SALSA is released, KAUST Jobs, CS: Some basic CS examples, C-SALSA, Distributed Sensing, Dictionary learning, Replica Method and Applications to CS, SLEP
- Angshul Majumdar in CS: Different codes, Recovery of clustered sparse signals, Theory of CS, Stagewise Polytope Faces Pursuit, 2-D tomography, CS:OLS/ROLS/StOLS, Group sparsity, Poisson CS, CS of parameterized shapes, IT meets free discontinuity, An homotopy algorithm for the Lasso, CS: MIA, More SpaRSA, Kalman Filter-CS, Compressed sensing and sparse recovery in exploration seismology, Solving Helmholtz, CS: LP Decoding meets LP Decoding: A Connection between Channel Coding and Compressed Sensing, A Q&A with Dror Baron and Dongning Guo/ Two jobs
- Jon Dattorro in CS: Convex iteration method, Statistical Estimation in High Dimension, Super-resolution far-field ghost imaging via compressive sampling
- Yaniv Erlich in CS: Compressed Genotyping, DNA Sudoku - Harnessing high throughput sequencing for multiplexed specimen analysis
- David Gross in CS: Recovering low-rank matrices from few coefficients in any basis, Probability of Unique Integer Solution to a System of Linear Equations
- Alejandro Weinstein in CS: Some basic CS examples, C-SALSA, Distributed Sensing, Dictionary learning, Replica Method and Applications to CS, SLEP
- Pierre Borgnat in CS: ExCoV: Expansion-Compression Variance-component Based Sparse-signal Reconstruction, CS approach to time-frequency localization
- Brien Housand in CS: Dequantizing Compressed Sensing with Non-Gaussian Constraints, A request for Collaboration.
- Noam Shental in CS: Rare-Allele Detection Using Compressed Se(que)nsing, Shifted Transversal Design smart-pooling for high coverage interactome mapping
- Yi Ma in CS: A Matrix Completion Entry
- Ben Moran in CS: This week's long entry
- Tanish Agrawal in CS: This week's long entry
- Wenlin Gong in CS: Recovering low-rank matrices from few coefficients in any basis, Probability of Unique Integer Solution to a System of Linear Equations
- Petros Boufounos in CS: Sparsity, Randomness and Compressed Sensing, Spatio‐Temporal Compressive Sensing
- Jean-Luc Starck in CS: Compressive Sensing for Spectroscopy and Polarimetry, Q&A with Andres Asensio Ramos, Some news from HERSCHEL/PACS
- Giuseppe Paleologo in CS: Food for thoughts for the week-end.
- Wotao Yin in CS: ISD, A New Compressed Sensing Algorithm via Iterative Support Detection (and NSP vs RIP)
- Lianlin Li in CS: The Compressed-Sampling Filter (CSF)
- Daryl Lim, in CS: This week's long entry
- Michael Friedlander in Spot: A Linear Operator Toolbox
- Ewout van der Berg in Spot: A Linear Operator Toolbox
- Lei Yu, in CS: LP Decoding meets LP Decoding: A Connection between Channel Coding and Compressed Sensing, A Q&A with Dror Baron and Dongning Guo/ Two jobs
- Dirk Lorenz in CS: LP Decoding meets LP Decoding: A Connection between Channel Coding and Compressed Sensing, A Q&A with Dror Baron and Dongning Guo/ Two jobs
- Alexandros Dimakis in CS: LP Decoding meets LP Decoding: A Connection between Channel Coding and Compressed Sensing, A Q&A with Dror Baron and Dongning Guo/ Two jobs
- Julien Mairal, in CS:SPAMS, SPArse Modeling Software, a dictionary building facility and more...
- Jerome Darbon in CS: Update on "A Simple Compressive Sensing Algorithm for Parallel Many-Core Architectures", Saturn Equinox
- Sean O'Connor in CS: Various thresholds for $\ell_1$-optimization in CS, Radial K-t FOCUSS MRI, BBCS algorithm, block-sparse CS, Non Convex CS, OCT
- Tomoya Sakai in CS: CSBP, CoSAMP, Freedom through Imperfection: Exploiting the flexibility offered by redundancy in signal processing, Lossy Speech Compression Via CS
- Pawan Baheti in CS: Irregular tiling in retina, Recognition using information-optimal adaptive feature-specific imaging, other CS hardware encoding.
- Matthew Herman in CS: An answer to a question, Statistical RIP and Semi-Circle Distribution of Incoherent Dictionaries, Learning with Structured Sparsity, CS: Compressive Sensing Hardware Update, Compressive estimation , General Deviants, a conference, Adaptive CS, 2 graduate asstships.
- Piotr Indyk in CS: Sequential Sparse Matching Pursuit, Compressed Blind De-convolution
- Martin Braun in CS: Spectral Estimation and Compressive Sensing GNU Radio release, BCS New Release, CS can make you rich
- Dirk Lorenz in CS: KF-CS code, Weighted l1 with prior, Comment on CGD, Cognitive Radio, a Postdoc, TSW-CS, Impact and Trends and a Video, CS: Different codes, Recovery of clustered sparse signals, Theory of CS, Stagewise Polytope Faces Pursuit, 2-D tomography
- Gabriel Peyré in CS: Compressive dual photography, Image reconstruction by deterministic CS, Nesterov algorithm, CS: Chambolle's Algorithm for the Resolution of Compressed Sensing with TV regularization
- Thomas Strohmer, CS: calendar, CS Block Map-LMS Adaptive Filter, Democracy in Action, and more, CS: CS-GPR prototype, CS-CT, SML presentations related to CS,
- Jong Chul Ye in CS: Various thresholds for $\ell_1$-optimization in CS, Radial K-t FOCUSS MRI, BBCS algorithm, block-sparse CS, Non Convex CS, OCT
- David Hammond in CS: Basis Pursuit DeQuantizer, BPDQ Toolbox now available.
- Weiyu Xu in CS: Analysis for Iterative Reweighted l_1, Projection proximal-point algorithm,Subsampling Algorithms, Imaging of moving targets with multi-static SAR
- Waheed Bajwa in CS: On the value of Distributed Sensor Networks, More on the Labview Reconstruction algorithm
- Andrew Yagle in CS: Matrix Completion with Noise, Computational Ghost Imaging and CS ?, On uncertainty principles in the finite dimensional setting
- Jarvis Haupt in CS: Answering a DSN question, Distilled Sensing, Modified-CS: Modifying CS for Problems with Partially Known Support, L1 and Hamilton-Jacobi, CS: Bayesian Compressive Sensing using Laplace Priors, Distilled Sensing
- Shiqian Ma in CS: A question in DSN, a talk, FPCA code, Near-Oracle Performance of Basis Pursuit Under Random Noise
- Derin Babacan in CS: Bayesian Compressive Sensing using Laplace Priors, Distilled Sensing
- Mark Iwen in CS: A Theoretical Analysis of Joint Manifolds and the release of the Ann Arbor Fast Fourier Transform
- Sina Jafarpour in CS: Construction of a Large Class of Deterministic Sensing Matrices that Satisfy a Statistical Isometry Property
- Moshe Mishali, CS: Xampling, Ghost imaging via compressive sampling, Low-rank Matrix Recovery, Volume Anomaly Detection
- Nicolas Thierry-Mieg, CS: Compressed Genotyping, DNA Sudoku - Harnessing high throughput sequencing for multiplexed specimen analysis
- Hadi Zayyani in CS: calendar, CS Block Map-LMS Adaptive Filter, Democracy in Action, and more.
- Andriyan Suksmono in CS: Some open questions, Talk slides, Tutorials, Split Bregman Methods, Different coded apertures
- Rahul Garg in CS: Question about CS and MPI and RFIDs, Matrix Completion, CS in Traffic, Generalizations of the Linearized Bregman Method
- Andrea Montanari in CS: Question about CS and MPI and RFIDs, Matrix Completion, CS in Traffic, Generalizations of the Linearized Bregman Method
- Leon Palafox in CS: Question about CS and MPI and RFIDs, Matrix Completion, CS in Traffic, Generalizations of the Linearized Bregman Method
- David Skinner in CS: Question about CS and MPI and RFIDs, Matrix Completion, CS in Traffic, Generalizations of the Linearized Bregman Method
- Alex Conrad in CS: Fourier-transform Ghost Imaging, Message Passing Algorithms for Compressed Sensing
- Andy in CS: Inline hologram reconstruction with sparsity constraints, Reading the Book of Memory
- David in CS: An announcement and a plea. Videos of CS particle Filter results
- Georgos Tzagkarakis in CS: YALL1, fitness function Eureqa, Data Driven CS, Matlab, Python, NIPS 2009 papers, Bayesian statistics in AI and Social Sciences
- Michael in Imaging with Nature (part 3)
- Sanjay in CS: On the value of Distributed Sensor Networks, More on the Labview Reconstruction algorithm
- KFC from the Arxiv blog in CS: Wavefront Coding for Random Lens Imagers ?
- An anonymous reader in CS: Questions on IST, Audio CS and another GPR
- Emmanuel Candes and Terry Tao in the Dec. '08 issue of the IEEE Information Theory Society Newsletter
- Ori Katz, Yaron Bromberg, and Yaron Silberberg in their Ghost Imaging paper and
- Wenlin Gong and Shensheng Han in their paper.
Finally, the Duke Compressive Sensing Workshop contributed to much of the interest in compressive sensing this year. Thank you to the organizers of the meeting: Larry Carin, Gregory Arnold, David Brady, Mauro Maggioni, Xiaobai Sun, and Rebecca Willett for making it happen and for putting all the talks on video. The stats of the Compressive Sensing Video page have seen a bump since the meeting.
Thank you also to Gabriel Peyré , Laurent Cohen and Frédéric Barbaresco for organizing MIA09.
A big kudos goes to Eva Dyer and Mark Davenport for having attended and regularly updated the Rice Compressive Sensing site.
Thank you, y'all.
Tuesday, December 29, 2009
CS: Xampling, Ghost imaging via compressive sampling, Low-rank Matrix Recovery, Volume Anomaly Detection
We have finished a paper draft of our implementation of the modulated wideband converter in hardware. As promised, I send you the online posting
Xampling: Analog to Digital at Sub-Nyquist Rates by Moshe Mishali, Yonina Eldar, Oleg Dounaevsky, Eli Shoshan. The abstract reads:
We present a sub-Nyquist analog-to-digital converter of wideband inputs. Our circuit realizes the recently proposed modulated wideband converter, which is a flexible platform for sampling signals according to their actual bandwidth occupation. The theoretical work enables, for example, a sub-Nyquist wideband receiver, which has no prior information on the transmitter carrier positions. Our design supports input signals with 2 GHz Nyquist rate and 120 MHz spectrum occupancy, with arbitrary transmission frequencies. The sampling rate is as low as 280 MHz. To the best of our knowledge, this is the first reported wideband hardware for sub-Nyquist conversion. Furthermore, the modular design is proven to compete with state-of-the-art Nyquist ADCs in terms of resolution bits and full-scale range. We describe the various circuit design considerations, with an emphasis on the nonordinary challenges the converter introduces: mixing a signal with a multiple set of sinusoids, rather than a single local oscillator, and generation of highly-transient periodic waveforms, with transient intervals on the order of the Nyquist rate. A series of hardware experiments validates the design and demonstrates sub-Nyquist sampling.
More about the circuit can be found here. This hardware is already listed in the compressive sensing hardware page. Thanks Moshiko !
Also on arXiv, here is: The effect of spatial transverse coherence property of a thermal source on Ghost imaging and Ghost imaging via compressive sampling by Wenlin Gong and Shensheng Han. The abstract reads:
Both ghost imaging (GI) and ghost imaging via compressive sampling (GICS) can nonlocally image an object. We report the influence of spatial transverse coherence property of a thermal source on GI and GICS and show that, using the same acquisition numbers, the signal-to-noise ratio (SNR) of images recovered by GI will be reduced while the quality of reconstructed images will be enhanced for GICS as the spatial transverse coherence lengths located on the object plane are decreased. Differences between GI and GICS, methods to further improve the quality and image extraction efficiency of GICS, and its potential applications are also discussed.
On the interwebs, there is also: Tight Oracle Bounds for Low-rank Matrix Recovery from a Minimal Number of Random Measurements by Emmanuel Candes and Yaniv Plan. The abstract reads:
This paper presents several novel theoretical results regarding the recovery of a low-rank matrix from just a few measurements consisting of linear combinations of the matrix entries. We show that properly constrained nuclear-norm minimization stably recovers a low-rank matrix from a constant number of noisy measurements per degree of freedom; this seems to be the first result of this nature. Further, the recovery error from noisy data is within a constant of three targets: 1) the minimax risk, 2) an ‘oracle’ error that would be available if the column space of the matrix were known, and 3) a more adaptive ‘oracle’ error which would be available with the knowledge of the column space corresponding to the part of the matrix that stands above the noise. Lastly, the error bounds regarding low-rank matrices are extended to provide an error bound when the matrix has full rank with decaying singular values. The analysis in this paper is based on the restricted isometry property (RIP) introduced in [6] for vectors, and in [22] for matrices.
and finally a class presentation entitled: Volume Anomaly Detection by Joohyun Kim and Sooel Son. Maybe they should have looked at the Robust PCA technique instead ?
Monday, December 28, 2009
CS: Are We Hitting a Ceiling ?
This is the time of the year where one reflects on past achievements. Here is one and a question:
The figure above, from Radu Berinde and Piotr Indyk's Sequential Sparse Matching Pursuit paper, shows (in rough numbers) how many seconds it takes different solvers to reconstruct a vector with one million components, most of them zeros:
Most new solvers (NESTA, APM, C-SALSA) seem to run at a speed similar to GPSR's, or maybe an order of magnitude faster, when it comes to reconstructing a sparse vector. We have gained three orders of magnitude over three years on the gold standard (?) of a one-million-pixel photograph. Are we now going to see only slight incremental improvements over these results in the coming years ? I also agree that not all applications look for the same figure of merit, which is why we ought to have a benchmarking capability along the lines of the problem sets in SPARCO.
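In case you have never peeked inside one of these solvers, here is a minimal sketch of iterative soft thresholding, the generic proximal-gradient step that GPSR-class codes build on. To be clear, this is not any of the packages named above, and the sizes and parameters below are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    """Entrywise soft thresholding, the proximal map of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5 ||A x - y||^2 + lam ||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy problem: a length-1000 vector with 10 nonzeros from 250 random measurements.
rng = np.random.default_rng(0)
n, m, k = 1000, 250, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = ista(A, A @ x_true, lam=1e-3)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```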
Saturday, December 26, 2009
Is Compressive Sensing a zero-th or a first order capability ?
After watching the inspiring video of Ramesh Raskar's Wishlist for Photography, I am wondering the following:
Is Compressive Sensing to be relegated to a first order sensing technique ?
In the presentation mentioned above, Ramesh shows off the BiDi Screen, A Thin, Depth-Sensing LCD for 3D Interaction using Light Fields (a paper by Matthew Hirsch, Douglas Lanman, Henry Holtzman, Ramesh Raskar), in which the LCD screen can recognize what happens in front of it. One of the ways they locate the hand interacting with the screen is through the use of a coded aperture, and they resort to a MURA configuration. As you know, a MURA configuration means that one can invert it with a simple linear transformation (a quick sketch of that linear decode appears right after the list below). Why do that when you could use a better coding relying on some Toeplitz coded aperture (as Roummel Marcia and Rebecca Willett do in Compressive Coded Aperture Superresolution Image Reconstruction; the slides are here) and obtain superresolution as shown here or 3D there ?
Because it works OK for the task at hand. We all know that coded aperture done with a compressive sensing twist would lead to superresolution and 3D capabilities but:
- the reconstruction would be much slower
- superresolution may not be needed
- there is a fear that a nonlinear reconstruction process would not fail gracefully.
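Here is the promised sketch of the linear decode. Caveat: it does not reproduce the actual MURA pattern (which is built from quadratic residues); a random binary mask plus exact Fourier-domain inversion stands in for it, and for a true MURA mask that inverse filter collapses to a single cross-correlation because the mask's periodic autocorrelation is a delta. All sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
scene = np.zeros((n, n))
scene[20:30, 12:40] = 1.0            # toy scene

# Stand-in for a MURA mask: a random 0/1 aperture.
mask = (rng.random((n, n)) < 0.5).astype(float)
M = np.fft.fft2(mask)

# A coded-aperture camera records the circular convolution of scene and mask.
recording = np.real(np.fft.ifft2(np.fft.fft2(scene) * M))

# Linear decoding: one filtering step in the Fourier domain. For a true MURA
# mask this reduces to a single cross-correlation with the decoding pattern.
decoded = np.real(np.fft.ifft2(
    np.fft.fft2(recording) * np.conj(M) / (np.abs(M) ** 2 + 1e-12)))
print("max reconstruction error:", np.abs(decoded - scene).max())
```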
I believe Richard Gordon mentioned nearly the same reasons in his recent paper (Gordon, R. (2010). Stop breast cancer now! Imagining imaging pathways towards search, destroy, cure and watchful waiting of premetastasis breast cancer) when explaining why ART never caught on compared to filtered back-projection reconstruction, a situation echoed by Xiaochuan Pan, Emil Sidky and Michael Vannier in the provocative title of their paper: Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?.
Maybe this is just true: maybe what we are looking for is simple enough that having an OK measurement is better than the best possible reconstruction from fewer measurements. Then again, if one wants to use the data for other purposes like labeling and so forth, compressive sensing is the only framework that provides some guarantee that you have acquired all the information, not just part of it. Is that zero-th or first order ? I am thinking about a round number, and you ?
If you watch the video all the way, Ramesh is looking forward to being able to label everything as well, can one do that with not enough information ?
Credit: Image taken from the BiDi Screen paper by Matthew Hirsch, Douglas Lanman, Henry Holtzman, Ramesh Raskar
Thursday, December 24, 2009
CS: Wishlist for Photography, Robust PCA code, Multiplicative noise removal, a Thesis, MRI, Sensor Networks, Bregman Methods, Template Matching
Today is Christmas Eve. Some of you are probably starting your gift-finding journey; for others, it's time to write a wishlist, and for them here is an inspiring video of Ramesh Raskar's Wishlist for Photography.
By the way, on my wishlist, I have this LCD screen that can recognise what happens in front of it. Am I gonna get it ?
It is pretty obvious that, as Ramesh says, one wants more out of our photographic experience. Another proof of that is the ad-hoc datamining study performed on what people liked about Avatar [Thanks Aleks]. Can compressive sensing be part of that wave ?
[ By the way, the image shown here from Ramesh's presentation is from this publication: Fernald, R.D. (2006). "Casting a genetic light on the evolution of eyes." Science, 313: 1914-1918.PubMed and PDF ]
Jalal Fadili has a new toolbox out:
Multiplicative noise removal by L1-data fidelity on frame coefficients: Matlab toolbox
The Ph.D. thesis of Wei Wang entitled Sparse signal recovery using sparse random projections is out. The abstract reads:
The problem of estimating a high-dimensional signal based on an incomplete set of noisy observations has broad applications. In remote sensing, network traffic measurement, and computational biology, the observation process makes it difficult or costly to obtain sample sizes larger than the ambient signal dimension. Signal recovery is in general intractable when the dimension of the signal is much larger than the number of observations. However, efficient recovery methods have been developed by imposing a sparsity constraint on the signal. There are different ways to impose sparsity, which has given rise to a diverse set of problems in sparse approximation, subset selection in regression, and graphical model selection.
This thesis makes several contributions. First, we examine the role of sparsity in the measurement matrix, representing the linear observation process through which we sample the signal. We develop a fast algorithm for approximation of compressible signals based on sparse random projections, where the signal is assumed to be well-approximated by a sparse vector in an orthonormal transform. We propose a novel distributed algorithm based on sparse random projections that enables refinable approximation in large-scale sensor networks. Furthermore, we analyze the information-theoretic limits of the sparse recovery problem, and study the effect of using dense versus sparse measurement matrices. Our analysis reveals that there is a fundamental limit on how sparse we can make the measurements before the number of observations required for recovery increases significantly. Finally, we develop a general framework for deriving information-theoretic lower bounds for sparse recovery. We use these methods to obtain sharp characterizations of the fundamental limits of sparse signal recovery and sparse graphical model selection.
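As an aside, the "sparse random projections" in the title are easy to experiment with. Below is a minimal sketch, assuming an Achlioptas-style sparse sign matrix as a stand-in for the thesis's construction (the sizes and sparsity factor s are illustrative): the projection touches only about a 1/s fraction of the entries yet roughly preserves norms.

```python
import numpy as np
from scipy import sparse

def sparse_projection(m, n, s, rng):
    """Achlioptas-style sparse random projection: each entry is
    +/- sqrt(s/m) with probability 1/(2s) each, and 0 otherwise."""
    P = sparse.random(m, n, density=1.0 / s, random_state=rng,
                      data_rvs=lambda size: rng.choice([-1.0, 1.0], size=size))
    return P * np.sqrt(s / m)

rng = np.random.default_rng(2)
m, n, s = 200, 5000, 10
Phi = sparse_projection(m, n, s, rng)
x = rng.standard_normal(n)
y = Phi @ x                          # cheap: only ~ m*n/s multiplications
print("nonzeros used:", Phi.nnz, "out of", m * n)
print("norm distortion:", abs(np.linalg.norm(y) / np.linalg.norm(x) - 1.0))
```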
We show how to sequentially optimize magnetic resonance imaging measurement designs over stacks of neighbouring image slices, by performing convex variational inference on a large scale non-Gaussian linear dynamical system, tracking dominating directions of posterior covariance without imposing any factorization constraints. Our approach can be scaled up to high-resolution images by reductions to numerical mathematics primitives and parallelization on several levels. In a first study, designs are found that improve significantly on others chosen independently for each slice or drawn at random.
Energy efficient information collection in wireless sensor networks using adaptive compressive sensing by Rajib Kumar Rana, Chun Tung Chou, Salil Kanhere, Nirupama Bulusu, Wen Hu. The abstract reads:
We consider the problem of using Wireless Sensor Networks (WSNs) to measure the temporal-spatial field of some scalar physical quantities. Our goal is to obtain a sufficiently accurate approximation of the temporal-spatial field with as little energy as possible. We propose an adaptive algorithm, based on the recently developed theory of adaptive compressive sensing, to collect information from WSNs in an energy efficient manner. The key idea of the algorithm is to perform "projections" iteratively to maximise the amount of information gain per energy expenditure. We prove that this maximisation problem is NP-hard and propose a number of heuristics to solve this problem. We evaluate the performance of our proposed algorithms using data from both simulation and an outdoor WSN testbed. The results show that our proposed algorithms are able to give a more accurate approximation of the temporal-spatial field for a given energy expenditure.
The Bregman Methods: Review and New Results by Wotao Yin. In the review, one can read the following at the very end:
More ... People use the words "Bregmanize" and "Bregmanized"
As for split Bregman, some people think it ought to have a different name (slide 41). From the Bregman website, we are told that:
The part below is two years old. A new page and software package is due middle of February, 2010.
While we are on the subject of Split Bregman, we have the latest from UCLA in the form of Template Matching via L1 Minimization and Its Application to Hyperspectral Target Detection by Zhaohui Guo and Stanley Osher. The abstract reads:
Detecting and identifying targets or objects that are present in hyperspectral ground images are of great interest. Applications include land and environmental monitoring, mining, military, civil search-and-rescue operations, and so on. We propose and analyze an extremely simple and efficient idea for template matching based on L1 minimization. The designed algorithm can be applied in hyperspectral target detection. Synthetic image data and real hyperspectral image (HSI) data are used to assess the performance. We demonstrate that this algorithm achieves excellent results with both high speed and accuracy by using Bregman iteration.
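The "extremely simple" l1 fit at the heart of such template matching can be shown in a few lines. To be clear, the sketch below is not the Guo-Osher algorithm (their solver uses Bregman iteration); it only illustrates the elementary building block, namely that the best scale c in min_c ||s - c t||_1 is a weighted median, so an l1 match score is cheap to compute per pixel. The toy spectra are made up.

```python
import numpy as np

def l1_match_score(s, t):
    """Best l1 fit of a scaled template t to a pixel spectrum s:
    min_c ||s - c t||_1 = min_c sum_i |t_i| |s_i/t_i - c|, whose minimizer
    is the weighted median of s_i/t_i with weights |t_i| (t nonzero)."""
    r = s / t
    order = np.argsort(r)
    w = np.abs(t)[order]
    c = r[order][np.searchsorted(np.cumsum(w), w.sum() / 2.0)]
    return np.abs(s - c * t).sum()

rng = np.random.default_rng(7)
bands = 50
template = rng.random(bands) + 0.1       # target spectrum, bounded away from 0
background = rng.random(bands) + 0.1     # a non-target pixel
target_pixel = 0.8 * template + 0.02 * rng.standard_normal(bands)
print("score at target pixel:    ", l1_match_score(target_pixel, template))
print("score at background pixel:", l1_match_score(background, template))
```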
Happy Holidays y'all.
Wednesday, December 23, 2009
FAQ for compressive sensing ?, Dark Matter in Neuroscience, power laws, MIA'09 redux, a Terry Tao Video, Orders of Magnitude 2010
Is it time for a FAQ in compressive sensing ? Have you had people come to you with questions about compressive sensing and wished there was a FAQ somewhere ? Here is my first throw:
What is compressive sensing to a linear algebra person ?
What is compressive sensing to an electrical engineer ?
What is compressive sensing to ....
How do you invert an underdetermined linear system in compressive sensing ?
Do you still have to sample at a high rate ?
How does one recognize that an under-determined linear system can be solved with the nonlinear solvers of compressive sensing ?
How do I recognize whether compressive sensing should be useful in my sensing system ?
How does the one pixel camera at Rice work ?
Is there any other hardware implementing compressive sensing ?
Please help me strengthen the list as I am sure you have been asked similar questions by friends, colleagues, students,....
The author of this small ($300) movie just got a deal from Hollywood for 30 million dollars.
How can that be ? If this is not an expression of the long tail argument, I don't know what is. I would put Arduino in the same category, and I sometimes think that some compressive sensing projects ought to follow the same route.
While we are on the subject of power laws and sparsity, back at MIA'09, David Field made the comment that we have a lots of neurons but that in effect, our brain is using very few of them. He called that problem the dark matter problem of neuroscience. I asked him offline if he had references on this and here is what he had to say:
There is one I wrote with Bruno Olshausen.
BA Olshausen, DJ Field. What is the other 85% of V1 doing? Problems in Systems Neuroscience, 2004.
This is on my website. A version was also published in Neural Computation.
http://redwood.psych.cornell.edu/
How close are we to understanding V1?
BA Olshausen, DJ Field - Neural Computation, 2005
This was extended by Shoham et al.
How silent is the brain: is there a “dark matter” problem in neuroscience?
S Shoham, DH O'Connor, R Segev - Journal of Comparative Physiology A: …, 2006
Thanks David !
I was also very much impressed at how one can compute intrinsic dimension and entropy for images (and much more really) as presented in this paper.
Chandler DM, and Field DJ. (2007). "Estimates of the Information Content and Dimensionality of Natural Scenes From Proximity Distributions." Journal of the Optical Society of America A, Vol. 24, Issue 4, pp. 922-941. [LINK] [PDF]
Other power laws and sparse sampling observed this week on the interwebs include:
And while we are on the subject of MIA'09, here are additional presentations now available:
- Pierre Vandergheynst, In Praise of Sparsity, Diverse and Sometimes Surprising Applications in Image Processing.
- Sandrine Anthoine, Content-based image retrieval using sparse multiscale patches
- Joachim Weickert, Image Compression with Partial Differential Equations
- Thomas Batard, Heat Equations on Vector Bundles : Application to Images Regularization
- Yannick Berthoumieu, Texture Discrimination Based Upon Stochastic Modeling
- Francis Bach, Non-local Sparse Models for Image Restoration
I did not see Pierre's presentation, but I remember him reminding us of the figure Ramesh Raskar gave (in his ECCV'08 presentation entitled Computational Photography: Epsilon to Coded Imaging) about the type of cameras that grew the most in recent years (no, it's not the cameras in cellphones!). Since those cameras are low resolution but plentiful, I wonder if a cheap compressive sensing project should be attempted around that platform. I put up the last slide of Pierre's presentation to give a sense of the amount of data we are about to deal with. Let us contrast this with some of the numbers that were large back in 2004 (also here). To make things simple to understand: in 2004, a movie took 10 times as much memory as a movie produced three years before (2001). In 2009, people are producing roughly 250 exabytes of images, or 5 million times the amount of pictures needed for a movie produced five years before.
While we are talking about videos, Terry Tao just made a new presentation on Compressed Sensing at University of Oregon. It is here and will be featured shortly in the Compressive Sensing Videos page.
As you all know, I have called Compressive Sensing the George Costanza 'Do the Opposite' sampling scheme. Well, since today is December 23rd, Happy Festivus everybody!
Saturday, December 19, 2009
CS: Robust PCA, C-SALSA is released, KAUST Jobs
Manureva, the subject of this beautiful song (written by Serge Gainsbourg), was the name of a boat that disappeared one day, never to be seen again. Jim Gray happens to be mentioned in today's breakthrough paper, since he advocated data-intensive computing as the fourth paradigm for scientific discovery. From this blog entry:
Amongst the many things that Jim talked about was the “Fourth Paradigm in Science”, the fact that scientific research has transitioned from “experimental” (thousands of years ago), to “theoretical” (few hundreds years ago), to “computational” (last few decades), to “data-intensive” (today).
The groundbreaking paper I am referring to is: Robust Principal Component Analysis? by Emmanuel Candes, Xiaodong Li, Yi Ma, and John Wright. The abstract reads:
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the l_1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
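For those who want to see the Principal Component Pursuit program in action before any code is released, here is a minimal sketch assuming the standard augmented-Lagrangian scheme with singular value thresholding and the usual parameter choices (lambda = 1/sqrt(max(m,n))); it is a prototype of the idea, not the authors' implementation.

```python
import numpy as np

def shrink(X, t):
    """Entrywise soft thresholding (the l1 proximal map)."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svt(X, t):
    """Singular value thresholding (the nuclear-norm proximal map)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def principal_component_pursuit(M, n_iter=200):
    """min ||L||_* + lam ||S||_1  s.t.  L + S = M, by an augmented-Lagrangian
    iteration with commonly used parameter choices."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(M).sum())
    Y = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S

# Toy check: a rank-5 matrix plus 5% gross corruptions.
rng = np.random.default_rng(3)
L0 = rng.standard_normal((80, 5)) @ rng.standard_normal((5, 80))
mask = rng.random((80, 80)) < 0.05
S0 = np.where(mask, 10 * rng.standard_normal((80, 80)), 0.0)
L_hat, S_hat = principal_component_pursuit(L0 + S0)
print("relative error on L:", np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))
```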
Wow. I wonder how this type of work will help in the dictionary learning (see the presentation Dictionary learning for sparse representations using L1 minimization by Rémi Gribonval at MIA'09) and the calibration business. There is obviously some overlap with the background subtraction work of Volkan Cevher, Aswin Sankaranarayanan, Marco Duarte, Dikpal Reddy, Richard Baraniuk, and Rama Chellappa in Compressive sensing for background subtraction, and now the question is, does this approach work well with compressed measurements ? I don't see why not, as the compressed measurements are linear. I look forward to an implementation of this robust PCA algorithm.
While we are on the implementation of an algorithm, Mario Figueiredo just let me know of the release of C-SALSA mentioned earlier:
Hi Igor,
We have finally released our MATLAB implementation of C-SALSA,
which may be of interest to the CS community:
http://cascais.lx.it.pt/~mafonso/salsa.html
We have also released long versions of the papers describing SALSA and C-SALSA:
http://arxiv.org/abs/0910.4887
http://arxiv.org/abs/0912.3481
Thanks Mario ! I need to update the big picture in compressive sensing.
Finally, there are faculty openings in Electrical Engineering at King Abdullah University of Science and Technology (KAUST). From the announcement:
KAUST invites applications for faculty positions in the area of Electrical Engineering. KAUST, located on the Red Sea coast of Saudi Arabia, is an international graduate-level research university dedicated to advancing science and technology through bold and collaborative research and to addressing challenges of regional and global significance, thereby serving the Kingdom, the region and the world. Newly opened in September 2009, KAUST is an independent and merit-based university and welcomes exceptional faculty, researchers and students from around the world. KAUST is committed to cutting-edge research in the globally significant areas of Energy, Water, and Food. In addition, KAUST emphasizes research on the discipline of Computational Science and Engineering serving as an enabling technology for all its research activities. KAUST invites applications for faculty position at all ranks in signal processing (with preference to bioinformatics, compressive sensing and/or image and video processing), information theory and coding (with preference to Genomics, and/or communication networks), Photonics and Optics (with preference to photonics materials and engineered photonic structures, metamaterials, plasmonics, integrated optics and optoelectronics, biophotonics, ultrafast photonics, and/or optical communications), and Electromagnetics (with preference to terahertz imaging, remote sensing, electromagnetic exploration, magneto-photonics, fundamentals of electromagnetic interaction, microwave photonics, and/or radiofrequency/microwave engineering). High priority will be given to the overall originality and promise of the candidate’s work rather than the candidate’s sub-area of specialization within Electrical Engineering.
The full announcement is here. It will be on the compressive sensing jobs page shortly.
Friday, December 18, 2009
Some presentations at MIA09
Some of the presentations of the MIA'09 meeting can be found below. I'll come back to the last post and some of these presentations later.
- Some Recent Advances in Image Representation for image segmentation, object class detection and image classification by Frederic Jurie
- Computational Information Geometry From Euclidean to flat Pythagorean geometries by Frank Nielsen
- Computation is the New Optics by Frédo Durand (joint work with Anat Levin, Bill Freeman, Peter Sand, Tim Cho, Ce Liu, Antonio Torralba, Ted Adelson, and others)
- Geometric Medians and Radar Target Detection by Guillaume Bouyt, Le Yang
- Periodic + Smooth Image Decomposition by Lionel Moisan
- Hypercomplex mathematical morphology by Jesús Angulo
- Multifractal stochastic processes to model images and textures by Pierre Chainais
- Wavelet-Based Statistical Estimators. Application to image restoration by Caroline Chaux
- Dictionary learning for sparse representations using L1 minimization by Rémi Gribonval, Karin Schnass
- Sparseness, invariance and independent coding in the visual system by David J. Field
- Proximal Splitting Methods in Signal Recovery by Patrick Louis Combettes
- Theory and algorithms for anisotropic finite elements with applications to images by Albert Cohen with Nira Dyn, Frederic Hecht and Jean-Marie Mirebeau.
Wednesday, December 16, 2009
MIA'09 and Richard Gordon's ART CT algorithm
During MIA09, I talked to some of you, and it may have looked like I was ranting about the need to be closer to the instrumentation. Then I received unexpected feedback from none other than Richard Gordon, one of the people who invented or devised ART for Computed Tomography. Dick sent me the following in the comment section of one of the "These Technologies Do Not Exist" entries. Here is what he has to say:
Dear Igor,
Compressive imaging was brought to my attention by:
Xiaochuan Pan, Emil Sidky and Michael Vannier (2009). Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?. Inverse Problems 25, 36p. #123009.
I suppose I’ve been using compressive imaging since I invented the ART algorithm. Its relationship to the Kaczmarz algorithm is discussed in:
Gordon, R. (2010). Stop breast cancer now! Imagining imaging pathways towards search, destroy, cure and watchful waiting of premetastasis breast cancer [invited]. In: Breast Cancer - A Lobar Disease. Eds.: T. Tot, Springer: in press.
which I’ll send on request to gordonr@cc.umanitoba.ca. You’ll find many ideas in there, including Wiener filtration, which might for example substantially improve single pixel imaging, although the first example of that was actually proposed and implemented by Eric Harth:
Harth, E. & E. Tzanakou (1974). Alopex: a stochastic method for determining visual receptive fields. Vision Res 14(12), 1475-1482.
Tzanakou, E., R. Michalak & E. Harth (1979). The Alopex process: visual receptive fields by response feedback. Biol Cybern 35(3), 161-174.
Harth, E. (1982). Windows of the Mind, Reflections on the Physical Basis of Consciousness. New York, William Morrow and Co.
My point in writing is to get some of the keen minds working on compressive imaging to come up with ways of making mass screening for premetastasis breast cancer at very low dose a reality. Yes, these technologies do not exist. They should. Thanks.
Yours, -Dick Gordon
--
Dr. Richard Gordon, Professor, Radiology, University of Manitoba GA216, HSC, 820 Sherbrook Street, Winnipeg R3A 1R9 Canada E-mail: DickGordonCan@gmail.com Skype: DickGordonCan, Second Life: Paleo Darwin, Fax: 1-(204) 787-2080
Embryo Physics Course: http://embryophysics.org/
http://bookswithwings.ca
http://www.umanitoba.ca/faculties/medicine/radiology/stafflist/rgordon.html
This is interesting for many reasons. Tuesday afternoon, Patrick Louis Combettes presented an interesting tutorial on proximal splitting methods for signal recovery, where we learned that we should not be using the wording "split Bregman method" as these methods really are proximal splitting methods in the first place. And while I see much aggravation arising from this somewhat incendiary statement, from a compressive sensing point of view this may become a quaint little souvenir, as the recent Approximate Message Passing algorithm seems to wash them all away. Anyway, during Patrick Louis Combettes' presentation, we learned that neither ART nor the Gerchberg-Papoulis or Gerchberg-Saxton (GS) algorithms could handle well solutions that required projections over convex sets (if I understood correctly). ART is the algorithm developed by Dick, while the GS algorithm was originally used by the ghost imaging folks. As it turns out, the authors of the ghost imaging paper went ahead and used a more adequate solver such as GPSR, which minimizes the l_1 norm, and obtained much better results. Similarly, using any of the reconstruction solvers devised in the compressive sensing setting with random projections obtained from a potential compressive CT scanner (however it may be implemented) will yield good results, because these solvers make an assumption about the solution.
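Since ART keeps coming up, a back-to-basics sketch may help make the contrast concrete: classical ART (a cyclic Kaczmarz sweep) happily "solves" the underdetermined system, but started from zero it converges to the minimum-norm solution, not the sparse one, which is exactly the gap the l1 solvers fill. Sizes below are illustrative.

```python
import numpy as np

def kaczmarz(A, y, n_sweeps=100, relax=1.0):
    """ART / Kaczmarz: cyclically project the iterate onto each hyperplane
    a_i . x = y_i. Started from zero on a consistent underdetermined system,
    it converges to the minimum-norm solution; no sparsity prior is involved."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            x += relax * (y[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(4)
m, n = 60, 200                      # underdetermined, as in few-view CT
A = rng.standard_normal((m, n))
x_sparse = np.zeros(n)
x_sparse[rng.choice(n, 5, replace=False)] = 1.0
x_art = kaczmarz(A, A @ x_sparse)
# ART fits the data perfectly yet misses the sparse truth; an l1 solver
# with the sparsity prior would recover x_sparse from the same data.
print("data residual:    ", np.linalg.norm(A @ x_art - A @ x_sparse))
print("distance to truth:", np.linalg.norm(x_art - x_sparse))
```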
The issue at hand is that these new solvers solve a different problem: they solve an underdetermined linear system of equations with the prior knowledge that the solution is sparse or compressible in its representation. This is a much stronger statement than trying to solve an underdetermined system with some type of regularization à la Tikhonov and hoping for the best. Anyway, the article is worth reading from beginning to end; those of you who have published in compressive sensing will notice eerily similar parallels. Let me take some "morceaux choisis" (preferred highlights):
On page 15
Underdetermined Equations
My undergraduate degree was in Mathematics from the University of Chicago, where I had taken a course on linear algebra from Alberto P. Calderón (Christ et al., 1998). Thus I was keenly aware that in CT with ART we were taking the unusual step of solving for many more unknowns than we had equations. It may have been his belief that this was “impossible” (personal communication, 1974) that constrained Hounsfield’s use of his ART-like algorithm to the overdetermined case, i.e., many more projections than necessary, and this attitude may be the original cause of the high dose of CT to patients, because commercial medical scanners until recently have not deviated in concept from EMI’s lead in turning away from ART to FBP.
In terms of dose for equivalent image quality: ART with fewer views << overdetermined ART ≈ FBP. The consequences have been a significant increase in dose to populations due to CT (Huda, 2002; Linton & Mettler Jr, 2003; Dawson, 2004; Prokop, 2005; Bertell, Ehrle & Schmitz-Feuerhake, 2007; Colang, Killion & Vano, 2007). I did my own "back of the envelope" calculation a few years ago, given in the next section. Breast CT may end up being the leader in correcting this situation, because no one has advocated any higher dose for 3D screening than the 2 mGy that has become the practice in ordinary mammography (Spelic, 2009).
on page 17
It will be interesting to compare this highly nonlinear pattern recognition approach with recently developed statistical inverse methods that generate alternative solutions to the equations from different samplings of matrices containing white noise (Kaipio & Somersalo, 2004).
and
Focus on Breast Cancer Detection
I don’t recall exactly what got me started on the application of CT to breast imaging, but when I proposed to a large NIH audience of 500 or so people that we could screen women to detect small tumors, and it would take only a week of computing time per exam, I was greeted with derisive laughter (Gordon, 1976b).
I really wanted to put more of that paper in the entry, but it would be unfair to the rest of the paper, so go read it. I'll most probably talk more about this paper and MIA in the future. Thank you Dick for the paper, thank you Xiaochuan Pan, Emil Sidky and Michael Vannier for the other paper, and thank you to Gabriel Peyré, Laurent Cohen and Frédéric Barbaresco for organizing MIA09.
Monday, December 14, 2009
CS: This week's long entry
It's Monday; before you gulp that morning cup of joe, you need to read this Fake Steve blog entry first, otherwise you might spill all that coffee on your new LCD screen. Now that you are back, bada bing bada boom, here is the long post of the week. First, I'll be at MIA'09, and the hashtag for it on Twitter will be #MIA09. Gabriel Peyre tells me there will be free wifi. woohoo.
Second, hat tip to the readers of the blog Daryl Lim, Ben Moran and Tanish Agrawal.
Daryl Lim (who incidentally is applying to graduate school in the US in the Fall 2010) let me know the following:
Just letting you know that I found a couple errors in the Cosamp algorithm on Nuit Blanche (cosamp2.m in the MMA14 archive)
Before being fixed, it only worked for signals which were positive (like the test images) as a modulus function was performed in the least squares estimation step (which should not have been performed)
I have fixed it so it works for real valued signals now. Attached is the modified file.
By the way, your blog rocks.
The edited version is called cosamp3.m and is available here. Good call Daryl, now that you have been featured on Nuit Blanche, I am sure people will look at your graduate application with additional interest. By the way, I agree with you, Daryl, Nuit Blanche rocks!
Ben Moran let me know that the python code solving the Sudoku problem with l1-minimization is here. Thanks Ben!
Tanish Agrawal asked a very interesting question (one that, in my view, is not asked often enough):
As a part of my research work, I am working on compressed sensing for OFDM signals in UWB. I have been following some papers from Rice University DSP repository.
In particular, I have some doubts regarding the following paper: 'Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals' by J. Tropp et al. They discuss about implementing a random demodulator in the paper. In the part where they propose a matrix for the accumulator and dump sampler (page 5), it works by adding consecutive entries for which one needs to have all the samples. So, how is it working at sub-Nyquist rate when we still need to collect all the samples to integrate them?
Could you please take out sometime in helping me understand this?
To which I responded:
The number of eventual measurements, i.e. linear combinations of samples, is much smaller than what the Nyquist rate would indicate. I agree with you that the signal is being sampled at a high rate.
The issue that needs to be understood is that there are instrumentation for which acquiring those linear combination of samples is easier than acquiring each and everyone of these samples.
In this case, the random demodulator is used to acquire signals that are in the GHz range. While sampling at that high rate (i.e. taking each sample and storing it) is not currently feasible, it is however possible to flip the polarity of the sensors at a fast enough rate (GHz) to implement the modulator proposed in the paper. In effect, in this instance, compressive sensing enables a new capability that could not exist otherwise.
I think Tanish's question is a central one, in that using the terms subsampling or sub-Nyquist is an impediment to understanding that the new "samples" are actually linear combinations of "normal" samples, and there is nothing magic: we cannot have superresolution with compressive sensing if the hardware does not have the capability to "touch" that superresolution in the first place. Because there are few systems where this subsampling is an obvious enabler, we need to think of these new technologies that have not been implemented yet. It is also important to have a living document on the current technologies implementing compressive sensing explaining how different they are from the current state of the art. Thanks Tanish !
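To make that answer concrete, here is a small sketch of the random demodulator's measurement operator only (not Tropp et al.'s reconstruction; sizes are illustrative): a ±1 chipping sequence applied at the Nyquist rate followed by an accumulate-and-dump sampler, so each of the m low-rate outputs is a linear combination of an entire block of Nyquist-rate samples.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 1024, 64                  # Nyquist-rate samples vs. low-rate measurements
block = n // m

# Chipping sequence: the signal's polarity is flipped at the Nyquist rate.
d = rng.choice([-1.0, 1.0], size=n)
# Accumulate-and-dump sampler: each output integrates one block of chipped samples.
H = np.kron(np.eye(m), np.ones(block))
Phi = H * d                      # the m x n random-demodulator matrix

# A signal that is sparse in frequency (two tones), seen at Nyquist resolution.
t = np.arange(n) / n
x = np.cos(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 333 * t)

# The hardware never stores x; it outputs y directly, one entry per dump period.
y = Phi @ x
print(Phi.shape, "->", y.shape)  # (64, 1024) -> (64,)
```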
And finally, David Smith produced a blog entry entitled: Because it's Friday: detecting Cylons ... with group testing. Let us recall that group testing is connected to compressive sensing.
Now here is what I found on the interwebs:
Performance analysis for sparse support recovery by Gongguo Tang, Arye Nehorai. The abstract reads:
The performance of estimating the common support for jointly sparse signals based on their projections onto lower-dimensional space is analyzed. Support recovery is formulated as a multiple-hypothesis testing problem. Both upper and lower bounds on the probability of error are derived for general measurement matrices, by using the Chernoff bound and Fano's inequality, respectively. The upper bound shows that the performance is determined by a quantity measuring the measurement matrix incoherence, while the lower bound reveals the importance of the total measurement gain. The lower bound is applied to derive the minimal number of samples needed for accurate direction-of-arrival (DOA) estimation for a sparse representation based algorithm. When applied to Gaussian measurement ensembles, these bounds give necessary and sufficient conditions for a vanishing probability of error for majority realizations of the measurement matrix. Our results offer surprising insights into sparse signal recovery. For example, as far as support recovery is concerned, the well-known bound in Compressive Sensing with the Gaussian measurement matrix is generally not sufficient unless the noise level is low. Our study provides an alternative performance measure, one that is natural and important in practice, for signal recovery in Compressive Sensing and other application areas exploiting signal sparsity.
On Reducing the Complexity of Tone-Reservation Based PAPR Reduction Techniques by Compressive Sensing by Eprahim B. Al-Safadi and Tareq Y. Al-Naffouri. The abstract reads:
In this paper, we describe a novel design of a Peak-to-Average-Power-Ratio (PAPR) reducing system, which exploits the relative temporal sparsity of Orthogonal Frequency Division Multiplexed (OFDM) signals to detect the positions and amplitudes of clipped peaks, by partial observation of their frequency content at the receiver. This approach uses recent advances in reconstruction of sparse signals from rank-deficient projections using convex programming collectively known as compressive sensing. Since previous work in the literature has focused on using the reserved tones as spectral support for optimum peak-reducing signals in the time-domain [5], the complexity at the transmitter was always a problem. In this work, we alternatively use extremely simple peak-reducing signals at the transmitter, then use the reserved tones to detect the peak-reducing signal at the receiver by a convex relaxation of an other-wise combinatorially prohibitive optimization problem. This in effect completely shifts the complexity to the receiver and drastically reduces it from a function of N (the number of subcarriers in the OFDM signal), to a function of m (the number of reserved tones) which is a small subset of N.
On the Exact Space Complexity of Sketching and Streaming Small Norms by Daniel M. Kane, Jelani Nelson, David P. Woodruff. The abstract reads:
We settle the 1-pass space complexity of (1 ± ε)-approximating the Lp norm, for real p with 1 < p < 2, of a length-n vector updated in a length-m stream with updates to its coordinates. We assume the updates are integers in the range [-M, M]. In particular, we show the space required is O(ε^{-2} log(mM) + log log(n)) bits. Our result also holds for 0 < p < 1; although Lp is not a norm in this case, it remains a well-defined function. Our upper bound improves upon previous algorithms of [Indyk, JACM '06] and [Li, SODA '08]. This improvement comes from showing an improved derandomization of the Lp sketch of Indyk by using k-wise independence for small k, as opposed to using the heavy hammer of a generic pseudorandom generator against space-bounded computation such as Nisan's PRG. Our lower bound improves upon previous work of [Alon-Matias-Szegedy, JCSS '99] and [Woodruff, SODA '04], and is based on showing a direct sum property for the 1-way communication of the gap-Hamming problem.
And after compressed sensing for vectors and matrices (low-rank), we now have compressed sensing for multilinear forms (tensor variables): Multiarray signal processing: tensor decomposition meets compressed sensing by Lek-Heng Lim and Pierre Comon. The abstract reads:
We discuss how recently discovered techniques and tools from compressed sensing can be used in tensor decompositions, with a view towards modeling signals from multiple arrays of multiple sensors. We show that with appropriate bounds on coherence, one could always guarantee the existence and uniqueness of a best rank-r approximation of a tensor. In particular, we obtain a computationally feasible variant of Kruskal’s uniqueness condition with coherence as a proxy for k-rank. We treat sparsest recovery and lowest-rank recovery problems in a uniform fashion by considering Schatten and nuclear norms of tensors of arbitrary order and dictionaries that comprise a continuum of uncountably many atoms.
Lek-Heng Lim made two presentations at NIPS 09: Part I, "Rank aggregation via Hodge theory," and Part II, "Rank aggregation via nuclear norm minimization."
I also found three presentations:
- Martin Vetterli, Sparse Sampling: Theory Algorithms and Applications.
- Alexander Balinsky, On semi‐supervised learning and sparsity. Applications to the colourisation problem of natural images
- Gabriel Peyre, Sparse geometric processing of images; the compressed sensing part starts at slide 64.
From the Rice repository, I seem to have missed the following:
Modified Compressive Sensing for Real-time Dynamic MR Imaging by Wei Lu, Namrata Vaswani. The abstract reads:
In this work, we propose algorithms to recursively and causally reconstruct a sequence of natural images from a reduced number of linear projection measurements taken in a domain that is “incoherent” with respect to the image’s sparsity basis (typically wavelet) and demonstrate their application in real-time MR image reconstruction. For a static version of the above problem, Compressed Sensing (CS) provides a provably exact and computationally efficient solution. But most existing solutions for the actual problem are either offline and non-causal or cannot compute an exact reconstruction (for truly sparse signal sequences), except using as many measurements as those needed for CS. The key idea of our proposed solution (modified-CS) is to design a modification of CS when a part of the support set is known (available from reconstructing the previous image). We demonstrate the exact reconstruction property of modified-CS on full-size image sequences using much fewer measurements than those required for CS. Greatly improved performance over existing work is demonstrated for approximately sparse signals or noisy measurements.
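The modification in modified-CS is easy to state: given a partial support estimate T (say, carried over from the previous image), minimize the l1 norm only off T subject to the data constraint. Here is a minimal sketch posed as a linear program (illustrative sizes; not the authors' code):

```python
import numpy as np
from scipy.optimize import linprog

def modified_cs(A, y, T):
    """min sum_{i not in T} |x_i|  s.t.  A x = y, posed as an LP over
    z = [x, t] with the constraints |x_i| <= t_i on the complement of T."""
    m, n = A.shape
    Tc = np.setdiff1d(np.arange(n), T)
    k = len(Tc)
    c = np.concatenate([np.zeros(n), np.ones(k)])         # minimize sum of t
    E = np.zeros((k, n))
    E[np.arange(k), Tc] = 1.0
    A_ub = np.block([[E, -np.eye(k)], [-E, -np.eye(k)]])  # +-x_i <= t_i on Tc
    b_ub = np.zeros(2 * k)
    A_eq = np.hstack([A, np.zeros((m, k))])               # A x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * k)
    return res.x[:n]

rng = np.random.default_rng(6)
n, m = 120, 40
x0 = np.zeros(n)
support = rng.choice(n, 12, replace=False)
x0[support] = rng.standard_normal(12)
A = rng.standard_normal((m, n))
T = support[:8]                # partial support known from the previous frame
x_hat = modified_cs(A, A @ x0, T)
print("recovery error:", np.linalg.norm(x_hat - x0))
```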