
Tuesday, January 31, 2012

Structured Life

Mark Plumbley just reminded me that on top of Zhilin's selection of papers related to CS at ICASSP 2012, there was also a session he is co-organizing on Analysis Sparsity. It turns out that at MIA2012, Miki Elad's presentation was also on an analysis-based method, featured in The Analysis Sparse Model - Definition, Pursuit, Dictionary Learning, and Beyond, and I was bugged because of this slide:



or this slide:




I was bugged because, clearly, this analysis approach seems to provide the equivalent of the discretization of an operator, an operator that is at play behind the scenes. It turns out it really is not a coincidence, as witnessed in this paper featured at ICASSP: Physics-Driven Structured CoSparse Modeling for Source Localization by Sangnam Nam and Remi Gribonval. The abstract reads:

Cosparse modeling is a recent alternative to sparse modeling, where the notion of dictionary is replaced by that of an analysis operator. When a known analysis operator is well adapted to describe the signals of interest, the model and associated algorithms can be used to solve inverse problems. Here we show how to derive an operator to model certain classes of signals that satisfy physical laws, such as the heat equation or the wave equation. We illustrate the approach on an acoustic inverse problem with a toy model of wave propagation and discuss its potential extensions and the challenges it raises.

See also this presentation by Remi Gribonval.
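To make the physics-driven operator idea concrete, here is a minimal sketch (my own toy, not the authors' code, assuming a 1D steady-state heat problem): the discretized physical law itself becomes the analysis operator, and any field obeying that law is automatically cosparse under it.

import numpy as np

# Analysis operator Omega: the discretized 1D Laplacian (second differences,
# Dirichlet-style boundaries), i.e. the operator of the steady-state heat equation.
n = 200
Omega = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

# A field u driven by a few point sources f satisfies Omega u = f, so its
# analysis coefficients Omega @ u are exactly as sparse as the sources.
f = np.zeros(n)
f[[50, 120]] = [1.0, -0.5]            # two hypothetical heat sources
u = np.linalg.solve(Omega, f)         # solve the discretized PDE

print(np.sum(np.abs(Omega @ u) > 1e-8))   # -> 2 nonzero analysis coefficients

Recovering u from a few linear measurements then amounts to looking for a signal whose analysis coefficients Omega u are sparse, which is the cosparse pursuit the paper builds on (with a wave-equation operator instead of this toy one).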


In other words, maybe structured sparsity does not need to be ad hoc after all: maybe, using an analysis approach, we could really figure out how Life works and wow biologists instead of just trying to please them.


Thanks Mark for the reminder.

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Monday, January 30, 2012

Around the blogs in 80 hours






Today we have a few blog entries of interest, enjoy!

"...Before going on, perhaps it’s worth briefly reviewing who is hurt by the publication of flawed research. It’s not a victimless crime. Here are some of the malign consequences:
- Wasted time and resources spent by researchers trying to replicate non-findings and chasing down dead ends.
- Fake science news bumping real science news off the front page.
- When the errors and scandals come to light, a decline in the prestige of higher-quality scientific work.
- Slower progress of science, delaying deeper understanding of psychology, medicine, and other topics that we deem important enough to deserve large public research efforts..."

Saturday, January 28, 2012

It's quite simply, the stuff of Life...

As I was watching the excellent video presentation on Path coding penalties for directed acyclic graphs by Julien Mairal, who uses SPAMS for metabolic network detection (SPAMS -SPArse Modeling Software- by Julien Mairal, Francis Bach, Jean Ponce and Guillermo Sapiro is listed on the Matrix Factorization Jungle Page), I was reminded of the fact that, at some point, there needs to be a serious discussion on the connection between regularization techniques, structured sparsity and their roots in the physical world. I wrote two entries on the subject a month ago:
This is not the first time that metabolic networks have been mentioned here (Instances of Null Spaces: Can Compressive Sensing Help Study Non Steady State Metabolic Networks?). But what really triggered this entry is a salient question by an audience member at the very end of the talk. At that point, Julien had to explain whether his structured sparsity would somehow remove loops. It turns out that, in metabolic systems like the ones Julien explores, loops are quite simply the stuff of Life. Take for instance the Krebs cycle:






In other words, choosing an ad hoc regularization will impact your discovery process. TV regularization may be fine for getting good-looking pictures, and so we are OK with the fact that it is ad hoc, but if we are to venture outside of that "image processing" garden of Lena and her sisters, we need to think hard about the connection between regularization and the physical world; a toy illustration of how much the regularizer biases what you find is sketched below. As mentioned in the entries listed above, Adrian Bejan's work seems to be a worthwhile, if empirical, path in that direction. I am sure there are others, but that discussion needs to occur.
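Here is that toy illustration (entirely my own sketch, unrelated to the entries above): the same noisy staircase denoised with an ℓ2 penalty on first differences comes out smooth everywhere, while a TV-style ℓ1 penalty keeps the jumps; the regularizer, not the data, decides which structure survives.

import numpy as np

rng = np.random.default_rng(0)
n = 100
x_true = np.repeat([0.0, 1.0, -0.5, 0.5], n // 4)   # piecewise-constant truth
y = x_true + 0.1 * rng.standard_normal(n)

D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]            # first-difference operator

# Tikhonov (l2 on differences): closed form, smears the jumps away.
lam = 20.0
x_l2 = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# TV-style (l1 on differences): plain subgradient descent, slow but simple.
mu, step = 0.2, 0.05
x_tv = y.copy()
for _ in range(3000):
    g = (x_tv - y) + mu * D.T @ np.sign(D @ x_tv)
    x_tv -= step * g

# l2 spreads each jump over many samples; TV keeps them sharp.
print(np.abs(D @ x_l2).max(), np.abs(D @ x_tv).max())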



Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, January 27, 2012

Learning Sparse Representations of High Dimensional Data on Large Scale Dictionaries

Using random projections to control the information flow from layer to layer in dictionary learning: this is what Zhen James Xiang seems to be saying in his NIPS11 presentation on Learning Sparse Representations of High Dimensional Data on Large Scale Dictionaries. The attendant MATLAB Toolbox is here and is featured in the Matrix Factorization Jungle Page. The paper is: Learning sparse representations of high dimensional data on large scale dictionaries by Zhen James Xiang, Hao Xu and Peter Ramadge. The abstract reads:

Learning sparse representations on data adaptive dictionaries is a state-of-the-art method for modeling data. But when the dictionary is large and the data dimension is high, it is a computationally challenging problem. We explore three aspects of the problem. First, we derive new, greatly improved screening tests that quickly identify codewords that are guaranteed to have zero weights. Second, we study the properties of random projections in the context of learning sparse representations. Finally, we develop a hierarchical framework that uses incremental random projections and screening to learn, in small stages, a hierarchically structured dictionary for sparse representations. Empirical results show that our framework can learn informative hierarchical sparse representations more efficiently.
 And some Supplemental material.
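The improved screening tests are in the paper; for a flavor of what screening buys you, here is a sketch of the earlier SAFE-style test this line of work builds on (my own minimal version, not the new tests of the paper): before solving the lasso, codewords whose correlation with the signal falls below a computable threshold are guaranteed to have zero weight and can be discarded.

import numpy as np

def safe_screen(X, y, lam):
    # lasso: min_w 0.5*||y - X w||^2 + lam*||w||_1
    scores = np.abs(X.T @ y)                 # correlation of each codeword with y
    lam_max = scores.max()                   # smallest lam giving the all-zero solution
    radius = np.linalg.norm(X, axis=0) * np.linalg.norm(y) * (lam_max - lam) / lam_max
    return scores >= lam - radius            # False entries are provably zero

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5000))         # hypothetical dictionary
y = rng.standard_normal(100)
lam = 0.8 * np.abs(X.T @ y).max()
keep = safe_screen(X, y, lam)
print(keep.sum(), "of", X.shape[1], "codewords survive screening")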

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

GRASTA: Grassmannian Robust Adaptive Subspace Tracking Algorithm Implementation

Laura Balzano and Jun He let me know that they have posted information and code for the GRASTA online robust-PCA algorithm which can be applied to both Matrix Completion and Robust PCA. It is also now listed in the Matrix Factorization Jungle Page. The webpage starts with:

GRASTA
This webpage introduces an efficient online algorithm GRASTA ( Grassmannian Robust Adaptive Subspace Tracking Algorithm ) for low rank subspace tracking, which is robust to both highly incomplete information and sparse corruption by outliers.
Our work is a robust counterpart of GROUSE which is very efficient for low rank subspace tracking from highly incomplete information. Though the two algorithms share the same characteristic - stochastic gradient descent on Grassmannian - GRASTA incorporates the augmented Lagrangian of l1-norm loss function into the Grassmannian optimization framework to alleviate the corruption by outliers in the subspace update at each gradient step.


As an online algorithm, GRASTA can estimate and track non-stationary subspaces when the streaming data vectors are corrupted with outliers. We apply GRASTA to the problems of robust matrix completion and real-time separation of background from foreground in video. In this second application, we show that GRASTA performs high-quality separation of moving objects from background at exceptional speeds: In one popular benchmark video example, GRASTA achieves a rate of 57 frames per second, even when run in MATLAB on a personal laptop.


We have posted our GRASTA paper at arXiv. For more detailed information please refer to our paper. If you have some questions on our work, please email us or feel free to visit our websites: Jun He and Laura Balzano.
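For intuition about what "stochastic gradient descent on the Grassmannian" does, here is a much-simplified sketch of a GROUSE-flavored tracking step (my own didactic version with a plain gradient step and a QR retraction, and without GRASTA's ℓ1-robust loss):

import numpy as np

def subspace_track_step(U, v, omega, step=0.1):
    # U: n x d orthonormal basis; v: incoming vector; omega: observed indices
    Uo = U[omega]
    w, *_ = np.linalg.lstsq(Uo, v[omega], rcond=None)   # best weights on observed entries
    r = np.zeros(U.shape[0])
    r[omega] = v[omega] - Uo @ w             # zero-filled residual
    U = U + step * np.outer(r, w)            # gradient step on the fit error
    Q, _ = np.linalg.qr(U)                   # retract back to an orthonormal basis
    return Q

rng = np.random.default_rng(2)
n, d = 200, 5
U_true = np.linalg.qr(rng.standard_normal((n, d)))[0]
U = np.linalg.qr(rng.standard_normal((n, d)))[0]
for _ in range(500):
    v = U_true @ rng.standard_normal(d)      # stream a vector from the true subspace
    omega = rng.choice(n, size=60, replace=False)   # only 30% of entries observed
    U = subspace_track_step(U, v, omega)
print(np.linalg.norm(U_true.T @ U))          # approaches sqrt(d) as subspaces align

GRASTA replaces the quadratic fit above with an ℓ1 loss handled through an augmented Lagrangian, which is what confers the robustness to outliers.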


Thanks Laura and Jun...


Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, January 26, 2012

Random Feedbacks

Phil Schniter sent me the following yesterday, in reaction to the link to the recent arXiv preprint Approximate Message Passing under Finite Alphabet Constraints by Andreas Muller, Dino Sejdinovic and Robert Piechocki:

"..hi igor,
Regarding the paper "Approximate Message Passing under Finite Alphabet Constraints", it's a great idea to exploit such prior info when available. for fairness, your readers may be interested to hear that finite-alphabet priors have been part of the GAMPmatlab package (http://gampmatlab.wikia.com/wiki/Generalized_Approximate_Message_Passing) since its inception. moreover, such priors have been used with GAMP to do joint decoding and channel estimation in our work http://www2.ece.ohio-state.edu/~schniter/pdf/jstsp11_ofdm.pdf.
cheers,
phil..."
Thanks Phil. You all probably remember when Phil schooled me on alphabet issues (A Small Q&A with Phil Schniter on TurboGAMP).

I came across this very nice example in the Python-based Scikits Learn package: Compressive sensing: tomography reconstruction with L1 prior (Lasso). Let us recall that the ASPICS toolbox is another solver written in Python.
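In the same spirit, here is a minimal sketch of my own (not the gallery example, and using the modern scikit-learn import path) of compressive sensing recovery cast as a Lasso fit:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, m, k = 500, 150, 10                       # ambient dim, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m) # random measurement matrix
y = A @ x

lasso = Lasso(alpha=0.005, max_iter=50000)   # alpha is a hypothetical choice
lasso.fit(A, y)
print(np.linalg.norm(lasso.coef_ - x) / np.linalg.norm(x))   # small relative error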

Talking about Phil and ASPICS reminded me that I drew a graph of the recent events that occurred in 2011 in compressive sensing; both Phil and the folks behind ASPICS (Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun, Lenka Zdeborová) played no small part in these improvements. I don't know if it came out right, but here it is:


Anna Gilbert has new entries on her two recent lectures on compressive sensing.


Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, January 25, 2012

Compressive Sensing This Week

Here are some of the papers that showed up on my radar screen, enjoy!



Next we have a few open papers:

We consider a linear stochastic bandit problem where the dimension $K$ of the unknown parameter $\theta$ is larger than the sampling budget $n$. In such cases, it is in general impossible to derive sub-linear regret bounds since usual linear bandit algorithms have a regret in $O(K\sqrt{n})$. In this paper we assume that $\theta$ is $S$-sparse, i.e. has at most $S$ non-zero components, and that the space of arms is the unit ball for the $\|\cdot\|_2$ norm. We combine ideas from Compressed Sensing and Bandit Theory and derive algorithms with regret bounds in $O(S\sqrt{n})$.

ADAPTIVE COMPRESSED SENSING FOR VIDEO ACQUISITION by Hassan Mansour and Ozgur Yılmaz. The abstract reads:
In this paper, we propose an adaptive compressed sensing scheme that utilizes a support estimate to focus the measurements on the large valued coefficients of a compressible signal. We embed a “sparse-filtering” stage into the measurement matrix by weighting down the contribution of signal coefficients that are outside the support estimate. We present an application which can benefit from the proposed sampling scheme, namely, video compressive acquisition. We demonstrate that our proposed adaptive CS scheme results in a significant improvement in reconstruction quality compared with standard CS as well as adaptive recovery using weighted ℓ1 minimization.
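A hedged sketch of the recovery side of that idea (a generic weighted-ℓ1 ISTA loop of my own; the paper additionally embeds the weighting into the measurement matrix itself): coefficients inside the support estimate get a small weight, hence a small threshold, and are easier to keep.

import numpy as np

def weighted_ista(A, y, weights, lam=0.05, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x) / L        # gradient step on 0.5*||Ax - y||^2
        t = lam * weights / L                # per-coefficient thresholds
        x = np.sign(x) * np.maximum(np.abs(x) - t, 0.0)   # weighted soft threshold
    return x

# e.g. with a support estimate: weights = np.where(support_estimate, 0.1, 1.0)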


Based on the sparse property of the cyclic autocorrelation in the cyclic frequencies domain, this paper proposes a new blind spectrum sensing method which uses the compressed sensing technique in order to detect free bands in the radio spectrum. This new sensing method that presents a relative low complexity has the particularity to perform blind and robust detection with only few samples (short observation time) and without any knowledge about the cyclic frequency of the signal, in contrary to cyclostationary detection methods that are not robust when the sample size is small and might need some information about the signal in order to detect. ROC curves obtained by simulation show the superiority of the new proposed technique over cyclostationary detection under the same conditions, particularly the same observation time.

Approximate Message Passing under Finite Alphabet Constraints by Andreas Muller, Dino Sejdinovic, Robert Piechocki. The abstract reads:
In this paper we consider Basis Pursuit De-Noising (BPDN) problems in which the sparse original signal is drawn from a finite alphabet. To solve this problem we propose an iterative message passing algorithm, which capitalises not only on the sparsity but by means of a prior distribution also on the discrete nature of the original signal. In our numerical experiments we test this algorithm in combination with a Rademacher measurement matrix and a measurement matrix derived from the random demodulator, which enables compressive sampling of analogue signals. Our results show in both cases significant performance gains over a linear programming based approach to the considered BPDN problem. We also compare the proposed algorithm to a similar message passing based algorithm without prior knowledge and observe an even larger performance improvement.
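What distinguishes this line of work from plain AMP is the scalar denoiser. A minimal sketch (mine, not the paper's code) of the MMSE denoiser for a prior supported on a known finite alphabet; it replaces soft thresholding inside the AMP iteration:

import numpy as np

def mmse_denoise(r, tau2, alphabet, priors):
    # posterior mean of x given r = x + N(0, tau2), with x drawn from `alphabet`
    r = np.atleast_1d(np.asarray(r, float))[:, None]
    w = np.exp(-(r - alphabet[None, :]) ** 2 / (2 * tau2)) * priors[None, :]
    w /= w.sum(axis=1, keepdims=True)        # posterior over alphabet symbols
    return (w * alphabet[None, :]).sum(axis=1)

alphabet = np.array([-1.0, 0.0, 1.0])        # hypothetical sparse BPSK-like prior
priors = np.array([0.05, 0.90, 0.05])
print(mmse_denoise([0.9, -0.2], 0.1, alphabet, priors))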


Compressive Acquisition of Dynamic Scenes  by  Aswin C Sankaranarayanan, Pavan K Turaga, Rama Chellappa, Richard G Baraniuk. The abstract reads: 

Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models difficult. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, and then reconstructing the image frames. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the dynamic part of the scene at each instant and accumulates measurements over time to estimate the static parameters. This enables us to lower the compressive measurement rate considerably. We validate our approach with a range of experiments involving both video recovery, sensing hyper-spectral data, and classification of dynamic scenes from compressive data. Together, these applications demonstrate the effectiveness of the approach.


On Detection-Directed Estimation Approach for Noisy Compressive Sensing by Jaewook Kang, Heung-No Lee, Kiseon Kim. The abstract reads:
In this paper, we investigate a Bayesian sparse reconstruction algorithm called compressive sensing via Bayesian support detection (CS-BSD). This algorithm is quite robust against measurement noise and achieves the performance of a minimum mean square error (MMSE) estimator that has support knowledge beyond a certain SNR threshold. The key idea behind CS-BSD is that reconstruction takes a detection-directed estimation structure consisting of two parts: support detection and signal value estimation. Belief propagation (BP) and a Bayesian hypothesis test perform support detection, and an MMSE estimator finds the signal values belonging to the support set. CS-BSD converges faster than other BP-based algorithms, and it can be converted to a parallel architecture to become much faster. Numerical results are provided to verify the superiority of CS-BSD compared to recent algorithms.


On the Lagrangian Biduality of Sparsity Minimization Problems  by  Dheeraj Singaraju, Ehsan Elhamifar, Roberto Tron, Allen Y. Yang, S. Shankar Sastry. The abstract reads:
Recent results in Compressive Sensing have shown that, under certain conditions, the solution to an underdetermined system of linear equations with sparsity-based regularization can be accurately recovered by solving convex relaxations of the original problem. In this work, we present a novel primal-dual analysis on a class of sparsity minimization problems. We show that the Lagrangian bidual (i.e., the Lagrangian dual of the Lagrangian dual) of the sparsity minimization problems can be used to derive interesting convex relaxations: the bidual of the $\ell_0$-minimization problem is the $\ell_1$-minimization problem; and the bidual of the $\ell_{0,1}$-minimization problem for enforcing group sparsity on structured data is the $\ell_{1,\infty}$-minimization problem. The analysis provides a means to compute per-instance non-trivial lower bounds on the (group) sparsity of the desired solutions. In a real-world application, the bidual relaxation improves the performance of a sparsity-based classification framework applied to robust face recognition.


A General Framework of Dual Certificate Analysis for Structured Sparse Recovery Problems by Cun-Hui Zhang, Tong Zhang. The abstract reads:
This paper develops a general theoretical framework to analyze structured sparse recovery problems using the notation of dual certificate. Although certain aspects of the dual certificate idea have already been used in some previous work, due to the lack of a general and coherent theory, the analysis has so far only been carried out in limited scopes for specific problems. In this context the current paper makes two contributions. First, we introduce a general definition of dual certificate, which we then use to develop a unified theory of sparse recovery analysis for convex programming. Second, we present a class of structured sparsity regularization called structured Lasso for which calculations can be readily performed under our theoretical framework. This new theory includes many seemingly loosely related previous work as special cases; it also implies new results that improve existing ones even for standard formulations such as L1 regularization.


Compressed Beamforming Applied to B-Mode Ultrasound Imaging by Noam Wagner, Yonina C. Eldar, Arie Feuer, Zvi Friedman. The abstract reads:
Emerging sonography techniques often imply increasing in the number of transducer elements involved in the imaging process. Consequently, larger amounts of data must be acquired and processed by the beamformer. The significant growth in the amounts of data effects both machinery size and power consumption. Within the classical sampling framework, state of the art systems reduce processing rates by exploiting the bandpass bandwidth of the detected signals. It has been recently shown, that a much more significant sample-rate reduction may be obtained, by treating ultrasound signals within the Finite Rate of Innovation framework. These ideas follow the spirit of Xampling, which combines classic methods from sampling theory with recent developments in Compressed Sensing. Applying such low-rate sampling schemes to individual transducer elements, which detect energy reflected from biological tissues, is limited by the noisy nature of the signals. This often results in erroneous parameter extraction, bringing forward the need to enhance the SNR of the low-rate samples. In our work, we manage to achieve such SNR enhancement, by beamforming the sub-Nyquist samples obtained from multiple elements. We refer to this process as "compressed beamforming". Applying it to cardiac ultrasound data, we successfully image macroscopic perturbations, while achieving a nearly eight-fold reduction in sample-rate, compared to standard techniques.

Reconstruction of HARDI using Compressed Sensing and its Application to Contrast HARDI by Sudipto Dolui, Ivan C. Salgado Patarroyo, Oleg V. Michailovich., Yogesh Rathi. The abstract reads:
High angular resolution diffusion imaging (HARDI) is known to excel in delineating multiple diffusion flows through a given location within the white matter of the brain. Unfortunately, many current methods of implementation of HARDI require collecting a relatively large number of diffusion-encoded images, which is in turn translated in prohibitively long acquisition times. As a possible solution to this problem, one can undersample HARDI data by using fewer diffusion-encoding gradients than it is prescribed by the classical sampling theory, while exploiting the tools of compressed sensing (CS). Accordingly, the goal of the present paper is twofold. First, the paper presents a novel CS-based framework for the reconstruction of HARDI data using a reduced set of diffusion-encoding gradients. As opposed to similar studies reported in the literature, the proposed method has been optimized for the Rician statistics of measurement noises, which are known to be prevalent in HARDI, and in fact, in MRI in general. Second, we introduce the concept of rotational invariant Fourier signatures (RIFS), and show how they can be used to generate a composite HARDI contrast, which we refer to as colour-HARDI (cHARDI). Finally, via a series of experiments with both simulated and in vivo MRI data, we demonstrate that the quality and informativeness of the proposed contrast deteriorates little, when used in conjunction with the proposed CS-based reconstruction framework. Thus the present work proposes a way to improve the time efficiency of HARDI, and shows its application to the computation of a new HARDI-based contrast which has a potential to improve the clinical value of this important imaging modality.

Sparse Binary Matrices of LDPC codes for Compressed Sensing by Weizhi Lu, Kidiyo Kpalma, Joseph Ronsin. The abstract reads:

Compressed sensing shows that one underdetermined measurement matrix can losslessly compress sparse signals if this matrix satisfies Restricted Isometry Property (RIP). However, in practice there are still no explicit approaches to construct such matrices. Gaussian matrices and Fourier matrices are first proved satisfying RIP with high probabilities. Recently, sparse random binary matrices with lower computation load also expose comparable performance with Gaussian matrices. But they are all constructed randomly, and unstable in orthogonality. In this paper, inspired by these observations, we propose to construct structured sparse binary matrices which are stable in orthogonality. The solution lies in the algorithms that construct parity-check matrices of low-density parity-check (LDPC) codes. Experiments verify that proposed matrices significantly outperform aforementioned three types of matrices. And significantly, for this type of matrices with a given size, the optimal matrix for compressed sensing can be approximated and constructed according to some rules.
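For a flavor of the object being optimized, here is a sketch of the simplest, Gallager-style column-regular random construction (mine; the paper's point is precisely that its structured LDPC parity-check constructions beat this random baseline):

import numpy as np

def sparse_binary_matrix(m, n, d, seed=0):
    # each column carries exactly d ones placed at random rows
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n))
    for j in range(n):
        A[rng.choice(m, size=d, replace=False), j] = 1.0
    return A / np.sqrt(d)                    # unit-norm columns

A = sparse_binary_matrix(128, 512, d=8)
G = A.T @ A                                  # off-diagonal entries = column coherences
print(np.abs(G - np.eye(512)).max())         # the quantity LDPC structure helps control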


Parallel imaging technique using localized gradients (PatLoc) reconstruction using compressed sensing (CS) by F-H. Lin, P. Vesanen, T. Witzel, R. Ilmoniemi, and J. Hennig. The abstract reads:
Parallel acquisition with localized gradient (PatLoc) is a new approach to further increase degrees of freedom in spatial encoding by using a combination of surface gradient coils [1]. Due to non-bijective encoding, PatLoc requires a radio-frequency coil array to uniquely localize the magnetization using the parallel MRI approach [1]. Preliminary results of PatLoc image reconstructions focused on the accelerated acquisitions not exceeding the number of RF coil in the array [2,3,4,5]. Recently, based on the assumption of image sparsity, compressed sensing (CS) has been proposed to achieve MRI acceleration using random sampling k-space and reconstructing vastly reduced data using a nonlinear algorithm [6]. In this study, we investigate the feasibility of reconstructing highly accelerated PatLoc images using CS. Specifically, we hypothesize that PatLoc can provide better reconstructed images compared to traditional orthogonal linear gradient systems because of higher degree of freedom in spatial encoding.

Compressive sensing (CS), a new sampling paradigm, has recently found several applications in wireless sensor networks (WSNs). In this paper, we investigate the design of novel sensing matrices which lead to good expected-case performance - a typical performance indicator in practice - rather than the conventional worst-case performance that is usually employed when assessing CS applications. In particular, we show that tight frames perform much better than the common CS Gaussian matrices in terms of the reconstruction average mean squared error (MSE). We also showcase the benefits of tight frames in two WSN applications, which involve: i) robustness to data sample losses; and ii) reduction of the communication cost.
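A quick sketch (mine) of one standard way to manufacture a tight frame, by flattening the singular values of a random matrix; the defining property A A^T ∝ I is what underlies the better average-case MSE reported above:

import numpy as np

rng = np.random.default_rng(4)
m, n = 60, 200
A = rng.standard_normal((m, n))
U, _, Vt = np.linalg.svd(A, full_matrices=False)
A_tf = np.sqrt(n / m) * U @ Vt               # equalize all singular values
print(np.allclose(A_tf @ A_tf.T, (n / m) * np.eye(m)))   # True: rows form a tight frame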
Spatially sparse source cluster modeling by compressive neuromagnetic tomography by Wei-Tang Chang, Aapo Nummenmaa, Jen-Chuen Hsieh, Fa-Hsuan Lin. The abstract reads:
Magnetoencephalography enables non-invasive detection of weak cerebral magnetic fields by utilizing superconducting quantum interference devices (SQUIDs). Solving the MEG inverse problem requires reconstructing the locations and orientations of the underlying neuronal current sources based on the extracranial measurements. Most inverse problem solvers explicitly favor either spatially more focal or diffuse current source patterns. Naturally, in a situation where both focal and spatially extended sources are present, such reconstruction methods may yield inaccurate estimates. To address this problem, we propose a novel ComprEssive Neuromagnetic Tomography (CENT) method based on the assumption that the current sources are compressible. The compressibility is quantified by the joint sparsity of the source representation in the standard source space and in a transformed domain. The purpose of the transformation sparsity constraint is to incorporate local spatial structure adaptively by exploiting the natural redundancy of the source configurations in the transform domain. By combining these complementary constraints of standard and transformed domain sparsity we obtain source estimates, which are not only locally smooth and regular but also form globally separable clusters. In this work, we use the ℓ1-norm as a measure of sparsity and convex optimization to yield compressive estimates in a computationally tractable manner. We study the Laplacian matrix (CENTL) and spherical wavelets (CENTW) as alternatives for the transformation in the compression constraint. In addition to the two prior constraints on the sources, we control the discrepancy between the modeled and measured data by restricting the power of residual error below a specified value. The results show that both CENTL and CENTW are capable of producing robust spatially regular source estimates with high computational efficiency. For simulated sources of focal, diffuse, or combined types, the CENT method shows better accuracy on estimating the source locations and spatial extents than the minimum ℓ1-norm or minimum ℓ2-norm constrained inverse solutions. Different transformations yield different benefits: By utilizing CENT with the Laplacian matrix it is possible to suppress physiologically atypical activations extending across two opposite banks of a deep sulcus. With the spherical wavelet transform CENT can improve the detection of two nearby yet not directly connected sources. As demonstrated by simulations, CENT is capable of reflecting the spatial extent for both focal and spatially extended current sources. The analysis of in vivo MEG data by CENT produces less physiologically inconsistent “clutter” current sources in somatosensory and auditory MEG measurements. Overall, the CENT method is demonstrated to be a promising tool for adaptive modeling of distributed neuronal currents associated with cognitive tasks.


The following papers are behind a paywall:


Image Credit: NASA/JPL/Space Science Institute, W00071921.jpg was taken on January 22, 2012 and received on Earth January 24, 2012. The camera was pointing toward SATURN at approximately 2,479,787 kilometers away, and the image was taken using the MT2 and CL2 filters.

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, January 24, 2012

Around the blogs in 80 hours, prizes and fame.

On Metaoptimize, there is a question about compressive sensing recovery. Bob has some interesting 'misses' findings on audio classification with music samples; watch out though, that blog entry could become addictive! Dirk talks about Sparse recovery of multidimensional signals with Kronecker products. Danny has updated his post on Hyperspectral imaging using GraphLab. Randy features a press release by a company that is clearly thinking about the X-Prize tricorder, and finally Greg talks about this week's MIT course where the students build a DIY Phased Array Radar using pegboard and wi-fi antennas. He has more in this entry.

Some folks received some prizes directly or indirectly thanks to their contribution to compressive sensing:
  • The Crafoord prize was handed out to some people connected to compressive sensing (Terry Tao)
Today the Royal Swedish Academy of Sciences announced the four winners of the 2012 Crafoord Prize, an annual award that rotates between the disciplines of astronomy, mathematics, geosciences, biosciences, and arthritis research. This year's honorees came from mathematics and astronomy, fields last recognized in 2008.
The two awardees in mathematics were Jean Bourgain, a Belgian mathematician now working at the Institute for Advanced Study (IAS) in Princeton, New Jersey, and Terence Tao, an Australian-American mathematician at the University of California, Los Angeles (UCLA). Both Bourgain and Tao previously won the Fields Medal, often considered the equivalent of the Nobel Prize in mathematics (Bourgain in 1994 and Tao in 2006).


And finally Nuit Blanche did not get no prizes but was mentioned in the comment section of an op-ed of the New York Times, yes and soon fame and fortune, world domination at last, muuuuaaaaaaaahhhhhhhh.



Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Monday, January 23, 2012

The Bregman Iterative Procedure


Do not confuse this with the linearized Bregman method! 


so says the Bregman Iterative Procedure site, where this paper is featured: Error Forgetting of Bregman Iteration by Wotao Yin and Stanley Osher. The abstract reads:


This short article analyzes an interesting property of the Bregman iterative procedure, which is equivalent to the augmented Lagrangian method, for minimizing a convex piece-wise linear function $J(x)$ subject to linear constraints $Ax = b$. The procedure obtains its solution by solving a sequence of unconstrained subproblems of minimizing $J(x) + \frac{1}{2}\|Ax - b^k\|_2^2$, where $b^k$ is iteratively updated. In practice, the subproblems can be solved at relatively low accuracy. Let $w^k$ denote the numerical error at iteration $k$. If all $w^k$ are sufficiently small so that Bregman iteration identifies the optimal face, then on the optimal face, Bregman iteration enjoys an interesting error-forgetting property: the distance between the current point $\bar{x}^k$ and the optimal solution set $X$ is bounded by $\|w^{k+1} - w^k\|$, independent of the numerical errors at previous iterations. This property partially explains why the Bregman iterative procedure works well for sparse optimization and, in particular, $\ell_1$-minimization. The error-forgetting property is unique to piece-wise linear functions (i.e., polyhedral functions) $J(x)$, and the results of this article appear to be new to the literature of the augmented Lagrangian method.
The attendant code for this procedure can be found here.
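A minimal sketch of the procedure itself (mine, not the code posted there), for J(x) = ||x||_1 with an inexact ISTA inner solver, which is exactly the regime the error-forgetting result speaks to:

import numpy as np

def ista(A, y, lam, n_iter=300):
    # inexact inner solver for min lam*||x||_1 + 0.5*||Ax - y||^2
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

def bregman(A, b, lam=1.0, n_outer=20):
    # solve min ||x||_1 s.t. Ax = b by the Bregman iterative procedure:
    # x_{k+1} = argmin lam*||x||_1 + 0.5*||Ax - b_k||^2;  b_{k+1} = b_k + (b - A x_{k+1})
    bk = b.copy()
    for _ in range(n_outer):
        x = ista(A, bk, lam)                 # subproblem, solved at low accuracy
        bk = bk + (b - A @ x)                # add the residual back
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((50, 200))
x0 = np.zeros(200)
x0[rng.choice(200, 5, replace=False)] = 1.0
print(np.linalg.norm(bregman(A, A @ x0) - x0))   # small despite sloppy subproblems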





  Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Sunday, January 22, 2012

sFFT: Sparse Fast Fourier Transform

Remember the new FFT algorithm out of MIT that uses the sparsity of the signal to perform a faster FFT? Well, the authors (Dina Katabi, Piotr Indyk, Haitham Hassanieh, Eric Price) just set up a website that will eventually host an implementation of it (Piotr tells me it should be up after the news storm has settled down). The KIHP sFFT page currently starts with:

Introduction

We consider the sparse Fourier transform problem: given a complex vector x of length n, and a parameter k, estimate the k largest (in magnitude) coefficients of the Fourier transform of x. The problem is of key interest in several areas, including signal processing, audio/image/video compression, and learning theory. We propose a new algorithm for this problem. The algorithm leverages techniques from digital signal processing, notably Gaussian and Dolph-Chebyshev filters. The resulting algorithm is structurally simpler than its predecessors. As a consequence, we are able to extend considerably the range of sparsity, k, for which the algorithm is faster than FFT, both in theory and practice.

The page then tabulates the complexity and sparsity range of each variant:
  • sFFT 1.0 (Non-Iterative Algorithm)
  • sFFT 2.0 (sFFT 1.0 + Heuristic)
  • sFFT 3.0 (Exact k-Sparse Algorithm)
  • sFFT 4.0 (General k-Sparse Algorithm)
  • Lower Bound

The figure of deep interest, I think, is the comparison with the Fastest Fourier Transform in the West (FFTW).




The paper presented at SODA is Simple and Practical Algorithm for Sparse Fourier Transform by Haitham Hassanieh, Piotr Indyk, Dina Katabi, and Eric Price. The abstract reads:

We consider the sparse Fourier transform problem: given a complex vector x of length n, and a parameter k, estimate the k largest (in magnitude) coefficients of the Fourier transform of x. The problem is of key interest in several areas, including signal processing, audio/image/video compression, and learning theory. We propose a new algorithm for this problem. The algorithm leverages techniques from digital signal processing, notably Gaussian and Dolph-Chebyshev filters. Unlike the typical approach to this problem, our algorithm is not iterative. That is, instead of estimating “large” coefficients, subtracting them and recursing on the remainder, it identifies and estimates the k largest coefficients in “one shot”, in a manner akin to sketching/streaming algorithms. The resulting algorithm is structurally simpler than its predecessors. As a consequence, we are able to extend considerably the range of sparsity, k, for which the algorithm is faster than FFT, both in theory and practice.
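A toy demonstration (my own, far cruder than the actual algorithm, which adds random spectral permutations and the Gaussian/Dolph-Chebyshev filters mentioned above) of the two tricks at its core: subsampling in time aliases the spectrum into B buckets, and the phase of a one-sample shift reveals which frequency landed in a loud bucket.

import numpy as np

n, B = 1024, 16
L = n // B
rng = np.random.default_rng(6)
freqs = rng.choice(n, size=3, replace=False) # a 3-sparse spectrum
x = np.zeros(n, dtype=complex)
for f0 in freqs:
    x += np.exp(2j * np.pi * f0 * np.arange(n) / n)

Y0 = np.fft.fft(x[0::L])                     # bucket f collects X[f + m*B] for all m
Y1 = np.fft.fft(x[1::L])                     # same buckets, each tone phased by e^{2*pi*i*f0/n}
found = []
for f in np.argsort(-np.abs(Y0))[:3]:        # the 3 loudest buckets
    k = int(round(np.angle(Y1[f] / Y0[f]) * n / (2 * np.pi))) % n
    found.append(k)
# exact match as long as no two tones collided in the same bucket (mod B)
print(sorted(found), sorted(freqs))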


Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Saturday, January 21, 2012

Video: David Brady's "Coding for Multiplex Optical Imagers" lecture

SPARS11 took place 7 months ago but I still enjoy the plenary lectures on video. Today, I re-watched David Brady's presentation on Coding for Multiplex Optical Imagers. It's just awesome. I tried to put the video on Youtube but it is too long. I tried Vimeo but I have uploading issues. Oh well, at least it is in a wmv format (if anyone uploads them somewhere on a player, please let me know, I'll link to the video and your site will be duly acknowledged). 

 The other plenary lectures (they last about 45 minutes) are listed below:

Thanks to Coralia Cartis, Mike Davies, Jared Tanner and Bubacarr Bah for organizing the meeting, videotaping these lectures and putting them on the site.

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, January 20, 2012

Robust Denoise This!

Forget Lena

Julien just mentioned to me some extraordinary shots taken yesterday at the Fukushima plant that could be useful to some of you over the week-end (if you are interested): namely, clean-up video footage from reactor 5 of the Fukushima Daiichi plant taken yesterday.
In the following handout by TEPCO, entitled Trial examination at Fukushima Daiichi Nuclear Power Station Unit 5 for inner inspection of PCV of Fukushima Daiichi Nuclear Power Unit 2, one gets an idea of the examination work currently undertaken at Fukushima in order to investigate the primary containment vessel of reactor 2. Less than a year later, we are getting videos to watch; wow, this is simply amazing in terms of transparency. Of interest to the image processing and denoising folks is the amount of noise of all kinds on these photos and videos. Specifically, radiation hits the FPA of the camera pretty hard. I wonder how any of the current Robust PCA (or similar) work that Nuit Blanche often features would behave on such an uncontrolled benchmark.
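For anyone tempted to try: a crude sketch (mine; real solvers such as inexact ALM, or GRASTA above, are more careful) of principal component pursuit by alternating singular-value thresholding and soft thresholding. With frames stacked as columns, the static scene should land in L and the impulsive radiation hits in S.

import numpy as np

def rpca_sketch(M, n_iter=50):
    # decompose M ~ L (low rank) + S (sparse); thresholds are heuristic guesses
    lam = 1.0 / np.sqrt(max(M.shape))
    mu = 0.25 * np.abs(M).mean()
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        Lmat = U @ np.diag(np.maximum(s - mu, 0.0)) @ Vt   # singular-value thresholding
        S = np.sign(M - Lmat) * np.maximum(np.abs(M - Lmat) - lam * mu, 0.0)
    return Lmat, S

# usage: M = np.stack([frame.ravel() for frame in frames], axis=1); L, S = rpca_sketch(M)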








I am tempted to see how some denoising and inpainting could make these videos nicer. Also of interest, eventually, is the ability to enable future vision tests inside the reactor through these denoising techniques.


Most of the information (including the original videos) comes from this page, but the videos won't be up for too long; get them while you can:

"...Investigation inside of Primary Containment Vessel, Unit 2, Fukushima Daiichi NPS
  • The digest version
  • (Video on January 19, 2012)
*Video File For Download (This file will expire after 7 days.)
  • (Video on January 19, 2012)
  • (Video on January 19, 2012)
  • (Video on January 19, 2012)
  • (Video on January 19, 2012)

  • (pictured on January 18, 2012)
  • (pictured on January 18, 2012).
  • (pictured on January 19, 2012)
  • (pictured on January 19, 2012)
  • (pictured on January 19, 2012)
  • (pictured on January 19, 2012)
  • (pictured on January 19, 2012)
  • (pictured on January 19, 2012)
  • (pictured on January 19, 2012)

Thanks Julien!