Tuesday, March 02, 2010

CS: IADM_NNLS code, data streams research, where to go, RESCUME, Optical imaging using binary sensors. Multiarray Signal Processing, CS -SD-OCT, blogs

Junfeng Yang just let me know of the availability of the IADM_NNLS code: IADM_NNLS stands for Inexact Alternating Direction Method for Nuclear Norm regularized Least Squares problems. It solves the following problems:

min_X  ||X||_*  +  (mu/2) ||A(X) - b||^2,

where ||X||_* is the nuclear (trace) norm of X, A is a linear operator, and b is the observed data.
Currently, the code applies to three types of A-operators:

  • (1) Projection onto a subset of indices (common matrix completion problems);
  • (2) Partial discrete cosine transform (DCT) operator;
  • (3) Partial discrete Walsh-Hadamard transform (DWHT) operator.


The attendant paper is: An Inexact Alternating Direction Method for Trace Norm Regularized Least Squares Problem by Junfeng Yang and Xiaoming Yuan. The abstract reads:

The trace norm regularized least squares problem (TNRLSP) captures a broad spectrum of applications including the multi-task learning problem in machine learning. However, it is challenging to solve TNRLSP efficiently, especially for large-scale cases. In this paper, we show that the classical alternating direction method (ADM) is readily applicable to TNRLSP. Each iteration of the resulting ADM algorithm consists of three subproblems, one of which is difficult in the sense that no analytical solution is available. We suggest linearizing the difficult subproblem, obtaining a linearized approximate subproblem whose analytical solution can be easily derived. Hence, the inexact ADM approach is developed in this paper to treat TNRLSP. We then show that the inexact ADM approach can be easily extended to solve some other trace-norm-involved problems which are closely related to TNRLSP. Numerical results are reported to demonstrate the feasibility and efficiency of the inexact ADM approach.

Thanks Junfeng for the heads-up!


Andrew Gelman mentions some recent work on one of the fastest generalized linear model implementations.

The Wired article on compressed sensing entitled Fill in the Blanks: Using Math to Turn Lo-Res Datasets Into Hi-Res Samples by Jordan Ellenberg was featured on his blog. It led to many spam blogs re-using the story as is. It also looks like a good many folks misunderstood some concepts, but this is a good experiment for crowdsourcing a FAQ. Let us note, for instance, that a misunderstood view of compressed sensing is embodied in the misleading title: Fancy Math Allows For Near-Perfect Enhancement of Poor-Quality Images [Math]. The Google helped me find some of the more interesting blogs commenting on the subject. One of them is Andrew Hires' Brain Windows, which featured a paper from NIPS 2009 that I seem to have missed entirely: Reconstruction of Sparse Circuits Using Multi-neuronal Excitation (RESCUME) by Tao Hu and Dmitri B. Chklovskii. The abstract of the paper reads:

One of the central problems in neuroscience is reconstructing synaptic connectivity in neural circuits. Synapses onto a neuron can be probed by sequentially stimulating potentially pre-synaptic neurons while monitoring the membrane voltage of the post-synaptic neuron. Reconstructing a large neural circuit using such a “brute force” approach is rather time-consuming and inefficient because the connectivity in neural circuits is sparse. Instead, we propose to measure a post-synaptic neuron’s voltage while stimulating sequentially random subsets of multiple potentially pre-synaptic neurons. To reconstruct these synaptic connections from the recorded voltage we apply a decoding algorithm recently developed for compressive sensing. Compared to the brute force approach, our method promises significant time savings that grow with the size of the circuit. We use computer simulations to find optimal stimulation parameters and explore the feasibility of our reconstruction method under realistic experimental conditions including noise and non-linear synaptic integration. Multineuronal stimulation allows reconstructing synaptic connectivity just from the spiking activity of post-synaptic neurons, even when sub-threshold voltage is unavailable. By using calcium indicators, voltage-sensitive dyes, or multi-electrode arrays one could monitor activity of multiple postsynaptic neurons simultaneously, thus mapping their synaptic inputs in parallel, potentially reconstructing a complete neural circuit.

And Andrew Hires may be right when he says that:

Unfortunately, I don’t think it [Compressed Sensing] is very applicable to situations where signal/noise is poor, like counting action potentials in shot-noise limited in vivo calcium imaging.
but you never really know until you look deeper at the problem or try different sensors.

Sometimes when I read the Wired article, I feel like something of unimaginable wonder has been missed. Let me take the 1-bit compressive sensing example featured in Optical imaging using binary sensors by Aurelien Bourquard, Francois Aguet, and Michael Unser. The abstract reads:
This paper addresses the problem of reconstructing an image from 1-bit-quantized measurements, considering a simple but nonconventional optical acquisition model. Following a compressed-sensing design, a known pseudo-random phase-shifting mask is introduced at the aperture of the optical system. The associated reconstruction algorithm is tailored to this mask. Our results demonstrate the feasibility of the whole approach for reconstructing grayscale images.
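To see why 1-bit measurements are not hopeless, here is a minimal sketch, unrelated to the optical setup of the paper: random Gaussian projections replace the phase-shifting mask, and reconstruction uses a simple sign-consistency iteration with hard thresholding. Dimensions and step sizes are assumptions chosen for the toy run.

```python
import numpy as np

def biht(A, y_sign, k, iters=200):
    """Find a unit-norm, k-sparse x whose measurement signs match
    y_sign = sign(A @ x_true); a basic hard-thresholding iteration."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        x = x + (A.T @ (y_sign - np.sign(A @ x))) / m   # sign-consistency step
        x[np.argsort(np.abs(x))[:-k]] = 0.0             # keep the k largest entries
        x = x / np.linalg.norm(x)                       # amplitude is lost in 1-bit CS
    return x

rng = np.random.default_rng(2)
n, m, k = 100, 400, 4
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_true /= np.linalg.norm(x_true)                        # 1-bit CS fixes the scale
A = rng.standard_normal((m, n))
x_hat = biht(A, np.sign(A @ x_true), k)
```

Each measurement carries a single bit, yet with enough of them the direction of the sparse signal is pinned down accurately; only its overall amplitude is irrecoverable.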

Next, we have from arxiv: Multiarray Signal Processing: Tensor decomposition meets compressed sensing by Lek-Heng Lim and Pierre Comon. The abstract reads:

We discuss how recently discovered techniques and tools from compressed sensing can be used in tensor decompositions, with a view towards modeling signals from multiple arrays of multiple sensors. We show that with appropriate bounds on coherence, one could always guarantee the existence and uniqueness of a best rank-r approximation of a tensor. In particular, we obtain a computationally feasible variant of Kruskal's uniqueness condition with coherence as a proxy for k-rank. We treat sparsest recovery and lowest-rank recovery problems in a uniform fashion by considering Schatten and nuclear norms of tensors of arbitrary order and dictionaries that comprise a continuum of uncountably many atoms.
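For context, the classical uniqueness condition of Kruskal that the paper revisits can be stated as follows for the 3-way case (the paper's contribution is a computationally feasible variant with coherence standing in for the k-ranks):

```latex
% CP decomposition of a 3-way tensor into r rank-one terms:
T = \sum_{i=1}^{r} a_i \otimes b_i \otimes c_i .
% Kruskal (1977): this decomposition is essentially unique whenever
k_A + k_B + k_C \geq 2r + 2 ,
% where k_A, k_B, k_C are the k-ranks (Kruskal ranks) of the factor
% matrices A = [a_1, ..., a_r], B, C.
```

The catch, which motivates the coherence-based substitute, is that computing the k-rank is itself intractable in general.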

The next paper is behind a paywall: Compressed sensing in optical coherence tomography by Nishant Mohan, Ivana Stojanovic, W. C. Karl, Bahaa E. A. Saleh, and Malvin Teich. The abstract reads:
Optical coherence tomography (OCT) is a valuable technique for non-invasive imaging in medicine and biology. In some applications, conventional time-domain OCT (TD-OCT) has been supplanted by spectral-domain OCT (SD-OCT); the latter uses an apparatus that contains no moving parts and can achieve orders of magnitude faster imaging. This enhancement comes at a cost, however: the CCD array detectors required for SD-OCT are more expensive than the simple photodiodes used in TD-OCT. We explore the possibility of extending the notion of compressed sensing (CS) to SD-OCT, potentially allowing the use of smaller detector arrays. CS techniques can yield accurate signal reconstructions from highly undersampled measurements, i.e., data sampled significantly below the Nyquist rate. The Fourier relationship between the measurements and the desired signal in SD-OCT makes it a good candidate for compressed sensing. Fourier measurements represent good linear projections for the compressed sensing of sparse point-like signals by random under-sampling of frequency-domain data, and axial scans in OCT are generally sparse in nature. This sparsity property has recently been used for the reduction of speckle in OCT images. We have carried out simulations to demonstrate the usefulness of compressed sensing for simplifying detection schemes in SD-OCT. In particular, we demonstrate the reconstruction of a sparse axial scan by using fewer than 10 percent of the measurements required by standard SD-OCT.
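The SD-OCT scenario in the abstract (sparse point-like axial scan, randomly undersampled frequency-domain data) reduces to a classic partial-Fourier compressed sensing experiment. Here is a toy sketch, not the authors' simulation: reflector positions, the 10% sampling rate, and the lasso solver (plain iterative soft thresholding) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
x_true = np.zeros(n)                   # axial scan: a few point-like reflectors
x_true[[40, 90, 200]] = [1.0, 0.6, 0.8]

# Keep roughly 10% of the spectral samples (random frequency-domain undersampling).
rows = rng.choice(n, n // 10, replace=False)
F = np.fft.fft(np.eye(n))[rows] / np.sqrt(n)   # partial DFT measurement matrix
y = F @ x_true

# ISTA for min_x 0.5||F x - y||^2 + lam ||x||_1, with x constrained to be real.
lam = 0.01
L = np.linalg.norm(F, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    x = x - np.real(F.conj().T @ (F @ x - y)) / L    # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
```

The Fourier-to-spikes incoherence the abstract alludes to is exactly what makes 25 spectral samples out of 256 enough to locate the three reflectors.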

My webcrawler alerted me to an upcoming paper by Maxim Raginsky, Sina Jafarpour, Rebecca Willett and Robert Calderbank entitled Fishing in Poisson streams: focusing on the whales, ignoring the minnows. I am eager to read it, but what is interesting is that it reminded me of an example I wanted to use to explain compressive sensing. The measurements would be the loads registered by fishnets of different sizes while being pulled by a boat. In the end, with fishnets of different sizes being tried and released, one could evaluate the number and size of the fish population. This is the closest I could come to a physical embodiment of Diracs and sinusoids.


And finally, on what looks like a course blog, we can read about Multiple Target Localization Using Compressive Sensing, while Gordon Anderson blogs on Matching Buyers & Sellers – solving NxN problem with chaos.
