
Sunday, November 20, 2011

Brains are Low Rank Solvers (in part)



I learned about Daniel Wolpert from this cover of Nature

I am sure there is an entire field of sports analytics already quantifying the prior of each and every player using techniques such as the low rank decomposition featured in Tennis Players are Sparse!
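For readers who want a concrete picture of what such a low rank decomposition could look like, here is a minimal sketch on a purely synthetic matrix of player trajectories (the data, sizes and rank threshold are all made up for illustration; this is not the method of the post linked above):

    import numpy as np

    # Hypothetical data: each row is one rally, each column a time sample of a
    # player's court position (purely synthetic, for illustration only).
    rng = np.random.default_rng(0)
    basis = rng.standard_normal((5, 200))      # 5 underlying "strokes"
    weights = rng.standard_normal((300, 5))
    trajectories = weights @ basis + 0.01 * rng.standard_normal((300, 200))

    # Low rank decomposition via a truncated SVD.
    U, s, Vt = np.linalg.svd(trajectories, full_matrices=False)
    rank = int(np.sum(s > 0.05 * s[0]))        # keep only the dominant modes
    low_rank = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

    print("estimated rank:", rank)
    print("relative error:",
          np.linalg.norm(trajectories - low_rank) / np.linalg.norm(trajectories))

The low rank factors would then play the role of a compact prior over what a given player tends to do.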





Saturday, October 01, 2011

Dreaming reconstructions

Laurent Duval pointed out to me the similarities between this reconstruction from local descriptors/features [1]


and this reconstruction of images from fMRI scans [2]...






Good catch.

References:
[2] Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies, Shinji Nishimoto, An T. Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu, Jack L. Gallant


Thursday, September 22, 2011

Why am I thinking there is a need for a better dictionary learning or calibration algorithm ?

From the latest Current Biology:

Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies

Authors

Shinji Nishimoto, An T. Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu, Jack L. Gallant 
Highlights

  • A new motion-energy model can describe BOLD signals evoked by natural movies
  • The model reveals how motion information is represented in early visual areas
  • Speed tuning in human early visual areas depends on eccentricity
  • The model provides reconstructions of natural movies from evoked BOLD signals

Summary

Quantitative modeling of human brain activity can provide crucial insights about cortical representations [1,2] and can form the basis for brain decoding devices [3,4,5]. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns and have reconstructed these patterns from brain activity [6,7,8]. However, blood oxygen level-dependent (BOLD) signals measured via fMRI are very slow [9], so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy [10,11] encoding model that largely overcomes this limitation. The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies. To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology.
The rest of the paper can be seen here. Of specific interest are the following two videos:





The left clip is a segment of the movie that the subject viewed while in the magnet. The right clip shows the reconstruction of this movie from brain activity measured using fMRI. The reconstruction was obtained using only each subject's brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli. Brain activity was sampled every one second, and each one-second section of the viewed movie was reconstructed separately.






This video is organized as follows: the movie that each subject viewed while in the magnet is shown at upper left. Reconstructions for three subjects are shown in the three rows at bottom. All these reconstructions were obtained using only each subject's brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli. The reconstruction at far left is the Average High Posterior (AHP). The reconstruction in the second column is the Maximum a Posteriori (MAP). The other columns represent less likely reconstructions. The AHP is obtained by simply averaging over the 100 most likely movies in the reconstruction library. These reconstructions show that the process is very consistent, though the quality of the reconstructions does depend somewhat on the quality of brain activity data recorded from each subject.
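As a rough illustration of the decoding step just described (this is not the authors' code; the library, the feature vectors and the posterior scores below are all placeholders), here is how a MAP and an Average High Posterior reconstruction could be assembled once every clip in the prior library has been scored against the measured brain activity:

    import numpy as np

    rng = np.random.default_rng(1)

    # Placeholder library: N one-second clips, each summarized by a feature vector
    # (in the paper, the prior is 18 million seconds of random YouTube video).
    library = rng.standard_normal((10000, 64))

    # Placeholder posterior scores: how well each clip's predicted BOLD response
    # matches the measured activity (here just random numbers).
    log_posterior = rng.standard_normal(10000)

    # Maximum a Posteriori (MAP): the single most likely clip.
    map_clip = library[np.argmax(log_posterior)]

    # Average High Posterior (AHP): average of the 100 most likely clips,
    # as described in the caption above.
    top100 = np.argsort(log_posterior)[-100:]
    ahp_clip = library[top100].mean(axis=0)

    print(map_clip.shape, ahp_clip.shape)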


Tuesday, September 14, 2010

CS: The long post of the week : Ambulatory ECG, The brain and much mathematics...

This is how I wanted to start my post today: "Today we have mostly a post with a mathematical bent, and two of the items deal with ridgelets, mmmuhh; I also note that the inclusion of Compressive Sensing as a building block in our comprehension of the brain is building up... Enjoy!" But then I checked Twitter and found out that Pierre Vandergheynst had tweeted that he had posted his slides from a presentation in Marseille. So let us start with that, and then you can all go for the mathematics:

Pierre Vandergheynst's slides are entitled Compressed Sensing, Some experimental promises?. Our French readers should expect to see images of slide 22 in La Recherche coming out in October. Aside from the very nice MRI work mentioned before, I liked the use of the Donoho-Tanner phase transition to evaluate a hardware system.

A new element of the presentation was a very interesting trade study on a low-power ambulatory ECG system.

So it looks like this is not optimal in many respects. Yes, you reduce the power level, but not with the dramatic effect initially expected. As I have said before, if you design your hardware to provide exactly the same information as the system you are supposed to displace, then you are really asking to become irrelevant... fast. The CS encoding not only has a clear advantage over no compression because it is robust to packet losses; more importantly, if compressive sensing brings one thing to the table, it is that the amount of information is reduced, and therefore smaller classifiers can run diagnostics on the jailbroken iPhone 3GS. I doubt that a similar classifier could do the job on the uncompressed data without drawing on much needed power resources.
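To make that point a little more tangible, here is a minimal sketch (entirely synthetic, not the system discussed in the slides) of the kind of pipeline this enables: a random ±1 encoder that would run on the sensor, and a tiny classifier that operates directly on the compressed measurements without ever reconstructing the ECG window:

    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 256, 32                              # window length, number of CS measurements

    def synth_beat(abnormal, rng):
        # Toy ECG-like window: a narrow spike plus a slow wave (illustration only).
        t = np.linspace(0, 1, n)
        beat = np.exp(-((t - 0.5) / 0.02) ** 2) + 0.2 * np.sin(2 * np.pi * 3 * t)
        if abnormal:
            beat += 0.5 * np.exp(-((t - 0.7) / 0.05) ** 2)    # extra bump
        return beat + 0.05 * rng.standard_normal(n)

    # Random +/-1 measurement matrix: the low-power "encoder" on the sensor side.
    Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

    X = np.array([synth_beat(i % 2 == 1, rng) for i in range(200)])
    labels = np.arange(200) % 2
    Y = X @ Phi.T                               # compressed measurements, never reconstructed

    # Tiny nearest-centroid classifier working directly on the measurements.
    centroids = np.array([Y[labels == c].mean(axis=0) for c in (0, 1)])
    pred = np.argmin(((Y[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    print("accuracy on compressed data:", (pred == labels).mean())

The point is simply that the classification happens in the m-dimensional measurement domain, which is what would let a small device run the diagnostic.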

Coming back to the new papers and preprints, let us start with a more hardware oriented paper before diving into more mathematical matters:

Spatially regularized compressed sensing of diffusion MRI data by Oleg Michailovich, Yogesh Rathi. The abstract reads:
The present paper introduces a method for substantial reduction of the number of diffusion encoding gradients required for reliable reconstruction of HARDI signals. The method exploits the theory of compressed sensing (CS), which establishes conditions on which a signal of interest can be recovered from its under-sampled measurements, provided that the signal admits a sparse representation in the domain of a linear transform. In the case at hand, the latter is defined to be spherical ridgelet transformation, which excels in sparsifying HARDI signals. What makes the resulting reconstruction procedure even more accurate is a combination of the sparsity constraints in the diffusion domain with additional constraints imposed on the estimated diffusion field in the spatial domain. Accordingly, the present paper describes a novel way to combine the diffusion- and spatial-domain constraints to achieve a maximal reduction in the number of diffusion measurements, while sacrificing little in terms of reconstruction accuracy. Finally, details are provided on a particularly efficient numerical scheme which can be used to solve the aforementioned reconstruction problem by means of standard and readily available estimation tools. The paper is concluded with experimental results which support the practical value of the proposed reconstruction methodology.

The list of accepted NIPS papers came out, but I could not find the actual presentations or papers, so I went ahead and looked up the authors and found the following two: Adaptive compressed sensing – a new class of self-organizing coding models for neuroscience by William K. Coulter, Christopher J. Hillar, Guy Isely, Friedrich T. Sommer. The abstract reads:
Sparse coding networks, which utilize unsupervised learning to maximize coding efficiency, have successfully reproduced response properties found in primary visual cortex [1]. However, conventional sparse coding models require that the coding circuit can fully sample the sensory data in a one-to-one fashion, a requirement not supported by experimental data from the thalamo-cortical projection. To relieve these strict wiring requirements, we propose a sparse coding network constructed by introducing synaptic learning in the framework of compressed sensing. We demonstrate that the new model evolves biologically realistic spatially smooth receptive fields despite the fact that the feedforward connectivity subsamples the input and thus the learning has to rely on an impoverished and distorted account of the original visual data. Further, we demonstrate that the model could form a general scheme of cortical communication: it can form meaningful representations in a secondary sensory area, which receives input from the primary sensory area through a “compressing” cortico-cortical projection. Finally, we prove that our model belongs to a new class of sparse coding algorithms in which recurrent connections are essential in forming the spatial receptive fields.

and Statistical Mechanics of Compressed Sensing by Surya Ganguli, Haim Sompolinsky. The abstract reads:
Compressed sensing (CS) is an important recent advance that shows how to reconstruct sparse high dimensional signals from surprisingly small numbers of random measurements. The nonlinear nature of the reconstruction process poses a challenge to understanding the performance of CS. We employ techniques from the statistical physics of disordered systems to compute the typical behavior of CS as a function of the signal sparsity and measurement density. We find surprising and useful regularities in the nature of errors made by CS, a new phase transition which reveals the possibility of CS for nonnegative signals without optimization, and a new null model for sparse regression.

Also found on Arxiv:
Average best m-term approximation by Jan Vybíral. The abstract reads:
We introduce the concept of average best m-term approximation widths with respect to a probability measure on the unit ball of $\ell_p^n$. We estimate these quantities for the embedding $id: \ell_p^n \to \ell_q^n$ with $0 < p \le q \le \infty$ for the normalized cone and surface measure. Furthermore, we consider certain tensor product weights and show that a typical vector with respect to such a measure exhibits a strong compressible (i.e. nearly sparse) structure.
Detection boundary in sparse regression by Yuri Ingster, Alexander Tsybakov, and Nicolas Verzelen. The abstract reads:
We study the problem of detection of a p-dimensional sparse vector of parameters in the linear regression model with Gaussian noise. We establish the detection boundary, i.e., the necessary and sufficient conditions for the possibility of successful detection as both the sample size n and the dimension p tend to infinity. Testing procedures that achieve this boundary are also exhibited. Our results encompass the high-dimensional setting ($p \gg n$). The main message is that, under some conditions, the detection boundary phenomenon that has been proved for the Gaussian sequence model extends to high-dimensional linear regression. Finally, we establish the detection boundaries when the variance of the noise is unknown. Interestingly, the detection boundaries sometimes depend on the knowledge of the variance in a high-dimensional setting.
Consider a real diagonal deterministic matrix $X_n$ of size $n$ with spectral measure converging to a compactly supported probability measure. We perturb this matrix by adding a random finite rank matrix of a certain form, with delocalized eigenvectors. We show that the joint law of the extreme eigenvalues of the perturbed model satisfies a large deviation principle, in the scale $n$, with a good rate function given by a variational formula. We tackle both cases when the extreme eigenvalues of $X_n$ converge to the edges of the support of the limiting measure and when we allow some eigenvalues of $X_n$, that we call outliers, to converge out of the bulk. We can also generalise our results to the case when $X_n$ is random, with law proportional to $e^{-\mathrm{Trace}\, V(X)}\,dX$, for $V$ growing fast enough at infinity and any perturbation of finite rank with orthonormal eigenvectors.

Consider a deterministic self-adjoint matrix $X_n$ with spectral measure converging to a compactly supported probability measure, the largest and smallest eigenvalues converging to the edges of the limiting measure. We perturb this matrix by adding a random finite rank matrix with delocalized eigenvectors and study the extreme eigenvalues of the deformed model. We show that the eigenvalues converging out of the bulk exhibit Gaussian fluctuations, whereas under additional hypotheses, the eigenvalues sticking to the edges are very close to the eigenvalues of the non-perturbed model and fluctuate in the same scale. We can also generalise these results to the case when $X_n$ is random and get similar behaviour when we deform some classical models such as Wigner or Wishart matrices with rather general entries or the so-called matrix models.

Learning Functions of Few Arbitrary Linear Parameters in High Dimensions by Massimo Fornasier, Karin Schnass, Jan Vybíral. The abstract reads:
Let us assume that $f$ is a continuous function defined on the unit ball of $\mathbb R^d$, of the form $f(x) = g (A x)$, where $A$ is a $k \times d$ matrix and $g$ is a function of $k$ variables for $k \ll d$. We are given a budget $m \in \mathbb N$ of possible point evaluations $f(x_i)$, $i=1,...,m$, of $f$, which we are allowed to query in order to construct a uniform approximating function. Under certain smoothness and variation assumptions on the function $g$, and an {\it arbitrary} choice of the matrix $A$, we present in this paper 1. a sampling choice of the points $\{x_i\}$ drawn at random for each function approximation; 2. an algorithm for computing the approximating function, whose complexity is at most polynomial in the dimension $d$ and in the number $m$ of points. Due to the arbitrariness of $A$, the choice of the sampling points will be according to suitable random distributions and our results hold with overwhelming probability. Our approach uses tools taken from the {\it compressed sensing} framework, recent Chernoff bounds for sums of positive-semidefinite matrices, and classical stability bounds for invariant subspaces of singular value decompositions.
A variant of the Johnson-Lindenstrauss lemma for circulant matrices by Jan Vybíral. The abstract reads:
We continue our study of the Johnson-Lindenstrauss lemma and its connection to circulant matrices started in \cite{HV}. We reduce the bound on $k$ from $k=O(\epsilon^{-2}\log^3n)$ proven there to $k=O(\epsilon^{-2}\log^2n)$. Our technique differs essentially from the one used in \cite{HV}. We employ the discrete Fourier transform and singular value decomposition to deal with the dependency caused by the circulant structure.
An attendant presentation on the subject is here.

Finally we have: A Hierarchical Bayesian Framework for Constructing Sparsity-inducing Priors by Anthony Lee, Francois Caron, Arnaud Doucet, Chris Holmes. The abstract reads:

Variable selection techniques have become increasingly popular amongst statisticians due to an increased number of regression and classification applications involving high-dimensional data where we expect some predictors to be unimportant. In this context, Bayesian variable selection techniques involving Markov chain Monte Carlo exploration of the posterior distribution over models can be prohibitively computationally expensive and so there has been attention paid to quasi-Bayesian approaches such as maximum a posteriori (MAP) estimation using priors that induce sparsity in such estimates. We focus on this latter approach, expanding on the hierarchies proposed to date to provide a Bayesian interpretation and generalization of state-of-the-art penalized optimization approaches and providing simultaneously a natural way to include prior information about parameters within this framework. We give examples of how to use this hierarchy to compute MAP estimates for linear and logistic regression as well as sparse precision-matrix estimates in Gaussian graphical models. In addition, an adaptive group lasso method is derived using the framework
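For context (my own one-line recap of the connection the abstract builds on, not a quote from the paper): with a Gaussian likelihood $y \mid \beta \sim \mathcal{N}(X\beta, \sigma^2 I)$ and an i.i.d. Laplace prior $p(\beta) \propto \exp(-\lambda \|\beta\|_1)$, the MAP estimate is exactly an l1-penalized least squares (lasso) problem:

$\hat{\beta}_{\mathrm{MAP}} = \arg\max_{\beta}\, p(\beta \mid y) = \arg\min_{\beta}\, \frac{1}{2\sigma^2}\|y - X\beta\|_2^2 + \lambda \|\beta\|_1.$

The hierarchies in the paper generalize this correspondence to other state-of-the-art sparsity-inducing penalties.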

A new version of Compressed Genotyping is available.



Image Credit: NASA/JPL/Space Science Institute
N00163047.jpg was taken on September 10, 2010 and received on Earth September 11, 2010. The camera was pointing toward JANUS, and the image was taken using the CL1 and CL2 filters.

Monday, August 02, 2010

Towards a Mathematical Theory of Cortical Micro-circuits

I just noticed this paper from PLOS: Towards a Mathematical Theory of Cortical Micro-circuits by Dileep George, Jeff Hawkins. The abstract reads:
The theoretical setting of hierarchical Bayesian inference is gaining acceptance as a framework for understanding cortical computation. In this paper, we describe how Bayesian belief propagation in a spatio-temporal hierarchical model, called Hierarchical Temporal Memory (HTM), can lead to a mathematical model for cortical circuits. An HTM node is abstracted using a coincidence detector and a mixture of Markov chains. Bayesian belief propagation equations for such an HTM node define a set of functional constraints for a neuronal implementation. Anatomical data provide a contrasting set of organizational constraints. The combination of these two constraints suggests a theoretically derived interpretation for many anatomical and physiological features and predicts several others. We describe the pattern recognition capabilities of HTM networks and demonstrate the application of the derived circuits for modeling the subjective contour effect. We also discuss how the theory and the circuit can be extended to explain cortical features that are not explained by the current model and describe testable predictions that can be derived from the model.
You can create your own vision experiment or try some of the Vision Demos for free at the company created to support this model (Numenta). I also note from the paper:
In the case of a simplified generative model, an HTM node remembers all the coincidence patterns that are generated by the generative model. In real world cases, where it is not possible to store all coincidences encountered during learning, we have found that storing a fixed number of a random selection of the coincidence patterns is sufficient as long as we allow multiple coincidence patterns to be active at the same time. Motivation for this method came from the field of compressed sensing [20]. The HMAX model of visual cortex [21] and some versions of convolutional neural networks [22] also use this strategy. We have found that reasonable results can be achieved with a wide range of the number of coincidences stored.
Compressed Sensing and HMAX in the same sentence, uhh..., this echoes some of the observations made a while back here:
This is absolutely outstanding!

By the way, I still own a Handspring Visor, a machine created by Jeff Hawkins. Let us hope that this software breaks a new market like the Visor did in its time.
Here is a video of Jeff at TED back in 2003. When are we going to have a speaker at TED on Compressed Sensing proper ? Hey, I volunteer for any one of the technologies featured in These Technologies Do Not Exist.





Photo: Lincoln from afar and up close, Dali Museum in Rosas.

Friday, April 23, 2010

Octopus smarts

In the past week, everybody has been amused by the video of an octopus picking up a camera and "running" with it. Here is the video:



But in terms of being in awe, you should really check out this segment from the TV show Thalassa (a show on French television focused on sea-related matters). I cannot seem to embed the video here, so here is the address:

It will be on the website for a little while before it is removed. The narration is in French (so you may want to reduce the volume). The actual octopus part of the show starts at 55 minutes and 5 seconds. It is just amazing.

To give you an idea,
  • At 57:15, you can see an octopus "walk" between different pools to follow humans or to go get some food in the other pools.
  • At 59:05, you'll see an octopus learn how to unscrew a can cap on its own.
  • The most amazing thing to me is the different shapes it can take (at 1h19:10 they show the different shapes and colors it takes in different circumstances).
  • At 1h37, you can see David Edelman doing some experiments on the peer-learning process of an octopus
  • and more...

Being totally fascinated by the subject, I asked David when he would publish on this. He responded with:

Thanks for your email, as well as interest in our work. The studies with Dr. Fiorito in Naples are ongoing; we hope to publish later in the year. I'll keep you apprised of our progress.


I can't wait!

Saturday, February 13, 2010

CS: Acoustic Kaleidoscope, reducing interference in EEG and visual resolution and cone spacing in the human fovea.


While attending Maxime Dahan's seminar on fluorescence imaging using compressed sensing, one of the audience members - I'm told it was Mickael Tanter - mentioned the time reversal kaleidoscope, which has some very eerie similarity with other concepts that use random materials to perform measurements, like the Random Lens Imager. The paper that describes it (behind a paywall) is The time reversal kaleidoscope: a new concept of smart transducers for 3D ultrasonic imaging by Gabriel Montaldo, Delphine Palacio, Mickael Tanter and Mathias Fink. The abstract reads:
The design of 2D arrays for 3D ultrasonic imaging is a major challenge in medical and non-destructive applications. Thousands of transducers are typically needed for beam focusing and steering in 3D volumes. Here, we report a completely new approach for producing 3D images with a small number of transducers using the combined concepts of time reversal mirrors and chaotic reverberating cavities. Due to multiple reverberations inside the cavity, a “kaleidoscopic” transducer array is created with thousands of virtual transducers equivalent to 2D matrices. Beyond the scope of 3D medical imaging, this work leads to the new concept of “smart” transducer.
It is interesting that they can do imaging in this analog fashion. I am sure that a compressive sensing step could provide a computational element that would (dramatically) enhance the resulting images and even provide some superresolution. We'll see.

As some of you know, I am interested in a compressive sensing EEG system. This past week, the following interesting reference showed up on my radar screen: High-quality recording of bioelectric events. Part 1: Interference reduction, theory and practice, by A. C. Metting van Rijn, A. Peper and C. A. Grimbergen.

My webcrawler found the following paper published in Nature: The relationship between visual resolution and cone spacing in the human fovea by Ethan Rossi, Austin Roorda. [supplemental information] [video1][video2] [video3]. The abstract reads:
Visual resolution decreases rapidly outside of the foveal center. The anatomical and physiological basis for this reduction is unclear. We used simultaneous adaptive optics imaging and psychophysical testing to measure cone spacing and resolution across the fovea, and found that resolution was limited by cone spacing only at the foveal center. Immediately outside of the center, resolution was worse than cone spacing predicted and better matched the sampling limit of midget retinal ganglion cells.
This paper led me to another older one: Psychophysical estimate of extrafoveal cone spacing by Nancy J. Coletta and David R. Williams. The abstract reads:
In the extrafoveal retina, interference fringes at spatial frequencies higher than the resolution limit look like two-dimensional spatial noise, the origin of which has not been firmly established. We show that over a limited range of high spatial frequencies this noise takes on a striated appearance, with the striations running perpendicular to the true fringe orientation. A model of cone aliasing based on anatomical measurements of extrafoveal cone position predicts that this orientation reversal should occur when the period of the interference fringe roughly equals the spacing between cones, i.e., when the fringe spatial frequency is about twice the cone Nyquist frequency. Psychophysical measurements of the orientation reversal at retinal eccentricities from 0.75 to 10 deg are in quantitative agreement with this prediction. This agreement implies that at least part of the spatial noise observed under these conditions results from aliasing by the cone mosaic. The orientation reversal provides a psychophysical method for estimating spacing in less regular mosaics, complementing another psychophysical technique for measuring spacing in the more regular mosaic of foveal cones [D. R. Williams, Vision Res. 25, 195 (1985); Vision Res. (submitted)].
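In other words (my paraphrase of the condition stated in the abstract, assuming a roughly regular mosaic with center-to-center cone spacing $s$): the reversal is predicted when the fringe period matches the spacing, that is at $f_{\mathrm{reversal}} \approx 1/s \approx 2 f_{\mathrm{Nyquist}}$, with the cone Nyquist frequency $f_{\mathrm{Nyquist}} \approx 1/(2s)$.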
In it one can read:
Previously, Yellott [reference 16 of that paper] had proposed that disorder of the cone lattice prevents aliasing by smearing high spatial frequencies into broadband noise. The present data confirm that, although the aliasing noise is indeed smeared, it is still accessible to psychophysical observation. In fact, this aliasing noise can be used to draw inferences about the spacing of cones.
In light of this entry on Tilings, Islamic Art, Our Retina, Papoulis Sub-Nyquist Theorem, I had talked to Austin Roorda but never turned that discussion into a blog entry. I am such a putz. I should unearth it sometime.

Credit: NASA / JPL, pale blue dot, earth as seen from Voyager.

Sunday, August 30, 2009

Is your brain pulsing blood in order to work ?


The latest arXiv blog entry features an article entitled The Thermodynamics of Human Reaction Times by Moscoso del Prado. The blog entry makes a claim about the speed at which information travels in the brain (you need to read Moscoso del Prado's update in the blog entry itself). The information processing rate seems to be in the realm of about 60 bits per second. This is fascinating, as this number is close to the 30 bits/second rate found in the information transferred through acoustic mud-pulsing in the Measurement While Drilling techniques used by companies like Schlumberger to transfer information from the tip of the drill bit all the way up to the operator at the surface. Should we expect a similar phenomenon in the brain, where blood is being slightly pulsed ? At the very least, this would definitely provide some type of basis for using fMRI as a way of defining information activity in the brain.

Saturday, April 18, 2009

CS: Compressive Sensing in the Dark ?

I was reading the following summary of the paper The Nucleus Inside Out - Through a Rod Darkly by Tobias Ragoczy and Mark Groudine, whose abstract is:

In the nuclei of eukaryotic cells, euchromatin is located at the center, whereas heterochromatin is found at the periphery and is interspersed in the nucleoplasm. Solovei et al. (2009) now reveal that this normal pattern is reversed in the retinal rod cells of mice. This inversion might serve to maximize light transmission to photoreceptors in nocturnal mammals.



In the paper (see the summary), the researchers did a computation showing how the location of the DNA acts as a focusing element, as in a fiber optic (as shown in the figure above). What triggered my interest in this figure is the following: in the focusing case, an incoming pencil of light gets transferred as two peaks, with the underlying assumption that two large peaks will trigger some type of clean response. In the case of mammals, the incoming pencil of light gets spread around, with the underlying assumption that it will be drowned in noise and will not be detectable.

Here is my stupid question: You and I can see in the dark after some accommodation. Does this mean that our vision system switches to a compressive sensing system when in the dark ? It would give a new meaning to the name of this blog!

Monday, December 08, 2008

CS: GPUCamp, Bring the Popcorn: ETVC'08 videos, Terry Tao's CS presentation and a new version of CVX




Let's bring out the popcorn: Frank Nielsen just let me know that the videos of the talks of ETVC'08, which occurred two weeks ago, are now available. As one can see (below), the list features some of the people and techniques mentioned on this blog before. I particularly like the fact that some of these techniques are going to be increasingly important as folks look at performing tasks directly on the CS measurements as opposed to first performing a reconstruction.
This year, the international colloquium of LIX (Ecole Polytechnique) focuses on the emerging trends and challenges of the foundations of the cross-disciplinary area of visual computing. Visual computing encompasses computational geometry, computer graphics, machine vision and learning (just to name a few), and relies at its very heart on information geometry. Visual computing is underpinning major industrial applications as attested recently by the emerging fields of computational photography, 3D cinematography and advanced biomedical imaging.
The official website of the workshop is here. The videos of the talks are listed below:

Wednesday, October 01, 2008

CS: Tilings, Islamic Art, Our Retina, Papoulis Sub-Nyquist Theorem

[Note: This is an open ended entry, sorry, I am trying to put these pieces together ]

As I was re-reading Yves Meyer's recent presentation entitled Compressed sensing and transference (the paper is here: "A variant of compressed sensing"), I also came across his reference to some Islamic art. If you recall, Yves Meyer has looked at quasicrystal tilings and has seen a connection with how to sample a positive function and reconstruct it with a linear programming step. It so happens that the connection between quasicrystals and Islamic art was made by Peter Lu as a hobby while he was a graduate student. He has a beautiful video presentation on the subject of Quasicrystals in Medieval Islamic Architecture.


I liked the connection to the non-stick pan material that messes up phonon transport. At 45 minutes into the presentation, Peter begins to explain the non-periodic aspect of these tilings, and the last question, from a person in the audience, is potentially of interest when it comes to sampling on manifolds.

The Science paper is here and the additional material is here. Paul Steinhardt, the co-author of the study, has a webpage describing the tiling and goes on to say in the "implications" section:



The new paradigm implies a closer physical relationship between quasicrystals and crystals. Now it appears that both can be described in terms of the close-packing of a single cluster or unit cell. In a crystal, the unit cell packs edge-to-edge with its neighbors. Quasicrystals correspond to a generalization in which the quasi-unit cells overlap. In both cases, the formation of the particular structure appears to be explained by a low-energy atomic cluster, although the atomic arrangement in the case of quasicrystals is constrained to allow overlap. Hence, the new paradigm makes plausible why many materials form quasicrystals and, at the same time, explains why quasicrystals are less common than crystals.....The new paradigm requires a mechanism to explain how quasicrystals grow. If the quasicrystals are grown slowly, then thermodynamic relaxation to the ground state is possible. However, some of the most perfect quasicrystal samples, including AlNiCo , are formed by rapid quenching. In this volume, Socolar has described a scheme for solids equivalent to Penrose patterns based on obtuse and acute rhombi using vertex rules and stochastic growth similar to diffusion limited aggregation. This approach can be adapted to overlapping clusters. (Janot[13] has already suggested a similar mechanism for overlapping clusters, although his vertex rules allow random tilings as well as perfect quasiperiodic tilings.) If quasicrystals form due to a particular cluster being energetically favored, a simpler kinematic mechanism may be through local atomic rearrangement that increases the local density of the given atomic cluster.

Hmmm, so we don't have a good way of explaining away the difference between random tilings and quasicrystal ones. It looks like we have a similar issue in compressive sampling, where the tiling approach of Yves Meyer and Basarab Matei is not well connected to the generic random projections used in compressive sensing (except maybe through the suggestion of Thong Do, Trac Tran and Lu Gan to include them in the Structurally Random Matrices approach; in this case the analog FFT is performed by the lens). Furthermore, we still have to characterize whether the pixel tiling of our own retina is really that random, as seen in these exceptional shots from Austin Roorda at Berkeley [17].



where one can note some type of hexagonal tiling in the Fourier domain [16]

While I was reading these papers, a paper by Yue M. Lu, Minh Do and Richard Laugesen came out. It is entitled:



A computable Fourier condition generating alias-free sampling lattices. The abstract reads:

We propose a Fourier analytical condition linking alias-free sampling with the Fourier transform of the indicator function defined on the given frequency support. Our discussions center around how to develop practical computation algorithms based on the proposed analytical condition. We address several issues along this line, including the derivation of simple closed-form expressions for the Fourier transforms of the indicator functions defined on arbitrary polygonal and polyhedral domains; a complete and nonredundant enumeration of all quantized sampling lattices via the Hermite normal forms of integer matrices; and a quantitative analysis of the approximation of the original infinite Fourier condition by using finite computations. Combining these results, we propose a computational testing procedure that can efficiently search for the optimal alias-free sampling lattices for a given polygonal or polyhedral shaped frequency domain. Several examples are presented to show the potential of the proposed algorithm in multidimensional filter bank design, as well as in applications involving the design of efficient sampling patterns for multidimensional bandlimited signals.

an extract of this paper reads:

On a broader scale, alias-free sampling is mathematically equivalent to the lattice packing of a given domain, for which lots of studies can be found in disciplines such as computational geometry and operational research. So far, most practical algorithms proposed for densest lattice packing (e.g. [14]–[16]) approach the problem from a geometrical perspective. The primary tools employed are the theories from Minkowski’s work [13], as well as various geometrical intuitions and heuristics obtained for particular domains in lower dimensions......Before proceeding, we make some remarks on the scope of this paper. First, we restrict our attention to the scenario in which the continuous signals are sampled on a single lattice. Consequently, the minimum sampling rate we pursue is the Nyquist rate, which is achieved by those lattices that can pack the frequency support D in the tightest way. We note that it is possible to go below the Nyquist rate if we can sample the continuous signals with multiple channels and on multiple lattices (see, e.g., [19], [20]).
Let us recall that the subdivision scheme in the quasicrystal presentation seems to clearly fall under the class of multiple lattices. When doing a search on the IEEE Xplore database, references 1 through 15 below show up when referring to Papoulis' 1977 Generalized Sampling Expansion [5]. I note hardware based on this sub-Nyquist sampling approach in [15]. I still cannot make much of it, so if somebody has an insight, I would gladly welcome it.
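To make the alias-free condition above a little more concrete, here is a crude numerical sketch (my own toy version, not the Fourier-domain test of the paper): sampling on the lattice generated by a matrix M replicates the spectrum on the dual lattice $2\pi M^{-T}\mathbb{Z}^2$, and the lattice is alias-free for a given frequency support if none of the nonzero replicas overlaps that support.

    import numpy as np

    def alias_free(M, support_pts, k_range=3, tol=1e-9):
        # Crude check: the sampling lattice generated by M (columns are lattice
        # vectors) is taken to be alias-free for a frequency support given as a
        # point cloud if no shifted replica of the support overlaps the original.
        dual = 2 * np.pi * np.linalg.inv(M).T      # dual (reciprocal) lattice generator
        lo, hi = support_pts.min(0), support_pts.max(0)
        for i in range(-k_range, k_range + 1):
            for j in range(-k_range, k_range + 1):
                if i == 0 and j == 0:
                    continue
                shifted = support_pts + dual @ np.array([i, j])
                # rough overlap test against the support's bounding box
                inside = np.all((shifted >= lo - tol) & (shifted <= hi + tol), axis=1)
                if inside.any():
                    return False
        return True

    # Toy example: square frequency support [-1, 1]^2 sampled on square lattices.
    g = np.linspace(-1, 1, 21)
    support = np.array([[u, v] for u in g for v in g])
    print(alias_free(0.9 * np.pi * np.eye(2), support))    # period 0.9*pi < pi: no aliasing expected
    print(alias_free(1.5 * np.pi * np.eye(2), support))    # period 1.5*pi > pi: aliasing expected

A real test, as in the paper, works with the Fourier transform of the indicator function of the support rather than point clouds, which is what makes the search over Hermite normal forms of integer matrices practical.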

Talking about Compressive Sampling, the Computer Science Department at University of Alberta has a reading group on the subject.


References:

1. On well-posedness of the Papoulis generalized sampling expansion, Brown, J.L., Jr.; Cabrera, S.D.; Circuits and Systems, IEEE Transactions on, Volume 38, Issue 5, May 1991 Page(s):554 - 556

2. Generalized sampling expansion on lattices, Izen, S.H.; Signal Processing, IEEE Transactions on [see also Acoustics, Speech, and Signal Processing, IEEE Transactions on], Volume 53, Issue 6, June 2005 Page(s):1949 - 1963

3. Optimal generalized sampling expansion, Seidner, D.; Feder, M.; Acoustics, Speech, and Signal Processing, 1999. ICASSP '99. Proceedings., 1999 IEEE International Conference on, Volume 3, 15-19 March 1999 Page(s):1637 - 1640 vol.3

4. On the necessity of Papoulis' result for multidimensional GSE, Feuer, A.; Signal Processing Letters, IEEE. Volume 11, Issue 4, April 2004 Page(s):420 - 422

5. Generalized sampling expansion, Papoulis, A.; Circuits and Systems, IEEE Transactions on, Volume 24, Issue 11, Nov 1977 Page(s):652 - 654

6. Extension of a finite version of the sampling theorem, De Sabata, A.; Circuits and Systems II: Analog and Digital Signal Processing, IEEE Transactions on. Volume 41, Issue 12, Dec. 1994

7. Incomplete sampling series and the recovery of missing samples from oversampled band-limited signals, Ferreira, P.J.S.G.; Signal Processing, IEEE Transactions on [see also Acoustics, Speech, and Signal Processing, IEEE Transactions on], Volume 40, Issue 1, Jan. 1992 Page(s):225 - 227

9. Filterbank reconstruction of bandlimited signals from nonuniform and generalized samples, Eldar, Y.C.; Oppenheim, A.V.; Signal Processing, IEEE Transactions on [see also Acoustics, Speech, and Signal Processing, IEEE Transactions on], Volume 48, Issue 10, Oct. 2000 Page(s):2864 - 2875

10. Vector sampling expansion, Seidner, D.; Feder, M.; Signal Processing, IEEE Transactions on [see also Acoustics, Speech, and Signal Processing, IEEE Transactions on] Volume 48, Issue 5, May 2000 Page(s):1401 - 1416

11. Filter bank interpolation and reconstruction from generalized and recurrent nonuniform samples, Eldar, Y.C.; Oppenheim, A.V.; Acoustics, Speech, and Signal Processing, 2000. ICASSP '00. Proceedings. 2000 IEEE International Conference on, Volume 1, 5-9 June 2000 Page(s):324 - 327 vol.1

12. 3D super-resolution using generalized sampling expansion, Shekarforoush, H.; Berthod, M.; Zerubia, J.; Image Processing, 1995. Proceedings., International Conference on, Volume 2, 23-26 Oct. 1995 Page(s):300 - 303 vol.2

13. Sampling reconstruction of N-dimensional band-limited images after multilinear filtering, Brown, J.L., Jr.; Sa-ngsari, K.; Circuits and Systems, IEEE Transactions on Volume 36, Issue 7, July 1989 Page(s):1035 - 1038

14. On generalized sampling expansions for deterministic signals, Figueiras-Vidal, A.; Marino-Acebal, J.; Gomez, R.; Circuits and Systems, IEEE Transactions on Volume 28, Issue 2, Feb 1981 Page(s):153 - 154

15. Sampling below the Nyquist rate in interferometric fluorescence microscopy with multi-wavelength measurements to remove aliasing, Davis, B.J.; Karl, W.C.; Goldberg, B.B.; Swan, A.K.; Unlu, M.S.; Digital Signal Processing Workshop, 2004 and the 3rd IEEE Signal Processing Education Workshop. 2004 IEEE 11th, 1-4 Aug. 2004 Page(s):329 - 333

16. Duncan, J.L., Zhang, Y., Gandhi, J., Nakanishi, C., Othman, M., Branham, K.H., Swaroop, A., Roorda, A., High resolution imaging of foveal cones in patients with inherited retinal degenerations using adaptive optics, Invest. Ophthalmol. Vis. Sci. 48: 3283-3291 (2007)

17. Roorda, A., Williams, D.R., The Arrangement of the Three Cone Classes in the Living Human Eye, Nature 397, 520-522 (1999).

Tuesday, September 02, 2008

CS: Noninvasive Leakage Power Tomography of Integrated Circuits by CS and Dense Error Correction via l1 Minimization

Today, we have two very interesting papers from the same time zone:

First, here is another example of inverse problems for electrical engineers investigated by Davood Shamsi, Petros Boufounos and Farinaz Koushanfar in Noninvasive Leakage Power Tomography of Integrated Circuits by Compressive Sensing. The abstract reads:

We introduce a new methodology for noninvasive post-silicon characterization of the unique static power profile (tomogram) of each manufactured chip. The total chip leakage is measured for multiple input vectors in a linear optimization framework where the unknowns are the gate leakage variations. We propose compressive sensing for fast extraction of the unknowns since the leakage tomogram contains correlations and can be sparsely represented. A key advantage of our approach is that it provides leakage variation estimates even for inaccessible gates. Experiments show that the methodology enables fast and accurate noninvasive extraction of leakage power characteristics.

It looks as though one maps the chip onto an image map in order to "sparsify" the model and then uses compressive sensing to perform the equivalent of a group testing procedure, yielding a 30% improvement over an l2-based technique.
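Here is a minimal sketch of that group-testing reading of the problem (a generic synthetic setup with a plain ISTA solver, not the actual sensing model or algorithm of the paper): each measurement is a total leakage under one input vector, the unknowns are per-gate leakage deviations, and sparsity plus non-negativity is what makes the underdetermined system solvable.

    import numpy as np

    rng = np.random.default_rng(4)
    n_gates, n_meas = 400, 200

    # Each row indicates which gates are in their high-leakage state for one
    # input vector (a made-up 0/1 sensing model, for illustration only).
    A = rng.integers(0, 2, size=(n_meas, n_gates)).astype(float)

    # Sparse non-negative leakage deviations: only a few gates deviate from nominal.
    x_true = np.zeros(n_gates)
    x_true[rng.choice(n_gates, 10, replace=False)] = rng.uniform(1, 3, 10)
    y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

    # Sparse non-negative recovery by ISTA (a generic l1 solver, nothing more).
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    lam = 0.01 * np.max(np.abs(A.T @ y))
    x = np.zeros(n_gates)
    for _ in range(3000):
        x = x - A.T @ (A @ x - y) / L
        x = np.maximum(x - lam / L, 0.0)        # soft threshold + non-negativity

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

The soft-threshold step folds in the non-negativity of the deviations, which is the simplest way to encode the prior that, in this toy model, gates only leak more than nominal.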

The second paper is another intriguing paper by John Wright and Yi Ma, who are some of the authors behind the face recognition technique using l1 minimization. Well, this time, they go further in a work that is likely to yield some penetrating results. As Yi Ma mentioned to me, they show that "... it is probably the first work showing L1 can actually deal with dense errors or signals..". If one of you thinks otherwise, please let the authors know.

In Dense Error Correction via l1 Minimization, John Wright and Yi Ma introduce us to the following:

This paper studies the problem of recovering a non-negative sparse signal x from highly corrupted linear measurements y = Ax + e, where e is an unknown error vector whose nonzero entries may be unbounded. Motivated by an observation from face recognition in computer vision, this paper proves that for highly correlated (and possibly overcomplete) dictionaries A, any non-negative, sufficiently sparse signal x can be recovered by solving an l1-minimization problem:

min ||x||_1 + ||e||_1 subject to y = Ax + e

More precisely, if the fraction of errors is bounded away from one and the support of x grows sublinearly in the dimension m of the observation, then as m goes to infinity, the above l1-minimization succeeds for all signals x and almost all sign-and-support patterns of e. This result suggests that accurate recovery of sparse signals is possible and computationally feasible even with nearly 100% of the observations corrupted. The proof relies on a careful characterization of the faces of a convex polytope spanned together by the standard cross-polytope and a set of iid Gaussian vectors with nonzero mean and small variance, which we call the “cross-and-bouquet” model. Simulations and experimental results corroborate the findings, and suggest extensions to the result.

One way to look at this is that l1-minimization-based codes produce results that OMP-like algorithms cannot, which in effect would give them a second life :). More fascinating is the real possibility that, following this paper, we begin to obtain non-trivial results on how the primary visual cortex works. Recall that we all recognize the faces of family members in very cluttered scenes; the results of this paper show that this type of classification is very robust, which is the reason I really, really like it. For the moment, I am not quite sure I understand the RIP condition of Lemma 5 of the Appendix, but this is just a question of letting it sink in. I'd rather post about it here first.
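For readers who want to play with the optimization above, here is a minimal sketch using a generic LP solver (the dimensions, the toy version of the cross-and-bouquet dictionary and the corruption level are all made up; this is not the authors' code):

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(5)
    m, n, k = 300, 100, 5

    # Toy "cross-and-bouquet"-style dictionary: tightly bundled columns with a
    # common nonzero mean and small variance, then normalized.
    A = 1.0 + 0.1 * rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=0)

    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)

    e_true = np.zeros(m)
    corrupted = rng.choice(m, int(0.3 * m), replace=False)   # 30% gross errors
    e_true[corrupted] = rng.uniform(-5, 5, corrupted.size)

    y = A @ x_true + e_true

    # LP form of  min ||x||_1 + ||e||_1  s.t.  y = A x + e,  x >= 0:
    # variables are (x, e_plus, e_minus), all non-negative, with e = e_plus - e_minus.
    c = np.ones(n + 2 * m)
    A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
    res = linprog(c, A_eq=A_eq, b_eq=y, method="highs")

    x_hat = res.x[:n]
    e_hat = res.x[n:n + m] - res.x[n + m:]
    print("x recovery error:", np.linalg.norm(x_hat - x_true))
    print("e recovery error:", np.linalg.norm(e_hat - e_true))

The split of e into positive and negative parts is just the standard trick for turning an l1 objective into a linear program; the non-negativity of x is used directly, as in the statement of the problem.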

Friday, August 15, 2008

CS: Solving the Helmholtz equation in the ground and in the air, Large Scale Approximate Inference and the source code for a model of the visual cortex



The folks at SLIM/UBC (Felix Herrmann, Yogi Erlangga and Tim Lin) are at it again: solving the Helmholtz equation. A good way of introducing oneself to the compressive sensing approach is to read the presentation entitled Compressive sampling meets seismic imaging, from which the two images are extracted. The abstract of the presentation reads:

Compressive sensing has led to fundamental new insights in the recovery of compressible signals from sub-Nyquist samplings. It is shown how jittered subsampling can be used to create favorable recovery conditions. Applications include mitigation of incomplete acquisitions and wavefield computations. While the former is a direct adaptation of compressive sampling, the latter application represents a new way of compressing wavefield extrapolation operators. Operators are not diagonalized but are compressively sampled reducing the computational costs.

A more in-depth analysis can be found in Compressive simultaneous full-waveform simulation by the same authors Felix Herrmann, Yogi Erlangga and Tim Lin. The abstract reads:


The fact that the numerical complexity of wavefield simulation is proportional to the size of the discretized model and acquisition geometry, and not to the complexity of the simulated wavefield, is the main impediment within seismic imaging. By turning simulation into a compressive sensing problem---where simulated data is recovered from a relatively small number of independent simultaneous sources---we remove this impediment by showing that compressively sampling a simulation is equivalent to compressively sampling the sources, followed by solving a reduced system. As in compressive sensing, this allows for a reduction in sampling rate and hence in simulation costs. We demonstrate this principle for the time-harmonic Helmholtz solver. The solution is computed by inverting the reduced system, followed by a recovery of the full wavefield with a sparsity promoting program. Depending on the wavefield's sparsity, this approach can lead to significant cost reductions, in particular when combined with the implicit preconditioned Helmholtz solver, which is known to converge even for decreasing mesh sizes and increasing angular frequencies. These properties make our scheme a viable alternative to explicit time-domain finite-differences.
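My shorthand for the idea (not a quote from the paper, relying only on the linearity of the time-harmonic Helmholtz system): if $H u_i = q_i$ for sources $q_1, \dots, q_N$, then for any random mixing matrix $W$ of size $N \times n$ with $n \ll N$,

$H\,(U W) = Q W, \qquad U = [u_1, \dots, u_N], \; Q = [q_1, \dots, q_N],$

so one only inverts $H$ against the $n$ simultaneous right-hand sides $Q W$, and the full wavefield $U$ is then recovered from the compressed wavefields $U W$ with a sparsity-promoting program, as in a standard compressive sensing recovery.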

In a different area of engineering, one still needs to solve the Helmholtz equation for target imaging, as shown in Three-dimensional sparse-aperture moving-target imaging by Matthew Ferrara, Julie Jackson, Mark Stuff. The abstract reads:

If a target’s motion can be determined, the problem of reconstructing a 3D target image becomes a sparse aperture imaging problem. That is, the data lies on a random trajectory in k-space, which constitutes a sparse data collection that yields very low-resolution images if backprojection or other standard imaging techniques are used. This paper investigates two moving-target imaging algorithms: the first is a greedy algorithm based on the CLEAN technique, and the second is a version of Basis Pursuit Denoising. The two imaging algorithms are compared for a realistic moving-target motion history applied to a Xpatch-generated backhoe data set.

Matthias Seeger produced a talk on Large Scale Approximate Inference and Experimental Design for Sparse Linear Models. The video of the talk can be found here.


A while ago, I drew some comparisons between this model of the primary visual cortex (A Feedforward Architecture Accounts for Rapid Categorization) and compressive sensing. The authors have just released the source code. This code provides a framework for reproducing the main experimental result described in: Thomas Serre, Aude Oliva and Tomaso Poggio, "A feedforward architecture accounts for rapid categorization", Proceedings of the National Academy of Sciences, vol. 104, no. 15, 2007. The description and installation instructions are here. One can download the code here.


Wednesday, April 09, 2008

Compressed Sensing: a blog, Autonomous geometric precision error estimation in low-level computer vision tasks, Neuroscience Datasets

3D rendering of a DEM used for the topography of Mars. Image via Wikipedia.
I just recently found a blog that seems to focus on compressed sensing. In all, I have to say that Google is not doing a great job of taking into account labels from its own platform. In particular, I have found some odd discrepancies between the PageRank of some pages and the number of people who use Google Reader to view them. Anyway, the blog of Andrés Corrada-Emmanuel, entitled De Rerum Natura, brings up some interesting applications in the determination of digital elevation models (DEMs) from photographs, as he has to deal with underdetermined systems of equations. The postings are at:


He just had his paper accepted at ICML 08. The title is Autonomous geometric precision error estimation in low-level computer vision tasks by Andrés Corrada-Emmanuel and Howard Schultz. The abstract reads:

Errors in map-making tasks using computer vision are sparse. We demonstrate this by considering the construction of digital elevation models that employ stereo matching algorithms to triangulate real-world points. This sparsity, coupled with a geometric theory of errors recently developed by the authors, allows for autonomous agents to calculate their own precision independently of ground truth. We connect these developments with recent advances in the mathematics of sparse signal reconstruction or compressed sensing. The theory presented here extends the autonomy of 3-D model reconstructions discovered in the 1990s to their errors.

This is new. I don't think I have ever seen anybody talk about precision errors being sparse signals before.


With the Netflix competition, we saw a lot of very interesting developments. One of them, as Andrew Gelman points out, is the fact that with a flurry of algorithms to choose from, data is paramount in leading to new and improved findings. An example of something similar in neuroscience is the neuron challenge mentioned before. However, as echoed in this entry by Hal Daume III (Those Darn Biologists...), there is very little incentive for biologists to publish their results with an analysis using new algorithms: this is not what gets their papers published. A new initiative may help in that respect: the CRCNS - Collaborative Research in Computational Neuroscience - data sharing activity is making biological/neuroscience datasets available for download here. This is a prodigious idea. It currently lists datasets in:



The following additional data sets will be available by about June 2008:
  • V4 responses to synthetic parametric stimuli. (From Jack Gallant lab, UC Berkeley).
  • Responses in areas V1, V2 and V4 using precisely the same stimuli. Data collected to facilitate functional comparisons across successive stages of sensory processing. (From Jack Gallant lab, UC Berkeley).
  • Tutorial on understanding intracellular recordings in sensory areas and accompanying data. The data are intracellular (whole-cell patch) recordings obtained in vivo from visual, auditory, somatosensory, and motor areas of the neocortex by the laboratories of Judith Hirsch, USC; Anthony Zador, CSHL; Michael DeWeese, UC Berkeley and Michael Brecht, Humboldt University Berlin. These data include not only spikes but also membrane voltages or currents generated by synaptic connections and intrinsic membrane channels.
  • Synaptic plasticity data – Cortical slice data acquired in order to examine the effects of complex spike trains in the induction of long-term synaptic modification and recordings of primary visual cortical neurons made during stimulation. (From Yang Dan lab, UC Berkeley).
  • Recordings from hippocampal CA1 neurons during open field foraging. (From Buzsáki lab, Rutgers University).

