Friday, June 28, 2013

Around the blogs in 78 hours: CIMI2013, CVPR, Listening in The Wild, ERMITES 2011 and more....

Besides CVPR this week, there was also the CIMI2013 workshop on "Optimization and Statistics in Image Processing" in Toulouse, which I heard about through Pierre's twitter feed. A program at-a-glance is available online here, and a pdf version of the program is available here.
Slides of the talks are provided below (when available).

Monday 24th, 08:30 - 09:30 : Registration

Monday 24th, 09:30 - 10:15 : Opening
Chairs: Michel Ledoux, François Malgouyres, Denis Kouamé and Jean-Yves Tourneret

Monday 24th, 10:15 - 11:45 : Classification & segmentation
Chairs: Mario Figueiredo and Jean-Yves Tourneret
10:15 - 11:00 : Alfred Hero, University of Michigan, USA
11:00 - 11:45 : Raymond Chan, Chinese University of Hong-Kong, China
Monday 24th, 13:30 - 15:00 : Bayesian methods I
Chair: José Bioucas Dias
13:30 - 14:15 : Stephen McLaughlin, Heriot-Watt University, Edinburgh, Scotland
14:15 - 15:00 : Xavier Descombes, INRIA Sophia-Antipolis, France
Monday 24th, 15:30 - 17:00 : Biomedical imaging I
Chairs: Jeffrey Fessler and Denis Kouamé
15:30 - 16:15 : Jean-Philippe Thiran, Ecole Polytechnique Fédérale de Lausanne, Switzerland
16:15 - 17:00 : Jean-Christophe Olivo-Marin, Institut Pasteur, France
Tuesday 25th, 09:00 - 11:45 : Graphical and geometry-based methods I
Chairs: Raymond Chan, François Malgouyres and Steve McLaughlin
09:00 - 09:45 : Clem Karl, Boston University, USA
09:45 - 10:30 : Olivier Lezoray, Université de Caen, France
11:00 - 11:45 : Boaz Nadler, Weizmann Inst. of Science, Israel
Tuesday 25th, 13:30 - 15:00 : Bayesian methods II
Chairs: Xavier Descombes and Nicolas Dobigeon
13:30 - 14:15 : Rafael Molina, Universidad de Granada, Spain
14:15 - 15:00 : Philippe Ciuciu, CEA Saclay, France
Tuesday 25th, 15:30 - 17:00 : Biomedical imaging II
Chairs: Jean-Christophe Olivo-Marin and Adrian Basarab
15:30 - 16:15 : Jeffrey Fessler, University of Michigan, USA
16:15 - 17:00 : Françoise Peyrin, INSA de Lyon, France
Wednesday 26th, 09:00 - 11:45 : Theoretical results in inverse problems I
Chairs: Mike Davies and Steve McLaughlin
09:00 - 09:45 : Gabriel Peyré, Université Paris Dauphine, France
09:45 - 10:30 : Jalal Fadili, Université de Caen, France
11:00 - 11:45 : Mike Davies, University of Edinburgh, Edinburgh, Scotland
Wednesday 26th, 13:30 - 15:00 : Theoretical results in inverse problems II
Chair: Philippe Ciuciu
13:30 - 14:15 : Mila Nikolova, Ecole Normale Supérieure de Cachan, France
14:15 - 15:00 : Cédric Herzet, INRIA Rennes, France
Wednesday 26th, 15:30 - 17:00 : Filtering
Chairs: Jérôme Idier and Herwig Wendt
15:30 - 16:15 : Peyman Milanfar, University of California, Santa Cruz, USA
16:15 - 17:00 : Fredrik Andersson, Lund University, Sweden
Thursday 27th, 09:00 - 11:45 : Compressed sensing/imaging
Chairs: Gabriel Peyré, Ami Wiesel and Guillermo Sapiro
09:00 - 09:45 : Yonina Eldar, Technion, Israel
09:45 - 10:30 : José M. Bioucas-Dias, Instituto Superior Técnico, Lisboa, Portugal
11:00 - 11:45 : Pierre Vandergheynst, Ecole Polytechnique Fédérale de Lausanne, Switzerland
Thursday 27th, 13:30 - 15:00 : Algorithms for inverse problems I
Chairs: José Bioucas Dias and Yonina Eldar
13:30 - 14:15 : Guillermo Sapiro, University of Minnesota, USA
14:15 - 15:00 : Mario Figueiredo, Instituto Superior Técnico, Lisboa, Portugal
Thursday 27th, 15:30 - 17:00 : Algorithms for inverse problems II
Chairs: Jérémie Bigot and Mila Nikolova
15:30 - 16:15 : Daniel Cremers, Technische Universität München, Germany
16:15 - 17:00 : Jérôme Idier, IRCCyN, France
Friday 28th, 09:00 - 10:15 : Poster Session

Friday 28th, 10:15 - 11:45 : Graphical and geometry-based methods II
Chair: Alfred O. Hero
10:15 - 11:00 : Gabriele Steidl, Technische Universität Kaiserslautern, Germany
11:00 - 11:45 : Ami Wiesel, Hebrew University of Jerusalem, Israel
Friday 28th, 14:00 - 16:00 : CIMI Colloquium
Chairs: Denis Kouamé, François Malgouyres and Jean-Yves Tourneret

In particular, I liked this tweet:

Because there is indeed a sense that dictionaries should be fed with the results of (very complex) forward models, such as those produced by codes like MCNP or GEANT4.

Also this week, Mark alerted me to the Listening in the Wild meeting in London. The program and abstract booklet is here. This sort of research, and the work listed after it, fits right into the "Just on the right side of Impossible" business models (i.e., Communicating with Animals) I mentioned a while back.

INVITED SPEAKERS:
Thierry Aubin (Université Paris Sud, Orsay)
Communication in seabird colonies: vocal recognition in a noisy world
David Clayton (Queen Mary University of London)
Investigating a link between vocal learning and rhythm perception using the zebra finch as a model animal
Maria Chait (University College London)
Change detection in complex acoustic scenes
Richard Turner (University of Cambridge)
Auditory scene analysis and the statistics of natural sounds
Marc Naguib (Wageningen University, The Netherlands)
Noise effects on communication in song birds
Jon Barker (University of Sheffield)
Machine listening in unpredictable "multisource" environments: Lessons learnt from the CHiME speech recognition challenges
Rachele Malavasi (Institute for Coastal Marine Environment, Oristano)
Auditory objects in a complex acoustic environment: the case of bird choruses
Mathieu Lagrange (IRCAM, Paris & IRCCYN, Nantes)
Machine Listening in Complex Environments: Some challenges in understanding musical and environmental sounds
Dan Stowell (Queen Mary University of London)
Machine listening for birds: analysis techniques matched to the characteristics of bird vocalisations
Which reminded me of an email from Hervé Glotin a while back pointing to the ERMITES 2011 workshop on Sparse Decomposition, Contraction and Structuration for Complex Scene Analysis. Videos and slides are available directly from the booklet (lectures are in French but the slides are in English), and at least one presentation was about the sound of whales and more.

J-P. Haton
« Analyse de Scène et Reconnaissance Stochastique de la Parole » (Scene Analysis and Stochastic Speech Recognition)
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_JP_Haton_1sur4.mp4
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_JP_Haton_2sur4.mp4
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_JP_Haton_3sur4.mp4
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_JP_Haton_4sur4.mp4
X. Halkias
 « Detection and Tracking of Dolphin Vocalizations »
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Halkias.mp4
M. Kowalski
« Sparsity and structure for audio signal: a *-lasso therapy »
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Kowalski_1sur5.mp4
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Kowalski_2sur5.mp4
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Kowalski_3sur5.mp4
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Kowalski_4sur5.mp4
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Kowalski_5sur5.mp4
J. Razik
 « Sparse coding : from speech to whales »
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Razik.mp4
Y. Bengio
« Apprentissage Non-Supervisé de Représentations Profondes » (Unsupervised Learning of Deep Representations)
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Y_Bengio_1sur4.mp4
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Y_Bengio_2sur4.mp4
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Y_Bengio_3sur4.mp4
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Y_Bengio_4sur4.mp4
O. Adam
« Estimation de Densité de Population de Baleines par Analyse de leurs Chants » (Estimating Whale Population Density by Analyzing their Songs)
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Adam.mp4
S. Mallat
 « Scattering & Matching Pursuit for Acoustic Sources Separation »
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Mallat_1sur3.mp4
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Mallat_2sur3.mp4
http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Mallat_3sur3.mp4
If you are interested in that subject and more, you might even be interested in this 30th International workshop on Cetacean echolocation and outer space neutrinos: ethology and physics for an interdisciplinary approach to underwater bioacoustics and astrophysical particles detection. What's the connection between whales and neutrinos? Check the program out (and note that most neutrino detectors are located in the deep sea).


Finally, on the blogs, we had the ever interesting:


Dusty: TWO CHAPTERS IN FINITE FRAMES: THEORY AND APPLICATIONS
Congrats to Bob for getting one of those 2013 Reproducible Research Prizes
James: Separated Sets In Unions Of Frames
Danny
Suresh: Cake cutting Algorithms
2physics: Quantum Information at Low Light
Vladimir:
"....Eric Fossum made available his latest papers on QIS, DIS, ToF on-line, including the best poster award win at IISW 2013 (first link):
S. Chen, A. Ceballos, and E.R. Fossum, Digital Integration Sensor, IISW 2013
S. Masoodian, Y. Song, D. Hondongwa, JJ Ma, K. Odame and E.R. Fossum, Early Research Progress on Quanta Image Sensors, IISW 2013
Y. M. Wang, I. Ovsiannikov, S-J Byun, T-Y Lee, Y. Lee, G. Waligorski, H. Wang, S. Lee, D-K Min, Y.D. Park, T-C Kim, C-Y Choi, G.S. Han, and E.R. Fossum, Compact Ambient Light Cancellation Design and Optimization for 3D Time-of-Flight Image Sensors, IISW 2013
E.R. Fossum, Quanta Image Sensor (QIS): Early Research Progress (invited) in Proc. 2013 OSA Topical Meeting on Imaging Systems, Arlington, VA USA June 24-27, 2013..."
Hein: “The Next Great Era: Envisioning A Robot Society”
Dick: It Takes Guts To Do Research
Robots.net Steve: Random Robot Roundup
Tom
Suresh


Greg: Get yours here: Coffee Can Radar Kits to ship in August
David: CCCG acceptances
Sebastien: ICML and isotropic position of a convex body
Randy: Laser guided codes advance single pixel terahertz imaging
Patrick:
Stephen Cass: Joys of Noise.
Greg: R.I.P. Richard Matheson (1926-2013)

Meanwhile on Nuit Blanche:



Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization

What if techniques of compressive sensing could be used to help techniques of compressive sensing (and many others)? This is the subject of the following paper, which looks at finding a sparse Hessian for derivative-free optimization computations. I also wonder how this could be applied to learning kernels.

Interpolation-based trust-region methods are an important class of algorithms for Derivative-Free Optimization which rely on locally approximating an objective function by quadratic polynomial interpolation models, frequently built from less points than there are basis components. Often, in practical applications, the contribution of the problem variables to the objective function is such that many pairwise correlations between variables are negligible, implying, in the smooth case, a sparse structure in the Hessian matrix. To be able to exploit Hessian sparsity, existing optimization approaches require the knowledge of the sparsity structure. The goal of this paper is to develop and analyze a method where the sparse models are constructed automatically. The sparse recovery theory developed recently in the field of compressed sensing characterizes conditions under which a sparse vector can be accurately recovered from few random measurements. Such a recovery is achieved by minimizing the l1-norm of a vector subject to the measurements constraints. We suggest an approach for building sparse quadratic polynomial interpolation models by minimizing the l1-norm of the entries of the model Hessian subject to the interpolation conditions. We show that this procedure recovers accurate models when the function Hessian is sparse, using relatively few randomly selected sample points. Motivated by this result, we developed a practical interpolation-based trust-region method using deterministic sample sets and minimum l1-norm quadratic models. Our computational results show that the new approach exhibits a promising numerical performance both in the general case and in the sparse one.
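To make the idea concrete, here is a minimal sketch of building such a minimum l1-norm quadratic interpolation model (this is not the authors' code; the sizes, names and test function are made up for illustration). The Hessian entries are split into positive and negative parts so that the l1 objective becomes a linear program, with the interpolation conditions as equality constraints:

```python
import numpy as np
from scipy.optimize import linprog

def sparse_quadratic_model(Y, f):
    """Fit m(x) = c + g.x + 0.5 x'Hx to interpolation data (Y, f) while
    minimizing the l1-norm of the entries of the symmetric Hessian H."""
    p, n = Y.shape
    tri = [(j, k) for j in range(n) for k in range(j, n)]  # upper triangle
    nh = len(tri)
    # Interpolation matrix over the unknowns [c, g (n entries), h (nh entries)].
    A = np.zeros((p, 1 + n + nh))
    A[:, 0] = 1.0
    A[:, 1:1 + n] = Y
    for col, (j, k) in enumerate(tri):
        A[:, 1 + n + col] = 0.5 * Y[:, j] ** 2 if j == k else Y[:, j] * Y[:, k]
    # Split h = h+ - h- so that min ||h||_1 becomes a linear objective.
    A_eq = np.hstack([A[:, :1 + n], A[:, 1 + n:], -A[:, 1 + n:]])
    cost = np.concatenate([np.zeros(1 + n), np.ones(2 * nh)])
    bounds = [(None, None)] * (1 + n) + [(0, None)] * (2 * nh)
    res = linprog(cost, A_eq=A_eq, b_eq=f, bounds=bounds, method="highs")
    x = res.x
    c, g = x[0], x[1:1 + n]
    h = x[1 + n:1 + n + nh] - x[1 + n + nh:]
    H = np.zeros((n, n))
    for col, (j, k) in enumerate(tri):
        H[j, k] = H[k, j] = h[col]
    return c, g, H

# Toy test: a quadratic with a sparse Hessian, sampled at fewer points
# than the size of the full quadratic basis (here 20 points vs 45 basis terms).
rng = np.random.default_rng(0)
n, p = 8, 20
H_true = np.zeros((n, n)); H_true[0, 1] = H_true[1, 0] = 1.0; H_true[2, 2] = 2.0
fun = lambda x: 3.0 + x @ np.arange(n) + 0.5 * x @ H_true @ x
Y = rng.normal(size=(p, n))
c, g, H = sparse_quadratic_model(Y, np.array([fun(y) for y in Y]))
print(np.round(H[:3, :3], 3))   # the sparse Hessian entries should reappear
```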


Thursday, June 27, 2013

Physical Principles for Scalable Neural Recording

The following is an analysis of what would be needed to have access to most events of interest in the brain. 



Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. We also study the physics of powering and communicating with microscale devices embedded in brain tissue.

I note the following:

While optics might seem to require a number of photodetectors, fibers or waveguide ports comparable to the number of neurons, new developments suggest ways of imaging with fewer elements. For example, compressive sensing or ghost imaging techniques based on random mask projections [20, 38, 57, 100] might allow a smaller number of photodetectors to be used. In an illustrative case, an imaging system may be constructed simply from a single photodetector and a transmissive LCD screen presenting a series of random binary mask patterns [12], where the number of required mask patterns is much smaller than the number of image pixels due to a compressive reconstruction. Furthermore, it is possible to directly image through gradient index of refraction (GRIN) lenses [34] or optical fibers [13, 65, 103], thus multiplexing multiple observed neurons per fiber.
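As a toy illustration of that single-detector scheme (all sizes and parameters below are invented; this is a sketch of the principle, not any specific hardware): random binary masks stand in for the LCD patterns, and an ISTA/l1 reconstruction recovers a scene that is sparse in the DCT domain from far fewer photodetector readings than pixels.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n, m = 32 * 32, 300                 # pixels, detector readings (m << n)
# A synthetic scene that is sparse in the DCT domain.
coeffs = np.zeros(n)
coeffs[rng.choice(n, 25, replace=False)] = rng.normal(size=25)
x = idct(coeffs, norm="ortho")
# One random binary mask per reading; one scalar measurement per mask.
Phi = rng.integers(0, 2, size=(m, n)).astype(float)
y = Phi @ x
# ISTA for min 0.5||y - Phi idct(a)||^2 + lam ||a||_1 over DCT coefficients a.
A = lambda a: Phi @ idct(a, norm="ortho")
At = lambda r: dct(Phi.T @ r, norm="ortho")
L = np.linalg.norm(Phi, 2) ** 2     # Lipschitz constant of the gradient
a, lam = np.zeros(n), 0.1
for _ in range(500):
    z = a - At(A(a) - y) / L
    a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
x_hat = idct(a, norm="ortho")
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```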
Yes, multiplexing would be good :-) And reading this chart, I am thinking there might be other good reasons why you'd want compressive sensing related techniques. More on that later.


Wednesday, June 26, 2013

Blind Calibration in Compressed Sensing using Message Passing Algorithms

This is interesting!

Blind Calibration in Compressed Sensing using Message Passing Algorithms by Christophe Schülke, Francesco Caltagirone, Florent Krzakala, Lenka Zdeborová
Compressed sensing (CS) is a concept that allows to acquire compressible signals with a small number of measurements. As such it is very attractive for hardware implementations. Therefore, correct calibration of the hardware is a central issue. In this paper we study the so-called blind calibration, i.e. when the training signals that are available to perform the calibration are sparse but unknown. We extend the approximate message passing (AMP) algorithm used in CS to the case of blind calibration. In the calibration-AMP, both the gains on the sensors and the elements of the signals are treated as unknowns. Our algorithm is also applicable to settings in which the sensors distort the measurements in other ways than multiplication by a gain, unlike previously suggested blind calibration algorithms based on convex relaxations. We study numerically the phase diagram of the blind calibration problem, and show that even in cases where convex relaxation is possible, our algorithm requires a smaller number of measurements and/or signals in order to perform well.


One notes the potential use of this calibration framework to deal with a much larger set of cases (other than multiplicative noise). In particular, if you recall the recent point of view given in the Sunday Morning Insight on a Quick Panorama of Sensing from Direct Imaging to Machine Learning, there does not seem to be any problem extending the current function h to those found in nonlinear compressive sensing, in particular quantization, or even the successful autoencoders found in deep learning.
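For intuition only, here is the simplest alternating-minimization take on the same multiplicative-gain problem (this is not the authors' calibration-AMP; all dimensions and parameters are invented): alternate between sparse-coding the unknown training signals under the current gain estimates and refitting each sensor's gain by one-dimensional least squares.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, m, P, k = 100, 60, 8, 5                 # signal dim, sensors, training signals, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)
d_true = 1.0 + 0.2 * rng.normal(size=m)    # unknown per-sensor gains
X_true = np.zeros((n, P))
for p in range(P):
    X_true[rng.choice(n, k, replace=False), p] = rng.normal(size=k)
Y = d_true[:, None] * (A @ X_true)         # uncalibrated measurements

d = np.ones(m)                             # start from unit gains
for _ in range(20):
    # Signal step: with gains fixed, each training signal is a lasso problem.
    X = np.column_stack([
        Lasso(alpha=1e-3, fit_intercept=False, max_iter=5000)
        .fit(d[:, None] * A, Y[:, p]).coef_ for p in range(P)])
    # Gain step: with signals fixed, each sensor gain is a 1-D least squares.
    Z = A @ X
    d = np.sum(Y * Z, axis=1) / np.maximum(np.sum(Z * Z, axis=1), 1e-12)
    d /= d.mean()                          # fix the global gain/signal scale ambiguity
print("gain error:", np.linalg.norm(d - d_true) / np.linalg.norm(d_true))
```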



Tuesday, June 25, 2013

Quikr: a Method for Rapid Reconstruction of Bacterial Communities via Compressive Sensing. - implementation -

In line with similar techniques, compressive sensing has the ability to tremendously reduce tasks based on comparing large sets of large elements (see And so it begins ... Compressive Genomics). Here we are looking at reducing the size of the samples through CS techniques in order to fast-forward the phylogenetic tree construction. Recall that we have potentially 7 billion microbiomes (the following study looks at microbiomes) and most of them are sparse over time. That's a lot of sparse objects to gather, and the real question probably becomes: instead of recognizing the data are sparse and producing outstanding methods for comparison, maybe we should look at sensors that get the compressed information in the first place. Without further ado, here is:



Abstract. Many metagenomic studies compare hundreds to thousands of environmental and health-related samples by extracting and sequencing their 16S rRNA amplicons and measuring their similarity using beta-diversity metrics. However, one of the first steps - to classify the operational taxonomic units within the sample - can be a computationally time-consuming task since most methods rely on computing the taxonomic assignment of each individual read out of tens to hundreds of thousands of reads. We introduce Quikr: a QUadratic, K-mer based, Iterative, Reconstruction method which computes a vector of taxonomic assignments and their proportions in the sample using an optimization technique motivated from the mathematical theory of compressive sensing. On both simulated and actual biological data, we demonstrate that Quikr is typically more accurate as well as typically orders of magnitude faster than the most commonly utilized taxonomic assignment technique (the Ribosomal Database Project's Naïve Bayesian Classifier). Furthermore, the technique is shown to be unaffected by the presence of chimeras thereby allowing for the circumvention of the time-intensive step of chimera filtering. The Quikr computational package (using MATLAB or Octave) for the Linux and Mac platforms is available at http://sourceforge.net/projects/quikr/.
The Quikr page is here while an implementation is available here.
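The flavor of the approach fits in a few lines (a toy mock-up, not the Quikr package; the reference profiles and sample below are synthetic): the sample's k-mer frequency vector is written as a sparse nonnegative mixture of reference k-mer profiles, and a heavily weighted sum-to-one row plays the role of the sparsity-promoting term, since for x >= 0 the constraint sum(x) = 1 acts like an l1 constraint.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_kmers, n_taxa = 4 ** 6, 200                       # 6-mers, reference organisms
A = rng.dirichlet(np.ones(n_kmers), size=n_taxa).T  # columns: k-mer profiles
x_true = np.zeros(n_taxa)                           # the sample mixes only 10 taxa
x_true[rng.choice(n_taxa, 10, replace=False)] = rng.dirichlet(np.ones(10))
y = A @ x_true                                      # sample k-mer frequencies

w = 64.0                                            # weight of the sum-to-one row
A_aug = np.vstack([A, w * np.ones(n_taxa)])
y_aug = np.append(y, w)
x_hat, _ = nnls(A_aug, y_aug)                       # nonnegative least squares
print("recovered support:", np.flatnonzero(x_hat > 1e-4))
print("l1 error:", np.abs(x_hat - x_true).sum())
```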


Monday, June 24, 2013

Sunday Morning Insight: Enabling the "Verify" in "Trust but Verify" thanks to Compressive Sensing

In last week's Quick Panorama of Sensing from Direct Imaging to Machine Learning, I made the case that sensing could be seen from different points of view that are themselves embedded in different academic communities, yet are essentially doing the same thing with different approaches. Today, instead of giving a deeper view of these different approaches, I'd like to give some perspective as to why sensors, and sensing in general, are important economically. First, here is a little known fact: the betterment of sensors has consistently delivered Nobel Prizes. However, the generic view of the public and policy makers is that we already have CSI-like technology at the tip of our fingers and that, given some human ingenuity generally displayed by the actors (and enough time), we can "solve" problems. I noticed that in most episodes the fingerprinting of materials seems to be a given. I really don't know where the screenwriters get that impression, because it is arguably the most difficult part of the identification process.



Lack of good sensors: An unacknowledged economic side effect.

I believe this perception issue, which bubbles up all the way to policy makers, is at the root of many problems. There are numerous economic activities that are currently uncontrolled solely because there is an asymmetry between the making of products with certain materials and the verification of whether these products are actually made of said materials. The system works in a "trust" regime rather than an effective "trust but verify" regime. Every country has had that problem. In China, for instance, there have been instances of fraud that have led to deaths and widespread market distortions. In France recently, the complaint of only one person led to the shutdown and recall of a medication that was eventually cleared. In the US, the FDA has a freely available database for recalls. All countries have, in some shape or fashion, issues with how their regulations are enforced on local products and foreign imports. The sheer magnitude of world trade makes it a near impossible task to enforce local rules on imported goods. All these cases and attendant warning systems are really the sometimes ad hoc result of the lengthy process of material fingerprinting, which typically requires long weeks in the lab. In short, CSI stories and Hollywood in general impress on the public (and lawmakers) the notion that the technology behind entire countries' rules, laws and regulations protecting people's health and safety is available, immediate and cheap. Nothing could be further from the current realities in sensing.

Helping Reality: Better Fingerprinting through Compressive Sensing

Since quickly sensing the right elements through some signature is of utmost importance for world trade, and is probably very important to minimize major market distortions, can new methods help in developing faster and probably more task-specific sensors?

Maybe.

One of the most important lessons of the compressive sensing adventure, in my view, is that it has allowed randomization to be taken seriously. That randomization in turn has allowed us to devise sensors away from traditional direct imaging and into compressive sensing. And look where this is taking us: just watch some of the CS hardware implementations and some of the start-ups that have used them. And it's only the beginning. To get a sense of the cost reduction enabled by randomization, let us take the case of hyperspectral imagers. Currently these cameras cost about 100,000 buckaroos. Thanks to the multiplexing allowed by compressive sensing, there are several groups trying to decrease this cost by one or two orders of magnitude. Randomization is also at the heart of the recent fingerprinting attempts in MRI. In short, a deep mathematical statement on concentration of measure does seem to provide a way to design better and cheaper sensors, or imagine new ones [1,2].

Compressive Sensing, The Internet of Things, Big Data and Machine Learning.

Cost reduction has two main consequences: a larger footprint in the academic world, yielding a larger sphere of influence in tackling different problems, and the ability to build sensor networks the size of a planet. For instance, during the unfolding of the Fukushima Daiichi accident, it became obvious that citizen sensor networks such as SafeCast gave decision makers and the population a more robust view of how events were unfolding. Coupled with computational codes running plume diffusion, you had a potentially pretty powerful predictive mechanism. All this because of the availability of a tiny, somewhat cheap and undiscriminating Geiger counter. Some of these costs could be further reduced if only one were to surf on steamrollers like Moore's law: I am personally of the opinion that much of the fear related to radiation could be dampened if one were to have Google Glass-like capabilities to detect the radiation surrounding us. To show that, Cable and I demonstrated that in a highly radiative environment the radiation field could simply be decoupled from CMOS imagery through a robust deconvolution (It never was noise; Just a different convolution, see also the videos in [3-5]). In an area around Fukushima, or elsewhere where the radiation is much lower, a different procedure would have to be used to provide real-time information to the general population, and while I sympathize with the Geiger counter effort of SafeCast, I could see CMOS taking over that detection market in the future. The purists who have read Glenn Knoll's Radiation Detection and Measurement will rightfully argue that silicon is not the best detector material for this type of task. To which I will argue that a combination of better converters (or multiplexers, as we call them in compressive sensing) and the economies of scale of CMOS will largely, in the end, win that fight. And with CMOS comes big data and the mechanisms found in Machine Learning to reduce it to human-understandable concepts.

To come back to SafeCast, the project is now embarking on a larger worldwide air pollution quantification effort. In the home, there is a similar effort, AirBoxLab, now featured on IndieGoGo (a Kickstarter-like platform), that aims at quantifying indoor air pollution. The kit features the following sensors:
  • VOC: Formaldehyde, benzene, ethylene glycol, acetone.
  • CO2: Carbon dioxide
  • CO: Carbon monoxide
  • PM: Particulate Matter
  • T: Temperature
  • RH: Relative Humidity




AirBoxLab has the potential to produce large amounts of data thanks to its capability of sampling not just ambient air but also surface effluents. This is interesting, as it is clearly a way to build a large database of products and attendant effluents, something that can seldom be undertaken by states (check out how small NASA's or ESA's outgassing databases are [6]) or even traditional NGOs. A large database like this one would clearly be a treasure trove, not just for machine learners or for enforcement purposes, but one that could eventually yield virtuous economic cycles.

Better sensors are always needed

In the case of the air quality efforts of SafeCast or AirBoxLab, one realizes that there is a dearth of sensors that ought to be researched in light of the developments in compressive sensing [2]. A total-VOC sensor is a needed first step in order to know when to ventilate your home, but eventually one wants to distinguish the toxic VOCs from the non-toxic ones. Recently, at one of the Paris meetups, I was told of a device that never went to market because, while it partially destroyed VOCs, some of the byproducts of the destruction process included smaller quantities of other VOCs within the same family as sarin gas. The total concentration of VOCs was reduced at the expense of increasing the potential lethality of the byproducts. In short, while it is always a good thing to have an idea of total VOCs, it is also a good idea to know exactly what types of VOCs are being measured. Here again we witness better or more discriminating sensing being the engine behind other technology development (VOC processing and disposition) and eventual economic growth.


For those of you in Paris, I'll be attending the Meetup Internet des Objets n°1 this coming Tuesday night.

Friday, June 21, 2013

Multi-View in Lensless Compressive Imaging

We originally had one-pixel Lensless Imaging using Compressive Sensing; we now have two pixels, and joint sparsity is used to reconstruct the views:



Multi-View in Lensless Compressive Imaging by Hong Jiang, Gang Huang, Paul Wilford

Multi-view images are acquired by a lensless compressive imaging architecture, which consists of an aperture assembly and multiple sensors. The aperture assembly consists of a two dimensional array of aperture elements whose transmittance can be individually controlled to implement a compressive sensing matrix. For each transmittance pattern of the aperture assembly, each of the sensors takes a measurement. The measurement vectors from the multiple sensors represent multi-view images of the same scene. We present a theoretical framework for multi-view reconstruction and experimental results for enhancing image quality using multiple views.
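A minimal sketch of the joint-sparsity idea (my own toy model with invented sizes, not the authors' code): two sensors observe the same sequence of aperture patterns, and an ISTA iteration with a row-wise group soft-threshold enforces the support shared across views.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, V = 256, 120, 2                            # pixels, aperture patterns, sensors/views
Phi = rng.integers(0, 2, (m, n)).astype(float)   # one row per transmittance pattern
# Jointly sparse views: same support, different coefficients.
support = rng.choice(n, 12, replace=False)
X = np.zeros((n, V)); X[support] = rng.normal(size=(12, V))
Y = Phi @ X                                      # each sensor measures every pattern once

# ISTA with a row-wise (group) soft threshold promotes a common support.
L = np.linalg.norm(Phi, 2) ** 2
Xh, lam = np.zeros((n, V)), 0.05
for _ in range(400):
    Z = Xh - Phi.T @ (Phi @ Xh - Y) / L
    norms = np.maximum(np.linalg.norm(Z, axis=1, keepdims=True), 1e-12)
    Xh = Z * np.maximum(1.0 - lam / (L * norms), 0.0)
print("relative error:", np.linalg.norm(Xh - X) / np.linalg.norm(X))
```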


Towards a better compressed sensing

I believe that, after reading this paper, I am getting a better sense of how the Donoho-Tanner phase transition is being beaten through structured sparsity.


Towards a better compressed sensing
Mihailo Stojnic

In this paper we look at a well known linear inverse problem that is one of the mathematical cornerstones of the compressed sensing field. In seminal works \cite{CRT,DOnoho06CS} $\ell_1$ optimization and its success when used for recovering sparse solutions of linear inverse problems was considered. Moreover, \cite{CRT,DOnoho06CS} established for the first time in a statistical context that an unknown vector of linear sparsity can be recovered as a known existing solution of an under-determined linear system through $\ell_1$ optimization. In \cite{DonohoPol,DonohoUnsigned} (and later in \cite{StojnicCSetam09,StojnicUpper10}) the precise values of the linear proportionality were established as well. While the typical $\ell_1$ optimization behavior has been essentially settled through the work of \cite{DonohoPol,DonohoUnsigned,StojnicCSetam09,StojnicUpper10}, we in this paper look at possible upgrades of $\ell_1$ optimization. Namely, we look at a couple of algorithms that turn out to be capable of recovering a substantially higher sparsity than the $\ell_1$. However, these algorithms assume a bit of "feedback" to be able to work at full strength. This in turn then translates the original problem of improving upon $\ell_1$ to designing algorithms that would be able to provide output needed to feed the $\ell_1$ upgrades considered in this paper.
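To get the flavor of what such "feedback" can buy (this is not Stojnic's algorithm, just a partial-support weighted l1 with invented parameters): if an oracle reveals part of the support, setting the l1 weights of those entries to zero in basis pursuit recovers sparsity levels where plain l1 fails.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_bp(A, y, weights):
    """min sum_i w_i |x_i| s.t. Ax = y, as an LP with the split x = u - v."""
    m, n = A.shape
    c = np.concatenate([weights, weights])
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(8)
n, m, k = 200, 80, 50                      # sparsity chosen beyond plain-l1 reach
A = rng.normal(size=(m, n)) / np.sqrt(m)
x = np.zeros(n)
S = rng.choice(n, k, replace=False)
x[S] = rng.normal(size=k)
y = A @ x

x_l1 = weighted_bp(A, y, np.ones(n))       # plain l1
w = np.ones(n); w[S[: k // 2]] = 0.0       # "feedback": half the support known
x_fb = weighted_bp(A, y, w)                # known entries are left unpenalized
err = lambda z: np.linalg.norm(z - x) / np.linalg.norm(x)
print("plain l1 error:", err(x_l1), "| with feedback:", err(x_fb))
```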


Around the blogs in 78 hours



We are several days away from the full shutdown of Google Reader. In order to be sure you have access to any of the Nuit Blanche entries offline, you can Subscribe to Nuit Blanche by Email or subscribe to my feed to use in some other RSS feed reader such as Feedly, TheOldReader, or Instapaper. You can join the Google+ Community or the CompressiveSensing subreddit, as I will keep posting the entries of the blog there. In the meantime, here are some interesting blog entries from this past week:




Danny
Frank
Hein
Dick
Bob
Dirk
Hal
Sebastien
Anand
John
Larry
Cam
Christian
Timothy
Brian
Robots.net


In the meantime, on Nuit Blanche, we had:



Image Credit: NASA/JPL-Caltech
This image was taken by Rear Hazcam: Right B (RHAZ_RIGHT_B) onboard NASA's Mars rover Curiosity on Sol 310 (2013-06-20 13:55:06 UTC).
Full Resolution


Thursday, June 20, 2013

Finite rate of innovation based modeling and compression of ECG signals - implementation -

I want to be wrong, but I believe this is the first time an implementation of a finite rate of innovation technique has been made available.




Mobile health is gaining increasing importance to society and the quest for new power efficient devices sampling biosignals is becoming critical. We discuss a new scheme called Variable Pulse Width Finite Rate of Innovation (VPW-FRI) to model and compress ECG signals. This technique generalizes classical FRI estimation to enable the use of a sum of asymmetric Cauchy-based pulses for modeling electrocardiogram (ECG) signals. We experimentally show that VPW-FRI indeed models ECG signals with high precision. In addition, we study the compression efficiency of the method: compared with various widely used compression schemes, we showcase improvements in terms of both accuracy and compression rate while sampling at a lower rate.

Access to the code can be had here.
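For a sense of the underlying signal model (a rough sketch with symmetric Cauchy/Lorentzian pulses and invented parameters; VPW-FRI uses asymmetric pulses, and the actual method estimates them with FRI machinery rather than curve fitting): an ECG beat is approximated as a small sum of pulses, each described by only a few parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def cauchy_pulse(t, t0, a, w):
    # A symmetric Cauchy (Lorentzian) pulse with location t0, amplitude a, width w.
    return a * w / (w ** 2 + (t - t0) ** 2)

def model(t, *p):
    # Sum of K pulses; parameters are packed as (t0, a, w) triples.
    return sum(cauchy_pulse(t, *p[3 * i:3 * i + 3]) for i in range(len(p) // 3))

t = np.linspace(0, 1, 500)
true = [0.3, 0.02, 0.01,  0.5, 0.05, 0.005,  0.7, 0.03, 0.02]   # P, R, T "waves"
y = model(t, *true) + 0.05 * np.random.default_rng(4).normal(size=t.size)
p0 = [0.25, 0.02, 0.02,  0.5, 0.04, 0.01,  0.75, 0.03, 0.02]    # rough initial guess
popt, _ = curve_fit(model, t, y, p0=p0)
print("estimated pulse locations:", np.round(popt[0::3], 3))
```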

Precisely Verifying the Null Space Conditions in Compressed Sensing: A Sandwiching Algorithm

Weiyu Xu just sent me the following:

"...
We designed a SANDWICHING algorithm which can efficiently and precisely verify the well-known null space conditions in compressed sensing, which are famously known to be hard to check. The feature of the algorithm is that it can find the EXACT value of the null space properties, with much reduced computational complexity compared to exhaustive search, for example reducing a computational load of around 1 month (32 days) to 4 hours. ... 
Best Regards,
Xu,Weiyu
"


This is very interesting! Thanks Weiyu !




In this paper, we propose new efficient algorithms to verify the null space condition in compressed sensing (CS). Given an $(n-m) \times n$ ($m>0$) CS matrix $A$ and a positive $k$, we are interested in computing $\displaystyle \alpha_k = \max_{\{z: Az=0,z\neq 0\}}\max_{\{K: |K|\leq k\}}$ $\frac{\|z_K \|_{1}}{\|z\|_{1}}$, where $K$ represents subsets of $\{1,2,...,n\}$, and $|K|$ is the cardinality of $K$. In particular, we are interested in finding the maximum $k$ such that $\alpha_k < \frac{1}{2}$. However, computing $\alpha_k$ is known to be extremely challenging. In this paper, we first propose a series of new polynomial-time algorithms to compute upper bounds on $\alpha_k$. Based on these new polynomial-time algorithms, we further design a new sandwiching algorithm, to compute the \emph{exact} $\alpha_k$ with greatly reduced complexity. When needed, this new sandwiching algorithm also achieves a smooth tradeoff between computational complexity and result accuracy. Empirical results show the performance improvements of our algorithm over existing known methods; and our algorithm outputs precise values of $\alpha_k$, with much lower complexity than exhaustive search.
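For contrast, here is what the brute-force baseline looks like on a tiny problem (my own illustrative sketch, not the authors' sandwiching algorithm): $\alpha_k$ is computed exactly by solving one LP per (support, sign-pattern) pair, which is precisely the exponential cost the paper's bounds-plus-sandwiching strategy is designed to avoid.

```python
import numpy as np
from itertools import combinations, product
from scipy.optimize import linprog

def alpha_k(A, k):
    """Exact alpha_k = max_{Az=0, z!=0} max_{|K|<=k} ||z_K||_1 / ||z||_1,
    by exhaustive search: one LP per support K and sign pattern. Tiny n only."""
    m, n = A.shape
    A_eq = np.hstack([A, -A])            # z = u - v with u, v >= 0
    b_eq = np.zeros(m)
    A_ub = np.ones((1, 2 * n))           # budget: sum(u + v) <= 1, i.e. ||z||_1 <= 1
    best = 0.0
    for K in combinations(range(n), k):
        for signs in product([1.0, -1.0], repeat=k):
            c = np.zeros(2 * n)
            for i, s in zip(K, signs):   # maximize sum_{i in K} s_i z_i
                c[i], c[n + i] = -s, s   # (linprog minimizes, hence the sign flip)
            res = linprog(c, A_ub=A_ub, b_ub=[1.0], A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, None)] * (2 * n), method="highs")
            best = max(best, -res.fun)
    return best

rng = np.random.default_rng(5)
A = rng.normal(size=(6, 10))             # an (n-m) x n matrix with n = 10
for k in (1, 2, 3):                      # recovery of all k-sparse z needs alpha_k < 1/2
    print(k, round(alpha_k(A, k), 4))
```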






Wednesday, June 19, 2013

FrameSense: Near-Optimal Sensor Placement for Linear Inverse Problems / Acoustic echoes reveal room shape - implementation -



Juri just sent me the following:

Dear Igor,


I have followed your blog since I was working on my MSc thesis in 2009, and I have always appreciated your efforts to spread news and interesting works related to CS and sparse signal processing.
.... I would like to point out a couple of recent results obtained in my lab that are of possible interest for the readers of your blog:

1) The first one talks about an algorithm for near-optimal sensor placement to solve a linear inverse problem. It has some interesting similarities with CS, and it is the first algorithm that has guaranteed performance w.r.t. MSE. It can also be viewed as the selection of $L$ out of $N$ rows of a matrix $\Psi$, such that the spectrum of $\Psi^*\Psi$ has some favorable properties.
http://infoscience.epfl.ch/record/186804

2) This one just appeared in PNAS and is getting big media coverage. It is about recovering the shape of a room using a set of microphones. You have already talked about this work in the past, when you featured an ICASSP paper written by Dokmanic et al. This journal paper has stronger results and shows results obtained in a couple of real-world experiments.


Best,
Thanks Juri, the second paper got some press; it even made the front page of Reddit. What is fascinating in this thread is the variety of comments and how many people mistakenly think this is not news. The second insight here is that when I am asked what a sensor is, the question is invariably limited to a single sensor, while even a small sensor network has the possibility of providing much more information. Here are the two papers:



A classic problem is the estimation of a set of parameters from measurements collected by few sensors. The number of sensors is often limited by physical or economical constraints and their placement is of fundamental importance to obtain accurate estimates. Unfortunately, the selection of the optimal sensor locations is intrinsically combinatorial and the available approximation algorithms are not guaranteed to generate good solutions in all cases of interest. We propose FrameSense, a greedy algorithm for the selection of optimal sensor locations. The core cost function of the algorithm is the frame potential, a scalar property of matrices that measures the orthogonality of its rows. Notably, FrameSense is the first algorithm that is near-optimal in terms of mean square error, meaning that its solution is always guaranteed to be close to the optimal one. Moreover, we show with an extensive set of numerical experiments that FrameSense achieves the state-of-the-art performance while having the lowest computational cost, when compared to other greedy methods.
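Here is a minimal "worst-out" greedy in the spirit of FrameSense (my own simplification with invented sizes; see the paper for the exact algorithm and its near-optimality guarantee): sensors are discarded one at a time so as to reduce the frame potential, the sum of squared inner products between the remaining rows.

```python
import numpy as np

def framesense_rows(Psi, L):
    """Keep L of the rows of Psi (one candidate sensor per row) by greedily
    removing the row contributing most to the frame potential
    FP(S) = sum_{i,j in S} |<psi_i, psi_j>|^2."""
    G2 = np.abs(Psi @ Psi.T) ** 2            # squared Gram matrix of all rows
    S = list(range(Psi.shape[0]))
    while len(S) > L:
        contrib = [G2[np.ix_([r], S)].sum() for r in S]
        S.pop(int(np.argmax(contrib)))       # drop the most "redundant" sensor
    return S

rng = np.random.default_rng(6)
N, n, L = 40, 5, 8                           # candidate locations, parameters, sensors
Psi = rng.normal(size=(N, n))
rows = framesense_rows(Psi, L)
# MSE proxy for linear inverse problems: trace((Psi_S' Psi_S)^{-1}).
mse = lambda S: np.trace(np.linalg.inv(Psi[S].T @ Psi[S]))
print("greedy placement MSE:", round(mse(rows), 3))
print("random placement MSE:", round(mse(list(rng.choice(N, L, replace=False))), 3))
```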


Acoustic echoes reveal room shape
Ivan Dokmanić, Reza Parhizkar, Andreas Walther, Yue M. Lu, and Martin Vetterli

Imagine that you are blindfolded inside an unknown room. You snap your fingers and listen to the room’s response. Can you hear the shape of the room? Some people can do it naturally, but can we design computer algorithms that hear rooms? We show how to compute the shape of a convex polyhedral room from its response to a known sound, recorded by a few microphones. Geometric relationships between the arrival times of echoes enable us to “blindfoldedly” estimate the room geometry. This is achieved by exploiting the properties of Euclidean distance matrices. Furthermore, we show that under mild conditions, first-order echoes provide a unique description of convex polyhedral rooms. Our algorithm starts from the recorded impulse responses and proceeds by learning the correct assignment of echoes to walls. In contrast to earlier methods, the proposed algorithm reconstructs the full 3D geometry of the room from a single sound emission, and with an arbitrary geometry of the microphone array. As long as the microphones can hear the echoes, we can position them as we want. Besides answering a basic question about the inverse problem of room acoustics, our results find applications in areas such as architectural acoustics, indoor localization, virtual reality, and audio forensics.
The attendant code to duplicate the results of this study is here.
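The key geometric test can be sketched in a few lines (my simplification of the paper's echo-sorting idea, with made-up positions): a candidate assignment of echo delays to a wall is accepted when the microphone Euclidean distance matrix, augmented with the candidate echo distances, still embeds as a point set in 3-D, i.e., the doubly centered augmented EDM has rank at most 3.

```python
import numpy as np

def consistent_echo(D_mics, d_new, tol=1e-6):
    """Check whether distances d_new from a candidate image source to each
    microphone are consistent with the (squared) microphone EDM D_mics."""
    n = D_mics.shape[0]
    D = np.zeros((n + 1, n + 1))
    D[:n, :n] = D_mics
    D[:n, n] = D[n, :n] = d_new ** 2
    J = np.eye(n + 1) - np.ones((n + 1, n + 1)) / (n + 1)
    G = -0.5 * J @ D @ J                     # centered Gram matrix of the points
    eig = np.sort(np.linalg.eigvalsh(G))[::-1]
    # A valid 3-D point set: G is PSD with rank at most 3.
    return np.abs(eig[3:]).max() < tol * max(abs(eig[0]), 1.0)

rng = np.random.default_rng(7)
mics = rng.normal(size=(5, 3))               # five microphones in 3-D
D_mics = ((mics[:, None] - mics[None]) ** 2).sum(-1)
img = np.array([2.0, -1.0, 3.0])             # a hypothetical image source
d_good = np.linalg.norm(mics - img, axis=1)
d_bad = d_good[rng.permutation(5)]           # echoes assigned to the wrong walls
print(consistent_echo(D_mics, d_good))       # True: a consistent assignment
print(consistent_echo(D_mics, d_bad))        # False (almost surely)
```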

