Saturday, September 29, 2012

Nuit Blanche's Month in Review (September 2012)

While accidental cameras [17, 19] are just that: accidental, several papers covered some hardware implementations [1,8,9,13]. I liked the fact that we are getting closer to the sensing elements, as this is truly how radically new hardware will eventually prevail. It should be noted that several algorithm implementations were made available by researchers [5,6,12,25], and using a new tensor decomposition [6] we may have something that goes as fast as sFFT [7]. This may be a new route for other endeavors in compressive sensing, as this tensor decomposition is SVD based and could potentially use some of the robust PCA results. This is of note because many of the developments performed in compressive sensing, from 2D images, phase retrieval data, 3D views or 2D + hyperspectral data all the way to 5D data such as found in plenoptic cameras, revolve around some unacknowledged tensor decomposition. The typical beginner's question on how to perform compressed sensing on an image is a witness to that unacknowledged issue (you need to reshape a 2D image into a 1D vector).
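For what it's worth, that reshaping step, and a toy measurement that follows it, can be sketched in a few lines of pure Python; the 4x4 "image" and the random ±1 measurement matrix below are made up for illustration, not taken from any of the referenced papers:

```python
import random

random.seed(0)

# A toy 4x4 "image": compressive sensing solvers expect a 1-D vector,
# so the 2-D array must be flattened first (MATLAB's reshape does this).
image = [[1, 0, 0, 2],
         [0, 0, 3, 0],
         [0, 0, 0, 0],
         [4, 0, 0, 0]]

x = [pixel for row in image for pixel in row]   # 16-element vector

# Take m = 6 random +/-1 measurements y = Phi x of the flattened image.
m, n = 6, len(x)
Phi = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(m)]
y = [sum(Phi[i][j] * x[j] for j in range(n)) for i in range(m)]

print(len(x), len(y))   # 16 pixels compressed to 6 measurements
```

Any structure the 2D layout carried (neighboring pixels, rows vs. columns) is invisible after the flattening, which is exactly the unacknowledged tensor issue above.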

The other idea driving people's thoughts these days is the need to go beyond the concept of sparsity [2,3,21] and use prior information such as complexity [3]. We need to know what constraints and priors will make CS and Machine Learning meet [3]. But there is also some urgency, as Rich reminds us [15], in that "We generated more data than there are stars in the Universe" [14]. Let's put some context here: this month saw some new benchtop sequencers shipping. With these devices we should expect to sequence a genome in under a day. Still no word from Oxford Nanopore, but if you recall their announcement back in February as Nick Loman reported (their new technology goes through DNA one piece at a time, i.e. no need to cut it into several smaller parts as other technologies do):

....For actual base-reads, ONT still has not achieved single-base sensitivity (though Brown did mention they are working on it). Instead they are reading three bases at a time, leading to 64 different current levels. They then apply the Viterbi algorithm – a probabilistic tool that can determine hidden states – to these levels to make base calls at each position.
In the same post, one can read that they have a 96% reading accuracy, which is very low in that business and probably the reason they haven't gotten a machine out yet; but they are hiring people who can make sense of their data. All this to say that soon enough, the actual counting of stars won't mean much compared to what we are about to produce. It will also radically change healthcare. Sequencing and sensing as provided by coarser devices like the one mentioned at the very beginning of this post are in fact complementary.
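For readers who have never met the Viterbi algorithm mentioned in the quote, here is a minimal sketch of what such a decoder does. ONT's real decoder works over 64 three-base states and measured current levels; the four single-base states and every probability below are made-up toys, only the algorithm is the real thing:

```python
import math

# Toy Viterbi base-caller. States and probability tables are illustrative only.
states = ["A", "C", "G", "T"]
trans = {s: {t: 0.25 for t in states} for s in states}   # uniform transitions
emit = {                                                 # P(current level | base)
    "A": {"low": 0.7, "mid": 0.2, "high": 0.1},
    "C": {"low": 0.2, "mid": 0.6, "high": 0.2},
    "G": {"low": 0.1, "mid": 0.3, "high": 0.6},
    "T": {"low": 0.3, "mid": 0.4, "high": 0.3},
}

def viterbi(observations):
    # score[s] = log-probability of the best state path ending in state s
    score = {s: math.log(0.25) + math.log(emit[s][observations[0]]) for s in states}
    back = []
    for obs in observations[1:]:
        prev, new = {}, {}
        for s in states:
            best = max(states, key=lambda p: score[p] + math.log(trans[p][s]))
            prev[s] = best
            new[s] = score[best] + math.log(trans[best][s]) + math.log(emit[s][obs])
        back.append(prev)
        score = new
    # trace back from the best final state to recover the base calls
    path = [max(states, key=lambda s: score[s])]
    for prev_map in reversed(back):
        path.append(prev_map[path[-1]])
    return "".join(reversed(path))

print(viterbi(["low", "mid", "high"]))  # -> "ACG" with these toy tables
```

The hidden states are the bases and the observations are the quantized current levels; the algorithm returns the most probable hidden sequence given the levels.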

In the realm of under-appreciated problems, I mentioned the Reddit Database [22]. I need to come back to that later. 

This month, we also had a slew of interesting papers [16]. Finally, we discussed the business of science, stretching from peer review to reproducible research, in several entries [18, 20, 22, 24, 26]. There were also two workshop announcements [10, 13] and two job announcements [11, 23].

  1. Passive millimeter-wave imaging with compressive sensing
  2. Compressed Sensing of EEG for Wireless Telemonitoring with Low Energy Consumption and Inexpensive Hardware
  3. From Compressive Sensing to Machine Learning
  4. Alternating Direction Methods for Classical and Ptychographic Phase Retrieval
  5. High-accuracy wave field reconstruction using Sparse Regularization - implementations -
  6. Sunday Morning Insight: QTT format and the TT-Toolbox -implementation-
  7. Another Super Fast FFT
  8. Compressed Sensing for the multiplexing of PET detectors
  9. A single-photon sampling architecture for solid-state imaging
  10. Workshop on "Randomized Numerical Linear Algebra (RandNLA): Theory and Practice"
  11. PostDoc: Research Fellow in Signal Processing for Radio Astronomy
  12. Implementation: Iterative Reweighted Algorithms for Matrix Rank Minimization
  13. Nuit Blanche's Mailbag: Two-day workshop on Sparse Models and Machine Learning / Aptina and a Multi-Array Gigapixel camera
  14. "We generated more data than there are stars in the Universe"
  15. Richard Baraniuk's recent videos on Compressive Sensing
  16. Compressive Sensing and Advanced Matrix Factorization This Month
  17. The Accidental Single Pixel Camera
  18. Around the Blogs in 80 Summer Hours
  19. The accidental multi-aperture camera
  20. Mathematical Instruments: Nuit Blanche
  21. Beyond Sparsity: Minimum Complexity Pursuit for Universal Compressed Sensing
  22. Post Peer-Review Discussion continues and a remarkable dataset
  23. Post-doctoral position at CEA Saclay in image processing for multispectral data
  24. The most important discussion about peer-review we're not having ... until today. -updated-
  25. Implementation: Bilinear modelling via Augmented Lagrange Multipliers (BALM)
  26. A Year in Reproducible Research in Compressive Sensing, Advanced Matrix Factorization and more

Image Credit: NASA/JPL-Caltech/Malin Space Science Systems 
This image was taken by Mastcam: Left (MAST_LEFT) onboard NASA's Mars rover Curiosity on Sol 52 (2012-09-28 15:29:26 UTC) 

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, September 28, 2012

High-accuracy wave field reconstruction using Sparse Regularization - implementations -

Following up on yesterday's entry, today we have other reconstruction solvers for both phase retrieval and ptychography. The following demos are directly downloadable from the Local Approximations in Signal and Image Processing (LASIP) website. There you can find:
  • Sparse Phase Amplitude Reconstruction (SPAR)
  • 4f SPAR phase retrieval
  • Decoupled augmented Lagrangian (D-AL)
  • Compressive sensing computational ghost imaging (CSGI)
  • Compressive ptychographical coherent diffractive imaging

The attendant documents explaining what each of these routines does are given below (except for the ptychography paper, which does not seem to be out yet).

A novel efficient variational technique for speckle imaging is discussed. It is developed with the main motivation to filter noise, to wipe out the typical diffraction artifacts and to achieve crisp imaging. A sparse modeling is used for the wave field at the object plane in order to overcome the loss of information due to the ill-posedness of forward propagation image formation operators. This flexible and data adaptive modeling relies on the recent progress in sparse imaging and compressive sensing (CS). Being in line with the general formalism of CS, we develop an original approach to wave field reconstruction. In this paper we demonstrate this technique in its application for computational amplitude ghost imaging (GI), where a spatial light modulator (SLM) is used in order to generate a speckle wave field sensing a transmitted mask object.

We apply a nonlocal adaptive spectral transform for sparse modeling of phase and amplitude of a coherent wave field. The reconstruction of this wave field from complex-valued Gaussian noisy observations is considered. The problem is formulated as a multiobjective constrained optimization. The developed iterative algorithm decouples the inversion of the forward propagation operator and the filtering of phase and amplitude of the wave field. It is demonstrated by simulations that the performance of the algorithm, both visually and numerically, is the current state of the art.

Phase retrieval via spatial light modulator phase modulation in 4f optical setup: numerical inverse imaging with sparse regularization for phase and amplitude by Vladimir Katkovnik and Jaakko Astola. The abstract reads:
The 4f optical setup is considered with a wave field modulation by a spatial light modulator located in the focal plane of the first lens. Phase as well as amplitude of the wave field are reconstructed from noisy multiple-intensity observations. The reconstruction is optimal due to a constrained maximum likelihood formulation of the problem. The proposed algorithm is iterative with decoupling of the inverse of the forward propagation of the wave field and the filtering of phase and amplitude. The sparse modeling of phase and amplitude enables the advanced high-accuracy filtering and sharp imaging of the complex-valued wave field. Artifacts typical for the conventional algorithms (wiggles, ringing, waves, etc.) and attributed to optical diffraction can be suppressed by the proposed algorithm.

Sparse modeling is one of the efficient techniques for imaging that allows recovering lost information. In this paper, we present a novel iterative phase-retrieval algorithm using a sparse representation of the object amplitude and phase. The algorithm is derived in terms of a constrained maximum likelihood, where the wave field reconstruction is performed using a number of noisy intensity-only observations with a zero-mean additive Gaussian noise. The developed algorithm enables the optimal solution for the object wave field reconstruction. Our goal is an improvement of the reconstruction quality with respect to the conventional algorithms. Sparse regularization results in advanced reconstruction accuracy, and numerical simulations demonstrate significant enhancement of imaging.

In our work we demonstrate a computational method of phase retrieval realized for various propagation models. The effects, arising due to the wave field propagation in an optical setup, lead to significant distortions in measurements; therefore the reconstructed wave fields are noisy and corrupted by different artifacts (e.g. blurring, "waves" on boards, etc.). These disturbances are hard to be specified, but could be suppressed by filtering. The contribution of this paper concerns application of an adaptive sparse approximation of the object phase and amplitude in order to improve imaging. This work is considered as a further development and improvement of the variational phase-retrieval algorithm originated in 1. It is shown that the sparse regularization enables a better reconstruction quality and substantial enhancement of imaging. Moreover, it is demonstrated that an essential acceleration of the algorithm can be obtained by a commodity graphic processing unit, what is crucial for processing of large images.  

Generally, wave field reconstructions obtained by phase-retrieval algorithms are noisy, blurred and corrupted by various artifacts such as irregular waves, spots, etc. These disturbances, arising due to many factors such as non-idealities of the optical system (misalignment, focusing errors), dust on optical elements, reflections, vibration, are hard to localize and specify. It is assumed that there is a generalized pupil function at the object plane which describes aberrations in the coherent imaging system manifested at the sensor plane. Here we propose a novel two-step phase-retrieval algorithm to compensate these distortions. We first estimate the cumulative disturbance, called background, using special calibration experiments. Then, we use this background for reconstruction of the object amplitude and phase. The second part of the algorithm is based on the maximum likelihood approach and, in this way, targeted on the optimal amplitude and phase reconstruction from noisy data. Numerical experiments demonstrate that the developed algorithm enables the compensation of various typical distortions of the optical track so that sharp object imaging for a binary test-chart can be achieved.


Thursday, September 27, 2012

Alternating Direction Methods for Classical and Ptychographic Phase Retrieval

Stefano just sent me the following:

Hi Igor,
I'd like to bring your attention to some references on phase retrieval, I hope you enjoy the links:
a presentation by Filipe Maia on Multi-GPU Real-Time Ptychographic X-ray Image Reconstruction at
some diffraction data is available here :
see also this (By Filipe Maia):
Thanks Stefano. Here are the two papers:

Abstract. In this paper, we show how augmented Lagrangian alternating direction method (ADM) can be used to solve both the classical and ptychographic phase retrieval problems. We point out the connection between ADM and projection algorithms such as the hybrid input-output (HIO) algorithm, and compare its performance against standard algorithms for phase retrieval on a number of test images. Our computational experiments show that ADM appears to be less sensitive to the choice of relaxation parameters, and it usually outperforms the existing techniques for both the classical and ptychographic phase retrieval problems.

Ptychography is a popular technique to achieve diffraction limited resolution images of a two or three dimensional sample using high frame rate detectors. We introduce a relaxation of common projection algorithms to account for instabilities given by intensity and background fluctuations, position errors, or poor calibration using multiplexing illumination. This relaxation introduces an additional phasing optimization at every step that enhances the convergence rate of common projection algorithms. Numerical tests exhibit the exact recovery of the object and the noise when there is high redundancy in the data.


Wednesday, September 26, 2012

From Compressive Sensing to Machine Learning

If you recall, the history of compressive sensing goes like this (I know it is a little more complex, but this is the gist): the Donoho-Tanner phase transition for sparse recovery has been slowly eroding through additional knowledge of the measurement matrix, the signal, or both. This was the story until a year ago:

In the same way I called those results stunning, the following figures are equally stunning.

Because they show that, unlike previous results, exploiting prior knowledge of the structure of the unknown vector yields results similar to designing a specific measurement matrix. This is new, and it opens the door to a closer connection between compressive sensing and machine learning. The results above are from [1]. They are very interesting, but something was bothering me about them, so I asked Zhilin about it:
Dear Zhilin, 
I am surprised by the graphs of figure 1 of version 2 of this paper (Extension of SBL Algorithms for the Recovery of Block Sparse Signals with Intra-Block Correlation by Zhilin Zhang, Bhaskar D. Rao.) [NFM: That graph was not in version 1] I am surprised not because they reach the rho=1 limit but because there is no difference between no correlation and near-full correlation between blocks. While the no-correlation case brings a surprise (because you now do as well as Krzakala et al), I would have expected a much bigger surprise for the 95% correlation, where the phase transition would have been **above** rho = 1. What do you think ?....

The idea here is that with structured sparsity, you get to know more about the inter-dependencies between elements of the signal. Hence you ought to have some results showing that reconstruction is possible with a lower number of samples compared to the number of elements in the signal (rho = 1). Zhilin kindly responded with:
...But no matter what's the intra-block correlation, once rho is above 1, the success rate of BSBL algorithms is almost 0 (a success is defined as the normalized MSE less than 10^{-5}). But if a success is defined as MSE less than  10^{-3} or similar values, then the success rate is large even for rho much higher than 1, as shown in my work on fetal ECG recovery and EEG recovery....
[my emphasis]
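Zhilin's point about the success criterion is easy to make concrete; the toy vectors below are made up, but they show the same reconstruction passing one figure of merit and failing the other:

```python
# The same "pretty good" recovery passes the 1e-3 normalized-MSE test but
# fails the stricter 1e-5 one. All numbers are made up for illustration.
x_true = [0.0, 1.0, 0.0, -2.0, 0.0, 0.5]
x_hat  = [0.01, 0.99, -0.02, -2.01, 0.0, 0.49]

num = sum((a - b) ** 2 for a, b in zip(x_true, x_hat))
den = sum(a ** 2 for a in x_true)
nmse = num / den                       # normalized mean squared error

print(f"NMSE = {nmse:.2e}")
print("success at 1e-3 threshold:", nmse < 1e-3)   # True
print("success at 1e-5 threshold:", nmse < 1e-5)   # False
```

The phase-transition diagram is a map of where "success" happens, so redrawing it with a looser criterion moves the boundary, exactly as Zhilin describes for the ECG and EEG work.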

In short, if the figure of merit changes, the reconstruction phase transition also changes dramatically. And then comes nonlinear compressive sensing using techniques from machine learning. How much more of an improvement can we get ? Let me guess ....

... what about ten times! (from [2]) and more [3]


Tuesday, September 25, 2012

Compressed Sensing of EEG for Wireless Telemonitoring with Low Energy Consumption and Inexpensive Hardware

Sometimes you focus on developing hardware; sometimes you work around it. Zhilin Zhang sent me the following:

Dear Igor, 

...Thank you for your blog, which plays an important role in my research. Recently we have a paper accepted by IEEE Trans. on Biomedical Engineering. The title is "Compressed Sensing of EEG for Wireless Telemonitoring with Low Energy Consumption and Inexpensive Hardware". The link is here: The code (with data) has been incorporated into the BSBL package ( In this work we apply BSBL to wireless telemonitoring of EEG. Although EEG is not sparse, the recovery quality by BSBL is satisfactory, which can meet the requirement of regular EEG analysis. This preliminary work is our first step toward a smart-phone based brain-computer interface. I would very much appreciate it if you could announce the paper on your blog.

Best regards,
[my emphasis] 

Telemonitoring of electroencephalogram (EEG) through wireless body-area networks is an evolving direction in personalized medicine. Among various constraints in designing such a system, three important constraints are energy consumption, data compression, and device cost. Conventional data compression methodologies, although effective in data compression, consume significant energy and cannot reduce device cost. Compressed sensing (CS), as an emerging data compression methodology, is promising in catering to these constraints. However, EEG is non-sparse in the time domain and also non-sparse in transformed domains (such as the wavelet domain). Therefore, it is extremely difficult for current CS algorithms to recover EEG with the quality that satisfies the requirements of clinical diagnosis and engineering applications. Recently, Block Sparse Bayesian Learning (BSBL) was proposed as a new method for the CS problem. This study introduces the technique to the telemonitoring of EEG. Experimental results show that its recovery quality is better than state-of-the-art CS algorithms, and sufficient for practical use. These results suggest that BSBL is very promising for telemonitoring of EEG and other non-sparse physiological signals.
and the attendant code.


Monday, September 24, 2012

Passive millimeter-wave imaging with compressive sensing

Here is another CS hardware instance:

Passive millimeter-wave imaging with compressive sensing by Nachappa Gopalsami, Shaolin Liao, Thomas W. Elmer, Eugene R. Koehl, Alexander Heifetz, Apostolos C. Raptis, Leonidas Spinoulas, Aggelos K. Katsaggelos. The abstract reads:

Passive millimeter-wave (PMMW) imagers using a single radiometer, called single pixel imagers, employ raster scanning to produce images. A serious drawback of such a single pixel imaging system is the long acquisition time needed to produce a high-fidelity image, arising from two factors: (a) the time to scan the whole scene pixel by pixel and (b) the integration time for each pixel to achieve adequate signal to noise ratio. Recently, compressive sensing (CS) has been developed for single-pixel optical cameras to significantly reduce the imaging time and at the same time produce high-fidelity images by exploiting the sparsity of the data in some transform domain. While the efficacy of CS has been established for single-pixel optical systems, its application to PMMW imaging is not straightforward due to its (a) longer wavelength by three to four orders of magnitude that suffers high diffraction losses at finite size spatial waveform modulators and (b) weaker radiation intensity, for example, by eight orders of magnitude less than that of infrared. We present the development and implementation of a CS technique for PMMW imagers and show a factor-of-ten increase in imaging speed.

There is another similar paper, behind a paywall:

In this paper, we briefly describe a single detector passive millimeter-wave imaging system, which has been previously presented. The system uses a cyclic sensing matrix to acquire incoherent measurements of the observed scene and then reconstructs the image using a Bayesian approach. The cyclic nature of the sensing matrix allows for the design of a single unified and compact mask that provides all the required random masks in a convenient way, such that no mechanical mask exchange is needed. Based on this setup, we primarily propose the optimal adaptive selection of sampling submasks out of the full cyclic mask to obtain improved reconstruction results. The reconstructed images show the feasibility of the imaging system as well as its improved performance through the proposed sampling scheme.
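The cyclic-mask idea in that abstract lends itself to a quick sketch: one base mask and its circular shifts play the role of all the measurement rows, so a single physical mask suffices instead of m independent random masks. The sizes and the 0/1 pattern below are made up for illustration, and this is only the sensing step, not the authors' Bayesian reconstruction:

```python
import random

random.seed(1)

# One n-element base mask; each measurement row is a circular shift of it.
n, m = 12, 5
base = [random.randint(0, 1) for _ in range(n)]

def cyclic_row(shift):
    # rotate the base mask by `shift` positions
    return base[shift:] + base[:shift]

Phi = [cyclic_row(s) for s in range(m)]      # the cyclic sensing matrix

scene = [0.0] * n
scene[3], scene[8] = 2.0, 1.0                # a sparse toy scene

# Each measurement is the scene seen through one shifted mask.
y = [sum(row[j] * scene[j] for j in range(n)) for row in Phi]
print(y)
```

Because every row is a rotation of the same pattern, "exchanging" masks amounts to sliding one mask, which is the mechanical convenience the paper exploits.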

Saturday, September 22, 2012

Sunday Morning Insight: QTT format and the TT-Toolbox -implementation-

It must already be Sunday Morning somewhere. This one has been bugging me for a little while. What is the relationship between
  • This question on LinkedIn: Is there a two-dimensional equivalent of compressed sensing?
  • The use of Matlab's reshape function to vectorize images in simple compressive sensing
  • Putting in different columns the different frames of a video in order to perform Robust PCA
  • Group Sparsity
  • Low Rank SVD
  • Graph wavelets as dictionaries
  • Analysis operator learning
  • Recommender systems
  • The need for decomposing a measurement matrix into a series of matrices with specific features
  • hyperspectral measurements
  • Super Fast FFTs ?
Those were all issues that popped up on my radar screen this week and I think the beginning of a good answer is the quantized tensor train (QTT) format. Say what now ?

Two presentations might be of interest for a bird's eye view

and several attendant papers:

Let us note that the Full-to-TT compression, the TTε recompression and the TT-rounding algorithms make heavy use of SVD and QR algorithms for matrices, which would seem to make them good candidates for the insertion of robust PCA and randomized PCA work. There is also a toolbox to delve into:

TT-Toolbox 2.2.

TT-Toolbox (TT=Tensor Train) Version 2.2
TT (Tensor Train) format is an efficient way for low-parametric representation of high-dimensional tensors. The TT-Toolbox is a MATLAB implementation of basic operations with tensors in TT-format. It includes:
  • tt_tensor and tt_matrix classes for storing vectors and operators
  • Basic linear algebra subroutines (addition, matrix-by-vector product, elementwise multiplication and many others) using standard MATLAB syntax, linear complexity in the dimension, reshape function
  • Fast rounding procedure with a prescribed accuracy
  • Advanced approximation and solution techniques:
  • Approximate solution of linear systems and eigenvalue problems
  • Cross methods to approximate "black-box" tensors
  • Wavelet tensor train decomposition
  • Construction of basic operators and functions (Laplace operator, function of a TT-tensor)
  • Computation of maximal and minimal elements of a tensor and several others
New in Version 2.2
  • Better documentation
  • Mixed QTT-Tucker format (qtt_tucker class)
  • reshape function for a TT-tensor/TT-matrix
  • dmrg_cross method for black-box tensor approximation
  • Convolution in QTT-format

Thanks Laurent for bringing this to my attention.

Friday, September 21, 2012

Another Super Fast FFT

We propose Fourier transform algorithms using QTT format for data-sparse approximate representation of one- and multi-dimensional vectors (m-tensors). Although the Fourier matrix itself does not have a low-rank QTT representation, it can be efficiently applied to a vector in the QTT format exploiting the multilevel structure of the Cooley-Tukey algorithm. The m-dimensional Fourier transform of an n×⋯×n vector with n = 2^d has O(m d^2 R^3) complexity, where R is the maximum QTT-rank of input, output and all intermediate vectors in the procedure. For vectors with moderate R and large n and m the proposed algorithm outperforms the O(n^m log n) fast Fourier transform (FFT) algorithm and has asymptotically the same log-squared complexity as the superfast quantum Fourier transform (QFT) algorithm. By numerical experiments we demonstrate examples of problems for which the use of the QTT format relaxes the grid size constraints and allows high-resolution computations of Fourier images and convolutions in higher dimensions without the 'curse of dimensionality'. We compare the proposed method with sparse Fourier transform algorithms and show that our approach is competitive for signals with a small number of randomly distributed frequencies and signals with limited bandwidth.
No toolbox available. An intriguing aspect of this paper aside from providing a competitor to sFFT is the QTT format mentioned. More on that later.
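A back-of-the-envelope comparison of the two operation counts quoted in the abstract helps see where the QTT approach wins; the rank value R below is a made-up assumption, and the crossover point depends entirely on it:

```python
import math

# QTT-FFT ~ m * d^2 * R^3 versus classical FFT ~ n^m * log2(n), with n = 2^d.
def qtt_cost(m, d, R):
    return m * d**2 * R**3

def fft_cost(m, d):
    n = 2**d
    return n**m * math.log2(n)

m, d, R = 3, 20, 10            # a 2^20 x 2^20 x 2^20 grid, assumed QTT-rank 10
print(f"QTT-FFT ~ {qtt_cost(m, d, R):.1e} ops")
print(f"FFT     ~ {fft_cost(m, d):.1e} ops")
```

For moderate R the QTT count grows only polylogarithmically in the grid size, which is the 'curse of dimensionality' escape the abstract claims; for signals whose intermediate ranks blow up, the comparison flips.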


Compressed Sensing for the multiplexing of PET detectors

Following up on yesterday's paper, Craig Levin, one of the authors who had already shown up on my radar screen earlier this summer, forwarded me their use of compressive sensing, which quite simply could replace Anger logic. This is big news.

Here is the paper: Compressed Sensing for the multiplexing of PET detectors by Peter Olcott, Garry Chinn and Craig Levin. The abstract reads:

Compressed sensing can be used to multiplex a large number of individual readout sensors to significantly reduce the number of readout channels in a large area PET block detector. The compressed sensing framework can be used to treat PET data acquisition as a sparse readout problem and achieve sub-Nyquist rate sampling, where the Nyquist rate is determined by the pixel pitch of the individual SiPM sensors. The sensing matrix is fabricated by using discrete elements or wires that uniquely connect pixels to readout channels. By analyzing the recorded magnitude on several ADC channels, the original pixel values can be recovered even though they have been scrambled through a sensing matrix. In a PET block detector design comprising 128 SiPM pixels arranged in a 16 x 8 array, compressed sensing can provide higher multiplexing ratios (128:16) than Anger logic (128:32) or Cross-strip readout (128:24) schemes while resolving multiple simultaneous hits. Unlike Anger and cross-strip multiplexing, compressed sensing can recover the positions and magnitudes of simultaneous, multiple pixel hits. Decoding multiple pixel hits can be used to improve the positioning of events in light-sharing designs, inter-crystal scatter events, or events that pile up in the detector. A Monte Carlo simulation of a compressed sensing multiplexed circuit design for a 16 x 8 array of SiPM pixels was done. Noise sources from the SiPM pixel (dark counts) and from the readout channel (thermal noise) were included in the simulation. Also, two different crystal designs were simulated, a 1x1 coupled design with 128 scintillation crystals, and a 3:2 light-sharing design with 196 crystals. With an input SNR of 37 dB (experimentally measured from a single SiPM pixel), all crystals were clearly decoded by the compressed sensing multiplexing, with a decoded SNR of the sum signal of 30.6 ± 0.1 dB for the one-to-one coupling, and 26.1 ± 0.1 dB for the three-to-two coupling.
For a 10% energy resolution, an SNR of greater than 20 dB is needed to accurately recover the energy.

This is a very nice paper. Let us note a few things. Their measurement matrix is very sparse and different from the ones we have heard about so far. I note also that in the extreme noiseless case, using the Donoho-Tanner phase transition for a 128x16 measurement matrix like the one implemented here, we have N = 128 and m = 16, i.e. delta = 0.125; using Jared Tanner's app, we can estimate the optimal number of simultaneously detectable events to be about 0.26*16 (Weak Simplex), or about 4. In the noisy case, we should expect a number lower than four.
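The arithmetic of that estimate, spelled out (the 0.26 weak-simplex value is read off Jared Tanner's app, as noted above):

```python
# Back-of-the-envelope Donoho-Tanner estimate for the 128x16 sensing matrix.
N, m = 128, 16
delta = m / N                 # undersampling ratio delta = m/N
rho_weak = 0.26               # weak-simplex phase-transition value at this delta
k_max = rho_weak * m          # max recoverable sparsity (noiseless case)
print(delta, round(k_max))    # 0.125 and about 4 simultaneous hits
```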


Thursday, September 20, 2012

A single-photon sampling architecture for solid-state imaging

This issue of defining compressive sensing as not this or not that is interesting, because it shows compressive sensing to be some sort of direct consequence of a well-studied and known subject area. Yes, dictionary learning is not specific to compressive sensing. Yes, image reconstruction is not compressive-sensing specific. Yes, feature learning (aka signal manifold processing) is not compressive-sensing specific. We all agree on that, because we know deep down that the tools developed in those areas are an opportunity to make our job faster. Every once in a while though, we are reminded that compressive sensing is not just part of the back-end process arising in signal processing or machine learning, but that it can take the front seat when it comes to actual physical signal acquisition. Many of the instances listed in the compressive sensing hardware list still rely on off-the-shelf electronics, and few (like this or this instance) look into changing the architecture. Today we have another instance of a changing architecture in the paper that follows. Let us note in the meantime that when looking at high-energy photons like gamma rays, nobody is messing with the Anger logic yet.

Which brings us to today's paper, introduced by Ewout van den Berg:

Dear Igor,
It's a pleasure to announce our new paper called "A single-photon sampling architecture for solid-state imaging sensors". The paper presents an architecture for silicon photomultiplier sensor arrays designed to determine with high accuracy the time and location of each detected photon. The design exploits the fact that the photon arrival on the sensor is temporally sparse, especially when the photon flux is low and sampling is done over sufficiently short time intervals. At first glance this seems to be an ideal setting for compressed sensing. However, given the binary nature of the signals and the high temporal resolution desired, a perfect fit is found instead in group testing. Our design uses group-testing based interconnection networks to connect subsets of pixels to time-to-digital converters (TDCs), which record a time stamp to memory whenever an event is detected on their input during a sampling interval. The paper gives detailed constructions of efficient group-testing designs that guarantee fast and unique recovery of signals up to a certain sparsity level. We compare the number of TDCs used in these designs with theoretical upper and lower bounds on the minimum number of TDCs required. Finally, we show the efficacy of the design based on realistic simulations of scintillation events in clinical positron emission tomography.
The paper is available on arXiv at
Best regards,
Thanks Ewout! I note that some of the authors have already shown up on my radar screen earlier this summer.

Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as LiDAR and positron emission tomography.
The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatio-temporal resolution, causing many contemporary designs to severely underutilize the technology's full potential. Concentrating on the low photon flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs, thereby also reducing both cost and power consumption. The design relies on a multiplexing technique based on binary interconnection matrices. We provide optimized instances of these matrices for various sensor parameters and give explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals.
To illustrate the strength of the proposed architecture, we note a typical digitization result of a 120x120 photodiode sensor on a 30um x 30um pitch with a 40ps time resolution and an estimated fill factor of approximately 70%, using only 161 TDCs. The design guarantees registration and unique recovery of up to 4 simultaneous photon arrivals using a fast decoding algorithm. In a series of realistic simulations of scintillation events in clinical positron emission tomography the design was able to recover the spatio-temporal location of 98.6% of all photons that caused pixel firings.
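As a toy illustration of the group-testing idea described above (and not of the paper's optimized designs), the sketch below wires a small pixel array to a handful of TDC lines through a random binary interconnection matrix, OR-pools pixel firings onto those lines, and decodes with the standard naive group-testing decoder. All sizes, the random matrix, and the function names are my own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_tdcs = 64, 16
# Hypothetical random binary interconnection matrix: entry (i, j) is True
# when TDC line i is wired to pixel j (the paper constructs optimized designs).
A = rng.random((n_tdcs, n_pixels)) < 0.25

def tdc_readout(firings):
    """OR-pool pixel firings onto the TDC lines (True = event registered)."""
    return (A.astype(int) @ firings.astype(int)) > 0

def naive_decode(tdc_hits):
    """Declare a pixel firing iff every TDC wired to it registered an event."""
    silent = A & ~tdc_hits[:, None]   # wired to pixel j but stayed silent
    return ~silent.any(axis=0)

true_firings = np.zeros(n_pixels, dtype=bool)
true_firings[[3, 41]] = True          # two simultaneous photon arrivals
decoded = naive_decode(tdc_readout(true_firings))
# decoded always contains the true firing pixels; it equals the true support
# exactly whenever the interconnection matrix is d-disjunct and at most
# d pixels fire in the sampling interval.
```

With a random matrix the decoder may return a few false positives; the whole point of the paper's explicit constructions is to guarantee unique recovery up to a prescribed number of simultaneous arrivals.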

Image Credit: NASA/JPL/Space Science Institute
W00075464.jpg was taken on September 17, 2012 and received on Earth September 17, 2012. The camera was pointing toward SATURN at approximately 1,494,101 miles (2,404,522 kilometers) away, and the image was taken using the CB2 and CL2 filters. 

Wednesday, September 19, 2012

Workshop on "Randomized Numerical Linear Algebra (RandNLA): Theory and Practice"

Last year, I wondered publicly how fast they'd join our side of the Force... the "they" referring to the folks behind the fantastic infrastructure of the internet and of scientific progress over the past 30 years, i.e. the people behind the Numerical Linear Algebra packages (LAPACK/LINPACK/EISPACK). Like everybody else who can reasonably look into the future (see Predicting the Future: The Steamrollers and Predicting the Future: Randomness and Parsimony), they and others are coming to the realization that randomization is important as problem sizes become huge, as opposed to just large. If you really want to know more about the subject, I invite you to read about Streaming Data Mining. In the meantime, there is a growing crowd of folks who want to make this subject more widespread. I suggested naming this RP-Lapack a long while back, but it looks like we have a better contender in RandNLA from the people actually doing the work :-) Haim Avron, being one of them, just sent me the following announcement (the synopsis is very telling):

Dear Igor, 
I am organizing with Christos Boutsidis and Petros Drineas a one day workshop on randomized algorithms in numerical linear algebra. I am attaching the announcement. More information, including the list of confirmed speakers and abstracts can be found at Would it be possible to post the announcement on your blog?
Thanks in advance,

Sure Haim, here is the announcement:

Workshop on "Randomized Numerical Linear Algebra (RandNLA): Theory and Practice"
(Under the auspices of FOCS 2012, the 53rd Annual IEEE Symposium on Foundations of Computer Science)
Hyatt Regency, New Brunswick NJ, USA
October 20, 2012
Matrix algorithms are the foundation for many methods in data analysis, scientific computing, and engineering applications, and Numerical Linear Algebra (NLA) is the key discipline enabling the analysis and implementation of such algorithms. Without highly efficient and scalable NLA algorithms, the challenge of big data will not be met. To meet this challenge a true boost in performance is necessary, and that requires new algorithmic paradigms to be developed, analyzed, and deployed. Randomization is one of only a handful of paradigms that have the potential to deliver the desired true boost in performance. In the past, scientists and application users were distrustful and shied away from randomized numerical linear algebra algorithms, because (i) their output is unpredictable, (ii) it may not be reproducible, (iii) the error analysis is probabilistic, and (iv) the obtained accuracy is very crude. However, the NLA community is now seriously considering the use of "radical" ideas like randomization, and, indeed, recent years have seen much research on randomized numerical linear algebra algorithms.
These practical developments represent huge opportunities for the theoretical computer science community: how can randomization and sampling be leveraged in order to design faster numerical algorithms? The goal of the workshop is to expose the participants to recent progress on developing randomized numerical linear algebra algorithms, as well as on the application of such algorithms to a variety of disciplines and domains.
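To give a flavor of what these randomized algorithms look like, here is a minimal sketch of the randomized SVD in the style of Halko, Martinsson and Tropp, one of the flagship RandNLA results. The rank, oversampling parameter, and test matrix are illustrative choices of mine, not anything taken from the workshop program.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=None):
    """Randomized range finder followed by a small deterministic SVD."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Omega = rng.standard_normal((n, rank + oversample))  # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)     # orthonormal basis for the sketched range
    B = Q.T @ A                        # small (rank + oversample) x n matrix
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

# Exactly rank-5 test matrix: the sketch captures its whole range, so the
# truncated reconstruction error sits at floating-point roundoff level.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 300))
U, s, Vt = randomized_svd(A, rank=5, seed=1)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The appeal for huge problems is that the expensive passes over A reduce to a few matrix-matrix products, while the dense SVD is only performed on a small sketched matrix.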

More information, including list of confirmed speakers and schedule:

Haim Avron (, Christos Boutsidis (, and Petros Drineas (

Image Credit: NASA/JPL-Caltech/Malin Space Science Systems 
This image was taken by Mastcam: Left (MAST_LEFT) onboard NASA's Mars rover Curiosity on Sol 41 (2012-09-17 07:18:22 UTC) . 
Full Resolution

Tuesday, September 18, 2012

PostDoc: Research Fellow in Signal Processing for Radio Astronomy

Yves Wiaux just sent me the following:

Hi Igor, 
May I ask you to post on nuit blanche the new ad below for a postdoc in Southampton with A Scaife, in close collaboration with my group at EPFL?
Thanks a lot in advance 

Sure Yves. Here it is.

Research Fellow in Signal Processing for Radio Astronomy
The Astronomy Group at the University of Southampton has an opening for a postdoctoral researcher to become part of the ERC funded LODESTONE project. The LODESTONE project unifies the polarization key science projects from multiple next generation radio telescopes to investigate the origin and evolution of cosmic magnetic fields in large-scale structure. In this context, the project will also place an emphasis on the development of advanced signal processing and imaging approaches for radio astronomy.
The successful applicant will work with Dr Anna Scaife (Southampton) and Dr Yves Wiaux (EPFL) on the development of such novel signal processing and imaging approaches (inverse problems, sparsity, sampling, reconstruction, statistical analysis, etc.) for radio astronomy. Candidates are expected to have a background in signal processing or applied statistical analysis, and a sound interest in astronomy. Applied experience in radio astronomy, astrophysics or cosmology will be considered advantageous.
The initial appointment is for 2 years, renewable on the basis of initial achievements. A PhD in maths, physics or a related field is required. Equipment and travel funding (in particular for collaboration with the Biomedical and Astrophysical Signal Processing (BASP) group of Dr Wiaux) is available.
Interested candidates should send a CV, publication list, a brief research statement, and three letters of recommendation by October 31st 2012 to Mrs Ceris French ( and complete the online application at The position is available from January 1st 2013. General questions may be sent by email
The closing date for this post is 31st October 2012. Please quote vacancy reference number 159812WF on all correspondence.

Monday, September 17, 2012

Implementation: Iterative Reweighted Algorithms for Matrix Rank Minimization

Slowly but surely, some of the vectorial examples that used to fit the compressed sensing framework are bound to cross over to rank minimization. Today, we have an example of that in Iterative Reweighted Algorithms for Matrix Rank Minimization by Karthik Mohan and Maryam Fazel. The abstract reads:
The problem of minimizing the rank of a matrix subject to affine constraints has many applications in machine learning, and is known to be NP-hard. One of the tractable relaxations proposed for this problem is nuclear norm (or trace norm) minimization of the matrix, which is guaranteed to find the minimum rank matrix under suitable assumptions. In this paper, we propose a family of Iterative Reweighted Least Squares algorithms IRLS-p (with 0 ≤ p ≤ 1), as a computationally efficient way to improve over the performance of nuclear norm minimization. The algorithms can be viewed as (locally) minimizing certain smooth approximations to the rank function. When p = 1, we give theoretical guarantees similar to those for nuclear norm minimization, i.e., recovery of low-rank matrices under certain assumptions on the operator defining the constraints. For p < 1, IRLS-p shows better empirical performance in terms of recovering low-rank matrices than nuclear norm minimization. We provide an efficient implementation for IRLS-p, and also present a related family of algorithms, sIRLS-p. These algorithms exhibit competitive run times and improved recovery when compared to existing algorithms for random instances of the matrix completion problem, as well as on the MovieLens movie recommendation data set

The attendant code implementing the IRLS approach to rank minimization is here.
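To get the gist without opening the code, here is a rough sketch in the spirit of the paper's sIRLS-1 iteration for matrix completion: at each step the smoothed weight matrix reweights a gradient step, and the iterate is projected back onto the observed entries. The step size, smoothing schedule, and iteration count are my own guesses; this is not the authors' tuned implementation.

```python
import numpy as np

def sirls1_complete(M_obs, mask, n_iter=500, step=0.3, gamma=1.0):
    """Fill unobserved entries of M_obs (mask is True on observed entries)."""
    X = np.where(mask, M_obs, 0.0)
    m = X.shape[0]
    for _ in range(n_iter):
        # Smoothed weight matrix W = (X X^T + gamma I)^{-1/2}; W X is the
        # gradient direction of the smoothed nuclear-norm surrogate.
        G = X @ X.T + gamma * np.eye(m)
        vals, vecs = np.linalg.eigh(G)
        W = (vecs / np.sqrt(vals)) @ vecs.T
        # Scale the step by sqrt(gamma) so small singular values shrink by a
        # fixed factor while large (signal) ones are barely touched.
        X = X - step * np.sqrt(gamma) * (W @ X)
        X[mask] = M_obs[mask]            # project back onto the constraints
        gamma = max(gamma * 0.97, 1e-8)  # slowly tighten the smoothing
    return X

rng = np.random.default_rng(2)
M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))  # rank 2
mask = rng.random(M.shape) < 0.6          # observe 60% of the entries
X_hat = sirls1_complete(np.where(mask, M, 0.0), mask)
rel_err = np.linalg.norm(X_hat - M) / np.linalg.norm(M)
```

The released code linked above additionally exploits structure to avoid the full eigendecomposition at every iteration, which is what makes the run times competitive on problems like MovieLens.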

Image Credit: NASA/JPL-Caltech/Malin Space Science Systems 
This image was taken by Mastcam: Left (MAST_LEFT) onboard NASA's Mars rover Curiosity on Sol 39 (2012-09-15 03:26:04 UTC) . 

Friday, September 14, 2012

Nuit Blanche's Mailbag: Two-day workshop on Sparse Models and Machine Learning / Aptina and a Multi-Array Gigapixel camera

Cedric Herzet just sent me the following:

Dear Igor,

I'm organizing with a few other researchers (Remi Gribonval, Aline Roumy and Frederic Bimbot) from the IRISA research center (Rennes, France), a two-day workshop dedicated to parsimony and learning.
Different speakers active in the field will be invited to give tutorial talks. The program for the two days is attached.
Would it be possible to post the announcement on your blog?

Thank you in advance for your help,
Cedric Herzet
Thanks Cedric. From the program:

Two day workshop on Sparse Models and Machine Learning
15-16 October 2012
IRISA, Campus de Beaulieu, Rennes, France

Sparse representations and models have become a central concept in the field of signal processing and data modeling and are gradually building bridges with the area of machine learning. In parallel, these concepts are becoming operative in a number of applicative fields, such as brain imaging, distributed communications, multimedia, etc. This two-day workshop is intended to offer an overview of the current status of this stimulating field and to discuss very exciting focal points where sparse representations, machine learning and applications are converging toward new fundamental and practical paradigms.
The two days are articulated around 4 tutorial talks, 3 applicative focuses and 2 open discussions (see the complete programme below) presented by colleagues currently at the forefront of the field. The workshop is open to academic and industry specialists, but also to PhD students, postdocs and colleagues from neighboring domains who want to deepen their understanding of the field and its current theoretical and practical challenges.
The registration is free but mandatory. Please, contact and

Mon 15 Oct
  • 10h45-12h15 Tutorial 1  Remi Gribonval, Inverse problems and sparse models (1/2)
  • 12h15-14h00 Lunch
  • 14h00-15h30 Tutorial 2 Francis Bach, Structured sparsity and convex optimization (1/2)
  • 15h30-15h45 Coffee Break
  • 15h45-16h45 Applications (warm up) Pierre Vandergheynst, Emerging applications of sparse representations
  • 16h45-17h30 Open discussion 1 Led by P. Vandergheynst, What can be expected from sparse models in addressing new applicative problems?
Tue 16 Oct
  • 9h00-10h30 Tutorial 3  Remi Gribonval  Inverse problems and sparse models (2/2)
  • 10h30-10h45 Coffee break
  • 10h45-11h45 Applicative focus 1 Enrico Magli, Compressed sensing for distributed communications
  • 12h45-14h00 Lunch
  • 14h00-15h30 Tutorial 4  Francis Bach , Structured sparsity and convex optimization (2/2)
  • 15h30-16h15 Open discussion 2 Led by R. Gribonval & F. Bach, Bridging the gap between sparse methods and machine learning

In a somewhat different direction, Adi Shavit pointed me to this entry from Vladimir Koifman's blog on the fact that Aptina Invests into Multi-Array Gigapixel Future. It turns out, as Adi showed me, that it is none other than the gigapixel array from David Brady's project, which we featured in Video: David Brady's "Coding for Multiplex Optical Imagers" lecture. Thanks Adi!

Credit: Produced by Bard Canning.