Monday, April 30, 2012

Iterative Estimation of Constrained Rank-One Matrices in Noise (implementation)

So the Approximate Message Passing algorithm is taking over the world, slowly but surely (thank you, Emtiyaz, for the tutorial report on Arian Maleki's thesis). Here is an implementation of low-rank factorization built on the GAMP package, which makes this a paper with an implementation: Iterative Estimation of Constrained Rank-One Matrices in Noise by Sundeep Rangan and Alyson Fletcher. The abstract reads:

We consider the problem of estimating a rank-one matrix in Gaussian noise under a probabilistic model for the left and right factors of the matrix. The probabilistic model can impose constraints on the factors including sparsity and positivity that arise commonly in learning problems. We propose a simple iterative procedure that reduces the problem to a sequence of scalar estimation computations. The method is similar to approximate message passing techniques based on Gaussian approximations of loopy belief propagation that have been used recently in compressed sensing. Leveraging analysis methods by Bayati and Montanari, we show that the asymptotic behavior of the estimates from the proposed iterative procedure is described by a simple scalar equivalent model, where the distribution of the estimates is identical to certain scalar estimates of the variables in Gaussian noise. Moreover, the effective Gaussian noise level is described by a set of state evolution equations. The proposed method thus provides a computationally simple and general method for rank-one estimation problems with a precise analysis in certain high-dimensional settings.
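The "sequence of scalar estimation computations" in the abstract can be pictured as a power iteration in which each matrix-vector product is followed by a componentwise (scalar) denoiser. Below is a minimal numpy sketch of that idea with soft-thresholding as the denoiser for sparse factors; this is our own toy illustration, not the GAMP code from the gampmatlab package (all function and parameter names are ours):

```python
import numpy as np

def soft(x, t):
    # componentwise soft-thresholding: the "scalar estimation" step
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rank_one_estimate(Y, thresh=0.1, iters=30):
    """Alternating scalar-denoised power iteration for Y ~ u v^T + noise.
    A hypothetical sketch of the idea, not the Rangan/Fletcher algorithm."""
    v = np.linalg.svd(Y, full_matrices=False)[2][0]  # cheap spectral start
    for _ in range(iters):
        u = soft(Y @ v, thresh)                      # left factor + denoiser
        u /= max(np.linalg.norm(u), 1e-12)
        v = soft(Y.T @ u, thresh)                    # right factor + denoiser
        v /= max(np.linalg.norm(v), 1e-12)
    s = u @ Y @ v                                    # scale of the rank-one part
    return s * np.outer(u, v)

rng = np.random.default_rng(1)
u0 = np.zeros(100); u0[:5] = 1.0                     # sparse, positive factors
v0 = np.zeros(80);  v0[:5] = 1.0
Y = np.outer(u0, v0) + 0.05 * rng.standard_normal((100, 80))
X = rank_one_estimate(Y)
err = np.linalg.norm(X - np.outer(u0, v0)) / np.linalg.norm(np.outer(u0, v0))
print(err)                                           # small relative error
```

Swapping the denoiser for positivity clipping (`np.maximum(x, 0)`) handles non-negative factors in the same loop, which is the kind of constraint the paper's probabilistic model accommodates.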
Let us not forget that while rank-1 matrices may seem like a small problem, they are at the heart of some optics problems with quadratic compressed sensing (the pdf is here). From the GAMP Package site:

Uses the GAMP package for rank one matrix factorization. The paper shows that you can obtain an exact state evolution analysis for predicting the GAMP performance. In the future, we hope to extend this to higher ranks.
Code is in the directory gampmatlab\trunk\code\matrixFactor. 


Image Credit: NASA/JPL/Space Science Institute
W00074018.jpg was taken on April 23, 2012 and received on Earth April 24, 2012. The camera was pointing toward SATURN-ERING at approximately 2,837,370 kilometers away, and the image was taken using the CL1 and CL2 filters. 

MALSAR (Multi-tAsk Learning via StructurAl Regularization) package


Timelapse: EARTHEREAL from Adonis Pulatus on Vimeo.



I just came across the following new package: MALSAR (Multi-tAsk Learning via StructurAl Regularization) by Jiayu Zhou, Jianhui Chen, and Jieping Ye. Here is a short description:


The MALSAR (Multi-tAsk Learning via StructurAl Regularization) package includes the following multi-task learning algorithms:
  • Mean-Regularized Multi-Task Learning
  • Multi-Task Learning with Joint Feature Selection
  • Robust Multi-Task Feature Learning
  • Trace-Norm Regularized Multi-Task Learning
  • Alternating Structural Optimization
  • Incoherent Low-Rank and Sparse Learning
  • Robust Low-Rank Multi-Task Learning
  • Clustered Multi-Task Learning
  • Multi-Task Learning with Graph Structures

    Manual


    The manual for the latest version can be found here. The manual is also included in the MANUAL folder of the MALSAR package.
    Tutorial


    We gave a tutorial on multi-task learning at the Twelfth SIAM International Conference on Data Mining (SDM'12). The tutorial slides can be downloaded here.


The webpage is here. The package will soon be listed on the Big Picture in Compressive Sensing page and on the Matrix Factorization page.




Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Sunday, April 29, 2012

Nuit Blanche entries that gathered the most Google +1s so far

Since the beginning of the year, the following entries have gathered the following numbers of Google +1s:



+3

Just on the right side of impossible (part 3)

In Just on the right side of Impossible (part 2), I mentioned several ideas that could make for frighteningly ambitious start-ups. Three items related to some of these ideas showed up on my radar this week; here they are:


Communicating with Animals:


New Types of Sensors:

Friday, April 27, 2012

CAI: A Glimpse of Lana and Robust PCA

It's Friday, so it's Cable and Igor's Adventures in Matrix Factorization. Today, Cable and I are going to play with two scenes: a static scene featuring a wall of drawings, and a video of Lana Del Rey on top of that background. Watch it first, we'll wait; then join us below.







The idea here is to project this video on a very well-lit wall with a nonuniform background, as follows:






and see whether, by performing a Robust PCA decomposition using GoDec, one can separate the much fainter video from the background:

 



Mission semi-accomplished! Our decomposition of the noise (bottom right) and sparse (bottom left) terms is in the canonical basis, and therefore much of the video lands in the noise term when one would expect it to land in the sparse term. We could probably have gotten that result but, call us snobs, we're a bit partial to Lana as opposed to a canonical basis. One wonders whether, by using a dictionary, the sparse term could grab Lana's video clip entirely. We'll need to investigate this further. The low-rank decomposition (top right video) removed the video entirely... so that is good.
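For readers who want to poke at this at home, GoDec-style Robust PCA alternates a low-rank projection with an entrywise hard threshold. The toy sketch below uses a plain truncated SVD in place of GoDec's bilateral random projections, and all names and parameters are our own:

```python
import numpy as np

def godec_sketch(X, rank=1, card=50, iters=20):
    """Minimal GoDec-style split X ~ L + S via truncated SVD and entrywise
    hard thresholding. A toy sketch, not Zhou & Tao's full GoDec code."""
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # low-rank term
        R = X - L
        idx = np.argsort(np.abs(R), axis=None)[-card:]  # keep largest entries
        S = np.zeros_like(X)
        S.flat[idx] = R.flat[idx]                       # sparse term
    return L, S

rng = np.random.default_rng(0)
bg = np.outer(np.ones(40), rng.standard_normal(60))     # static "background"
fg = np.zeros((40, 60)); fg[10:15, 20:25] = 5.0         # faint "foreground"
L, S = godec_sketch(bg + fg, rank=1, card=25)
print(abs(S[12, 22]))   # the foreground block should land in the sparse term
```

The same loop, run frame-by-frame on a video matrix (one column per frame), is how the low-rank/sparse split above separates the static wall from the moving clip.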



Generalizing Compressed Sensing: GraSP implementation available




Through his Twitter feed, Petros Boufounos let us know that:

GraSP code is out: http://t.co/QnVvUfZo Go play! Here is the paper:http://t.co/dsC919S2
In plain English, the site is here. It starts with:


Gradient Support Pursuit (GraSP)
This algorithm can be used as an approximate solver for sparsity constrained optimization problems. The algorithm generalizes the CoSaMP algorithm used in Compressed Sensing, and can handle cost functions that are non-quadratic. 
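For intuition, the CoSaMP-style recipe behind GraSP goes: compute the gradient of the cost, merge the 2k largest gradient coordinates with the current support, (approximately) minimize the cost restricted to that merged support, and prune back to k entries. Here is a hypothetical sketch on sparse logistic regression; this is our own illustration with a crude inner solver, not the released GraSP code:

```python
import numpy as np

def grasp_sketch(grad, p, k, iters=30, lr=0.5, inner=25):
    """Gradient Support Pursuit sketch for min cost(x) s.t. ||x||_0 <= k.
    Hypothetical illustration of the CoSaMP-style recipe, not the authors' code."""
    x = np.zeros(p)
    for _ in range(iters):
        g = grad(x)
        # merge: 2k largest gradient coordinates + current support
        merged = set(np.argsort(np.abs(g))[-2 * k:]) | set(np.flatnonzero(x))
        T = np.array(sorted(merged))
        # crude inner solve: gradient steps restricted to the merged support
        b = x.copy()
        for _ in range(inner):
            b[T] -= lr * grad(b)[T]
        # prune back to the k largest entries
        x = np.zeros(p)
        keep = np.argsort(np.abs(b))[-k:]
        x[keep] = b[keep]
    return x

# toy sparse logistic regression problem
rng = np.random.default_rng(0)
n, p, k = 200, 50, 3
A = rng.standard_normal((n, p))
x_true = np.zeros(p); x_true[:k] = [2.0, -1.5, 1.0]
y = (rng.random(n) < 1 / (1 + np.exp(-A @ x_true))).astype(float)

def grad(x):   # gradient of the average logistic loss
    return A.T @ (1 / (1 + np.exp(-A @ x)) - y) / n

def cost(x):   # average logistic loss, non-quadratic as in the paper
    z = A @ x
    return np.mean(np.log1p(np.exp(z)) - y * z)

x_hat = grasp_sketch(grad, p, k)
print(sorted(np.flatnonzero(x_hat)))   # indices of the recovered support
```

Replacing the logistic `grad`/`cost` pair with the squared-error ones recovers (up to the inner solver) the CoSaMP setting the description mentions.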


The attendant supporting presentation is here. As a reminder, the paper is: Greedy Sparsity-Constrained Optimization by Sohail Bahmani, Petros Boufounos, and Bhiksha Raj. The abstract reads:
Sparsity-constrained optimization has wide applicability in machine learning, statistics, and signal processing problems such as feature selection and Compressive Sensing. A vast body of work has studied the sparsity-constrained optimization from theoretical, algorithmic, and application aspects in the context of sparse estimation in linear models where the fidelity of the estimate is measured by the squared error. In contrast, relatively less effort has been made in the study of sparsity-constrained optimization in cases where nonlinear models are involved or the cost function is not quadratic. In this paper we propose a greedy algorithm, Gradient Support Pursuit (GraSP), to approximate sparse minima of cost functions of arbitrary form. Should a cost function have a Stable Restricted Hessian (SRH) or a Stable Restricted Linearization (SRL), both of which are introduced in this paper, our algorithm is guaranteed to produce a sparse vector within a bounded distance from the true sparse optimum. Our approach generalizes known results for quadratic cost functions that arise in sparse linear regression and Compressive Sensing. We also evaluate the performance of GraSP through numerical simulations on synthetic data, where the algorithm is employed for sparse logistic regression with and without $\ell_2$-regularization.




Thursday, April 26, 2012

GraphLab 2012 Workshop registration and other random items.

Danny Bickson just sent me the following about registration for the upcoming GraphLab workshop (see GraphLab workshop, Why should you care? and A small Q&A with Danny Bickson on GraphLab):

Hi, 
As you may be aware, based on your feedback, we set the workshop date to Monday, July 9th in San Francisco. The workshop will take place one day before the Stanford MMDS workshop, thus allowing people to attend both workshops.

We have finally set up the workshop website: http://graphlab.org/workshop2012/. Early registration is now open.

At this point, we would like your help in disseminating the news about the event to anyone relevant.

Thanks a lot for your great help! Looking forward to meeting everyone in person at the workshop!

Dr. Danny Bickson
Project Scientist, Machine Learning Dept.
Carnegie Mellon University
As I mentioned to one of you recently, I have scheduling conflicts and probably won't attend, but I think it is a good opportunity to invest in a plane ticket and attend. It would be even better if you could give a presentation or present a poster. Let me or Danny know if you have a poster or presentation idea in mind. The presentations and posters currently on the program are listed here. The companies represented there are listed here and here. (Of interest: GAMP: Generalized Approximate Message Passing in GraphLab.)


Yesterday's mention of the paper that appeared in Nature, Faster STORM using compressed sensing by Lei Zhu, Wei Zhang, Daniel Elnatan and Bo Huang, for which there is no preprint available, got me to find these two items:

* One on PSF engineering and STORM; eventually some of that has to spill over to compressive sensing methods: Optimal 3D single-molecule localization for superresolution microscopy with aberrations and engineered point spread functions by Sean Quirin, Sri Rama Prasanna Pavani, and Rafael Piestun. The abstract reads:

Photo-activation localization microscopy is a far-field superresolution imaging technique based on the localization of single molecules with subdiffraction limit precision. Known under acronyms such as PALM (photo-activated localization microscopy) or STORM (stochastic optical reconstruction microscopy), these techniques achieve superresolution by allowing only a sparse, random set of molecules to emit light at any given time and subsequently localizing each molecule with great precision. Recently, such techniques have been extended to three dimensions, opening up unprecedented possibilities to explore the structure and function of cells. Interestingly, proper engineering of the three-dimensional (3D) point spread function (PSF) through additional optics has been demonstrated to theoretically improve 3D position estimation and ultimately resolution. In this paper, an optimal 3D single-molecule localization estimator is presented in a general framework for noisy, aberrated and/or engineered PSF imaging. To find the position of each molecule, a phase-retrieval enabled maximum-likelihood estimator is implemented. This estimator is shown to be efficient, meaning it reaches the fundamental Cramer–Rao lower bound of x, y and z localization precision. Experimental application of the phase-retrieval enabled maximum-likelihood estimator using a particular engineered PSF microscope demonstrates unmatched low-photon-count 3D wide-field single-molecule localization performance.




* And the other one, by Roarke Horstmeyer: Measuring Ultrafast Pulses: Frequency Resolved Optical Gating and Phase Space Functions, 6.638 Ultrafast Optics Final Project, Dec. 2010. At the very end, NMF is used to perform a FROG deconvolution for certain types of pulses. We've talked about FROG before (see also Shouldn't We Call It "Sparse Sensing" or "Sparsity Sensing" instead?).






Wednesday, April 25, 2012

Compressive Sensing This Week

Through the Google Summer of Code 2012, Sergey Lisitsyn will port SLEP to Shogun (a large-scale machine learning toolbox). SHOGUN is implemented in C++ and interfaces to Matlab(tm), R, Octave and Python.





The following paper just appeared in Nature: Faster STORM using compressed sensing by Lei Zhu, Wei Zhang, Daniel Elnatan and Bo Huang. The abstract reads:
In super-resolution microscopy methods based on single-molecule switching, the rate of accumulating single-molecule activation events often limits the time resolution. Here we developed a sparse-signal recovery technique using compressed sensing to analyze images with highly overlapping fluorescent spots. This method allows an activated fluorophore density an order of magnitude higher than what conventional single-molecule fitting methods can handle. Using this method, we demonstrated imaging microtubule dynamics in living cells with a time resolution of 3 s.


Hardware-Efficient Random Sampling of Fourier-Sparse Signals by Patrick Maechler, Norbert Felber, Hubert Kaeslin and Andreas Burg. The abstract reads:
Spectrum sensing, i.e. the identification of occupied frequencies within a large bandwidth, requires complex sampling hardware. Measurements suggest that only a small fraction of the available spectrum is actually used at any time and place, which allows a sparse characterization of the frequency domain signal. Compressed sensing (CS) can exploit this sparsity and simplify measurements. We investigate the performance of a very simple hardware architecture based on the slope analog-to-digital converter (ADC), which allows sampling signals at unevenly spaced points in time. CS algorithms are used to identify the occupied frequencies, which can be continuously distributed across a large bandwidth.



Emily Allstot let me know of her paper: Compressed Sensing System Considerations for ECG and EMG Wireless Biosensors by A. M. R. Dixon, E. G. Allstot, D. Gangopadhyay, and D. J. Allstot. The abstract reads:
Compressed sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist processing of sparse signals such as electrocardiogram (ECG) and electromyogram (EMG) biosignals. Consequently, it can be applied to biosignal acquisition systems to reduce the data rate to realize ultra-low-power performance. CS is compared to conventional and adaptive sampling techniques and several system-level design considerations are presented for CS acquisition systems including sparsity and compression limits, thresholding techniques, encoder bit-precision requirements, and signal recovery algorithms. Simulation studies show that compression factors greater than 16X are achievable for ECG and EMG signals with signal-to-quantization noise ratios greater than 60 dB.

Here is a good news-bad news story:

  • The bad news? Testing RIP is NP-hard under randomized reductions (see paper below).
  • The good news? We don't care about RIP.

This paper is concerned with an important matrix condition in compressed sensing known as the restricted isometry property (RIP). We demonstrate that testing whether a matrix satisfies RIP is hard for NP under randomized polynomial-time reductions. Our reduction is from the NP-complete clique decision problem, and it uses ideas from matroid theory. As a consequence of our result, it is impossible to efficiently test for RIP provided NP \not\subseteq BPP, an assumption which is slightly stronger than P \neq NP.
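To see why this matters computationally: the naive certificate for the restricted isometry constant delta_k sweeps every one of the (p choose k) column subsets, which is exponential in k. A toy brute-force check (our own code, for intuition only, with made-up parameter values):

```python
import numpy as np
from itertools import combinations

def rip_constant(A, k):
    """Brute-force restricted isometry constant delta_k: the smallest delta
    with (1-delta)||x||^2 <= ||Ax||^2 <= (1+delta)||x||^2 for all k-sparse x.
    Exhaustive over all supports -- exponential in k, which is exactly the
    obstruction the hardness result formalizes."""
    p = A.shape[1]
    delta = 0.0
    for S in combinations(range(p), k):
        G = A[:, S].T @ A[:, S]              # Gram matrix on support S
        w = np.linalg.eigvalsh(G)            # extreme eigenvalues bound ||Ax||^2
        delta = max(delta, abs(w[0] - 1.0), abs(w[-1] - 1.0))
    return delta

rng = np.random.default_rng(0)
m, p, k = 30, 12, 2
A = rng.standard_normal((m, p)) / np.sqrt(m)  # typical CS matrix scaling
delta_2 = rip_constant(A, k)
print(delta_2)                                # modest for a random A, small k
```

Already at p = 12, k = 2 this sweeps 66 subsets; at realistic sizes (p in the thousands, k in the hundreds) the count is astronomical, which is the practical face of the NP-hardness result above.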


Abstract. Consider a dataset of vector-valued observations that consists of a modest number of noisy inliers, which are explained well by a low-dimensional subspace, along with a large number of outliers, which have no linear structure. This work describes a convex optimization problem, called reaper, that can reliably fit a low-dimensional model to this type of data. The paper provides an efficient algorithm for solving the reaper problem, and it documents numerical experiments which confirm that reaper can dependably find linear structure in synthetic and natural data. In addition, when the inliers are contained in a low-dimensional subspace, there is a rigorous theory that describes when reaper can recover the subspace exactly.


In this paper, we present a probabilistic analysis of iterative node-based verification-based (NB-VB) recovery algorithms over irregular graphs in the context of compressed sensing. Verification-based algorithms are particularly interesting due to their low complexity (linear in the signal dimension $n$). The analysis predicts the average fraction of unverified signal elements at each iteration $\ell$ where the average is taken over the ensembles of input signals and sensing matrices. The analysis is asymptotic ($n \rightarrow \infty$) and is similar in nature to the well-known density evolution technique commonly used to analyze iterative decoding algorithms. Compared to the existing technique for the analysis of NB-VB algorithms, which is based on numerically solving a large system of coupled differential equations, the proposed method is much simpler and more accurate. This allows us to design irregular sensing graphs for such recovery algorithms. The designed irregular graphs outperform the corresponding regular graphs substantially. For example, for the same recovery complexity per iteration, we design irregular graphs that can recover up to about 40% more non-zero signal elements compared to the regular graphs. Simulation results are also provided which demonstrate that the proposed asymptotic analysis matches the performance of recovery algorithms for large but finite values of $n$.

Matching Pursuits are very popular in Compressed Sensing for sparse signal recovery. Though many of the Matching Pursuits possess elegant theoretical guarantees for performance, it is well known that their performance depends on the statistical distribution of the non-zero elements in the sparse signal. In practice, the distribution of the sparse signal may not be known {\em a priori}. It is also observed that performance of Matching Pursuits degrades as the number of available measurements decreases from a threshold value which is method dependent. To improve the performance in these situations, we introduce a novel fusion framework for Matching Pursuits and also propose two algorithms for sparse recovery. Through Monte Carlo simulations we show that the proposed schemes improve sparse signal recovery in clean as well as noisy measurement cases.

Estimating Unknown Sparsity in Compressed Sensing by Miles E. Lopes. The abstract reads:
Within the framework of compressed sensing, many theoretical guarantees for signal reconstruction require that the number of linear measurements $n$ exceed the sparsity ||x||_0 of the unknown signal x\in\R^p. However, if the sparsity ||x||_0 is unknown, the choice of $n$ remains problematic. This paper considers the problem of estimating the unknown degree of sparsity of $x$ with only a small number of linear measurements. Although we show that estimation of ||x||_0 is generally intractable in this framework, we consider an alternative measure of sparsity s(x):=\frac{\|x\|_1^2}{\|x\|_2^2}, which is a sharp lower bound on ||x||_0, and is more amenable to estimation. When $x$ is a non-negative vector, we propose a computationally efficient estimator \hat{s}(x), and use non-asymptotic methods to bound the relative error of \hat{s}(x) in terms of a finite number of measurements. Remarkably, the quality of estimation is \emph{dimension-free}, which ensures that \hat{s}(x) is well-suited to the high-dimensional regime where n<
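The alternative sparsity measure is easy to play with; it equals ||x||_0 exactly when all non-zero entries share the same magnitude, and lower-bounds it otherwise. A two-line illustration (our own code):

```python
import numpy as np

def numerical_sparsity(x):
    """s(x) = ||x||_1^2 / ||x||_2^2, a sharp lower bound on ||x||_0."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x)) ** 2 / np.sum(x ** 2)

x = np.array([3.0, 1.0, 0.0, 0.0, -2.0])
print(numerical_sparsity(x))                 # 36/14 ~ 2.571 <= ||x||_0 = 3
print(numerical_sparsity([1.0, -1.0, 1.0]))  # 3.0: equality for equal magnitudes
```

Unlike ||x||_0, this ratio is stable under small perturbations of x, which is what makes it amenable to estimation from a few random measurements as the paper describes.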

Fast thresholding algorithms with feedbacks for sparse signal recovery by Tiebin Mi, Shidong Li, Yulong Liu. The abstract reads:
We provide another framework of iterative algorithms based on thresholding, feedback and null space tuning for sparse signal recovery arising in sparse representations and compressed sensing. Several thresholding algorithms with various feedbacks are derived, which are seen as exceedingly effective and fast. Convergence results are also provided. The core algorithm is shown to converge in finitely many steps under a (preconditioned) restricted isometry condition. Numerical studies about the effectiveness and the speed of the algorithms are also presented. The algorithms are seen as particularly effective for large scale problems.


Consider an n-dimensional linear system where it is known that there are at most k (k < n) non-zero components in the initial state. The observability problem, that is, the recovery of the initial state, for such a system is considered. We obtain sufficient conditions on the number of the available observations to be able to recover the initial state exactly for such a system. Both deterministic and stochastic setups are considered for system dynamics. In the former setting, the system matrices are known deterministically, whereas in the latter setting, all of the matrices are picked from a randomized class of matrices. The main message is that one does not need to obtain full n observations to be able to uniquely identify the initial state of the linear system, even when the observations are picked randomly, when the initial condition is known to be sparse.



In this paper we demonstrate a simple heuristic adaptive restart technique that can dramatically improve the convergence rate of accelerated gradient schemes. The analysis of the technique relies on the observation that these schemes exhibit two modes of behavior depending on how much momentum is applied. In what we refer to as the 'high momentum' regime the iterates generated by an accelerated gradient scheme exhibit a periodic behavior, where the period is proportional to the square root of the local condition number of the objective function. This suggests a restart technique whereby we reset the momentum whenever we observe periodic behavior. We provide analysis to show that in many cases adaptively restarting allows us to recover the optimal rate of convergence with no prior knowledge of function parameters.
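The restart rule is easy to state: run Nesterov's accelerated gradient and reset the momentum whenever the momentum direction opposes the current gradient. A sketch on an ill-conditioned quadratic (our own code, following the gradient-based variant of the heuristic; all names and values are ours):

```python
import numpy as np

def agd_restart(grad, x0, L, iters=800):
    """Nesterov accelerated gradient with adaptive (gradient-based) restart:
    kill the momentum when it opposes the gradient. A sketch of the heuristic,
    not the authors' reference implementation."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    t = 1.0
    for _ in range(iters):
        g = grad(y)
        x_new = y - g / L                          # gradient step from y
        v = x_new - x                              # momentum direction
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2   # Nesterov momentum weight
        if g @ v > 0:                              # restart test
            t_new = 1.0                            # reset momentum weight
            v = np.zeros_like(v)                   # and kill the momentum
        y = x_new + ((t - 1) / t_new) * v
        x, t = x_new, t_new
    return x

# ill-conditioned quadratic f(x) = 0.5 * sum(d_i x_i^2), condition number 1e3
d = np.geomspace(1.0, 1e3, 50)
grad = lambda x: d * x
x = agd_restart(grad, np.ones(50), L=d.max())
print(np.linalg.norm(x))                           # driven close to the minimizer 0
```

Without the restart test, the iterates on this quadratic oscillate with the period the paper describes; with it, the scheme regains a fast linear rate without knowing the condition number.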


Analysis of Sparse Representations Using Bi-Orthogonal Dictionaries by Mikko Vehkaperä, Yoshiyuki Kabashima, Saikat Chatterjee, Erik Aurell, Mikael Skoglund, Lars Rasmussen. The abstract reads:

The sparse representation problem of recovering an N dimensional sparse vector x from M < N linear observations y = Dx given dictionary D is considered. The standard approach is to let the elements of the dictionary be independent and identically distributed (IID) zero-mean Gaussian and minimize the l1-norm of x under the constraint y = Dx. In this paper, the performance of l1-reconstruction is analyzed, when the dictionary is bi-orthogonal D = [O1 O2], where O1,O2 are independent and drawn uniformly according to the Haar measure on the group of orthogonal M x M matrices. By an application of the replica method, we show that the typical compression threshold for such D is the same as for the IID Gaussian dictionary.
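Dictionaries of this bi-orthogonal form are easy to generate for experiments: an orthogonal matrix distributed according to the Haar measure can be drawn via the QR decomposition of a Gaussian matrix (a standard trick; the code below is our own sketch with made-up sizes):

```python
import numpy as np

def haar_orthogonal(M, rng):
    """Draw an M x M orthogonal matrix from the Haar measure: QR of a
    Gaussian matrix, with R's diagonal signs folded in to make the law exact."""
    Q, R = np.linalg.qr(rng.standard_normal((M, M)))
    return Q * np.sign(np.diag(R))     # flip column signs by sign(diag(R))

rng = np.random.default_rng(0)
M = 64
D = np.hstack([haar_orthogonal(M, rng), haar_orthogonal(M, rng)])  # D = [O1 O2]
print(D.shape)                         # (64, 128): M observations, N = 2M atoms
```

The paper's result is that l1-reconstruction with this structured D behaves, in the typical-case threshold sense, just like with an IID Gaussian dictionary.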

Fast and Robust Parametric Estimation of Jointly Sparse Channels by Y. Barbotin, M. Vetterli . The abstract reads:
We consider the joint estimation of multipath channels obtained with a set of receiving antennas and uniformly probed in the frequency domain. This scenario fits most of the modern outdoor communication protocols for mobile access or digital broadcasting among others.
Such channels verify a Sparse Common Support property (SCS) which was used in a previous paper to propose a Finite Rate of Innovation (FRI) based sampling and estimation algorithm. In this contribution we improve the robustness and computational complexity aspects of this algorithm. The method is based on projection in Krylov subspaces to improve complexity and a new criterion called the Partial Effective Rank (PER) to estimate the level of sparsity to gain robustness.
If P antennas measure a K-multipath channel with N uniformly sampled measurements per channel, the algorithm possesses an O(KPNlogN) complexity and an O(KPN) memory footprint instead of O(PN^3) and O(PN^2) for the direct implementation, making it suitable for K << N. The sparsity is estimated online based on the PER, and the algorithm therefore has a sense of introspection being able to relinquish sparsity if it is lacking. The estimation performances are tested on field measurements with synthetic AWGN, and the proposed algorithm outperforms non-sparse reconstruction in the medium to low SNR range (< 0dB), increasing the rate of successful symbol decodings by 1/10th in average, and 1/3rd in the best case. The experiments also show that the algorithm does not perform worse than a non-sparse estimation algorithm in non-sparse operating conditions, since it may fall-back to it if the PER criterion does not detect a sufficient level of sparsity.
The algorithm is also tested against a method assuming a "discrete" sparsity model as in Compressed Sensing (CS). The conducted test indicates a trade-off between speed and accuracy.






Tuesday, April 24, 2012

A Compressive Sensing Calibration Postdoc in Paris.


Laurent just sent me the following great postdoc announcement:

A 12-month postdoc position will be available (subject to successful funding) at the Langevin Institute, Paris (France), on the topic of 
Calibration of compressed sensing experimental devices 
The goal of this post-doc is to revisit the classic problem of sensor array calibration in the framework of compressed sensing. Most of the compressed sensing theory assumes that the 'measurement matrix' is perfectly known. However, in many practical cases, this measurement matrix is only known approximately, sometimes because it is based on a somewhat inaccurate model, sometimes because calibration measurements are noisy. Unfortunately, even small errors on this measurement matrix ruin the reconstruction by most compressed sensing algorithms.
The goal of this study is to devise new model-based calibration strategies for such acquisition devices, tailored to the specific experiments at the Langevin Institute, in acoustics and/or optics.
Candidates should hold a PhD in Signal processing, Applied mathematics, or related domain, with some mandatory experience in sparse representations and/or compressed sensing. Good programming skills (Matlab) and ability to work independently are required.
The ability to interact with physics experimentalists (acoustics / optics) is essential.
Start date : no earlier than July 1st, 2012.
Informal enquiries should be made *before May 1st*, 2012, by email to Prof. Laurent Daudet :  laurent.daudet@espci.fr
The Langevin Institute ( http://www.institut-langevin.espci.fr/Langevin-Institute ) is a physics laboratory that investigates all aspects of waves (acoustics / optics), attached to the prestigious ESPCI engineering school. It is centrally located in the vibrant 5th "arrondissement" of Paris.
I posted this announcement in the Calibration Club on LinkedIn.





Monday, April 23, 2012

Around the blogs in 80 hours: Compressive Sensing and More




After featuring Randy's blogging yesterday in Of Paradigm Shifts and Regime Changes, here is some other news of interest to compressive sensing and beyond:
  • Bob asks a question about a compressive sensing/l1-minimization classifier after the dubious results in his search for a good music genre classifier. He is right: you can get 92% classification accuracy, but it ain't about music genre: Music genre recognition results
  • In a similar vein, Seth tells us about a not-so-optimal database for cancer research. I am sure current databases are a whole lot more accurate.
  • Looking back at what happened last week and the sudden changes in terms of technology development, David Gorski's words (focused on preclinical research) take on a specific importance: ".... One of the most difficult aspects of science to convey to the general public about science-based medicine (and science in general) is just how messy it is. Scientists know that early reports in the peer-reviewed literature are by their very nature tentative and have a high probability of ultimately being found to be incorrect. Unfortunately, that is not science as it is imbibed by the public. Fed by too-trite tales of simple linear progressions from observation to theory to observation to better theory taught in school, as well as media portrayals of scientists as finding answers fast, most people seem to think that science is staid, predictable, and able to generate results virtually on demand. This sort of impression is fed even by shows that I kind of like for their ability to excite people about science, for instance CSI: Crime Scene Investigation and all of its offspring and many imitators. These shows portray beautiful people wearing beautiful pristine lab coats back lit in beautiful labs using perfectly styled multicolored Eppendorf tubes doing various assays and getting answers in minutes that normally take hours, days, or sometimes weeks. Often these assays are all done over a backing soundtrack consisting of classic rock or newer (but still relatively safe) “alternative” rock. And that’s just for applied science, in which no truly new ground is broken and no new discoveries are made...."
Talking about competitive technology, I just saw this plan for a drill robot for Europa. Why would this be a consideration when you can "simply" use RTGs to do the job? I am not sure I understand the constraint that rules out that route.



Sunday, April 22, 2012

Of Paradigm Shifts and Regime Changes

One of the reasons some technologies do not become fully mature, or simply die off, comes from a competition effect. If you are developing a technology based on the fact that some sensors are expensive, as is the case in some regimes for compressive sensing, then you are in competition not just with similar technologies (different sampling techniques) but also with technologies that aim for a much cheaper sensor. Case in point: Randy pointed out to us this past week the emergence of two or three CMOS-based solutions for Terahertz imaging:
For some background:









Saturday, April 21, 2012

Just on the right side of Impossible (part 2)

In Just on the right side of Impossible, I mentioned several ideas that I think could make for frighteningly ambitious start-ups. Three items related to some of these ideas came up in the news this week; here they are:


Communicating with Animals: Is a Dolphin a Person? There is a slow recognition that animals are attaining a status we never thought they had. We know they have language, and we empirically catch some of its signs, but there is no "thing" that lets us communicate efficiently with pets and other animals. Once we have that "thing", the question of whether a dolphin is a person becomes far more frightening.


Real Telepresence: Skype recruiting Xbox developers for 'next generation services'. The issue at hand is the total disruption of the travel industry. I expressed some of my thoughts on the subject earlier; see Great Thoughts Friday: The Energy and Time of Being There and Kinect Teleconferencing with Real-time 3D Capture and 3D Display.





On a technical note, one wonders whether the Xbox/Kinect is really what is needed here, and whether compressive imaging could help reduce the Kinect's bulk.


New Types of Sensors: 

A UWB motion detector and a phone camera that can "see" radiation:

Xandem's security sensors can see through walls. The Tomographic Motion Detection (TMD) technique is described here. Here is the video presentation:



 
Here is a demonstration:


Of interest are the attendant publications that led to this product:
Remember, with Cable, we mentioned the ability to extract radiation data from a webcam. Well, I noticed an Android app named GammaPixLite trying to do the same. Its usage notes are interesting (emphasis mine):


Notes:
  • A 20-minute initialization is required before you use the application for the first time. We know this step slows you down, but it really is necessary to get the best results. Please perform this step in a place you know is likely to be free of excessive radioactivity.
  • For best results make sure no light is getting into the camera when you run the GammaPix Lite app. Putting the phone in your pocket or covering it with a book works well.
  • A reading takes about 5 minutes if there's no danger. Dangerous levels will be reported sooner.
  • The GammaPix App will not work on some phone models (e.g., the Motorola Atrix) because the camera optimization at low light levels gives "bright" pictures.
  • Not all phone models have been calibrated. The reported range will be large for devices without calibrations. Try out the app anyway. We'll rush calibrations for the most popular phones running the GammaPix Lite App.
  • This free Lite version requires a network connection to run and expires June 30, 2012. A full version with more features is planned before then, but we wanted to share what we've already built!

The Robust PCA approach we take with Cable is precisely that: you can take a video of something that is not moving and get a radiation reading. Here, the app requires that no light reach the CMOS chip at all. Let us note that this is not the same as a gamma camera, a.k.a. a coded-aperture camera: neither the webcam nor the Android app puts anything in front of the sensing system to localize the source. Therefore, both provide a radiation measurement with no information about where that radiation comes from.


Friday, April 20, 2012

Cable And Igor's adventures: The Not So Invisible Mercedes

At 51 seconds into this promotional video about an Invisible Mercedes, Cable and I decided to extract a small scene and see if we could entirely remove the Mercedes from it and detect its appearance. First, let's check the ad clip:

and here is the Robust PCA decomposition using GoDec in slow motion (take a look at the image on the top right side):

   
and in normal speed:
 


No need for complex image processing, just advanced matrix factorization. Mission accomplished!
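For readers curious about what GoDec actually computes, here is a minimal sketch of the idea in Python (plain NumPy, not the GoDec authors' code; the rank, cardinality, and iteration count below are illustrative). A video is stacked as a matrix whose columns are vectorized frames, and the algorithm alternates a truncated-SVD low-rank update (the static background) with a hard-thresholded sparse update (the moving object):

```python
import numpy as np

def godec(X, rank, card, iters=20):
    """Naive GoDec: split X into L (low-rank background) + S (sparse foreground).

    Alternates: L = best rank-`rank` approximation of X - S (truncated SVD),
    then S = the `card` largest-magnitude entries of the residual X - L.
    """
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = X - L
        S = np.zeros_like(X)
        top = np.unravel_index(np.argsort(np.abs(R), axis=None)[-card:], X.shape)
        S[top] = R[top]
    return L, S

# Toy "video": 30 frames of a static background (each column is a vectorized
# frame), plus a bright blob that moves across frames 10-14.
rng = np.random.default_rng(0)
background = rng.random(400)
frames = np.tile(background[:, None], (1, 30)) + 0.01 * rng.standard_normal((400, 30))
for t in range(10, 15):
    frames[20 * t:20 * t + 5, t] += 2.0   # the "car" passing through
L, S = godec(frames, rank=1, card=25)
print(np.flatnonzero(np.abs(S[:, 12]) > 1.0))   # blob pixels land in the sparse part
```

The background ends up in L and the moving blob in S, which is exactly the separation exploited in the videos above.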



CAI: It's Friday, let's use Robust PCA on UFOs.

It's Friday, Cable and Igor's Adventure in Matrix Factorization will feature UFOs, woohoo! At 41 seconds into this video, a satellite crosses the field of view of the scene with the Moon as the backdrop.


and yes, it's an Unidentified Flying Object. Here is the Robust PCA decomposition, enjoy!

Approximation of Points on Low-Dimensional Manifolds Via Random Linear Projections

Here is something that appeared on the Arxiv radar screen: Approximation of Points on Low-Dimensional Manifolds Via Random Linear Projections by Mark A. Iwen, Mauro Maggioni. The abstract reads:
This paper considers the approximate reconstruction of points, x \in R^D, which are close to a given compact d-dimensional submanifold, M, of R^D using a small number of linear measurements of x. In particular, it is shown that a number of measurements of x which is independent of the extrinsic dimension D suffices for highly accurate reconstruction of a given x with high probability. Furthermore, it is also proven that all vectors, x, which are sufficiently close to M can be reconstructed with uniform approximation guarantees when the number of linear measurements of x depends logarithmically on D. Finally, the proofs of these facts are constructive: A practical algorithm for manifold-based signal recovery is presented in the process of proving the two main results mentioned above.
The Geometric Multi-Resolution Analysis code is here. Mauro tells me that "...We will be uploading the code in the next few days, as part of an update to the Geometric Multi-Resolution code...". Stay tuned!
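To get a feel for the measurement model in the paper (this is only an illustration of random linear measurements, not the Iwen-Maggioni reconstruction algorithm; the sizes are made up), one can embed a 1-dimensional manifold, a circle, in R^D with D = 1000, take m = 20 random linear measurements of each point, and check that the geometry of the manifold survives even though m is tiny compared to D:

```python
import numpy as np

rng = np.random.default_rng(1)
D, m, n = 1000, 20, 50   # ambient dim, number of measurements, number of points

# A circle (a compact 1-d submanifold) embedded in R^D via a random 2-d plane.
plane, _ = np.linalg.qr(rng.standard_normal((D, 2)))        # orthonormal D x 2
angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
X = plane @ np.stack([np.cos(angles), np.sin(angles)])      # D x n points on M

# m random linear measurements per point: y = Phi x, with m independent of D.
Phi = rng.standard_normal((m, D)) / np.sqrt(m)
Y = Phi @ X

# Pairwise distances to the first point, before and after measurement.
d_orig = np.linalg.norm(X - X[:, :1], axis=0)[1:]
d_meas = np.linalg.norm(Y - Y[:, :1], axis=0)[1:]
print(float((d_meas / d_orig).min()), float((d_meas / d_orig).max()))
```

The distance ratios stay close to 1: the random measurements barely distort the manifold's geometry, which is what makes recovery from a D-independent measurement budget plausible.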


Thursday, April 19, 2012

SPAMS (SPArse Modeling Software) now with Python and R

Alexandre Gramfort let me know that SPAMS now has Python and R bindings; that is, you can now use SPAMS from Python and R. This is fantastic. What is SPAMS, you ask? Well, it is listed in several places on the Matrix Factorization Jungle page. From the main page of the project (see also here):


What is SPAMS?
SPAMS (SPArse Modeling Software) is an optimization toolbox for solving various sparse estimation problems.
  • Dictionary learning and matrix factorization (NMF, sparse PCA, ...)
  • Solving sparse decomposition problems with LARS, coordinate descent, OMP, SOMP, proximal methods
  • Solving structured sparse decomposition problems (l1/l2, l1/linf, sparse group lasso, tree-structured regularization, structured sparsity with overlapping groups,...).
See the documentation for all the features. It is developed by Julien Mairal, with the collaboration of Francis Bach (INRIA), Jean Ponce (Ecole Normale Supérieure), Guillermo Sapiro (University of Minnesota), and Rodolphe Jenatton (INRIA). It is coded in C++ with a Matlab interface. Recently, interfaces for R and Python have been developed by Jean-Paul Chieze (INRIA).

License
Version 2.1 and later are open-source under licence GPLv3. It is therefore possible to redistribute the sources with any other software, as long as it is under a GPL licence. For other usage, please contact the authors.

Related publications
You can find here some publications at the origin of this software. The "matrix factorization" and "sparse decomposition" modules were developed for the following papers: The "proximal" module was developed for the following papers:

News
03/24/2012: SPAMS v2.2 is released with a Python and R interface, and new compilation scripts for better Windows/Mac OS compatibility.
06/30/2011: SPAMS v2.1 goes open-source!
11/04/2010: SPAMS v2.0 is out for Linux and Mac OS! (Windows version and C++ interface coming soon)
02/23/2010: Windows 32-bit version available! Elastic-Net is implemented.
10/26/2009: Mac OS, 64-bit version available!
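As a rough illustration of what the "sparse decomposition" module computes (plain NumPy, not the SPAMS API; the dictionary and sparsity level here are made up), orthogonal matching pursuit, one of the solvers listed above, can be sketched as:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns ("atoms") of the
    unit-norm dictionary D, then least-squares fit y on the selected atoms."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))  # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [1.5, -2.0, 1.0]   # a 3-sparse code
y = D @ x_true
x_hat = omp(D, y, k=3)
print(np.flatnonzero(x_hat))                # recovered support
```

SPAMS implements this (and the other solvers in the list) in optimized C++, which is what the new Python and R bindings expose.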
