Thursday, July 24, 2014

Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed $\ell_1/\ell_2$ Regularization

Laurent Duval just sent me the following:

Dear Igor
Nuit Blanche is a nest of choice for many sparsities. The ones of concern here are those approximated by an $\ell_1/\ell_2$, or Taxicab/Euclidean, norm ratio, which was already covered in some of your posts: 
We propose in the following preprint a smoothed, parametrized penalty, termed SOOT for "Smoothed-One-Over-Two" norm ratio, with results on its theoretical convergence, and an algorithm based on proximal methods. It is applied to blind deconvolution, here for seismic data. We hope it could be of interest to your readership. 
[Abstract]
The $\ell_1/\ell_2$ ratio regularization function has shown good performance for retrieving sparse signals in a number of recent works, in the context of blind deconvolution. Indeed, it benefits from a scale invariance property that is highly desirable in the blind context.
However, the $\ell_1/\ell_2$ function raises some difficulties when solving the nonconvex and nonsmooth minimization problems resulting from the use of such regularization penalties in current restoration methods.
In this paper, we propose a new penalty based on a smooth approximation to the $\ell_1/\ell_2$ function. In addition, we develop a proximal-based algorithm to solve variational problems involving this function and we derive theoretical convergence results. We demonstrate the effectiveness of our method through a comparison with a recent alternating optimization strategy dealing with the exact $\ell_1/\ell_2$ term, on an application to seismic data blind deconvolution.

[arXiv Link]
Best regards,
Laurent


Thanks Laurent!

From the paper: "The code will be made available at http://www-syscom.univ-mlv.fr/ upon the paper acceptance"
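
In the meantime, here is a minimal sketch of one way to smooth the $\ell_1/\ell_2$ ratio so it becomes differentiable at the origin. This is only an illustration in Python; the smoothing constants and the log parametrization below are placeholders of mine, and the exact SOOT penalty and its proximal algorithm are in the paper.

```python
import numpy as np

def smoothed_l1_over_l2(x, alpha=1e-3, beta=1e-3, eta=1e-3):
    """A smooth surrogate for ||x||_1 / ||x||_2 (one plausible parametrization).

    Both norms are smoothed so the ratio is differentiable everywhere;
    alpha, beta, eta are small smoothing constants (placeholder values here).
    """
    l1_smooth = np.sum(np.sqrt(x**2 + alpha**2) - alpha)   # smooth |x_i|
    l2_smooth = np.sqrt(eta**2 + np.sum(x**2))             # smooth ||x||_2
    return np.log((l1_smooth + beta) / l2_smooth)          # log of the ratio

# sanity check: a sparser vector yields a smaller penalty
dense = np.ones(100)
sparse = np.zeros(100); sparse[0] = 10.0
print(smoothed_l1_over_l2(dense), smoothed_l1_over_l2(sparse))
```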

All the blind deconvolution blog entries are at: 


Wednesday, July 23, 2014

Slides: Summer School on Hashing: Theory and Applications, 2014



The webpage of the EADS Summer School on Hashing: Theory and Applications, which took place on July 14-17, 2014 at the University of Copenhagen, Denmark, now features the slides of the presentations made there:


OVERVIEW AND GOAL

Hashing is used everywhere in computing and is getting increasingly important with the exploding amount of data. The summer school will provide an in-depth introduction to hashing, both theory and applications. The topics will range from the modern theory of hashing to actual implementations of hash functions that are both efficient and provide the necessary probabilistic guarantees. Application areas will be studied, from sketching and databases to similarity estimation and machine learning.


Rasmus Pagh
Dictionaries with implicit keys (July 15)
Michael Mitzenmacher
Bloom Filters and Such (July 14)
Cuckoo hashing and balanced allocations (July 15)
Mikkel Thorup
High speed hashing for integers and strings (July 14)
Reliable hashing for complex applications (July 15)
Graham Cormode
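
To give a flavor of the material covered, here is a minimal Bloom filter in Python, one of the topics of Michael Mitzenmacher's talks above. This is a toy sketch, not code from the school, and the sizing parameters are arbitrary.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: set membership with false positives, no false negatives."""
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k = m, k          # m bits, k hash functions
        self.bits = bytearray(m // 8)

    def _hashes(self, item):
        # derive k bit positions from a single digest
        h = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(h[4*i:4*i+4], "big") % self.m

    def add(self, item):
        for idx in self._hashes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item):
        return all(self.bits[idx // 8] & (1 << (idx % 8)) for idx in self._hashes(item))

bf = BloomFilter()
bf.add("nuit blanche")
print("nuit blanche" in bf, "igor" in bf)  # True, (almost surely) False
```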

Complexity-Matching Universal Signal Estimation in Compressed Sensing



In light of a recent update on arXiv, I asked Dror Baron to provide some context on his different papers related to universal signal recovery. Here is what he had to say: 

Hello Igor,

Here's a link to a recent submission: http://arxiv.org/abs/1204.2611
I know that we have had multiple related algorithms recently, so let me try to explain:

1. In a compressed sensing problem, y=A*x+z, this work is trying to solve xhat = argmin_w H(w)-log(f_Z(zhat=y-A*w)), where xhat is our estimate for x given y and A, w is a hypothesized solution, H(.) is entropy (in our case empirical entropy, which serves as a sort of universal coding length), and f_Z(.) is the density function for the noise. This algorithm seems to approach the minimum mean square error (MMSE) up to 3 dB or so, which is theoretically motivated. Our optimization algorithm relies on Markov chain Monte Carlo (MCMC).

2. In our paper from last week, we used a universal denoiser within approximate message passing. We hope that with some bells and whistles the algorithm might consistently outperform MCMC by that 3 dB gap.

Please feel free to let us know if you have any questions!

Dror
--
Dror Baron, Ph.D.
Assistant Professor
Electrical and Computer Engineering Department
North Carolina State University


We study the compressed sensing (CS) signal estimation problem where an input signal is measured via a linear matrix multiplication under additive noise. While this setup usually assumes sparsity or compressibility in the input signal during recovery, the signal structure that can be leveraged is often not known a priori. In this paper, we consider universal CS recovery, where the statistics of a stationary ergodic signal source are estimated simultaneously with the signal itself. Inspired by Kolmogorov complexity and minimum description length, we focus on a maximum a posteriori (MAP) estimation framework that leverages universal priors to match the complexity of the source. Our framework can also be applied to general linear inverse problems where more measurements than in CS might be needed. We provide theoretical results that support the algorithmic feasibility of universal MAP estimation using a Markov chain Monte Carlo implementation, which is computationally challenging. We incorporate some techniques to accelerate the algorithm while providing comparable and in many cases better reconstruction quality than existing algorithms. Experimental results show the promise of universality in CS, particularly for low-complexity sources that do not exhibit standard sparsity or compressibility.
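
As a back-of-the-envelope illustration of the objective in Dror's note above, here is a small Python sketch that evaluates H(w) - log f_Z(y - A*w) for a candidate solution w. The histogram plug-in entropy below is my simplified stand-in for the universal coding length used in the paper, the noise is assumed Gaussian, and the MCMC optimizer that actually explores this objective is omitted entirely.

```python
import numpy as np

def map_objective(w, y, A, sigma=0.1, n_bins=32):
    """Evaluate H(w) - log f_Z(y - A w) for a hypothesized solution w.

    H(.) is the empirical entropy of w quantized to n_bins levels, scaled
    to a total coding length in bits; with Gaussian noise, -log f_Z reduces
    to a scaled residual norm (additive constants dropped).
    """
    # empirical coding length of the quantized candidate, in bits
    counts, _ = np.histogram(w, bins=n_bins)
    p = counts[counts > 0] / w.size
    coding_length = -np.sum(p * np.log2(p)) * w.size
    # Gaussian negative log-likelihood of the implied noise zhat = y - A w
    resid = y - A @ w
    nll = np.sum(resid**2) / (2 * sigma**2)
    return coding_length + nll
```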


Tuesday, July 22, 2014

Context Aware Recommendation Systems (Lei Tang, Xavier Amatriain)



Much like the presentation by Lei Tang (Walmart Labs) on Adaptive User Segmentation for Recommendation at last year's GraphLab 2013 (see slides (pdf) here and video here), Xavier Amatriain, of Netflix, made a presentation on what we should be expecting in terms of recommendation. The idea here is that most of this work cannot be static, otherwise your customers just won't be responsive to it. Here are his slides and the attendant videos from the Machine Learning Summer School organized in Pittsburgh in 2014 by Alex Smola. I note the focus put on matrix and tensor factorizations and the persistent reference to blog posts. It's a new world... more on that later.

Dynamic MR image reconstruction–separation from undersampled (k,t)-space via low-rank plus sparse prior - implementation -



Benjamin Trémoulhéac just sent me the following:

Dear Igor,

You and your readers might be interested in my recently published paper (early view), which is about the use of the RPCA (or L+S) model in dynamic MR imaging from partial Fourier samples, for both reconstruction and separation:
Dynamic MR image reconstruction–separation from undersampled (k,t)-space via low-rank plus sparse prior
(This is an open access article thanks to the new policy in the UK)
I have made an implementation of the algorithm in Matlab available here.
Note that, interestingly, Otazo et al. published a very similar work almost simultaneously in a different journal:

Otazo et al, Low-rank plus sparse matrix decomposition for accelerated dynamic MRI with separation of background and dynamic components, 2014
Yet there are some differences, so these papers are kind of complementary.

Thanks!
Best regards,
Benjamin Trémoulhéac
Thank you Benjamin. Here is the paper:



Dynamic magnetic resonance imaging (MRI) is used in multiple clinical applications, but can still benefit from higher spatial or temporal resolution. A dynamic MR image reconstruction method from partial (k-t)-space measurements is introduced that recovers and inherently separates the information in the dynamic scene. The reconstruction model is based on a low-rank plus sparse decomposition prior, which is related to robust principal component analysis. An algorithm is proposed to solve the convex optimization problem based on an alternating direction method of multipliers. The method is validated with numerical phantom simulations and cardiac MRI data against state of the art dynamic MRI reconstruction methods. Results suggest that using the proposed approach as a means of regularizing the inverse problem remains competitive with state of the art reconstruction techniques. Additionally, the decomposition induced by the reconstruction is shown to help in the context of motion estimation in dynamic contrast enhanced MRI.
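
For readers who want a feel for the low-rank plus sparse prior at the heart of both papers, here is a generic RPCA-flavored sketch in Python that alternates singular value thresholding and soft thresholding on a fully sampled matrix. It is not the authors' ADMM algorithm, which handles undersampled (k,t)-space data through a Fourier sampling operator, and the regularization weights below are placeholders.

```python
import numpy as np

def svt(X, tau):
    # singular value thresholding: proximal operator of tau * nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(X, tau):
    # entrywise soft thresholding: proximal operator of tau * l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def l_plus_s(M, lam_l=1.0, lam_s=0.1, n_iter=50):
    """Split M into low-rank L (background) + sparse S (dynamics) by alternating prox steps."""
    L = np.zeros_like(M); S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, lam_l)
        S = soft(M - L, lam_s)
    return L, S

# toy usage: a rank-1 background plus a transient spike
t = np.linspace(0, 1, 64)
M = np.outer(np.ones(64), np.sin(2 * np.pi * t))
M[10, 20] += 5.0
L, S = l_plus_s(M)
```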


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tensor Networks for Big Data Analytics and Large-Scale Optimization Problems

Interesting review of the TT approach:





In this paper we review basic and emerging models and associated algorithms for large-scale tensor networks, especially Tensor Train (TT) decompositions, using novel mathematical and graphical representations. We discuss the concept of tensorization (i.e., creating very high-order tensors from lower-order original data) and the super compression of data achieved via quantized tensor train (QTT) networks. The main objective of this paper is to show how tensor networks can be used to solve a wide class of big data optimization problems (that are far from tractable by classical numerical methods) by applying tensorization, performing all operations using relatively small-size matrices and tensors, and applying iteratively optimized and approximate tensor contractions.
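
As a quick illustration of the TT format the paper reviews, here is a sketch of the standard TT-SVD procedure in Python, which factors a d-way array into a chain of third-order cores via sequential truncated SVDs. The fixed rank cap is a simplification; practical implementations truncate to a prescribed accuracy instead.

```python
import numpy as np

def tt_svd(tensor, rank):
    """Decompose a d-way array into TT cores of shape (r_prev, n_k, r_next)."""
    dims = tensor.shape
    d = len(dims)
    cores, r_prev = [], 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(rank, len(s))                               # cap the TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))  # k-th core
        mat = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))          # last core
    return cores

# usage: decompose, then contract the cores back to check the approximation
X = np.random.rand(4, 5, 6, 7)
cores = tt_svd(X, rank=3)
rec = cores[0]
for G in cores[1:]:
    rec = np.tensordot(rec, G, axes=(rec.ndim - 1, 0))
rec = rec.reshape(X.shape)  # leading/trailing ranks are 1
print(np.linalg.norm(rec - X) / np.linalg.norm(X))  # relative error
```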



Monday, July 21, 2014

Video Stream: GraphLab Conference 2014



We mentioned it before: the GraphLab conference is on and it is streamed live here. The program is here (the Twitter tag seems to be #GraphLabConf).

Day 1: Monday, July 21, 2014
General Admission. Registration opens at 8:00am.
Session 1: Data Product Pipeline in Practice
  • 9:00am Prof. Carlos Guestrin Co-Founder & CEO, GraphLab Keynote: GraphLab Strategy, Vision and Practice
  • 10:10am Baldo Faieta Social Computing Lead, Adobe Systems Algorithms for Creatives Talent Search using GraphLab
  • 10:30am Amit Moran Chief Data Scientist, Crosswise Customer Spotlight: Crosswise
  • 10:40am Coffee Break (20 mins)
Session 2: Data Science
  • 11:00am Alice Zheng Director of Data Science, GraphLab Machine Learning Toolkits in GraphLab Create
  • 11:20am Karthik Ramachandran, Erick Tryzelaar Lab41 Dendrite large scale graph analytics
  • 11:40am Tao Ye Sr. Scientist, Pandora Internet Radio Large scale music recommendation @ Pandora
  • 12:00pm Prof. Alex Smola CMU and Google Scaling Distributed Machine Learning with the Parameter Server
  • 12:20pm Jonathan Dinu Co-Founder, Zipfian Academy Customer Spotlight: Zipfian Academy
  • 12:30pm Lunch (70 mins)
Session 3: Data Engineering
  • 1:40pm Yucheng Low Co-Founder & Chief Architect, GraphLab Scalable Data Structures: SFrame & SGraph
  • 2:00pm Prof. Joe Hellerstein Co-Founder & CEO, Trifacta Data, DSLs and Transformation: Research and Practice
  • 2:20pm Reynold Xin Co-Founder, Databricks Unified Data Pipeline in Apache Spark
  • 2:40pm Wes McKinney Founder & CEO, DataPad Fast Medium Data Analytics at Scale
  • 3:00pm Coffee Break (20 mins)
Session 4: Deployment
  • 3:20pm Rajat Arya Senior Software Engineer, GraphLab Deployment with GraphLab Create
  • 3:40pm Milind Bhandarkar Chief Scientist, Pivotal The Zoo Expands: Labrador ♥ Elephant thanks to Hamster
  • 4:00pm Prof. Vahab Mirrokni Google Research ASYMP: Fault-tolerant Graph Mining via ASYnchronous Message Passing
  • 4:20pm Josh Wills Director of Data Science, Cloudera What Comes After The Star Schema?
  • 4:40pm Dr. Markus Weimer Microsoft Research REEF: Towards a Big Data stdlib
  • Session 5: Networking and Demos (5:00-7:00pm)



Day 2: Tuesday, July 22, 2014
Training Admission. Registration opens at 8:00am.
GraphLab Create Hands-on Training
The goal of the day is to teach participants how to build a machine learning system at scale from prototype to production using GraphLab Create. A laptop is required to participate.
  • 9:30am Alice Zheng Director of Data Science, GraphLab Introduction
  • 9:45am Yucheng Low Co-Founder & Chief Architect, GraphLab Prepping Data for Analysis: Using GraphLab Create Data Structures and GraphLab Canvas
  • 10:30am Coffee Break (15 mins)
  • 10:45am Srikrishna Sridhar Data Scientist, GraphLab Supervised Learning: Regression and Classification
  • 11:15am Brian Kent Data Scientist, GraphLab Unsupervised Learning: Clustering, Nearest Neighbors, Graph Analysis
  • 11:45am Hands-on Training Exercises and Lunch
  • 1:45pm Chris Dubois Data Scientist, GraphLab Recommender Systems and Text Analysis
  • 2:15pm Coffee Break (15 mins)
  • 2:30pm Rajat Arya Sr. Software Engineer, GraphLab Deployment
  • 3:15pm Hands-on Training Exercises
  • 4:00pm Danny Bickson Co-Founder & Data Scientist, GraphLab Practical Data Science Tips
  • 4:45pm Alice Zheng Director of Data Science, GraphLab Closing Remarks





More Than 20 Artificially Intelligent Space Probes Were Already Not Contained in the Frame of this Picture

You've probably heard the meme about "Michael Collins being the Only Human, Living or Dead, Not Contained in the Frame of this Picture"





Here is another:

"By the time this picture was taken, More Than 20 Artificially Intelligent Space Probes Were Already Not Contained in the Frame of this Picture"

Some folks might argue that the space probes were not intelligent since they were commanded and controlled from Earth. This is not exactly true. There was at least one algorithm running on all these probes and landers that was making an inference on board: the star tracking algorithm.

Slides: Science on the Sphere

From Thomas Kitching's Weak lensing on the sphere

Jason McEwen let me know that the presentation slides of the Science on the Sphere seminar are out on the Late Universe blog (a blog about explorations in survey cosmology, theoretical physics, signal processing, and Bayesian inference). Here they are:

Day 1
0730 – 0830 : Breakfast
Introduction and Discussion of Objectives of the Meeting
0845 – 0900 : Dr Thomas Kitching & Dr Jason McEwen (UCL):
Welcome
0900 – 0930 : Dr Jason McEwen (UCL):
0930 – 1000 : Prof. Alan Heavens (Imperial College London):
1000 – 1030 : Dr Yves Wiaux (Heriot-Watt):
1030 – 1100 : Dr Thomas Kitching (UCL):
1100 – 1130 : Break
Foundations: Mathematics, Wavelets and Correlations on the Sphere
1130 – 1200 : Prof. Domenico Marinucci (University of Rome):
1200 – 1230 : Mr Boris Leistedt (UCL):
1230 – 1300 : Prof. Frederik Simons (Princeton):
1300 – 1430 : Lunch
The Cosmological Context
1430 – 1500 : Prof. Andrew Jaffe (Imperial College London):
1500 – 1530 : Mr Francoise Lanusse (CEA Saclay):
1530 – 1600 : Break
Perspectives from Informatics
1600 – 1630 : Dr Hiranya Peiris (UCL):
1630 – 1700 : Dr Chris Doran (Geomerics Ltd.):
1700 onwards : Discussion Session lead by Dr Jason McEwen (UCL) and Dr Thomas Kitching (UCL)
Day 2
0730 – 0830 : Breakfast
The Stellar Context
0900 – 1000 : Prof. Bill Chaplin (Birmingham):
1000 – 1100 : Prof. Yvonne Elsworth (Birmingham):
1100 – 1130 : Break
Exploring the Sphere: Estimators and Likelihoods
1130 – 1200 : Prof. Mike Hobson (Cambridge):
1200 – 1230 : Dr Farhan Feroz (Cambridge):
1230 – 1300 : Prof. Ben Wandelt (Institut d'Astrophysique de Paris):
1300 – 1400 : Lunch
Signal Processing on the Sphere
1400 – 1430 : Prof. Rod Kennedy (ANU):
1430 – 1500 : Prof. Pierre Vandergheynst (EPFL):
1500 – 1530 : Dr Richard Shaw (CITA):
1530 – 1600 : Break
Discussions and the Way Forward
1600 – 1700 : Dr Jason McEwen (UCL), Dr Thomas Kitching (UCL)

