
Monday, December 29, 2014

A Short Review of 2014 on Nuit Blanche: On Compressive Sensing, Machine Learning, Advanced Matrix Factorization and many beautiful things.

2014 was an exceptional year! One that seemed focused on exploration.

First and foremost, in order to get a real sense of what was accomplished you probably need to read the last 12 Nuit Blanche Monthly Reviews.


The most important event of the year is, in my view, the release of two genome sequencers that enable nearly any polynomial-time algorithm to perform sequence alignment (Sunday Morning Insight: Crossing into P territory). It is quite simply a momentous event. In terms of data and algorithms, most think of it as a faraway mountain range that will eventually need to be explored and studied. It's the opposite: think of it as the wave on Miller's planet. Which is why one of the most fascinating videos of the year was Craig Venter's, entitled Life at the Speed of Light. More on that later.



In terms of physical exploration, Philae landed on a comet, but one of the images I saw on my Twitter feed this year gave me Nature's equivalent: the bacteriophage.


The human-made robot bounced three times before eventually landing on the comet (we also lost it), but in Nature, do bacteriophages have similar issues ? I'd love to know more about what we know about the bacteriophage navigation algorithm!

Anyway, several trends could be witnessed in a few of the 536 entries of 2014. Let me note the arrival of a few Machine Learning subjects, as there seems to be a convergence between some supervised learning and compressive sensing (and my involvement as a co-organizer of one of the largest Machine Learning meetups). Without further ado, here is my biased take; we witnessed:

Personally, I also got back into publishing mode. Our paper also resulted in a few blog entries:
I also did an open review (... There will be a "before" and "after" this paper ...) and presented a poster at JIONC on the connection between Direct Imaging and Machine Learning. I also provided some tips on Organizing Meetups and got Nuit Blanche featured in the February issue of La Recherche.

Some additional stats can be found in:

 Credits:
- Courtesy of NASA/SDO and the AIA, EVE, and HMI science teams.
- "PhageExterior" by Adenosine - Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:PhageExterior.svg#mediaviewer/File:PhageExterior.svg
-  ESA / Wikipedia
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Sunday, December 28, 2014

Nuit Blanche in Review ( December 2014 )

Since the last Nuit Blanche in Review ( November 2014 )
we had many machine-learning-related entries, as the gap keeps narrowing between what we currently see as reconstruction solvers in compressive sensing and the current deep (or not) architectures used in Machine Learning to perform classification. Without further ado :

We had two implementations this month:
 Three "insights"

a few books and theses:

 Machine Learning
Paris Machine Learning Meetup
 Video and Slides / Slides
Jobs:
Other videos:

A few new statistics
 Credit: Courtesy of NASA/SDO and the AIA, EVE, and HMI science teams.

Book: The Discrepancy Method by Bernard Chazelle


In a recent video presentation in which Vikas Sindhwani talked about Random Embeddings, Matrix-valued Kernels and Deep Learning, he mentioned that instead of using purely random features we ought to be looking at quasi-Monte Carlo sequences [1]. Sebastien Bubeck just hosted a blog post on the subject entitled Beating Monte Carlo, a guest post by Sasho Nikolov, which linked to the PDF version of this book by Bernard Chazelle on the Discrepancy Method.


[1] Quasi-Monte Carlo Feature Maps for Shift-Invariant Kernels Jiyan Yang, Vikas Sindhwani, Haim Avron, Michael Mahoney
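For the curious, here is a minimal numpy/scipy sketch of the idea: keep the standard random Fourier feature construction for the RBF kernel, but replace the i.i.d. Gaussian draws with a low-discrepancy Halton sequence pushed through the Gaussian inverse CDF. This is only a toy illustration of the QMC feature maps of [1], not the authors' code; the use of scipy's qmc module and the parameter choices are my own.

```python
import numpy as np
from scipy.stats import norm, qmc

def rff(X, m, sigma=1.0, quasi=False, seed=0):
    """Map X (n x d) to m random Fourier features approximating the
    RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    if quasi:
        # Quasi-Monte Carlo variant: low-discrepancy Halton points
        # mapped through the Gaussian inverse CDF instead of i.i.d. draws.
        u = qmc.Halton(d=d, scramble=True, seed=seed).random(m)
        u = np.clip(u, 1e-9, 1 - 1e-9)   # guard the inverse CDF endpoints
        W = norm.ppf(u) / sigma
    else:
        W = rng.standard_normal((m, d)) / sigma
    b = rng.uniform(0, 2 * np.pi, m)
    return np.sqrt(2.0 / m) * np.cos(X @ W.T + b)

# Compare the kernel approximation error of the two feature maps.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
for quasi in (False, True):
    Z = rff(X, m=512, quasi=quasi)
    print("quasi =", quasi, "max abs error:", np.abs(Z @ Z.T - K).max())
```

On a small problem like this, the QMC variant typically gives a modestly better kernel approximation for the same number of features, which is the effect the paper quantifies.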


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Video: Alexander Gerst’s Earth timelapses

 
Of specific interest is the segment starting at about 4 minutes and 38 seconds, with an overflight of France compressed into about 8 seconds; in real time that pass would last about 2 minutes and 15 seconds, i.e., the timelapse runs at roughly 17 times real speed.
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, December 26, 2014

Thesis: Quantum Algorithms for Linear Algebra and Machine Learning; Anupam Prakash



Quantum Algorithms for Linear Algebra and Machine Learning by Anupam Prakash
Most quantum algorithms offering speedups over classical algorithms are based on the three techniques of phase estimation, amplitude estimation and Hamiltonian simulation. In spite of the linear algebraic nature of the postulates of quantum mechanics, until recent work by Lloyd and coauthors (23; 22; 24) no quantum algorithms achieving speedups for linear algebra or machine learning had been proposed. A quantum machine learning algorithm must address three issues: encoding of classical data into a succinct quantum representation, processing the quantum representation and extraction of classically useful information from the processed quantum state. In this dissertation, we make progress on all three aspects of the quantum machine learning problem and obtain quantum algorithms for low rank approximation and regularized least squares.

The oracle QRAM, the standard model studied in quantum query complexity, requires time O(sqrt(n)) to encode vectors v in R^n into quantum states. We propose simple hardware augmentations to the oracle QRAM that enable vectors v in R^n to be encoded in time O(log n), with pre-processing. The augmented QRAM incurs minimal hardware overheads, the pre-processing can be parallelized, and it is a flexible model that allows storage of multiple vectors and matrices. It provides a framework for designing quantum algorithms for linear algebra and machine learning.

Using the augmented QRAM for vector state preparation, we present two different algorithms for singular value estimation where, given a singular vector |v> of A in R^(m x n), the corresponding singular value is estimated within additive error eps*||A||_F. The first algorithm requires time O~(1/eps^3) and uses the approach for simulating e^(i rho); we also obtain quantum algorithms for CX and CUR decompositions. Classical algorithms for these problems require time O(mn log n + poly(1/eps)); the quantum algorithms have running time O(sqrt(m) poly(1/eps, k, Delta)), where k and Delta are the rank and spectral gap. The running time of the quantum CX decomposition algorithm does not depend on m; it is polynomial in the problem parameters. We also provide quantum algorithms for l2-regularized regression problems; the quantum ridge regression algorithm requires time O~(1/lambda^2) to output a quantum state close to the solution, where lambda is the regularization parameter.
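As an aside, here is a trivial classical sketch of what the "encoding" issue means in the amplitude-encoding convention the abstract refers to: a vector v in R^n becomes a quantum state whose amplitudes are v_i/||v||. This is just an illustration of the convention, not code from the thesis.

```python
import numpy as np

def amplitude_encode(v):
    """Classical view of amplitude encoding: the state |v> has amplitudes
    v_i / ||v||, so measuring yields index i with probability (v_i/||v||)^2."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

amps = amplitude_encode([3.0, 0.0, 4.0, 0.0])
print(amps)               # [0.6 0.  0.8 0. ]
print((amps ** 2).sum())  # 1.0 -- a valid probability distribution
```

The thesis is about how fast such a state can be prepared: O(sqrt(n)) in the plain oracle QRAM versus O(log n) with the augmented QRAM, after pre-processing.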
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Unsupervised Learning of Spatiotemporally Coherent Metrics

 
 From the paper:
 
Non-linear operators consisting of a redundant linear transformation followed by a point-wise non-linearity and a local pooling are fundamental building blocks in deep convolutional networks. This is due to their capacity to generate local invariance while preserving discriminative information (LeCun et al., 1998; Bruna & Mallat, 2013).
 
 
Unsupervised Learning of Spatiotemporally Coherent Metrics by Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, Yann LeCun

Current state-of-the-art object detection and recognition algorithms rely on supervised training, and most benchmark datasets contain only static images. In this work we study feature learning in the context of temporally coherent video data. We focus on training convolutional features on unlabeled video data, using only the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity. We show that nearest neighbors of a query frame in the learned feature space are more likely to be its temporal neighbors.  
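To make the regularizer concrete, here is a rough numpy sketch of a slowness-plus-sparsity penalty applied to precomputed frame features. It only conveys the flavor of the objective, not the paper's convolutional pooling auto-encoder; the weights alpha and beta are arbitrary.

```python
import numpy as np

def slow_sparse_penalty(Z, alpha=1.0, beta=1.0):
    """Z: (T, m) features for T consecutive video frames. Penalize temporal
    change (slowness) and feature magnitude (sparsity), L1 in both cases."""
    slowness = np.abs(np.diff(Z, axis=0)).sum()   # sum_t |z_{t+1} - z_t|_1
    sparsity = np.abs(Z).sum()                    # sum_t |z_t|_1
    return alpha * slowness + beta * sparsity

# Temporally coherent features are cheaper than the same features shuffled.
rng = np.random.default_rng(0)
smooth = np.cumsum(0.1 * rng.standard_normal((100, 16)), axis=0)
shuffled = rng.permutation(smooth)
print(slow_sparse_penalty(smooth) < slow_sparse_penalty(shuffled))  # True
```

Minimizing such a penalty (together with a reconstruction term) is what pushes nearest neighbors in feature space to be temporal neighbors.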
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, December 25, 2014

Tag-Aware Ordinal Sparse Factor Analysis for Learning and Content Analytics

So it looks like Andrew Lan et al. have improved on SPARFA, a matrix factorization technique that helps with education analytics. We had Andrew talk at the Paris Machine Learning meetup #11, season 1.


Tag-Aware Ordinal Sparse Factor Analysis for Learning and Content Analytics by Andrew S. Lan, Christoph Studer, Andrew E. Waters, Richard G. Baraniuk

Machine learning offers novel ways and means to design personalized learning systems wherein each student's educational experience is customized in real time depending on their background, learning goals, and performance to date. SPARse Factor Analysis (SPARFA) is a novel framework for machine learning-based learning analytics, which estimates a learner's knowledge of the concepts underlying a domain, and content analytics, which estimates the relationships among a collection of questions and those concepts. SPARFA jointly learns the associations among the questions and the concepts, learner concept knowledge profiles, and the underlying question difficulties, solely based on the correct/incorrect graded responses of a population of learners to a collection of questions. In this paper, we extend the SPARFA framework significantly to enable: (i) the analysis of graded responses on an ordinal scale (partial credit) rather than a binary scale (correct/incorrect); (ii) the exploitation of tags/labels for questions that partially describe the question-concept associations. The resulting Ordinal SPARFA-Tag framework greatly enhances the interpretability of the estimated concepts. We demonstrate using real educational data that Ordinal SPARFA-Tag outperforms both SPARFA and existing collaborative filtering techniques in predicting missing learner responses.
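For readers who have not seen SPARFA before, here is a hedged numpy sketch of the generative model's shape: ordinal grades obtained by quantizing W C + mu, with a sparse nonnegative question-concept matrix W, concept-knowledge matrix C and question difficulties mu. The dimensions and the quantile-based quantizer are arbitrary choices for illustration, not the authors' estimation code.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, L, K, P = 30, 50, 4, 5      # questions, learners, concepts, ordinal levels

W = rng.random((Q, K)) * (rng.random((Q, K)) < 0.3)  # sparse, nonnegative
C = rng.standard_normal((K, L))                      # learner concept knowledge
mu = rng.standard_normal((Q, 1))                     # question difficulties

slack = W @ C + mu                                   # latent response strength
bins = np.quantile(slack, np.linspace(0, 1, P + 1)[1:-1])
Y = np.digitize(slack, bins)                         # ordinal grades in {0..P-1}
print(Y.shape, Y.min(), Y.max())                     # (30, 50) 0 4
```

Ordinal SPARFA-Tag inverts this map from the observed Y (with missing entries) back to W, C and mu, with the tags informing the support of W.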
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

A la Carte - Learning Fast Kernels

Here is a specific tweak to Fastfood that enables a large set of kernels to better fit group-theoretic transformations. It is a bit similar to the path taken in the ScatNet approach.


A la Carte - Learning Fast Kernels by Zichao Yang, Alexander J. Smola, Le Song, Andrew Gordon Wilson

Kernel methods have great promise for learning rich statistical representations of large modern datasets. However, compared to neural networks, kernel methods have been perceived as lacking in scalability and flexibility. We introduce a family of fast, flexible, lightly parametrized and general purpose kernel learning methods, derived from Fastfood basis function expansions. We provide mechanisms to learn the properties of groups of spectral frequencies in these expansions, which require only O(m log d) time and O(m) memory, for m basis functions and d input dimensions. We show that the proposed methods can learn a wide class of kernels, outperforming the alternatives in accuracy, speed, and memory consumption.
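Here is a rough numpy sketch of a single Fastfood block, the primitive these kernels are built from: a product of diagonal, permutation and Walsh-Hadamard matrices that stands in for a dense d x d Gaussian random matrix but applies in O(d log d) time. The scaling details follow my reading of the original Fastfood construction and are approximate, not the authors' implementation.

```python
import numpy as np

def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform; len(x) a power of 2."""
    x = x.copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

def fastfood_features(x, sigma=1.0, seed=0):
    """One Fastfood block V x = S H G P H B x, then cos() features."""
    d = len(x)                          # assumed a power of 2 (pad otherwise)
    rng = np.random.default_rng(seed)
    B = rng.choice([-1.0, 1.0], d)      # random sign flips
    P = rng.permutation(d)              # random permutation
    G = rng.standard_normal(d)          # Gaussian diagonal
    # S roughly matches the row-norm distribution of a true Gaussian matrix.
    S = np.sqrt(rng.chisquare(d, d)) / np.linalg.norm(G)
    v = fwht(B * x)
    v = G * v[P]
    v = S * fwht(v) / (sigma * np.sqrt(d))
    b = rng.uniform(0, 2 * np.pi, d)
    return np.sqrt(2.0 / d) * np.cos(v + b)

print(fastfood_features(np.ones(16))[:4])
```

A la Carte then learns the spectral parameters (the analogues of G and S) instead of leaving them fixed, which is what lets the expansion adapt to a wide class of kernels.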
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, December 24, 2014

On the Stability of Deep Networks

Wow! After this, this and this, we are now converging:

On the Stability of Deep Networks by Raja Giryes, Guillermo Sapiro, Alex M. Bronstein

In this work we study the properties of deep neural networks with random weights. We formally prove that these networks perform a distance-preserving embedding of the data. Based on this we then draw conclusions on the size of the training data and the networks' structure.
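Here is a quick toy experiment in the spirit of that claim (my own sketch, not the paper's construction): push points through a few layers of Gaussian random weights with ReLU activations, scaled He-style, and look at how pairwise distances are distorted.

```python
import numpy as np

def random_relu_net(X, widths, seed=0):
    """Propagate X (n x d) through random Gaussian + ReLU layers, with
    He-style scaling so that norms are roughly preserved on average."""
    rng = np.random.default_rng(seed)
    H = X
    for w in widths:
        W = rng.standard_normal((H.shape[1], w)) * np.sqrt(2.0 / H.shape[1])
        H = np.maximum(H @ W, 0.0)
    return H

def pdist(A):
    return np.sqrt(((A[:, None] - A[None, :]) ** 2).sum(-1))

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 20))
Z = random_relu_net(X, [256, 256, 256])

din, dout = pdist(X), pdist(Z)
ratio = dout[din > 0] / din[din > 0]
print("distance ratios: min %.2f, mean %.2f, max %.2f"
      % (ratio.min(), ratio.mean(), ratio.max()))
```

The ratios concentrate in a fairly narrow band, a numerical hint of the distance-preserving embedding the paper proves (their result is sharper, with distortion depending on the angle between points).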
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

CSjobs: Marie Curie Early Stage Researchers in Sparse Representations and Compressed Sensing

Mark just sent me the following:
 
Dear Igor,

We thought that readers of Nuit Blanche might be interested in these "Early Stage Researcher" (ESR) positions for PhD study, as part of our new "SpaRTaN" EU-funded Marie Curie Initial Training Network in Sparse Representations and Compressed Sensing. These positions come with a very competitive salary compared to regular PhD scholarships, so might also be of interest to young researchers who had not otherwise thought of taking a PhD.
Merry Christmas! Mark Plumbley
----

Marie Curie Early Stage Researchers in Sparse Representations and Compressed Sensing

EU FP7 Marie Curie Initial Training Network "SpaRTaN: Sparse Representations and Compressed Sensing Training Network" (FP7-PEOPLE-2013-ITN 607290)

Project Website: http://spartan-itn.eu/

Applications are invited to a number of Marie Curie Early Stage Researcher (ESR) positions as part of the new EU-funded Marie Curie Initial Training Network (ITN) "SpaRTaN: Sparse Representations and Compressed Sensing Training Network".

The SpaRTaN ITN (http://spartan-itn.eu/) brings together leading academic and industry groups to train a new generation of interdisciplinary researchers in sparse representations and compressed sensing, with applications in areas such as hyperspectral imaging, audio signal processing and video analytics.

Early Stage Researcher (ESR) positions allow the researcher to work towards a PhD, for a duration of 36 months. ESRs should be within four years of the diploma granting them access to doctorate studies at the time of recruitment, and must not have spent more than 12 months in the host country in the 3 years prior to starting. Marie Curie ESRs are paid a competitive salary which is adjusted for their host country.

Each of the ESR posts being recruited across SpaRTaN has its own application process and closing date in early 2015. The full list of Early Stage Researcher (ESR) Positions (recruiting early 2015) is as follows:

* ESR1 : Sparse Time-Frequency methods for Audio Source Separation - CVSSP, University of Surrey, United Kingdom
* ESR2 : Automatic Music Transcription using Structured Sparse Dictionary Learning - CVSSP, University of Surrey, United Kingdom
* ESR3 : Sparse Representations and Compressed Sensing - University of Edinburgh, United Kingdom
* ESR4 : Task Based Dictionary Learning for Audio-Visual Tagging - LTS2, EPFL,Switzerland
* ESR5 : 1-bit Compressive Imaging - LTS2, EPFL,Switzerland
* ESR6 : Analysis Dictionary Learning Beyond Gaussian Denoising - Instituto de Telecomunicações, Portugal
* ESR7 : Compressed Sensing for Hyperspectral Imaging - Instituto de Telecomunicações, Portugal
* ESR8 : Large-scale signal processing - INRIA, France

For further details, see http://spartan-itn.eu/#1

The following experienced Researcher (ER) positions will also be recruiting later in 2015:

* ER1 : Video Analytics for Large Camera Networks - VisioSafe, Switzerland
* ER2 : Image and Video Restoration with Adaptive Transforms - Noiseless Imaging, Finland

For more details of all ESR positions and future ER positions, and information on how to apply, see http://spartan-itn.eu/#1

--
Prof Mark D Plumbley
Professor of Signal Processing
Centre for Vision, Speech and Signal Processing (CVSSP)
University of Surrey
Guildford, Surrey, GU2 7XH, UK
Email: m.plumbley@surrey.ac.uk 
 
Thanks Mark and Merry Festivus for the rest of us.
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray CT

Jakob, a map maker, just sent me the following:
Dear Igor,


Thank you for keeping the Nuit Blanche an important CS resource that I have been following for several years now.


I thought you and some readers might be interested in an update on our work on phase transitions in computed tomography that you recently posted about:
That paper has been published online (with open access) by Inverse Problems in Science and Engineering:


Furthermore, we have just put on arXiv our latest results:
How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray CT by Jakob S. Jørgensen and Emil Y. Sidky

where we compare the empirically observed phase transitions in CT with the theoretical results for Gaussian matrices, i.e., the Donoho-Tanner phase transition, as well as the results by Amelunxen, Lotz, McCoy and Tropp:
Surprisingly, at least to us, we find essentially the same phase-transition behavior in CT as in the Gaussian case!


Further, we study the use of randomized sampling strategies in CT and give preliminary results on how well the phase transitions can predict the sufficient number of samples to acquire in a large-scale CT setting, depending on the sparsity.


Happy holidays and best wishes,
Jakob

--
Jakob Sauer Jørgensen

Postdoc Scientific Computing
------------------------------------------------------
Technical University of Denmark
Department of Applied Mathematics and Computer Science
Matematiktorvet, Building 303 B
2800 Kgs. Lyngby
Thanks Jakob ! Here is the paper: How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray CT by Jakob S. Jørgensen, Emil Y. Sidky

We introduce phase-diagram analysis, a standard tool in compressed sensing, to the X-ray CT community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In compressed sensing a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling: First we demonstrate that there are cases where X-ray CT empirically performs comparable with an optimal compressed sensing strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared to standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization.  
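For readers who want to reproduce the flavor of such a phase diagram, here is a hedged numpy/scipy sketch: at one grid point, draw Gaussian problem instances, solve basis pursuit (min ||x||_1 s.t. Ax = b) as a linear program, and record the empirical recovery rate. A real phase diagram sweeps both the undersampling ratio m/n and the sparsity ratio k/m, and for the paper's purpose the Gaussian matrix would be swapped for a CT projection matrix.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to Ax = b as a linear program in (x, t)."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize sum(t)
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])            # encodes |x_i| <= t_i
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

def recovery_rate(n=40, m=20, k=5, trials=20, tol=1e-4, seed=0):
    """Empirical probability of exact recovery at one (m/n, k/m) point."""
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        xhat = basis_pursuit(A, A @ x)
        ok += np.linalg.norm(xhat - x) < tol * np.linalg.norm(x)
    return ok / trials

# Sweep sparsity at fixed undersampling to locate the transition.
for k in (2, 5, 8, 12):
    print("k =", k, "recovery rate:", recovery_rate(k=k))
```

The recovery rate drops sharply from 1 to 0 as k grows; the locus of that drop across the (m/n, k/m) plane is the empirical phase transition that the paper compares against the Donoho-Tanner curve.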
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, December 23, 2014

Deep Fried Convnets

Very interesting. Thanks for this summary. I loved the paper of Jimmy Ba and Rich Caruana. Here's one of our recent takes on the issue of kernels, randomization and new forms of deep layers: http://arxiv.org/abs/1412.7149

In all these, I think it's important to analyse performance on large datasets such as imagenet.
Thanks Nando ! Another piece of the puzzle for the construction of reconstruction solvers:



The fully connected layers of a deep convolutional neural network typically contain over 90% of the network parameters, and consume the majority of the memory required to store the network parameters. Reducing the number of parameters while preserving essentially the same predictive performance is critically important for operating deep neural networks in memory constrained environments such as GPUs or embedded devices.
In this paper we show how kernel methods, in particular a single Fastfood layer, can be used to replace all fully connected layers in a deep convolutional neural network. This novel Fastfood layer is also end-to-end trainable in conjunction with convolutional layers, allowing us to combine them into a new architecture, named deep fried convolutional networks, which substantially reduces the memory footprint of convolutional networks trained on MNIST and ImageNet with no drop in predictive performance.

 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

Using blind deconvolution and a Venus transit to figure out the calibration of a working camera onboard a current spacecraft. That is the feat of this paper:




Non-parametric PSF estimation from celestial transit solar images using blind deconvolution by Adriana Gonzalez, Véronique Delouille, Laurent Jacques

Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. Optics are never perfect and the non-ideal path through the telescope is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Other sources of noise (read-out, Photon) also contaminate the image acquisition process. The problem of estimating both the PSF filter and a denoised image is called blind deconvolution and is ill-posed.
Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrarily to most methods presented in the literature, it does not assume a parametric model of the PSF and can thus be applied to any telescope.
Methods: Our scheme uses a wavelet analysis image prior model and weak assumptions on the PSF filter's response. We use the observations from a celestial body transit where such object can be assumed to be a black disk. Such constraints limit the interchangeability between the filter and the image in the blind deconvolution problem.
Results: Our method is applied on synthetic and experimental data. We compute the PSF for SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA with the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality than parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.
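To see why a known transit object makes the problem better posed, here is a toy numpy sketch of the non-blind inner step: when the scene x is (approximately) known, the PSF follows from a Tikhonov-regularized division in the Fourier domain. This only illustrates the principle; the paper's scheme is blind and relies on wavelet priors and PSF constraints.

```python
import numpy as np

def estimate_psf(y, x, lam=1e-3):
    """Estimate h from y = h * x (circular convolution) for a known scene x,
    via regularized Fourier division: H = Y conj(X) / (|X|^2 + lam)."""
    Y, X = np.fft.fft2(y), np.fft.fft2(x)
    return np.real(np.fft.ifft2(Y * np.conj(X) / (np.abs(X) ** 2 + lam)))

n = 64
# Gaussian PSF (std 2 px), centered at the origin for circular convolution.
u = np.fft.fftfreq(n)[:, None] * n
v = np.fft.fftfreq(n)[None, :] * n
psf = np.exp(-(u ** 2 + v ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

# Bright solar background with a black transit disk, as in the paper's data.
scene = np.ones((n, n))
yy, xx = np.mgrid[:n, :n]
scene[(yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2] = 0.0

blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
h_est = estimate_psf(blurred, scene)
print("max abs PSF error:", np.abs(h_est - psf).max())
```

The estimate degrades at frequencies where the disk's spectrum nearly vanishes, which is one reason the paper adds regularization and prior constraints on the PSF.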
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Distinguishing Cause from Effect using Observational Data: Methods and Benchmarks

More work paralleling that of David Lopez-Paz et al.'s Non-linear Causal Inference using Gaussianity Measures - implementation - (we featured David at Paris Machine Learning #2, Season 2). A simple note when one looks at this from the signal processing standpoint: there, we usually set up problems as

y = A x + epsilon

where epsilon is some Gaussian noise. In these papers on causality, that specific shape for the noise would tend to imply that y is not the result of x, as "residuals in the anti-causal direction are more Gaussian than the residuals in the actual causal direction". Anyway, today we have the following (including a cause-effect pairs database).




Distinguishing cause from effect using observational data: methods and benchmarks by Joris M. Mooij, Jonas Peters, Dominik Janzing, Jakob Zscheischler, Bernhard Schölkopf

The discovery of causal relationships from purely observational data is a fundamental problem in science. The most elementary form of such a causal discovery problem is to decide whether X causes Y or, alternatively, Y causes X, given joint observations of two variables X, Y . This was often considered to be impossible. Nevertheless, several approaches for addressing this bivariate causal discovery problem were proposed recently. In this paper, we present the benchmark data set CauseEffectPairs that consists of 88 different "cause-effect pairs" selected from 31 datasets from various domains. We evaluated the performance of several bivariate causal discovery methods on these real-world benchmark data and on artificially simulated data. Our empirical results provide evidence that additive-noise methods are indeed able to distinguish cause from effect using only purely observational data. In addition, we prove consistency of the additive-noise method proposed by Hoyer et al. (2009).
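Here is a rough numerical sketch of the residual-Gaussianity heuristic mentioned above (my own toy version, not the method of either paper): fit a polynomial regression in both directions and compare how non-Gaussian the residuals look.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def non_gaussianity(r):
    """Crude non-Gaussianity score of standardized residuals."""
    r = (r - r.mean()) / r.std()
    return abs(skew(r)) + abs(kurtosis(r))   # both vanish for a Gaussian

def residuals(a, b, deg=3):
    """Residuals of a degree-deg polynomial regression of b on a."""
    return b - np.polyval(np.polyfit(a, b, deg), a)

# Ground truth: x -> y, nonlinear mechanism, uniform (non-Gaussian) noise.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 2000)
y = x ** 3 + rng.uniform(-1, 1, 2000)

print("x->y residual non-Gaussianity:", non_gaussianity(residuals(x, y)))
print("y->x residual non-Gaussianity:", non_gaussianity(residuals(y, x)))
# Heuristically, the anti-causal fit (y->x) yields the more Gaussian residuals.
```

The methods benchmarked in the paper are more careful (nonparametric regression plus an independence or Gaussianity test), but this is the intuition in a dozen lines.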
Of additional interest is a Database with cause-effect pairs 
Related:

Causal Discovery with Continuous Additive Noise Models by Jonas Peters, Joris Mooij, Dominik Janzing, Bernhard Schölkopf
We consider the problem of learning causal directed acyclic graphs from an observational joint distribution. One can use these graphs to predict the outcome of interventional experiments, from which data are often not available. We show that if the observational distribution follows a structural equation model with an additive noise structure, the directed acyclic graph becomes identifiable from the distribution under mild conditions. This constitutes an interesting alternative to traditional methods that assume faithfulness and identify only the Markov equivalence class of the graph, thus leaving some edges undirected. We provide practical algorithms for finitely many samples, RESIT (Regression with Subsequent Independence Test) and two methods based on an independence score. We prove that RESIT is correct in the population setting and provide an empirical evaluation.  
 Implementations are available here.


Learning Sparse Causal Models is not NP-hard by Tom Claassen, Joris M. Mooij, Tom Heskes
This paper shows that causal model discovery is not an NP-hard problem, in the sense that for sparse graphs bounded by node degree k the sound and complete causal model can be obtained in worst case order N^(2(k+2)) independence tests, even when latent variables and selection bias may be present. We present a modification of the well-known FCI algorithm that implements the method for an independence oracle, and suggest improvements for sample/real-world data versions. It does not contradict any known hardness results, and does not solve an NP-hard problem: it just proves that sparse causal discovery is perhaps more complicated, but not as hard as learning minimal Bayesian networks.

Proof Supplement - Learning Sparse Causal Models is not NP-hard (UAI2013) by Tom Claassen, Joris M. Mooij, Tom Heskes
This article contains detailed proofs and additional examples related to the UAI-2013 submission "Learning Sparse Causal Models is not NP-hard". It describes the FCI+ algorithm: a method for sound and complete causal model discovery in the presence of latent confounders and/or selection bias, that has worst case polynomial complexity of order N^(2(k+1)) in the number of independence tests, for sparse graphs over N nodes, bounded by node degree k. The algorithm is an adaptation of the well-known FCI algorithm by (Spirtes et al., 2000) that is also sound and complete, but has worst case complexity exponential in N.
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.