Saturday, June 25, 2016

Saturday Morning Video: Machine Learning in Computational Biology Workshop, @NIPS2015

 
 
 
The NIPS 2015 workshop videos are out. In particular, we have those of the Machine Learning in Computational Biology workshop today (I am trying to organize a workshop for this coming NIPS and if it is accepted I'll make sure that all the videos are taken). Enjoy !

Credit photo: Date: 24 June 2016, Satellite: Rosetta, Depicts: Comet 67P/Churyumov-Gerasimenko
Copyright: ESA/Rosetta/NAVCAM, CC BY-SA IGO 3.0

Rosetta navigation camera (NavCam) image taken on 17 June 2016 at 30.8 km from the centre of comet 67P/Churyumov-Gerasimenko. The image measures 2.7 km across and has a scale of about 2.6 m/pixel.
The image has been cleaned to remove the more obvious bad pixels and cosmic ray artefacts, and intensities have been scaled.
Another version of this image, which has been contrast enhanced, is available here.
More images of comet 67P/Churyumov-Gerasimenko can be found in the '67P - by Rosetta' collection.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 IGO License.
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, June 24, 2016

RedEye: Analog ConvNet Image Sensor Architecture for Continuous Mobile Vision - implementation -

 
The paper is: RedEye: Analog ConvNet Image Sensor Architecture for Continuous Mobile Vision by Robert LiKamWa, Yunhui Hou, Yuan Gao, Mia Polansky, Lin Zhong.

Continuous mobile vision is limited by the inability to efficiently capture image frames and process vision features. This is largely due to the energy burden of analog readout circuitry, data traffic, and intensive computation. To promote efficiency, we shift early vision processing into the analog domain. This results in RedEye, an analog convolutional image sensor that performs layers of a convolutional neural network in the analog domain before quantization. We design RedEye to mitigate analog design complexity, using a modular column-parallel design to promote physical design reuse and algorithmic cyclic reuse. RedEye uses programmable mechanisms to admit noise for tunable energy reduction. Compared to conventional systems, RedEye reports an 85% reduction in sensor energy, 73% reduction in cloudlet-based system energy, and a 45% reduction in computation-based system energy.
 
 The RedEye repository is at: https://github.com/JulianYG/redeye_sim and features the following:
RedEye is a vision sensor designed to execute early stages of a deep convolutional neural network (ConvNet) in the analog domain. This repo is a modification of Caffe to train, simulate and visualize analog ConvNet processing under noise vs. energy tradeoffs.
 

Compressive light-field microscopy for 3D neural activity recording

Imaging Human Learning thanks to compressive sensing. Yes, devious reader of the blog, I see you nodding at how meta that paper could be framed. It's not a sensor that looks at something in the brain; it's a sensor that decodes a scene using an algorithm that somehow parallels elements of the algorithms being imaged. That's a different way of looking at The Great Convergence. Without further ado:

Understanding the mechanisms of perception, cognition, and behavior requires instruments that are capable of recording and controlling the electrical activity of many neurons simultaneously and at high speeds. All-optical approaches are particularly promising since they are minimally invasive and potentially scalable to experiments interrogating thousands or millions of neurons. Conventional light-field microscopy provides a single-shot 3D fluorescence capture method with good light efficiency and fast speed, but suffers from low spatial resolution and significant image degradation due to scattering in deep layers of brain tissue. Here, we propose a new compressive light-field microscopy method to address both problems, offering a path toward measurement of individual neuron activity across large volumes of tissue. The technique relies on spatial and temporal sparsity of fluorescence signals, allowing one to identify and localize each neuron in a 3D volume, with scattering and aberration effects naturally included and without ever reconstructing a volume image. Experimental results on live zebrafish track the activity of an estimated 800+ neural structures at 100 Hz sampling rate.
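The recovery principle at work is standard compressive sensing: exploit sparsity to identify a few active sources from relatively few measurements. As a toy illustration (not the paper's method, which identifies neurons without ever reconstructing a volume image), here is generic sparse recovery from random projections via iterative soft-thresholding (ISTA); the problem sizes, the measurement matrix `A`, and the regularization weight are all invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse signal (a few "active" sources) observed through random projections
n, m, k = 200, 60, 4
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # hypothetical measurement matrix
y = A @ x_true

# ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - step * (A.T @ (A @ x - y))                        # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With only 60 measurements of a 200-dimensional signal, the 4 active entries are recovered to within a few percent.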



Thursday, June 23, 2016

Highly Technical Reference Page: Laplacian Linear Equations, Graph Sparsification, Local Clustering, Low-Stretch Trees, etc. + implementation

Here is a new Highly Technical Reference Page entitled Laplacian Linear Equations, Graph Sparsification, Local Clustering, Low-Stretch Trees, etc. by Dan Spielman. Thanks to Rich Seymour's tweet, here is a release of Laplacians.jl, a package built by Dan and collaborators. From the page:
Laplacians is a package containing graph algorithms, with an emphasis on tasks related to spectral and algebraic graph theory. It contains (and will contain more) code for solving systems of linear equations in graph Laplacians, low stretch spanning trees, sparsification, clustering, local clustering, and optimization on graphs.
All graphs are represented by sparse adjacency matrices. This is both for speed, and because our main concerns are algebraic tasks. It does not handle dynamic graphs. It would be very slow to implement dynamic graphs this way.
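Laplacians.jl is written in Julia, but the core task it targets, assembling a graph Laplacian from a sparse adjacency matrix and solving a linear system in it, can be sketched in a few lines of Python with scipy (a toy illustration, not the package's fast solvers; the "grounding" trick below is one standard way to handle the Laplacian's singularity):

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import spsolve

# Sparse adjacency matrix of a small path graph 0 - 1 - 2 - 3
rows = [0, 1, 1, 2, 2, 3]
cols = [1, 0, 2, 1, 3, 2]
A = csr_matrix((np.ones(6), (rows, cols)), shape=(4, 4))

# Graph Laplacian L = D - A
deg = np.asarray(A.sum(axis=1)).ravel()
L = (diags(deg) - A).tocsr()

# L is singular (constant vectors are in its nullspace), so "ground" one vertex:
# fix x[0] = 0 and solve the reduced system for the remaining entries.
b = np.array([1.0, 0.0, 0.0, -1.0])    # right-hand side; must sum to zero
x_rest = spsolve(L[1:, 1:].tocsc(), b[1:])
x = np.concatenate([[0.0], x_rest])
```

Here `b` injects one unit of flow at vertex 0 and extracts it at vertex 3; the solution `x` is the resulting vector of electrical potentials along the path.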
The documentation may be found at http://danspielman.github.io/Laplacians.jl/about/index.html.
This includes instructions for installing Julia, and some tips for how to start using it. It also includes guidelines for Dan Spielman's collaborators.
For some examples of the things you can do with Laplacians, look at





Slides, papers, some videos: ICML, CVPR, NIPS





As you all know, the proceedings for NIPS 2016 are here. At CVPR, Yann LeCun will present the following: What's Wrong With Deep Learning?. He also posted a video of Larry Jackel at the "Back to the Future" workshop at ICML.
All the papers at ICML are here. You can also follow Hugo Larochelle, who streams some talks through Periscope.

All ICML tutorials can be found here. They include Causal Inference for Policy Evaluation by Susan Athey, as well as the following:

Deep Reinforcement Learning

David Silver (Google DeepMind)

A major goal of artificial intelligence is to create general-purpose agents that can perform effectively in a wide range of challenging tasks. To achieve this goal, it is necessary to combine reinforcement learning (RL) agents with powerful and flexible representations. The key idea of deep RL is to use neural networks to provide this representational power. In this tutorial we will present a family of algorithms in which deep neural networks are used for value functions, policies, or environment models. State-of-the-art results will be presented in a variety of domains, including Atari games, 3D navigation tasks, continuous control domains and the game of Go.
[slides1] [slides2]
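As a minimal illustration of the value-function idea that the tutorial scales up with neural networks, here is tabular Q-learning on a toy chain MDP (the MDP, step function, and hyperparameters are all invented for this sketch; deep RL replaces the table `Q` with a network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deterministic chain MDP: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 yields reward 1 and ends the episode.
n_states, n_actions, goal = 5, 2, 4

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
    reward = 1.0 if s2 == goal else 0.0
    return s2, reward, s2 == goal

# Optimistic initialization encourages systematic exploration early on
Q = np.ones((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1       # learning rate, discount, exploration

for _ in range(500):                    # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap off the best next-state value
        target = r if done else r + gamma * Q[s2].max()
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

greedy = [int(Q[s].argmax()) for s in range(goal)]   # learned policy: walk right
```

After training, the greedy policy walks straight to the goal and the value just before the goal approaches the terminal reward of 1.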

Memory Networks for Language Understanding

Jason Weston (Facebook)

There has been a recent resurgence in interest in the use of the combination of reasoning, attention and memory for solving tasks, particularly in the field of language understanding. I will review some of these recent efforts, as well as focusing on one of my own group’s contributions, memory networks, an architecture that we have applied to question answering, language modeling and general dialog. As we try to move towards the goal of true language understanding, I will also discuss recent datasets and tests that have been built to assess these models' abilities to see how far we have come.

Deep Residual Networks: Deep Learning Gets Way Deeper

Kaiming He (Facebook, starting July 2016)

Deeper neural networks are more difficult to train. Beyond a certain depth, traditional deeper networks start to show severe underfitting caused by optimization difficulties. This tutorial will describe the recently developed residual learning framework, which eases the training of networks that are substantially deeper than those used previously. These residual networks are easier to converge, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with depth of up to 152 layers—8x deeper than VGG nets but still having lower complexity. These deep residual networks are the foundations of our 1st-place winning entries in all five main tracks in ImageNet and COCO 2015 competitions, which cover image classification, object detection, and semantic segmentation.
In this tutorial we will further look into the propagation formulations of residual networks. Our latest work reveals that when the residual networks have identity mappings as skip connections and inter-block activations, the forward and backward signals can be directly propagated from one block to any other block. This leads us to promising results of 1001-layer residual networks. Our work suggests that there is much room to exploit the dimension of network depth, a key to the success of modern deep learning.
[slides]
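The identity-propagation claim in the second paragraph is easy to see numerically: with identity skip connections, a block computes y = x + F(x), so if the residual branch F outputs zero, the signal passes through any number of blocks unchanged. A toy numpy sketch (hypothetical shapes, not the tutorial's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, W1, W2):
    """y = x + F(x): the identity skip connection carries the signal
    past the weight layers."""
    h = np.maximum(W1 @ x, 0.0)         # ReLU on the residual branch
    return x + W2 @ h

d, depth = 16, 50
x = rng.standard_normal(d)

# With zero residual branches every block is exactly the identity,
# so the input propagates unchanged through all 50 blocks.
h = x.copy()
for _ in range(depth):
    h = residual_block(h, np.zeros((d, d)), np.zeros((d, d)))
```

This is also why residual networks are easy to start training: at initialization each block is close to the identity, so gradients reach early layers without vanishing.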

Recent Advances in Non-Convex Optimization

Anima Anandkumar (University of California Irvine)

Most machine learning tasks require solving non-convex optimization. The number of critical points in a non-convex problem grows exponentially with the data dimension. Local search methods such as gradient descent can get stuck in one of these critical points, and therefore, finding the globally optimal solution is computationally hard. Despite this hardness barrier, we have seen many advances in guaranteed non-convex optimization. The focus has shifted to characterizing transparent conditions under which the global solution can be found efficiently. In many instances, these conditions turn out to be mild and natural for machine learning applications. This tutorial will provide an overview of the recent theoretical success stories in non-convex optimization. This includes learning latent variable models, dictionary learning, robust principal component analysis, and so on. Simple iterative methods such as spectral methods, alternating projections, and so on, are proven to learn consistent models with polynomial sample and computational complexity. This tutorial will present the main ingredients towards establishing these results. The tutorial will conclude with open challenges and possible paths towards tackling them.
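Spectral methods, mentioned above among the simple iterative methods with guarantees, often reduce to computing a leading eigenvector. A minimal power-iteration sketch (toy matrix, not any specific model from the tutorial):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy symmetric matrix with a dominant eigenvalue
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Power iteration: repeatedly apply A and renormalize; the iterate
# converges to the top eigenvector at a rate set by the spectral gap.
v = rng.standard_normal(3)
for _ in range(200):
    v = A @ v
    v /= np.linalg.norm(v)

lam = v @ A @ v                        # Rayleigh quotient estimate
true_lam = np.linalg.eigvalsh(A).max()
```

In the tensor-based methods the tutorial covers, an analogous iteration is run on a higher-order moment tensor rather than a matrix.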

Stochastic Gradient Methods for Large-Scale Machine Learning

Leon Bottou (Facebook AI Research), Frank E. Curtis (Lehigh University), and Jorge Nocedal (Northwestern University)

This tutorial provides an accessible introduction to the mathematical properties of stochastic gradient methods and their consequences for large scale machine learning.  After reviewing the computational needs for solving optimization problems in two typical examples of large scale machine learning, namely, the training of sparse linear classifiers and deep neural networks, we present the theory of the simple, yet versatile stochastic gradient algorithm, explain its theoretical and practical behavior, and expose the opportunities available for designing improved algorithms.  We then provide specific examples of advanced algorithms to illustrate the two essential directions for improving stochastic gradient methods, namely, managing the noise and making use of second order information.
[slides1] [slides2] [slides3]
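The "simple, yet versatile stochastic gradient algorithm" can be sketched in a few lines: sample one example, step along its loss gradient with a diminishing step size. A toy logistic-regression version (the synthetic data and the step-size schedule are illustrative choices, not prescriptions from the tutorial):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data (illustrative sizes)
n, d = 2000, 5
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = (X @ w_true + 0.1 * rng.standard_normal(n) > 0).astype(float)

def stochastic_grad(w, i):
    """Gradient of the logistic loss at a single example i."""
    p = 1.0 / (1.0 + np.exp(-X[i] @ w))
    return (p - y[i]) * X[i]

# SG iteration: one random example per step, diminishing step size
w = np.zeros(d)
for t in range(1, 20001):
    i = rng.integers(n)
    w -= (1.0 / np.sqrt(t)) * stochastic_grad(w, i)

accuracy = np.mean((X @ w > 0) == (y == 1))
```

Each step costs O(d) regardless of n, which is exactly why SG dominates batch gradient methods at large scale; the noise-reduction and second-order ideas in the tutorial refine this basic loop.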

The convex optimization, game-theoretic approach to learning

Elad Hazan (Princeton University) and Satyen Kale (Yahoo Research)

In recent years convex optimization and the notion of regret minimization in games have been combined and applied to machine learning in a general framework called online convex optimization. We will survey the basics of this framework, its applications, main algorithmic techniques and future research directions.
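A minimal instance of the framework is projected online gradient descent: play a point, observe a convex loss, step against its gradient, and measure regret against the best fixed point in hindsight. A toy sketch with quadratic losses (all constants invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Online convex optimization: each round the learner commits to x_t in [-1, 1]^d,
# then the environment reveals a quadratic loss f_t(x) = ||x - z_t||^2.
d, T = 3, 5000
Z = 0.3 + 0.1 * rng.standard_normal((T, d))    # per-round loss centers

x = np.zeros(d)
learner_loss = 0.0
for t in range(T):
    learner_loss += np.sum((x - Z[t]) ** 2)
    g = 2.0 * (x - Z[t])                        # gradient of f_t at x_t
    x = np.clip(x - g / np.sqrt(t + 1), -1, 1)  # projected OGD step

# Regret against the best fixed point in hindsight (the mean of the centers)
best = Z.mean(axis=0)
best_loss = np.sum((best[None, :] - Z) ** 2)
regret = learner_loss - best_loss
```

The regret grows like the square root of T, so the per-round average vanishes, which is the defining guarantee of the online convex optimization framework.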

Rigorous Data Dredging: Theory and Tools for Adaptive Data Analysis

Moritz Hardt (Google) and Aaron Roth (University of Pennsylvania)

Reliable tools for inference and model selection are necessary in all applications of machine learning and statistics. Much of the existing theory breaks down in the now common situation where the data analyst works interactively with the data, adaptively choosing which methods to use by probing the same data many times. We illustrate the problem through the lens of machine learning benchmarks, which currently all rely on the standard holdout method. After understanding why and when the standard holdout method fails, we will see practical alternatives to the holdout method that can be used many times without losing the guarantees of fresh data. We then transition into the emerging theory on this topic touching on deep connections to differential privacy, compression schemes, and hypothesis testing (although no prior knowledge will be assumed).

Graph Sketching, Streaming, and Space-Efficient Optimization

Sudipto Guha (University of Pennsylvania) and Andrew McGregor (University of Massachusetts Amherst)

Graphs are one of the most commonly used data representation tools, but existing algorithmic approaches are typically not appropriate when the graphs of interest are dynamic, stochastic, or do not fit into the memory of a single machine. Such graphs are often encountered as machine learning techniques are increasingly deployed to manage graph data and large-scale graph optimization problems. Graph sketching is a form of dimensionality reduction for graph data that is based on using random linear projections and exploiting connections between linear algebra and combinatorial structure. The technique has been studied extensively over the last five years and can be applied in many computational settings. It enables small-space online and data stream computation where we are permitted only a few passes (ideally only one) over an input sequence of updates to a large underlying graph. The technique parallelizes easily and can naturally be applied in various distributed settings. It can also be used in the context of convex programming to enable more efficient algorithms for combinatorial optimization problems such as correlation clustering. One of the main goals of the research on graph sketching is understanding and characterizing the types of graph structure and features that can be inferred from compressed representations of the relevant graphs.
[slides1] [slides2]
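The key property behind graph sketching is linearity: a random projection of the graph's edge-indicator vector commutes with insertions, deletions, and merges across machines. A toy demonstration (a dense random matrix is used for clarity; real constructions use structured sketches with dimension far smaller than the number of possible edges):

```python
import numpy as np

rng = np.random.default_rng(0)

# Represent a graph on n vertices as an indicator vector over all possible edges.
n = 6
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
index = {e: k for k, e in enumerate(pairs)}
m = len(pairs)

# Random linear sketch: project the (potentially huge) edge vector to k dims.
k = 8                                   # in realistic settings k << m
S = rng.standard_normal((k, m))

def sketch(updates):
    """Stream of (edge, +1 insert / -1 delete) updates -> k-dim linear sketch."""
    v = np.zeros(m)
    for e, delta in updates:
        v[index[e]] += delta
    return S @ v

# Insertions and deletions interleave freely: the sketch of the stream
# equals the sketch of the surviving edge set.
stream = [((0, 1), +1), ((2, 3), +1), ((0, 1), -1)]

# Mergeability: sketches computed on separate machines simply add.
s_merged = sketch([((0, 1), +1)]) + sketch([((2, 3), +1)])
```

This linearity is what makes the technique work in one-pass streaming and distributed settings alike: each site sketches its own updates and the sketches are combined by addition.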

Causal inference for observational studies

David Sontag and Uri Shalit (New York University)

In many fields such as healthcare, education, and economics, policy makers have increasing amounts of data at their disposal. Making policy decisions based on this data often involves causal questions: Does medication X lead to lower blood sugar, compared with medication Y? Does longer maternity leave lead to better child social and cognitive skills? These questions have to be addressed in practice, every day, by scientists working across many different disciplines.
The goal of this tutorial is to bring machine learning practitioners closer to the vast field of causal inference as practiced by statisticians, epidemiologists and economists. We believe that machine learning has much to contribute in helping answer such questions, especially given the massive growth in the available data and its complexity. We also believe the machine learning community could and should be highly interested in engaging with such problems, considering the great impact they have on society in general.
We hope that participants in the tutorial will: a) learn the basic language of causal inference as exemplified by the two most dominant paradigms today: the potential outcomes framework, and causal graphs; b) understand the similarities and the differences between problems machine learning practitioners usually face and problems of causal inference; c) become familiar with the basic tools employed by practicing scientists performing causal inference, and d) be informed about the latest research efforts in bringing machine learning techniques to address problems of causal inference.



Tuesday, June 21, 2016

Around The Blogs in 78 Summer hours


Andrew mentioned on his twitter feed that if one wants to see his upcoming book, one simply has to register at


While ICML is underway, here are a few blog posts of notes:
Ben
Sanjeev
Fabian
Tomasz
Charles


John

Pip
Bob
Anand
Timothy

Dustin

Suresh
Mike
Muthu
Laurent

Igor


This image was taken by Rear Hazcam: Right B (RHAZ_RIGHT_B) onboard NASA's Mars rover Curiosity on Sol 1377 (2016-06-20 21:46:58 UTC).

Image Credit: NASA/JPL-Caltech




Monday, June 20, 2016

CVPR papers are out !



The full set of CVPR papers is out and viewable here; here is a sample that caught my attention. Enjoy !



SparkleGeometry: Glitter Imaging for 3D Point Tracking / Cloudmaps from Static Ground-View Video

 Making sense of the world through unusual PSFs, aka imaging with Nature. I loooooove it !



SparkleGeometry: Glitter Imaging for 3D Point Tracking by Abigail Stylianou, Robert Pless
We consider a geometric inference problem for an imaging system consisting of a camera that views the world through a planar, rectangular sheet of glitter. We describe a procedure to calibrate this imaging geometry as a generalized camera which characterizes the subset of the light field viewed through each piece of glitter. We propose an easy to construct physical prototype and characterize its performance for estimating the 3D position of a moving point light source just by viewing the changing sparkle patterns visible on the glitter sheet. 

Cloudmaps from Static Ground-View Video by Nathan Jacobs, Scott Workman, Richard Souvenir
Cloud shadows dramatically affect the appearance of outdoor scenes. We describe three approaches that use video of cloud shadows to estimate a cloudmap, a spatio-temporal function that represents the clouds passing over the scene. Two of the methods make assumptions about the camera and/or scene geometry. The third method uses techniques from manifold learning and does not require such assumptions. None of the methods require directly viewing the clouds, but instead use the pattern of intensity changes caused by the cloud shadows. An accurate estimate of the cloudmap has potential applications in solar power estimation and forecasting, surveillance, and graphics. We present a quantitative evaluation of our methods on synthetic scenes and show qualitative results on real scenes. We also demonstrate the use of a cloudmap for foreground object detection and video editing.
 

Saturday, June 18, 2016

Saturday Morning Videos: Nonparametric Methods for Large Scale Representation Learning NIPS 2015 Workshop

Here are the videos of Nonparametric Methods for Large Scale Representation Learning Workshop at NIPS 2015. Enjoy !




Francis Bach (INRIA & ENS) video

Title: Sharp Analysis of Random Feature Expansions

Abstract: Random feature expansions provide a simple way to avoid the usual quadratic running-time complexity of kernel methods. In this talk, I will present recent results about the approximation properties of these expansions. In particular, I will provide improved bounds on the number of features needed for a given approximation quality.
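The basic construction being analyzed can be sketched directly: draw random frequencies from the kernel's spectral density and use cosine features, so that inner products of features approximate the kernel. This is the standard random Fourier features recipe for the RBF kernel, not the talk's refined analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(X, n_features, gamma, rng):
    """Random Fourier features whose inner products approximate the
    RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density: N(0, 2*gamma*I)
    W = np.sqrt(2.0 * gamma) * rng.standard_normal((d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.standard_normal((50, 3))
gamma = 0.5

# Exact RBF Gram matrix vs. its random-feature approximation
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq_dists)
Z = rff(X, 5000, gamma, rng)
max_err = np.abs(K - Z @ Z.T).max()
```

Training a linear model on `Z` then costs time linear in the number of examples, instead of the quadratic cost of working with the full Gram matrix; the talk's results bound how many features are needed for a given approximation quality.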


Michael Mahoney (Berkeley) video

Title: Using Local Spectral Methods in Theory and in Practice

Abstract: Local spectral methods are algorithms that touch only a small part of a large data graph and yet come with locally-biased versions of the Cheeger-like quality-of-approximation guarantees that make the usual global spectral methods so popular. Since they touch only a small part of a large data graph, these methods come with strong scalability guarantees, and they can be applied to graphs with hundreds of millions or billions of nodes. Moreover, due to implicit regularization, they also come with interesting statistical guarantees, and they perform quite well in many practical situations. We will describe the basic ideas underlying these methods, how these methods tend to perform in  practice at identifying different types of structure in data, and how an understanding of the implicit regularization properties underlying these methods leads to novel  methods to robustify graph-based learning algorithms to the peculiarities of data preprocessing decisions.


Yee Whye Teh (Oxford) video

Title: Random Tensor Decompositions for Regression and Collaborative Filtering

Abstract: In this talk I will present some ongoing work by Xiaoyu Lu, Hyunjik Kim, Seth Flaxman and myself on approximations for efficiently learning Gaussian processes and kernel methods. Our approximation is applicable when the kernel has Kronecker structure, but when the data need not be on a grid. The idea is to make use of random feature expansions, low-rank tensors, and recent advances in stochastic gradient MCMC / Variational Inference / Descent. We will also present how this can be used in a novel formulation for collaborative filtering with side-information using Gaussian processes, arguing it is more natural than current proposals for using GPs in collaborative filtering, and showing interesting connections between our approximations and low-rank matrix factorization approaches to collaborative filtering.


Fei Sha (UCLA) video
Title: Do shallow kernel methods match deep neural networks -- and if not, what can the shallow ones learn from the deep ones?

Abstract: Deep neural networks (DNNs) and other types of deep learning architectures have been hugely successful in a large number of applications. By contrast, kernel methods, which were exceedingly popular, have become lackluster. The crippling obstacle is the computational complexity of those methods. Nonetheless, there has been a resurgence of interest in these methods. In particular, several research groups have studied how to scale kernel methods to cope with large-scale learning problems.

Despite such progress, there has not been a systematic and head-on comparison between kernel methods and DNNs. Specifically, while recent approaches have shown exciting promises, we are still left with at least one itching unanswered question: can kernel methods, after being scaled up for large datasets, truly match DNN performance?

In this talk, I will describe our efforts in (partially) answering that question. I will present extensive empirical studies comparing kernel methods and DNNs for automatic speech recognition, a key field to which DNNs have been applied. Our investigative studies highlight the similarities and differences between these two paradigms. I will leave our main conclusion as a surprise.


Jean-Philippe Vert (Mines ParisTech & Curie Institute) Video
Title: Learning from Rankings

Abstract: In many applications such as genomics, high-dimensional data are often subject to technical variability such as noise of batch effects which are difficult to remove or model. If the variability approximately keeps the relative order of the features within each sample, then one could keep only the information of relative orders between features to characterize each sample, resulting in a representation of each sample as a permutation over the set of features. In this talk, I will discuss several new methods for supervised and unsupervised classification of such permutations, including new positive definite kernels on the symmetric groups and a new method for supervised full-quantile normalization, illustrating the benefits of these techniques on cancer patient stratification from noisy gene expression and mutation data.


Amr Ahmed (Google) Video
Title: Dirichlet-Hawkes Processes with Applications to Clustering Continuous-Time Document Streams

Abstract: Clustering in document streams, such as online news articles, can be induced by their textual contents, as well as by the temporal dynamics of their arriving patterns. Can we leverage both sources of information to obtain a better clustering of the documents, and distill information that is not possible to extract using contents only? In this talk, I will describe a novel random process, referred to as the Dirichlet-Hawkes process, to take into account both information in a unified framework. A distinctive feature of the proposed model is that the preferential attachment of items to clusters according to cluster sizes, present in Dirichlet processes, is now driven according to the intensities of cluster-wise self-exciting temporal point processes, the Hawkes processes. This new model establishes a previously unexplored connection between Bayesian nonparametrics and temporal point processes, which makes the number of clusters grow to accommodate the increasing complexity of online streaming contents, while at the same time adapts to the ever changing dynamics of the respective continuous arrival time. Large-scale experiments on both synthetic and real world news articles showed that Dirichlet-Hawkes processes can recover both meaningful topics and temporal dynamics, which leads to better predictive performance in terms of content perplexity and arrival time of future documents.
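The self-exciting ingredient is the Hawkes intensity, in which each past event transiently raises the arrival rate. A minimal sketch of the (exponential-kernel) intensity function, with invented parameters and event times:

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Self-exciting intensity lambda(t) = mu + alpha * sum_i exp(-beta*(t - t_i))
    over past events t_i < t: each arrival transiently raises the rate."""
    past = np.array([ti for ti in events if ti < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = [1.0, 1.2, 1.3]               # a burst of three arrivals
mu, alpha, beta = 0.1, 0.8, 2.0        # base rate, jump size, decay

just_after = hawkes_intensity(1.31, events, mu, alpha, beta)  # elevated by the burst
much_later = hawkes_intensity(10.0, events, mu, alpha, beta)  # decayed back to ~mu
```

In the Dirichlet-Hawkes process, each cluster carries such an intensity, so a cluster attracts new documents in proportion to how recently and how intensely it has been active, rather than merely to its size.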


Graph Sparsification Approaches for Laplacian Smoothing, Veeranjaneyulu Sadhanala, Video



Word, graph and manifold embedding from Markov processes, Tatsu Hashimoto, video

Image of TITAN

N00261571.jpg was taken on 2016-06-04 18:19 (UTC) and received on Earth 2016-06-05 11:23 (UTC). The camera was pointing toward TITAN, and the image was taken using the CL1 and CB3 filters. This image has not been validated or calibrated. A validated/calibrated image will be archived with the NASA Planetary Data System.
For more information on raw images check out our frequently asked questions section.
Image Credit: NASA/JPL-Caltech/Space Science Institute



Thursday, June 16, 2016

Optimization Methods for Large-Scale Machine Learning

The preprint also includes a section on large-scale L1 regularization, of interest in compressive sensing. Of note:
There are important methods that are not included in our presentation, such as the alternating direction method of multipliers (ADMM) [54, 61, 64] and the expectation-maximization (EM) method and its variants [45, 153], but our study covers many of the core algorithmic frameworks in optimization for machine learning, with emphasis on methods and theoretical guarantees that have the largest impact on practical performance.

Optimization Methods for Large-Scale Machine Learning by Léon Bottou, Frank E. Curtis, Jorge Nocedal

This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.




