Nuit Blanche

My name is Igor Carron


My papers on ArXiv:
Approximating Kernels at the speed of Light
&
Imaging with Nature


Nuit Blanche community
@NuitBlog || Facebook || Reddit
Compressive Sensing on LinkedIn

Advanced Matrix Factorization on Linkedin
||
Attendant references pages:
The Advanced Matrix Factorization Jungle Page ||
The Big Picture in Compressive Sensing ||
Learning Compressive Sensing ||
Highly Technical Reference Pages - Aggregators

Paris Machine Learning
@Meetup.com || @Archives
|| @LinkedIn ||@Facebook
|| @ParisMLGroup

Monday, October 31, 2016

Lensless Imaging with Compressive Ultrafast Sensing

Here is some very fast compressive sensing using coded aperture.


Lensless Imaging with Compressive Ultrafast Sensing by Guy Satat, Matthew Tancik, Ramesh Raskar
Conventional imaging uses a set of lenses to form an image on the sensor plane. This pure hardware-based approach does not use any signal processing or the extra information in the time of arrival of photons at the sensor. Recently, modern compressive sensing techniques have been applied for lensless imaging. However, this computational approach tends to rely as much as possible on signal processing (for example, the single pixel camera) and results in long acquisition times. Here we propose using compressive ultrafast sensing for lensless imaging. We use extremely fast sensors (picosecond time resolution) to time tag photons as they arrive at an omnidirectional pixel. Thus, each measurement produces a time series where time is a function of the photon source location in the scene. This allows lensless imaging with significantly fewer measurements compared to regular single pixel imaging (33× fewer measurements in our experiments). To achieve this goal, we developed a framework for using ultrafast pixels with compressive sensing, including an algorithm for ideal sensor placement, and an algorithm for optimized active illumination patterns. We show that efficient lensless imaging is possible with ultrafast imaging and compressive sensing. This paves the way for novel imaging architectures, and remote sensing in extreme situations where imaging with a lens is not possible.
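The recovery side of such a compressive scheme can be sketched with plain sparse recovery. The snippet below is a minimal illustration, not the authors' pipeline: it assumes a generic random sensing matrix `A` standing in for the time-resolved measurement operator, and recovers a k-sparse scene by iterative hard thresholding.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 64, 4            # scene pixels, measurements, sparsity

# Hypothetical stand-in for the time-resolved sensing operator:
# a generic random matrix with roughly unit-norm columns.
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
y = A @ x_true                  # noiseless compressive measurements

# Iterative hard thresholding: gradient step, then keep the k largest entries.
x = np.zeros(n)
for _ in range(200):
    x = x + A.T @ (y - A @ x)
    x[np.argsort(np.abs(x))[:-k]] = 0.0
```

With m well above the information-theoretic minimum, the support and values of the sparse scene are recovered; the paper's point is that time-tagged photons let m shrink much further than in standard single-pixel schemes.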

Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!
Liked this entry? Subscribe to Nuit Blanche's feed; there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
By Igor at 10/31/2016 10:19:00 AM No comments:
labels: CS, CSHardware

Wednesday, October 26, 2016

Practical Learning of Deep Gaussian Processes via Random Fourier Features

 

Practical Learning of Deep Gaussian Processes via Random Fourier Features by Kurt Cutajar, Edwin V. Bonilla, Pietro Michiardi, Maurizio Filippone

The composition of multiple Gaussian Processes as a Deep Gaussian Process (DGP) enables a deep probabilistic approach to flexibly quantify uncertainty and carry out model selection in various learning scenarios. In this work, we introduce a novel formulation of DGPs based on random Fourier features that we train using stochastic variational inference. Our proposal yields an efficient way of training DGP architectures without compromising on predictive performance. Through a series of experiments, we illustrate how our model compares favorably to other state-of-the-art inference methods for DGPs for both regression and classification tasks. We also demonstrate how an asynchronous implementation of stochastic gradient optimization can exploit the computational power of distributed systems for large-scale DGP learning.
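Random Fourier features, the building block of this DGP formulation, can be sketched in a few lines. This shows only the single-layer kernel approximation (a DGP stacks such layers and trains them variationally); the lengthscale and feature count below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, D = 3, 2000                  # input dimension, number of random features
lengthscale = 1.0

# Random Fourier features for the RBF kernel k(x, y) = exp(-||x-y||^2 / (2 l^2)):
# frequencies drawn from the kernel's spectral density, random phases.
W = rng.standard_normal((D, d)) / lengthscale
b = rng.uniform(0, 2 * np.pi, D)

def phi(X):
    """Feature map whose inner products approximate the RBF kernel."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

x = rng.standard_normal(d)
y = rng.standard_normal(d)
approx = (phi(x[None]) @ phi(y[None]).T).item()
exact = np.exp(-np.sum((x - y) ** 2) / (2 * lengthscale ** 2))
```

The approximation error decays like O(1/sqrt(D)), which is what makes the surrogate model cheap to train at scale.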
 
 
By Igor at 10/26/2016 12:00:00 AM No comments:
labels: ML, RandomFeatures

Tuesday, October 25, 2016

Thesis: Spectral Inference Methods on Sparse Graphs: Theory and Applications by Alaa Saade

Congratulations, Dr. Saade!
 


Spectral Inference Methods on Sparse Graphs: Theory and Applications by Alaa Saade

In an era of unprecedented deluge of (mostly unstructured) data, graphs are proving more and more useful, across the sciences, as a flexible abstraction to capture complex relationships between complex objects. One of the main challenges arising in the study of such networks is the inference of macroscopic, large-scale properties affecting a large number of objects, based solely on the microscopic interactions between their elementary constituents. Statistical physics, precisely created to recover the macroscopic laws of thermodynamics from an idealized model of interacting particles, provides significant insight to tackle such complex networks.
In this dissertation, we use methods derived from the statistical physics of disordered systems to design and study new algorithms for inference on graphs. Our focus is on spectral methods, based on certain eigenvectors of carefully chosen matrices, and sparse graphs, containing only a small amount of information. We develop an original theory of spectral inference based on a relaxation of various mean-field free energy optimizations. Our approach is therefore fully probabilistic, and contrasts with more traditional motivations based on the optimization of a cost function. We illustrate the efficiency of our approach on various problems, including community detection, randomized similarity-based clustering, and matrix completion.
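One concrete instance of such a spectral method on sparse graphs is community detection with the Bethe Hessian, a matrix studied in this line of work. The sketch below assumes a simple two-block stochastic block model; the choice r = sqrt(mean degree) and the use of the second eigenvector follow the usual recipe and are meant as illustration, not as a reproduction of the thesis experiments.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200                          # nodes, two planted communities
labels = np.repeat([0, 1], n // 2)
p_in, p_out = 0.12, 0.02

# Sample a symmetric adjacency matrix from a stochastic block model.
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Bethe Hessian H(r) = (r^2 - 1) I - r A + D, with r = sqrt(mean degree).
deg = A.sum(axis=1)
r = np.sqrt(deg.mean())
H = (r**2 - 1) * np.eye(n) - r * A + np.diag(deg)

# Eigenvectors attached to negative eigenvalues carry the communities;
# the second-smallest one separates the two groups by sign.
w, V = np.linalg.eigh(H)
pred = (V[:, 1] > 0).astype(int)
accuracy = max(np.mean(pred == labels), np.mean(pred != labels))
```

Unlike the adjacency or Laplacian spectra, the Bethe Hessian stays informative on sparse graphs down to the detectability threshold, which is the regime the dissertation analyzes.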
 
 
 
 
By Igor at 10/25/2016 12:00:00 AM No comments:
labels: ML, thesis

Monday, October 24, 2016

A Greedy Blind Calibration Method for Compressed Sensing with Unknown Sensor Gains



A Greedy Blind Calibration Method for Compressed Sensing with Unknown Sensor Gains by Valerio Cambareri, Laurent Jacques
The realisation of sensing modalities based on the principles of compressed sensing is often hindered by discrepancies between the mathematical model of its sensing operator, which is necessary during signal recovery, and its actual physical implementation, whose values may differ significantly from the assumed model. In this paper we tackle the bilinear inverse problem of recovering a sparse input signal and some unknown, unstructured multiplicative factors affecting the sensors that capture each compressive measurement. Our methodology relies on collecting a few snapshots under new draws of the sensing operator, and applying a greedy algorithm based on projected gradient descent and the principles of iterative hard thresholding. We explore empirically the sample complexity requirements of this algorithm by testing the phase transition of our algorithm, and show in a practically relevant instance of compressive imaging that the exact solution can be obtained with only a few snapshots.
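A simplified version of this kind of greedy blind calibration can be sketched as alternating updates: hard-thresholded gradient steps on the sparse signal, then per-sensor least squares for the gains across snapshots. This is a toy stand-in in the spirit of the paper, not the authors' algorithm; all dimensions and the gain model are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k, P = 128, 64, 4, 5      # signal dim, sensors, sparsity, snapshots

# Fresh draw of the sensing operator for each snapshot.
A = rng.standard_normal((P, m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)
g_true = 1.0 + 0.2 * rng.standard_normal(m)      # unknown sensor gains
Y = g_true * (A @ x_true)                        # measurements, shape (P, m)

g, x = np.ones(m), np.zeros(n)
for _ in range(20):
    # Signal step: hard-thresholded gradient iterations with gains fixed.
    for _ in range(50):
        resid = Y - g * (A @ x)
        x = x + np.einsum('pm,pmn->n', g * resid, A) / P
        x[np.argsort(np.abs(x))[:-k]] = 0.0
    # Gain step: per-sensor least squares across snapshots with x fixed.
    Z = A @ x
    g = (Y * Z).sum(axis=0) / ((Z * Z).sum(axis=0) + 1e-12)
```

The bilinear problem has an inherent scaling ambiguity between gains and signal, so success is judged by the recovered support and the fit to the measurements rather than by exact equality.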


 
By Igor at 10/24/2016 12:00:00 AM No comments:
labels: BlindDeconvolution, CS, phasediagrams

Sunday, October 23, 2016

Sunday Morning Insight: We're the Barbarians


 
From Appendix A of "Rebooting the IT Revolution: A Call to Action"

In a recent blog entry (Predicting the Future: The Steamrollers and Machine Learning), I pointed out the current limits on the use of silicon for computing. Even though the predictions show the substantial impact of computing on power generation, only a scattered set of initiatives and technology developments is looking into this issue.

This was reinforced when we, at LightOn, recently filled out a form to join Optics Valley, a non-profit group representing the interests of the optics industry here in France. Many of our answers fell into the "Other" category. That feeling was very much reinforced last night when I watched the IEEE Rebooting Computing video, which features a set of initiatives that aim at solving this exact problem. But if you watch the short video, you'll probably notice that our technology also falls into the "Other" category.


 
Rome, errr... Silicon Valley, needs a solution, and we're the Barbarians...

 
By Igor at 10/23/2016 12:00:00 AM No comments:
labels: MappingMLtoHardware, ML, MLHardware, SundayMorningInsight

Friday, October 21, 2016

Job: PhD Studentships, TU Delft / Online Optimization with Costly and Noisy Measurements using Random Fourier Expansions

Sander just sent me the following:

Dear Igor,

I have two vacancies for PhD students in

Applied Nonlinear Fourier Analysis for Fiber-Optic Communication / Water Wave Analysis

here at TU Delft that I hope might be of interest to some of your readers. More information can be found in the flyer at

http://www.dcsc.tudelft.nl/~swahls/pdf/PhD_Positions_NEUTRINO.pdf

It would be great if you could post them in your (fantastic!) blog.

Best, Sander

--

Dr.-Ing. Sander Wahls
Assistant Professor at TU Delft

http://www.dcsc.tudelft.nl/~swahls

So you'd think that Sander is just flattering me and the blog to get a post out to hire PhD students, but you'd be wrong. He does very interesting work; check out this recent one:


Online Optimization with Costly and Noisy Measurements using Random Fourier Expansions by Laurens Bliek, Hans R. G. W. Verstraete, Michel Verhaegen, Sander Wahls

This paper analyzes DONE, an online optimization algorithm that iteratively minimizes an unknown function based on costly and noisy measurements. The algorithm maintains a surrogate of the unknown function in the form of a random Fourier expansion (RFE). The surrogate is updated whenever a new measurement is available, and then used to determine the next measurement point. The algorithm is comparable to Bayesian optimization algorithms, but its computational complexity per iteration does not depend on the number of measurements. We derive several theoretical results that provide insight on how the hyper-parameters of the algorithm should be chosen. The algorithm is compared to a Bayesian optimization algorithm for a benchmark problem and three applications, namely, optical coherence tomography, optical beam-forming network tuning, and robot arm control. It is found that the DONE algorithm is significantly faster than Bayesian optimization in the discussed problems, while achieving a similar or better performance.
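The core idea, maintaining a random Fourier expansion surrogate and minimizing it to choose the next measurement point, can be sketched as follows. Note this is a batch caricature: DONE updates the expansion recursively per measurement, whereas here the surrogate is fit once by ridge regression on hypothetical noisy samples of a 1-D function with its minimum at 0.3.

```python
import numpy as np

rng = np.random.default_rng(4)

def f(x):
    """Unknown function, observed only through costly noisy measurements."""
    return (x - 0.3) ** 2 + 0.01 * rng.standard_normal(np.shape(x))

D, ell, lam = 200, 0.5, 1e-3     # features, lengthscale, ridge penalty
w = rng.standard_normal(D) / ell
b = rng.uniform(0, 2 * np.pi, D)

def phi(x):
    """Random Fourier expansion features."""
    return np.sqrt(2.0 / D) * np.cos(np.outer(x, w) + b)

# Fit the RFE surrogate by ridge regression on past measurements.
X = rng.uniform(-1, 1, 300)
y = f(X)
Phi = phi(X)
c = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ y)

# Choose the next measurement point by minimizing the cheap surrogate.
grid = np.linspace(-1, 1, 1001)
x_next = grid[np.argmin(phi(grid) @ c)]
```

Because the surrogate has a fixed number of features, the cost of each update and minimization does not grow with the number of measurements, which is the advantage over standard Bayesian optimization highlighted in the abstract.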


By Igor at 10/21/2016 12:01:00 PM No comments:
labels: CSjob, ML, RandomFeatures

Thursday, October 20, 2016

Paris Machine Learning, Hors série #3: Mathématiques et Data Science

Today, we will have a special kind of meetup: an open-question roundtable with the following people (see below). The subject will naturally revolve around Mathematics and Data Science. It will most probably be in French (questions can obviously be asked in English):

Cédric Villani, Vincent Lefieux, Philippe Azoulay, Mathilde Mougeot, Julie Josse, Nicolas Le Roux 

Questions can be asked on Twitter under the #MLParis tag.

The meetup is organized in part thanks to Quantmetry and in conjunction with the Mathématiques, Oxygène du numérique event. This is the streaming video, but we should eventually have a more professional one later.






 
By Igor at 10/20/2016 10:45:00 AM No comments:
labels: meetup, ML, MLParis, ParisMachineLearning

Wednesday, October 19, 2016

Random Projections for Scaling Machine Learning in Hardware

Continuing our Mapping ML to Hardware series, here is a way of producing random projections that is different from the way we do it at LightOn.
Random Projections for Scaling Machine Learning in Hardware by Sean Fox, Stephen Tridgell, Craig Jin and Philip H.W. Leong
Random projections have recently emerged as a powerful technique for large scale dimensionality reduction in machine learning applications. Crucially, the randomness can be extracted from sparse probability distributions, enabling hardware implementations with little overhead. In this paper, we describe a Field-Programmable Gate Array (FPGA) implementation alongside a Kernel Adaptive Filter (KAF) that is capable of reducing computational resources by introducing a controlled error term, achieving higher modelling capacity for given hardware resources. Empirical results involving classification, regression and novelty detection show that a 40% net increase in available resources and improvements in prediction accuracy is achievable for projections which halve the input vector length, enabling us to scale-up hardware implementations of KAF learning algorithms by at least a factor of 2. Execution time of our random projection core is shown to be an order of magnitude lower than a single core central processing unit (CPU) and the system-level implementation on a FPGA-based network card achieves a 29x speedup over the CPU. 
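The "randomness from sparse probability distributions" the abstract mentions is in the spirit of Achlioptas-style projections, where most matrix entries are exactly zero and the rest are ±1, which is what makes hardware implementations cheap. A minimal sketch (the density parameter s is an arbitrary choice, and this is not the paper's FPGA design):

```python
import numpy as np

rng = np.random.default_rng(5)
d, k = 1000, 200                 # original and projected dimensions
s = 3                            # sparsity: about 2/3 of entries are zero

# Achlioptas-style sparse projection: entries in {+1, 0, -1} with
# probabilities {1/(2s), 1 - 1/s, 1/(2s)}, scaled so E[||Rx||^2] = ||x||^2.
R = rng.choice([1.0, 0.0, -1.0], size=(k, d), p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
R *= np.sqrt(s / k)

x = rng.standard_normal(d)
ratio = np.linalg.norm(R @ x) / np.linalg.norm(x)
```

Multiplication by such a matrix needs only additions and subtractions on a sparse set of entries, no multipliers, which is exactly the property an FPGA implementation exploits.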



By Igor at 10/19/2016 04:31:00 AM No comments:
labels: MappingMLtoHardware, ML, MLHardware

Tuesday, October 18, 2016

Thesis: Fast Randomized Algorithms for Convex Optimization and Statistical Estimation by Mert Pilanci

Here are 230 pages of goodness from Dr. Pilanci! Fast Randomized Algorithms for Convex Optimization and Statistical Estimation by Mert Pilanci
 
With the advent of massive datasets, statistical learning and information processing techniques are expected to enable exceptional possibilities for engineering, data intensive sciences and better decision making. Unfortunately, existing algorithms for mathematical optimization, which is the core component in these techniques, often prove ineffective for scaling to the extent of all available data. In recent years, randomized dimension reduction has proven to be a very powerful tool for approximate computations over large datasets. In this thesis, we consider random projection methods in the context of general convex optimization problems on massive datasets. We explore many applications in machine learning, statistics and decision making and analyze various forms of randomization in detail. The central contributions of this thesis are as follows: 
(i) We develop random projection methods for convex optimization problems and establish fundamental trade-offs between the size of the projection and accuracy of solution in convex optimization. 
(ii) We characterize information-theoretic limitations of methods that are based on random projection, which surprisingly shows that the most widely used form of random projection is, in fact, statistically sub-optimal. 
(iii) We present novel methods, which iteratively refine the solutions to achieve statistical optimality and enable solving large scale optimization and statistical inference problems orders-of-magnitude faster than existing methods. 
(iv) We develop new randomized methodologies for relaxing cardinality constraints in order to obtain checkable and more accurate approximations than the state of the art approaches.
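The basic object studied in the thesis, a random projection of a convex (here least-squares) problem, can be illustrated with classical sketch-and-solve: compress the rows, solve the small problem, and accept some loss of accuracy, which is precisely the sub-optimality point (ii) refers to. Dimensions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, m = 5000, 20, 400          # samples, features, sketch size

A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star + 0.1 * rng.standard_normal(n)

# Classical sketch-and-solve: compress the rows with a random projection,
# then solve the much smaller m x d least-squares problem.
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
```

The sketched solution is close to the exact one but not statistically optimal for a fixed sketch size; the iterative refinements of point (iii) close that gap.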



By Igor at 10/18/2016 01:08:00 PM No comments:
labels: ML, random projections, randomization, thesis

Monday, October 17, 2016

Sketching Meets Random Projection in the Dual: A Provable Recovery Algorithm for Big and High-dimensional Data



The introduction to the following paper is a wonderful backgrounder:
Machine learning has gained great empirical success from the massive data sets collected from various domains. Among them, a major challenge is to utilize existing computational resources to build predictive and inferential models from such huge data sets, while maintaining the statistical power of big data. One remedy for the big data challenge is to build distributed computer systems and design distributed learning algorithms to make big data learning possible; however, distributed systems may not always be available, and the cost of running a distributed system can be much higher than one can afford, which makes distributed learning unsuitable for many scenarios. An alternative remedy is to use state-of-the-art randomized optimization algorithms to accelerate the training process. For example, researchers have proposed optimization algorithms for the regularized empirical risk minimization problem with provable fast convergence and low computational cost per iteration (see (Johnson and Zhang, 2013; Shalev-Shwartz and Zhang, 2013; Defazio et al., 2014) for examples); however, the speed of these optimization methods still heavily depends on the condition number of the problem at hand, which can be undesirable for many real-world problems. Sketching (Woodruff, 2014), which approximates the solution via constructing a sketched, usually smaller-scale, problem from the original data, has become an emerging technique for big data analytics. With the sketching technique, we can find solutions which approximately solve various forms of the original large-scale problem, such as least squares regression, robust regression, low-rank approximation, and singular value decomposition, just to name a few.
For survey and recent advances about sketching, we refer the readers to (Halko et al., 2011; Mahoney, 2011; Lu et al., 2013; Alaoui and Mahoney, 2014; Woodruff, 2014; Raskutti and Mahoney, 2015; Yang et al., 2015a; Oymak et al., 2015; Oymak and Tropp, 2015; Drineas and Mahoney, 2016) and references therein.
However, one major drawback of sketching is that it is typically not suitable when we want a highly accurate solution: to obtain a solution with exponentially smaller approximation error, we often need to increase the sketching dimension exponentially as well.
The situation has become better with recent work on "iterative sketching", e.g. the iterative Hessian sketch (IHS) (Pilanci and Wainwright, 2016) and iterative dual random projection (IDRP) (Zhang et al., 2014). These methods are able to refine their approximate solution by iteratively solving some small-scale sketched problem. Among these innovations, the Hessian sketch (Pilanci and Wainwright, 2016) is designed by reducing the sample size of the original problem, while dual random projection (Zhang et al., 2014) is proposed by reducing the dimension. As a consequence, when the sample size and feature dimension are both large, IHS and IDRP still need to solve relatively large-scale subproblems, as they can only sketch the problem from one perspective. In this paper, we make the following improvements upon previous work: we first propose an accelerated version of IHS which requires the same computational cost to solve the IHS subproblem at each sketching iteration, while needing a provably smaller number of sketching iterations to reach a given accuracy; we then reveal the primal-dual connections between IHS (Pilanci and Wainwright, 2016) and IDRP (Zhang et al., 2014), which were independently proposed by two different groups of researchers. In particular, we show that these two methods are equivalent in the sense that dual random projection is performing the Hessian sketch in the dual space. Finally, to alleviate the computational issues raised by big and high-dimensional learning problems, we propose a primal-dual sketching method that can simultaneously reduce the sample size and dimension of the sketched sub-problem, with provable convergence guarantees.
 Here is the paper: Sketching Meets Random Projection in the Dual: A Provable Recovery Algorithm for Big and High-dimensional Data by Jialei Wang, Jason D. Lee, Mehrdad Mahdavi, Mladen Kolar, Nathan Srebro
Sketching techniques have become popular for scaling up machine learning algorithms by reducing the sample size or dimensionality of massive data sets, while still maintaining the statistical power of big data. In this paper, we study sketching from an optimization point of view: we first show that the iterative Hessian sketch is an optimization process with preconditioning, and develop accelerated iterative Hessian sketch via the searching the conjugate direction; we then establish primal-dual connections between the Hessian sketch and dual random projection, and apply the preconditioned conjugate gradient approach on the dual problem, which leads to the accelerated iterative dual random projection methods. Finally to tackle the challenges from both large sample size and high-dimensionality, we propose the primal-dual sketch, which iteratively sketches the primal and dual formulations. We show that using a logarithmic number of calls to solvers of small scale problem, primal-dual sketch is able to recover the optimum of the original problem up to arbitrary precision. The proposed algorithms are validated via extensive experiments on synthetic and real data sets which complements our theoretical results.
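The iterative Hessian sketch that this paper accelerates can be sketched in a few lines: instead of sketching the data once, each iteration draws a fresh sketch to precondition a Newton-type step, so the error to the exact solution decays geometrically. A minimal version for least squares (dimensions arbitrary, plain Gaussian sketches):

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, m = 5000, 20, 200          # samples, features, sketch size per iteration

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]

# Iterative Hessian sketch: each iteration sketches only the Hessian A^T A,
# while the gradient uses the full data, so the iterates converge to the
# exact least-squares solution rather than a one-shot approximation.
x = np.zeros(d)
for _ in range(10):
    S = rng.standard_normal((m, n)) / np.sqrt(m)
    SA = S @ A
    x = x + np.linalg.solve(SA.T @ SA, A.T @ (b - A @ x))
```

Each iteration only factors an m × d sketched matrix, and a logarithmic number of iterations suffices for high accuracy, which is the behavior the primal-dual sketch in this paper generalizes to both dimensions at once.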

By Igor at 10/17/2016 04:24:00 PM No comments:
labels: ML, RandNLA, random projections, sketching

Job: Postdoc at Ecole Normale Supérieure (ENS Paris), France

Florent wants me to post this postdoc announcement here:

Laplace Junior Professor Chair in Data Science


Postdoctoral Program
 
The Ecole Normale Supérieure (ENS Paris) invites applications for a Junior Research and Teaching Laplace chair in data science at the postdoctoral level, funded by CFM (Capital Fund Management) and the ENS. The chair is named after Pierre-Simon, marquis de Laplace, who, among many accomplishments, was one of the early founders of statistical inference and data science.
The Laplace chair aims at recruiting outstanding candidates in all areas of data science, including theoretical statistics, machine learning, signal processing, computer science, applied mathematics and statistical physics, or working on applications to other sciences such as physics, biology, medicine, or the social sciences and economics.
Appointments will be for two years with a possible extension for a third year. Salary is competitive and the positions are endowed with travel resources.
The successful candidate will carry out research at ENS, with reduced teaching duties which will be adapted. Applications should consist of a single file and be sent before November 30th, 2016 by email to laplacechair2017@gmail.com. They should include:
  • A cover letter ;
  • A complete CV including a list of publications ;
  • A research statement (maximum 4 pages in A4 format) taking into account possible interactions with research groups/faculty within the different departments of ENS (Computer Science, Mathematics, Physics, Biology, etc.) ;
  • Three letters of recommendation from senior scientists, to be sent directly by email to laplacechair2017@gmail.com.
More information about the scientific environment of this program can be found on the webpage of the Data Science Chair of the ENS at https://data-ens.github.io.
Short-listed candidates will be invited for an interview (video conference) in mid-January 2017.
 
 
 
 
 
 
 
 
By Igor at 10/17/2016 12:00:00 AM No comments:
labels: CSjob, CSjobs, ML

Sunday, October 16, 2016

Sunday Morning Insight: Machine Learning in Paris this past week



The most fascinating experience we've had this past week was the meetup we organized at Vente-privée, one of the largest online retail operations in Europe. It was fascinating on many grounds. First, we got to discover the business through one of the presentations by their engineers, and then got a tour of the operations from Julien, the CTO. Their growth is so amazing that they now have to use machine learning to scale up, not just for their operations but also for their customers. Their CTO mentioned a few facts that got our attention: 100,000 packages delivered every day, 20 artists in residence who produce the jingles for their stores, more than 200 online stores created per month. Shooting the products requires fashion models, but at some point their operations were so large that all the fashion models in Paris (except for a few tops) were booked with them. That situation led them to create photorealistic rendered models for their stores/campaigns. They are about to open a few R&D labs at the Epita and 42 schools, and they have lots of very interesting problems. In a way, they reminded me a little bit of the situations described by Chris at the New York Times and Andrei at WalmartLabs a while back (see the presentations in the archives section of the meetup).

The meetup itself was also somewhat different with regard to the presentations: Greg spoke to us about grabbing our interactions on social networks and using them to enhance our personalities. This is an open project and the site is here: people2vec. Arnaud did the very unusual thing of telling us how he put together the best actionable dataset for his deep learning start-up (Regaind.io). Olivier, Ivan and Antoine detailed some of the ML work at Vente-privée, and Frederico talked to us about health data on the web. We also opened a small debate with François that got some reaction from the crowd. Eventually, Clementine also mentioned a Startup Weekend on AI.
All the presentations and the video of the streaming are here. This coming week, we should have a new 'Hors série' meetup organized with Quantmetry, featuring Cédric Villani, a Fields Medalist, among other speakers. I am not quite sure what the format will be, but you can register here to attend. As usual, it's free.
The day after the meetup, LightOn got to pitch in the semi-finals of the Hello Tomorrow Challenge. The winner of this year's edition is a flying car.
 


By Igor at 10/16/2016 05:57:00 PM No comments:
labels: ML, SundayMorningInsight

Friday, October 14, 2016

MPI-FAUN: An MPI-Based Framework for Alternating-Updating Nonnegative Matrix Factorization - implementation -

 
 
Ramki just sent me the following:
 
Dear Igor,

We have recently open-sourced our distributed NMF library at https://github.com/ramkikannan/nmflibrary, which implements different NMF algorithms such as Multiplicative Update (MU), Hierarchical Alternating Least Squares (HALS) and active-set based ANLS/BPP on MPI and OpenMP. Some recent benchmarking of these algorithms on Oak Ridge Leadership Computing Facility (OLCF) supercomputers is presented at https://arxiv.org/abs/1609.09154. Kindly look at the paper for the topic modeling results on the entire Stack Overflow corpus of 12 million questions, and the experiments with graph clustering of the biggest sparse matrix (of size 118M) from the Florida Sparse Matrix Collection.

Can you please post this information in your community - matrix factorization jungle, compressed sensing google plus group and other interested communities. Let us know if you are looking for more information.

Regards,
Ramki
Thanks, Ramki! Here are the presentation and the preprint Ramki mentions: MPI-FAUN: An MPI-Based Framework for Alternating-Updating Nonnegative Matrix Factorization by Ramakrishnan Kannan, Grey Ballard, Haesun Park
Non-negative matrix factorization (NMF) is the problem of determining two non-negative low rank factors W and H, for the given input matrix A, such that A≈WH. NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient parallel algorithms to solve the problem for big data sets.
The main contribution of this work is a new, high-performance parallel computational framework for a broad class of NMF algorithms that iteratively solves alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). The framework is flexible and able to leverage a variety of NMF and NLS algorithms, including Multiplicative Update, Hierarchical Alternating Least Squares, and Block Principal Pivoting. Our implementation allows us to benchmark and compare different algorithms on massive dense and sparse data matrices whose sizes span from a few hundred million to billions of entries. We demonstrate the scalability of our algorithm and compare it with baseline implementations, showing significant performance improvements. The code and the datasets used for conducting the experiments are available online.
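To make the alternating structure concrete, here is a minimal serial NumPy sketch of the Multiplicative Update (MU) variant among the algorithms the library supports. This is an illustration only, not the MPI-parallel implementation; `nmf_mu` is a hypothetical helper name.

```python
import numpy as np

def nmf_mu(A, k, iters=300, eps=1e-9, seed=0):
    """Multiplicative-update NMF: find W >= 0, H >= 0 with A ~ W @ H."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        # Lee-Seung multiplicative updates; eps guards against division by zero.
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Exactly rank-4 non-negative test matrix, so a good fit is achievable.
rng = np.random.default_rng(1)
A = rng.random((30, 4)) @ rng.random((4, 20))
W, H = nmf_mu(A, k=4)
err = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
```

Because the updates only ever multiply non-negative quantities, the factors stay non-negative throughout, which is the appeal of the MU scheme.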
 
 
By Igor at 10/14/2016 12:00:00 AM No comments:
labels: implementation, MF, ML

Thursday, October 13, 2016

Hybrid computing using a neural network with dynamic external memory


Hybrid computing using a neural network with dynamic external memory by Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu and Demis Hassabis 
Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory. 
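The key trick that makes the external memory trainable end-to-end is differentiable, content-based addressing: a read is a soft attention over all memory rows rather than a hard lookup. A toy NumPy sketch of that one mechanism (not DeepMind's full DNC, which also has dynamic allocation and temporal links; function names are mine):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(M, key, beta):
    """Content-based addressing: cosine similarity between a key and each
    memory row, sharpened by strength beta, normalized into read weights."""
    sims = M @ key / (np.linalg.norm(M, axis=1) * np.linalg.norm(key) + 1e-9)
    w = softmax(beta * sims)
    return w @ M, w          # read vector = soft mixture of memory rows

M = np.eye(4)                # 4 memory slots with one-hot contents
key = np.array([0., 1., 0., 0.])
r, w = content_read(M, key, beta=10.0)
```

With a large `beta` the weights concentrate on the best-matching row, approximating a hard lookup, yet every step remains differentiable so the controller can be trained by backpropagation.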
 
 
By Igor at 10/13/2016 12:00:00 AM No comments:
labels: ML

Wednesday, October 12, 2016

Paris Machine Learning Meetup #2 Season 4: Emotional AI, Regaind, Health Knowledge....


So today is Paris Machine Learning Meetup #2 of Season 4, hosted and sponsored by Vente-privée, woohoo ! Tonight we'll be talking about emotional AI, training sets for images in deep learning, health knowledge, ML at Vente-privée... and we will have a new feature: a debate of sorts. The streaming video is below and the presentations should be available before the meetup.




Here is the program so far:

  • 18:30 Franck Bardol, Igor Carron, intro  (newsletter October 2016)
  • 18:45 Clementine Delphin, Startup Weekend Artificial Intelligence (slides pdf version, 2 mins)
  • 18:55 Olivier Dolle, Ivan Vukic, Antoine Deblonde Vente-privée, Machine Learning at Vente-Privée, 
  • 19:15 Frederico Quintao, Health Knowledge Framework for the Web, 
In this presentation, I will show a framework for the classification of medical knowledge on the web, and where some current ML approaches that learn public-health metrics from query logs and documents fit in the framework. This is tied to my work at Google, where I was the global engineering lead of the Health Search team for almost 5 years.
  • 19:35 Gregory Renard, Xbrain,  "Emotional AI", http://www.people2vec.org/
  • 20:00 Arnaud Laurenty, Regaind.io What I wish a ninja data scientist had told me before we started building our datasets :)
Building machine learning models starts with generating a high-quality dataset adapted to your task. Sometimes you get all the data from a service that is already in production, and you mostly need to analyze it and clean it thoroughly. Sometimes you have nothing at all, or you decide that you want to start everything from scratch.

At Regaind, we have built an artificial intelligence that understands which photos matter to people in terms of content, action and aesthetic quality. You may try it out by having fun with a virtual photo coach at https://keegan.regaind.io :)

Creating our datasets has been a painful, expensive and time-consuming experience. We've paid for 20,000 hours of manual labelling and we've made mistakes along the way. We've worked with employees on fixed-term contracts (CDD), with a crowdsourcing platform, and with offshore partners. In this totally unglamorous talk, we'll give you humble feedback on the whole process, hoping that our experience will reduce your pain: cost, speed, quality, timing, legal issues, best practices...

  • 20:20 Debate with François Némo, "Pourquoi l'Europe est-elle absente de la guerre des plateformes ?" ("Why is Europe absent from the platform war?") (pdf version)



 
 
By Igor at 10/12/2016 12:00:00 AM No comments:
labels: meetup, ML, MLParis, ParisMachineLearning

Tuesday, October 11, 2016

Gaussian graphical models with skggm - implementation -

Jason just sent me the following:

Hi Igor, 

I hope this message finds you well. I saw you recently ventured into the startup world, fun!

I'm writing to let you know about a new python package that Manjari Narayan and I recently published and thought it might be relevant for the Nuit Blanche community. If so, we'd love for you to post about it. Links below. 

tour: https://jasonlaska.github.io/skggm/tour 

code: https://github.com/jasonlaska/skggm 

cheers, Jason

 Sure Jason ! The page starts with: 

Gaussian graphical models with skggm



Graphical models combine graph theory and probability theory to create networks that model complex probabilistic relationships. Inferring such networks is a statistical problem in areas such as systems biology, neuroscience, psychometrics, and finance.
 The rest is here.
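skggm estimates sparse inverse covariance (precision) matrices, and the key fact it builds on is that in a Gaussian graphical model two variables share an edge exactly when their precision-matrix entry is nonzero. A small NumPy sketch of that correspondence (an illustration of the model, not of skggm's estimators):

```python
import numpy as np

# Chain graph 0 - 1 - 2 encoded as a tridiagonal precision matrix:
# Theta[0, 2] == 0 means 0 and 2 are conditionally independent given 1.
Theta = np.array([[ 2., -1.,  0.],
                  [-1.,  2., -1.],
                  [ 0., -1.,  2.]])
Sigma = np.linalg.inv(Theta)

# Marginally, 0 and 2 are still correlated (Sigma[0, 2] > 0): marginal
# covariance does not reveal the graph, the precision matrix does.
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(3), Sigma, size=50000)
Theta_hat = np.linalg.inv(np.cov(X.T))   # sample precision matrix
```

On real, high-dimensional data the plain inverse of the sample covariance is dense and noisy, which is why skggm fits an l1-penalized (graphical lasso) estimate instead.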

By Igor at 10/11/2016 10:30:00 AM No comments:
labels: CS, implementation, ML

Jobs: Two Postdocs, Statistical Mechanics of Learning, Institute of Theoretical Physics (IPhT), CEA Saclay, France

Lenka just got an ERC. Congratulations ! She just asked me if I could run this announcement for two postdocs in her group. Yes, I certainly can, here is the announcement:
 
 
Opening of two postdoctoral positions on the project Statistical Mechanics of Learning

Scope of the project: Computers are now able to recognize people, to tell a dog from a cat, or to process speech so efficiently that they can answer complicated questions. This was still impossible only a decade ago. This progress is largely due to the development of artificial "deep-learned neural networks". Nowadays, "deep learning" is revolutionizing our life, prompting an economic battle between internet giants, spawning a myriad of start-ups, and inspiring many to dream about artificial intelligence in a way that previously appeared only in science fiction novels.

As attractive and performant as this is, many agree that deep learning is largely an empirical field that lacks a theoretical understanding of its capacity and limitations. The algorithms used to "train" these networks explore a very complex and non-convex energy landscape that eludes most of the present theoretical methodology in statistics. The behavior of the dynamics in such complicated "glassy" landscapes is, however, similar to that studied for decades in the physics of disordered systems such as molecular and spin glasses.

In this project we pursue this analogy and use advanced methods from the physics of disordered systems to develop a statistical mechanics approach to deep neural networks. The goal is to bring theoretical understanding of the principles behind the empirical success of deep neural networks. We use analytic and algorithmic methods (replica, cavity method, message passing) originating in the research of spin glasses, together with the physics-based strategy of studying exactly solvable simplified models. We analyze their phase diagrams, associated phase transitions and related algorithmic implications (e.g. hard phases and new algorithms). On the way to our main goal of building a theory of deep learning we encounter many fascinating problems of current statistics, machine learning, data and network science to which our approach contributes. We also pursue mathematically rigorous establishment of the methodology. The project is firmly based in statistical physics but flies towards various topics in computer science, signal processing, complexity theory, information theory, machine learning, combinatorics etc. We are looking for candidates with one of the following backgrounds (or a combination of the two) to join the team and work on one or more of the many sub-problems related to the project.

(1) Strong background (PhD or equivalent) in statistical physics of disordered systems such as glasses, spin glasses, or interdisciplinary applications. Experience and interest in both analytical (such as the replica and the cavity method) and numerical techniques (message passing, Monte Carlo). Coding lovers with interest in computer related issues and/or machine learning particularly welcome.

(2) Strong background (PhD or equivalent) in fields related to machine learning, information theory, signal processing, data processing, computer science, or statistics, with a strong interest in learning more about methods from statistical mechanics that can be used to treat (albeit sometimes non-rigorously, so far ...) some problems considered intractable in the aforementioned fields.

We offer a two-year postdoctoral contract within the French CNRS, with the standard CNRS salary and benefits (full healthcare coverage for the postdoc and his/her dependents, generous vacations, 16 weeks of fully paid maternity leave, free schooling from age 3, etc.). The group is based in the Institute of Theoretical Physics (IPhT) at CEA Saclay (about 20 km south of Paris, well connected by frequent commuter trains and buses). IPhT is one of the best and largest laboratories of theoretical physics in Europe. The group currently has the PI, 2 PhD students and one postdoc, and is about to grow. We work in close collaboration with Florent Krzakala (ENS Paris) and his group (we have a joint working group, a journal club, and the seminar series Golosino) and with a number of other colleagues in the Parisian area and around the world. The position will start in September 2017 (or slightly later if justified). Interested applicants are invited to send their questions, CV and a statement of motivation and interest in the SMiLe project to the PI, Lenka Zdeborová. Candidates are expected to have read some of my recent publications to get an idea of the type of work that is expected. Applications will be accepted until November 30, 2016.

Contact: Lenka Zdeborová (lenka.zdeborova@gmail.com), informal inquiries are welcome.
 
 
 
 
 
 
By Igor at 10/11/2016 09:30:00 AM No comments:
labels: CSjob, CSjobs, ML

HyperNetworks - implementation -

Did I ever tell you GitXiv is the most awesomest page on the interweb ?....yes, I think I did. Here is one of its latest entries, with links to the code/implementation and to the author's blog, where he talks about what he did and much, much more. Go read the entry, I'll wait.


HyperNetworks by David Ha, Andrew Dai, Quoc V. Le

This work explores hypernetworks: an approach of using a small network, also known as a hypernetwork, to generate the weights for a larger network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as a relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve state-of-the-art results on a variety of language modeling tasks with Character-Level Penn Treebank and Hutter Prize Wikipedia datasets, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
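The genotype/phenotype idea can be sketched in a few lines: a shared set of hypernetwork parameters maps a small per-layer embedding to a full weight matrix, so distinct layers get distinct weights from one generator. A toy forward-pass illustration under my own naming and sizes, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Main network layers are d_in x d_out; embeddings live in d_z dimensions.
d_in, d_out, d_z = 8, 8, 4
P = rng.normal(size=(d_z, d_in * d_out)) * 0.1   # shared hypernetwork params

def layer_weights(z):
    """Generate a (d_in, d_out) weight matrix from a layer embedding z."""
    return (z @ P).reshape(d_in, d_out)

# Two layers share P but get distinct embeddings: a relaxed form of
# weight-sharing across layers, as the abstract describes.
z1, z2 = rng.normal(size=d_z), rng.normal(size=d_z)
x = rng.normal(size=d_in)
h = np.tanh(x @ layer_weights(z1))
y = np.tanh(h @ layer_weights(z2))
```

In training, gradients flow through the generated weights back into `P` and the embeddings, so the hypernetwork is learned end-to-end with backpropagation.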
 
 
 
 
By Igor at 10/11/2016 12:00:00 AM No comments:
labels: implementation, ML

  • Xi'an's Og
    BayesComp 2025.3 - The second day of the conference started with a cooler and less humid weather (although this did not last!), although my brain felt a wee bit foggy from a ...
    12 hours ago
  • Herve This
    Décidément, le mot de chlorophylle mériterait de disparaître du vocabulaire culinaire. - Plus j'y pense, plus je crois que le nom chlorophylle n'est pas à sa place en cuisine . En effet, ce mot fut introduit par les deux chimistes Caventou et P...
    18 hours ago
  • The Endeavour
    Deleting vs Replacing Names - This post looks at whether you should delete names or replace names when deidentifying personal data. With structured data, generating synthetic names do...
    20 hours ago
  • Image Sensors World
    Artilux and VisEra metalens collaboration - News release: https://www.artiluxtech.com/resources/news/1023 Artilux, the leader of GeSi (germanium-silicon) photonics technology and pioneer of CMOS (c...
    1 day ago
  • What's new
    Decomposing a factorial into large factors (second version) - Boris Alexeev, Evan Conway, Matthieu Rosenfeld, Andrew Sutherland, Markus Uhr, Kevin Ventullo, and I have uploaded to the arXiv a second version of our pap...
    2 weeks ago
  • Libres pensées d'un mathématicien ordinaire
    S3, Archimedes, SU2, and the semicircle - This post is about the semicircle distribution, and the way it appears in special unitary $2\times 2$ matrices. The link passes through the sphere $S^3$…
    2 months ago
  • KinectHacks.net
    Menang Mudah Main Slot Online di Agen Sbobet Online - Modal murah yang disediakan oleh link slot online tidak hanya membantu pemain untuk memulai dengan mudah, tetapi juga memungkinkan mereka untuk terus berma...
    2 months ago
  • The Geomblog
    Standing up for Science - * It's been forever since I've written a blog post. Twitter, and then X, and then Bluesky, has absorbed most of my hot takes. But I think more and more t...
    3 months ago
  • Machine Learning (Theory)
    Headroom for AI development - (Dylan Foster and Alex Lamb both helped in creating this.) In thinking about what are good research problems, it’s sometimes helpful to switch from what is...
    3 months ago
  • La vertu d'un LA
    The virtue of an A - A fortunate hive
    Agentivité augmentée, intelligence artificielle - "Agentivité augmentée", ou "intelligence artificielle" ? On le sait, l'expression "intelligence artificielle" est dans tous les esprits :-). Ces "science...
    3 months ago
  • An Ergodic Walk
    Dorfman, Warner, and the (false) stories we tell - I’ve been thinking about reviving the blog and as maybe a way of easing back in I’ve come up with some short post ideas. As usual, these are a bit half-bak...
    4 months ago
  • Gödel's Lost Letter and P=NP
    - new theory
    5 months ago
  • Machine Learning
    WeightWatcher, HTSR theory, and the Renormalization Group - There is a deep connection between the open-source weightwatcher tool, which implements ideas from the theory of Heavy Tailed Self-Regularization … More
    5 months ago
  • Harvest Imaging Blog
    Goodbye 2024, Hello 2025 - Incredible to belief that another year has passed by. Everything is moving so quickly, at least for us who are living in peace. Other people are facing o...
    6 months ago
  • High Noon GMT | Oh, to be torn 'twixt love an' tenure
    Digging into MusiCNN, pt. 12 - In part 1, I review the MusiCNN system, and in part 2 I review the architecture of the “spectrogram”-based model trained on 3-second audio segments. In par...
    2 years ago
  • Terahertz Technology
    What role is Luna Innovations terahertz division playing in the new Northrup Grumman B-21 Raider? - Last week investors in Luna Innovations, (LUNA), were excited to learn that a new multi-year, multi-million dollar contract had been entered into wit...
    2 years ago
  • Off the convex path
    Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Networks - The ability of large neural networks to generalize is commonly believed to stem from an implicit regularization — a tendency of gradient-based optimizati...
    2 years ago
  • Mr. Vacuum Tube
    Bonzer Mark 10 Radar Altimeter Repair and Demo -
    3 years ago
  • Decision Science News
    The SJDM Newsletter is ready for download - SOCIETY FOR JUDGMENT AND DECISION MAKING NEWSLETTER The quarterly Society for Judgment and Decision Making newsletter is available for download from the ...
    4 years ago
  • Victoria Stodden
    Coronavirus diaries – Champaign edition - Illinois locked down relatively early, when a statewide order was issued starting March 21, 2020. Like every lockdown we no longer went to work, restaurant...
    4 years ago
  • Petros T. Boufounos
    MERL’s Sensing Team is hiring - We are happy to announce a new full-time opening in MERL’s Computational Sensing team. As I always like to mention, MERL is a great place to work. We have ...
    5 years ago
  • Walking Randomly
    Hypot – A story of a ‘simple’ function - My stepchildren are pretty good at mathematics for their age and have recently learned about Pythagora’s theorem $c=\sqrt{a^2+b^2}$ The fact that they have...
    5 years ago
  • I’m a bandit
    A decade of fun and learning - I started out this decade with the project of writing a survey of the multi-armed bandit literature, which I had read thoroughly during the graduate stud...
    5 years ago
  • Epistasis Blog
    The Human Pancreas Analysis Program (HPAP) - We are assisting with the bioinformatics support for The Human Pancreas Analysis Program (HPAP) which consists of two interlocking, collaborative projects...
    5 years ago
  • my slice of pizza
    On and Off Travel - On why I don't like to travel to Hawaii in winters: It is basically Seattle West. On why mice and birds don't like Cats: bad reviews! I saw the Harlem Nu...
    5 years ago
  • Pillow Lab Blog
    Attention is all you need. (aka the Transformer network) - No matter how we frame it, in the end, studying the brain is equivalent to trying to predict one sequence from another sequence. We want to predict complic...
    5 years ago
  • Another Word For It
    How-To Black Box Google’s Algorithm of Oppression - Safiya Noble’s Algorithms of Oppression highlights the necessity of asking members of marginalized communities about their experiences with algorithms. I c...
    5 years ago
  • no free hunch
    Triple GM Abhishek Thakur Answers Qs from the Kaggle Community - Last week we crowned the world’s first-ever Triple Grandmaster, Abhishek Thakur. In a video interview with Kaggle Data scientist Walter Reade, Abhishek ans...
    5 years ago
  • Some Thoughts on a Mysterious Universe
    The Future of Protein Science will not be Supervised - But it may well be semi-supervised. For some time now I have thought that building a latent representation of protein sequence space is a really good idea,...
    6 years ago
  • A Neighborhood of Infinity
    Why is nuclear fusion so hard? - Why does water fall out of an inverted cup? Before considering nuclear fusion, let's consider something much more familiar. If you turn a cup full of wate...
    6 years ago
  • The Secrets of Consulting
    What motivated you to learn to code? - I received an interesting question, today: "What motivated you to learn to code?" Maybe it's old age, but my memory for ancient events seems to have impr...
    6 years ago
  • Follow the Data
    Model explanation followup – anchors, Shapley values, counterfactuals etc. - Last year, I published a blog post about model explanations (a term I will use interchangeably with “model interpretation” here, although there might be so...
    6 years ago
  • G-media | Le blog | Nouveautés développeurs, domotique et openpicus
    Domoteam 2018 by Giga concept - Le 24 mai 2018 à Angerville, la société Giga-concept organisait la 5e édition des journées Domoteam. Présentation des produits et des innovations au serv...
    7 years ago
  • natural language processing blog
    Many opportunities for discrimination in deploying machine learning systems - A while ago I created this image for thinking about how machine learning systems tend to get deployed. In this figure, for Chapter 2 of CIML, the left co...
    7 years ago
  • tombone's blog
    DeepFakes: AI-powered deception machines - Driven by computer vision and deep learning techniques, a new wave of imaging attacks has recently emerged which allows anyone to easily create highly real...
    7 years ago
  • 0xDE
    All You Need To Know About Dog Ear Infections - Otitis externa, better known as a dog's ear infections, are one of the most common health issues that plague our canine compadres today. They are also pa...
    7 years ago
  • Adventures in Signal Processing and Open Science
    Collaborative live-coding in class - Now, I did not just want to keep it to live-coding for the students. I wanted to engage the students themselves in the process in a collaborative manner. M...
    7 years ago
  • trekkinglemon's fresh squeeze | Data processing, all in flashy yellow
    On drawings - You see limits? I see creativity, so unbounded it is challenging, overwhelming. It eats up space and time and no one helps. It has no limits. Continue read...
    7 years ago
  • Machine Learning, etc
    Optimizing deeper networks with KFAC in PyTorch. - Medium post. (I'm getting too much comment spam on Blogger, so I'll probably use medium/something else from now on, and just link here)
    7 years ago
  • Timothy Lottes
    Last Blogger Post - *EDIT: Fixed the link, moving the blog one last time...* Existing blog has been fully migrated to the Github Pages site below, working on bringing back old ...
    8 years ago
  • Le Petit Chercheur Illustré
    There is time for dithering in a quantized world of reduced dimensionality! - I’m glad to announce here a new work made in collaboration with Valerio Cambareri (UCL, Belgium) on quantized embeddings of low-complexity vectors, such as...
    8 years ago
  • ChapterZero
    They’re coming to get you, Barbara - How does this not have more views (listens, really)?
    9 years ago
  • robots.net
    The Sun Sets on Robots.net - I made the first post on Robots.net more 15 years ago on 25 February 2001. Since then, rog-a-matic, The Swirling Brain, and steve have written more than 3...
    9 years ago
  • Stupid Matlab Hacks
    Diabetes - Foods To Eat And Avoid - What one eats is very important from the point of view of his health. The chief concern while monitoring and controlling diabetes is to see that the sugar...
    9 years ago
  • Inductio Ex Machina
    Predictive APIs and Apps Conference - I’m very happy to be involved with this year’s Predictive APIs and Apps conference, which is held in Sydney this year. We’ve got a great line up of speak...
    9 years ago
  • Lousodrome
    Les sourires - Parfois il y a le sourire resté sans réponse. On croise le regard, on tente d’être celui qui commence, mais la réaction ne vient pas, ou elle n’est faite...
    9 years ago
  • Neurevolution
    Neurevolution relaunch - It’s hard to believe we started this blog over eight years ago – all the way back when we were grad students. What a long way we’ve come. Patryk is now Dir...
    10 years ago
  • Ars Mathematica
    Nine Chapters on the Semigroup Art - While Googling something or other, I came across Nine Chapters on the Semigroup Art, which is a leisurely introduction to the theory of semigroups. (While ...
    10 years ago
  • yellow noise
    (takis, champs magnetiques, palais de tokyo, february 2015.)
    10 years ago
  • the polylogblog
    CPM 2015 - Ely asked me to remind everyone that the deadline for the 26th Annual Symposium on Combinatorial Pattern Matching is fast approaching. You have until 2nd F...
    10 years ago
  • Mirror Image » How Kinect depth sensor works – stereo triangulation?
    Some simple ways to speed up convnet a little - There are quite a few complex methods for making convolutional networks converge faster. Natural gradient, dual coordinate ascent, second order hessian fre...
    10 years ago
  • Building Intelligent Probabilistic Systems
    Harvard Center for Research on Computation and Society: Call for Fellows and Visiting Scholars - The Harvard Center for Research on Computation and Society (CRCS) solicits applications for its Postdoctoral Fellows and Visiting Scholars Programs for the...
    10 years ago
  • The Information Structuralist
    Information flow on graphs - Models of complex systems built from simple, locally interacting components arise in many fields, including statistical physics, biology, artificial intell...
    11 years ago
  • Ivan Oseledets homepage
    Introduction into cross approximation - I have written a very short and incomplete introductory page into skeleton decomposition and using the maxvol algorithm. I will update it with references a...
    11 years ago
  • i2pi.com
    Out of focus. - Out of focus.
    11 years ago
  • brain + map + statistics
    The connection between Linus Pauling and fMRI - When I think of Linus Pauling, two things come to mind: his work on the nature of the chemical bond (for which he was awarded the 1954 Nobel Prize in Che...
    11 years ago
  • BlackbordRMT
    Rapid Visual Inventory & Comparison of Complex 3D Structures - by Graham Johnson,...
    11 years ago
  • Doyung
    Pig + Hive, not Pig or Hive - It has been a long time since Pig and Hive emerged for data processing in the Hadoop ecosystem. Many people (??) use Pig and Hive, but most often ask whether the two aren't basically the same (even on our team, the same data with pig and hive ...
    12 years ago
  • Brain Windows
    GCaMP6 plasmids at addgene - GCaMP6 variants are on addgene. Three flavors, fast kinetics or big signals. Bigger responses than OGB-1, some are MUCH bigger. The responses to drifting ...
    12 years ago
  • Science in the Sands
    Science in the Sands has moved - Dear readers, I have migrated this blog, along with other things, to my new site: http://davidketcheson.info New posts will no longer appear here on blo...
    12 years ago
  • Journey into Randomness
    SMC sampler - monte carlo, sequential monte carlo, SMC sampler, Jasra, Doucet, Del Moral
    12 years ago
  • Hao's TechBlog
    Sampling Rate is Not Only about Pixels: How to compare the sampling rate between your camera and "single-pixel" camera - At the beginning, I'd like to make clear two terms: "Nyquist frequency" and "Nyquist rate". Some of you may take them as the same thing, and even some text...
    12 years ago
  • On Another Dimension
    The Sudanese Elephant and the Blind Men - This is a simple story. The first aim is to test wordpress publishing to facebook. The story uses the well known Blind Men and an Elephant. Once upon a t...
    12 years ago
  • bpchesney.org
    Nested Linear Programs to Find Agreement among Sensors - Suppose you have 3 sensors: A, B, C. Each has a set of readings that goes into a column of a matrix, b: A matrix A is constructed such that bA, bB, bC repr...
    12 years ago
  • How I Am Becoming An Astronaut
    Where are all the Orbiters going? - This month NASA has commenced the delivery of its four Space Shuttle orbiters to their final destinations. After an extensive decommissioning process, the ...
    13 years ago
  • FUTUREPICTURE
    Note on comments. - About six months ago, we were hit with a serious rash of spam and had over 20,000 comments posted in the span of about two weeks. Unfortunately, at that ti...
    13 years ago
  • Collective for Research in Interaction, Sound, and Signal Processing
    The Sonification Handbook - For those that have not yet heard: The Sonification Handbook edited by Thomas Hermann, Andy Hunt, John G. Neuhoff is published. And, even better, freely av...
    13 years ago
  • Martin Tall On Gaze Interaction
    Low Cost Eye Tracking for Commercial Gaming: eyeAsteroids and eyeShoot in the Dark! - Latest update from Stephen Vickers and Howell Istance at the Centre for Computational Intelligence, De Montfort University who have been doing research and...
    13 years ago
  • inspiration, etc...
    An Ensemble of My Paintings @ NextSpace San Jose - On view until the end of 2011. *NextSpace* Coworking + Innovation San Jose 97 S. 2nd St, Suite 100San Jose, CA 95113 *Open Monday to Friday, from 8:30a...
    13 years ago
  • Cognitive Radio Blog. G.Vazquez-Vilar
    PhD Thesis: Interference Management in Cognitive Radio - [image: Thesis on Cognitive radio.]My dissertation examination took place a couple of weeks ago. Happily I passed... Now I can say that one of the unexpect...
    13 years ago
  • OISblog
    Liquid Crystal Eyeglasses - Pixel Optics has introduced Empower eyeglasses, which use liquid crystal lenses to actively adjust power.
    14 years ago
  • Big Numbers
    Going to be out of commission - I’m going to have to focus completely on school for a while so this blog is going on hiatus for a few months. I’ll start up again when I’ve passed the hurd...
    14 years ago
  • Electrons and holes
    Indefinite hiatus - This blog is put on indefinite hiatus.
    14 years ago
  • YALL1: Your ALgorithms for L1
    Announcements - June 4, 2010: Toeplitz/circulant sampling demos are released. June 4, 2010: YALL1 version 1.0 is released. It is now open-source. Download link. YALL1 is n...
    15 years ago
  • Hashimoto Laboratory's Blog
    Personal Mobility to the Next Level - Improving the concept of self-balancing unicycle, Honda has introduced the brand new U3-X as their new personal mobility platform. The regular large whee...
    15 years ago
  • Lianlin Li's Compressive Sensing blog:一缕清风 (in Chinese)
    Igor's blog -
    15 years ago
  • Guan Gui's blog
    How to exploit channel structure? -
    16 years ago
  • Herr Strathmann. - home
    -
  • Camdp.com updates
    -
  • De Rerum Natura
    -
  • Freakonometrics
    -
  • Blog | My Robotics
    -
  • Pixel shaker
    -
  • Marcio Marim
    -
  • MAKE Magazine
    -
  • A Little Knowledge
    -
  • ML Counterexamples Pt.2 - Regression Post-PCA | camdp.com/blogs
    -
  • Three-Toed Sloth
    -
  • Latest News
    -
  • Computers don't see
    -
  • Espace Vide
    -
  • Chaotic Pearls (in Indonesian)
    -
  • CyberGi
    -
  • Gaël Varoquaux
    -
  • Arthur Charpentier
    -
  • Willow Garage Blog | Willow Garage
    -
  • Computational Information Geometry Wonderland
    -

Another Blog List

  • arg min blog
    Probability Is Only A Game - Ruthless subjective probability without Bayes' Rule
    1 day ago
  • Property Testing Review
    News for May 2025 - Apologies for the delay in publishing this digest. May has seen a lot of activity in property testing, with quite a lot from the quantum side: 8 papers in ...
    6 days ago
  • Scientific Clearing House
    COVID exposed liberalism’s greatest gap - I argued in a post four years ago (see here) that Western Liberalism is inherently conflicted. By Liberalism, I mean the modern continuation of the philoso...
    1 week ago
  • leon.bottou.org news
    two_lessons_from_iclr_2025 - Two lessons from ICLR 2025 Machine learning conferences nowadays are too large for my enjoyment. I made the trip to Singapore for two posters and a talk in...
    1 month ago
  • Information Processing
    Information Processing (this blog) has moved to Substack! - Thanks to Google and Blogspot for many happy years hosting this blog. However, Google has gradually stopped supporting this platform.The engineers at Subst...
    1 year ago
  • 大トロ
    Collective Intelligence for Deep Learning: A Survey of Recent Developments - *We survey ideas from complex systems such as swarm intelligence, self-organization, and emergent behavior that are gaining traction in ML. (Figure: Emer...
    2 years ago
  • Off the convex path
    Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Networks - The ability of large neural networks to generalize is commonly believed to stem from an implicit regularization — a tendency of gradient-based optimizati...
    2 years ago
  • tombone's blog
    DeepFakes: AI-powered deception machines - Driven by computer vision and deep learning techniques, a new wave of imaging attacks has recently emerged which allows anyone to easily create highly real...
    7 years ago
  • Relax and Conquer
    Mathematics of Data Science course at NYU Courant - I am teaching a Mathematics of Data Science PhD level course at NYU Courant this Fall, I’ll be posting new Open Problems in this blog! See more info here.
    8 years ago
  • Deep Learning
    MILA is Hiring Two Software Engineers - MILA lab from University of Montreal is looking for a software developer and another software developer with machine learning experience to hire. Those s...
    8 years ago
  • Blog for the Interphase Transport Phenomena Laboratory at Texas A and M University
    DoVR Parabolic Flight a Success -
    9 years ago
  • AK Tech Blog
    An Entity Resolution Primer - Author’s Note: Hello Reader! My name is Jonathan Armoza and I am a data science intern at Neustar and a PhD candidate in English Literature at New York Uni...
    9 years ago
  • Haldane's Sieve
    Accelerating Wright-Fisher Forward Simulations on the Graphics Processing Unit - Accelerating Wright-Fisher Forward Simulations on the Graphics Processing Unit David S. Lawrie bioRxiv doi: http://dx.doi.org/10.1101/042622 Forward Wright...
    9 years ago
  • Mostly linguistically computational: Adventure in collaborative filtering, information retrieval, matrix factorization and other stuff
    Does SGNS (Word2Vec) encode word frequency? - I’ve been wondering for a while about the behaviour of SGNS vs SVD/PMI with regards to the difference of performance on analogy questions. (Have a look at ...
    9 years ago
  • jim@learning
    How to choose a mentor? - http://www.cell.com/neuron/fulltext/S0896-6273(13)00907-0
    11 years ago
  • BayesRules
    STAT 330 November 29, 2012 - We finished off the discussion of model selection/averaging. We then went into missing data models. In some cases the fact that data are not observed can h...
    12 years ago
  • Math... What Is It Good For? | Just another WordPress site
    -
  • Moody Rd
    -
  • Comments on: Lecture Schedule: Embryo Physics Course
    -

My Blog List

  • FlowingData
    Feelings when strangers talk for 30 minutes - For the Pudding, Alvin Chang uses the CANDOR corpus to explore our feelings… *Tags:* Alvin Chang, conversation, Pudding, strangers
    1 hour ago
  • Science-Based Medicine
    Go Ahead, Make the Case that Science, Free Speech, and the NIH are Thriving Under Dr. Jay Bhattacharya - Many smart people reassured us that Dr. Jay Bhattacharya was both qualified to run the NIH and motivated to make it a better place. They should make the ...
    3 hours ago
  • Statistical Modeling, Causal Inference, and Social Science
    The Groovy Grandma Problem in survey research - This came up in our Xbox survey and I’ve used the phrase many times but not in writing. I’m posting it here so we have something to point to when we includ...
    13 hours ago
  • Quomodocumque
    2009 AL ROY nonprescience - Just about 16 years ago today I blogged: The Orioles have two legitimate Rookie of the Year candidates and neither one of them is named Matt Wieters “I’m t...
    15 hours ago
  • Retraction Watch
    Editors won’t retract talc and cancer article J&J says is false in court - A journal will not retract a paper linking use of talc-based baby powder to cancer, despite legal pressure from the pharmaceutical giant that made the prod...
    21 hours ago
  • olimex
    SNS-GESTURE is I2C hand gesture recognizer module which you can easily interface to your next Arduino or MicroPython project - SNS-GESTURE require 5V to operate correctly this is why we use ESP32-EVB with UEXT-3TO5V to interface it. The code is very simple: SNS-GESTURE can help you...
    4 days ago
  • Skulls in the Stars
    The Tripods: The White Mountains, by John Christopher - Book 17 for my 2025 goal of 30 books for the year! As is now default for me, my link to the book is through my bookshop dot org affiliate account. Before t...
    4 days ago
  • What's new
    Decomposing a factorial into large factors (second version) - Boris Alexeev, Evan Conway, Matthieu Rosenfeld, Andrew Sutherland, Markus Uhr, Kevin Ventullo, and I have uploaded to the arXiv a second version of our pap...
    2 weeks ago
  • regularize
    Proving existence of the SVD directly without using the spectral theorem for selfadjoint compact operators - It always bugged me that I had to rely on the spectral theorem for compact self-adjoint operators to prove the existence of the singular value decompositio...
    1 month ago
  • next big thing syndrome
    Visualization of Cuckoo hashing -
    2 months ago
  • Ben Krasnow
    f/0.38 camera lens made with oil immersion microscope objective - I removed the protective glass from a CMOS image sensor, and used optical immersion oil to couple the bare image sensor to a 40X NA=1.3 microscope object...
    4 months ago
  • Hey...What's the BIG idea?
    How Burlwood Cellars Brut Glowing - It is a popular alternative for special events and is often given as a present. In the world of glowing wine, the time period brut Champagne refers to a ...
    2 years ago
  • Passive Vision
    Light-Camera-Glitter Simulation - In one of my last blog posts, I discussed the preliminary results of trying to optimize for both the camera & light locations simultaneously. I have been w...
    4 years ago
  • Information Transfer Economics
    Market updates for a bad week - Now I don't really look at the information equilibrium models for markets as particularly informative (looking at the error band spreads should tell you a...
    5 years ago
  • Short, Fat Matrices
    Polymath16, fifteenth thread: Writing the paper and chasing down loose ends - This is the fifteenth “research” thread of the Polymath16 project to make progress on the Hadwiger–Nelson problem, continuing this post. This project is a ...
    5 years ago
  • JeremyBlum.com
    Announcing the Second Edition of “Exploring Arduino” - I'm thrilled to announce that the second edition of Exploring Arduino, my popular book, is now available! Learn more at ExploringArduino.com.
    5 years ago
  • Gowers's Weblog
    Advances in Combinatorics fully launched - It’s taken longer than we originally intended, but I am very happy to report that Advances in Combinatorics, a new arXiv overlay journal that is run along ...
    5 years ago
  • Large Scale Machine Learning and Other Animals
    Registration is open - Deep Learning Autumn School at Bar Ilan University - My friend Ely Porat sent me the following registration notice for the deep learning and AR/VR Autumn courses: https://sites.google.com/datalab.cs.biu.ac.il/...
    5 years ago
  • Sage: Open Source Mathematics Software
    Should I Resign from My Full Professor Job to Work Fulltime on Cocalc? - Nearly 3 years ago, I gave a talk at a Harvard mathematics conference announcing that “I am leaving academia to build a company”. What I really did is go ...
    6 years ago
  • Various Consequences
    Fun with Machines that Bend - I really like his 3-D printed titanium part at about the 8 minute mark, and the chainsaw clutch at minute 10 is pretty neat too. The eight "P's" of compl...
    6 years ago
  • Dan's Blog
    Now that it’s been several months since graduating with my... - Now that it’s been several months since graduating with my Physics PhD, it’s time to try something new. Here is a project I’ve been working on, which com...
    6 years ago
  • InnoCentive > Challenges
    Instant Inflation Systems for Stand-Up Paddle Boards - Inflatable Stand-Up Paddles (SUP) have provided great flexibility to enthusiasts and allowed the sport to grow in popularity. However, manually or electr...
    6 years ago
  • F. Pedregosa
    Notes on the Frank-Wolfe Algorithm, Part II: A Primal-dual Analysis - This blog post extends the convergence theory from the first part of my notes on the Frank-Wolfe (FW) algorithm with convergence guarantees on the primal...
    6 years ago
  • Dick Gordon's blog
    Cosmic Embryo #8: What's Swimming On/In My Eyes? - I woke up from a nap, on my back, looking up through my bedroom window at a brightly lit blue sky and saw them again: numerous small, uniform dots swimming...
    6 years ago
  • tcs math - some mathematics of theoretical computer science
    Metrical task systems on a weighted star - Notes for the third lecture Metrical task systems on a weighted star are up.
    7 years ago
  • So much to do, so little time
    Call for Papers (ACS Fall Meeting 2018) - The More the Merrier: Combine Drugs Together 256th ACS National Meeting Boston, August 19-23, 2018 CINF Division Dear Colleagues, we are organizing a sympo...
    7 years ago
  • 2Physics
    Probing Particle Exchange Symmetry Using Interference - Authors of the PRL Paper [1]: From Left to Right (top row) Adrian J. Menssen, Alex E. Jones, Benjamin J. Metcalf, Malte C. Tichy; (bottom row) Stefanie Bar...
    7 years ago
  • Richard Baraniuk
    National Academy of Inventors - Richard Baraniuk, Rice University’s Victor E. Cameron Professor of Electrical and Computer Engineering, has been elected a Fellow of the National Academy o...
    8 years ago
  • SynBioFromLeukipposInstitute | Scoop.it
    Cambridge Synthetic Biology Meetup - Café Synthetique is the monthly meetup for the Cambridge synthetic biology community with informal talks, discussion and pub snacks. See it on Scoop.it, v...
    8 years ago
  • Seth's blog
    Seth Roberts Community on Facebook - A new group on Facebook has been set up to continue discussions on topics that Seth covered: personal science, self-experimentation, scientific method, and...
    9 years ago
  • The Dirac Sea
    Aprendizaje más allá del aula (Learning beyond the classroom) - mgsroom: When we think of training, our mind usually takes us back to our school experiences. At the professional level, there is no more than ...
    9 years ago
  • Zhilin's Scientific Journey
    5 Reasons Apache Spark is the Swiss Army Knife of Big Data Analytics - From: https://datafloq.com/read/5-ways-apache-spark-drastically-improves-business/1191 We are living in exponential times. Especially if we are talking abo...
    10 years ago
  • Machine Vision 4 Users
    Job for a rule-loving engineer? - As an engineer you don't have much of a career path, unless you want to go into management. (I advise against it.) So as a way of recognizing professional...
    10 years ago
  • Welcome to My Sparse Land
    Discreteness and Sparsity - Discrete signals may be sparse, whereas sparse signals may not be discrete. Let us consider the following signal: *x* = [1,1,-1,1,-1,-1,1,1]; This is a...
    10 years ago
  • Highly Scalable Blog
    Data Mining Problems in Retail - Retail is one of the most important business domains for data science and data mining applications because of its prolific data and numerous optimization...
    10 years ago
  • Pursuits in the Null Space
    Blog done moved - At first I was hesitant because of its handling of latex, but Rolf of Mathcination pointed out this excellent tool: latex-to-wordpress. So, from now on I w...
    10 years ago
  • Tianyi Zhou's Research Blog » Learn Low-rank & Sparse Structures via Randomized Alternating Projections
    List of Submodular Optimization on Streaming Data (In Update) - Coresets for k-Segmentation of Streaming Data, NIPS 2014 Streaming Submodular Optimization: Massive Data Summarization on the Fly, KDD 2014
    10 years ago
  • Wondering Star
    Lunar Detection Of Ultra-High-Energy Cosmic Rays And Neutrinos - Spotted on the ArXiv Physics blog, what about using the SKA array and the Moon as a collector, that would certainly qualify as sensors the size of a pl...
    10 years ago
  • Thoughts on Artificial Intelligence
    - This blog has moved. There is a new post on metaphor and mathematics on the new blog.
    10 years ago
  • Normal Deviate
    THE END - In addition to being the best comedy TV show ever, Seinfeld was a great source of wisdom. In one episode, Jerry counsels George: “When you hit that high no...
    11 years ago
  • The Proof is in the Pudding
    NAS Frontiers in Massive Data Analysis - The National Academies Press has released the Frontiers in Massive Data Analysis report that the Committee on the Analysis of Massive Data prepared over th...
    11 years ago
  • Mathblogging.org -- the Blog
    Mathematical Instruments: Gianluigi Filippelli - This post is part of the series Mathematical Instruments in which we introduce you to some of the math bloggers listed on our site. Today: Gianluigi Filipp...
    12 years ago
  • openPicus blog
    Better BBQ with Flyport and Cosm - I knew Austria was famous for BBQ. I remember the summer of 2006 when I had a great barbeque with some austrian friends in Klagenfurt. It was fantastic, 8 ...
    12 years ago
  • Biophotonics Review
    Biology Games for Biomedical Research - Games have become important part of the cultures, bringing lots of entertainment and creativity to the societies. Beyond the traditional understanding ...
    12 years ago
  • Where are the Clouds ?
    Fukushima: 9 Months later - With regards to airborne radiation and ground measurement, here is the reading at the NaI detector at KEK in Tsukuba showing a steady decline over the past...
    13 years ago
  • My programming and machine learning blog
    The nasty bug crawling in my Orthogonal Matching Pursuit code - A while back, Bob L. Sturm blogged about a similar implementation of OMP to the one in scikit-learn. Instead of using the Cholesky decomposition like we di...
    13 years ago
  • Gigapixel News Journal
    ipConfigure Presents The First Gigapixel Wide Area Surveillance Platform - ipConfigure, a privately owned software research and development company, announces the world’s first multi-Gigapixel surveillance platform designed for ...
    13 years ago
  • Gustavo Tinkers
    Source Code for GUI - Hosted here: https://github.com/goretkin/soundcard-radar
    13 years ago
  • Andrej Karpathy: Blog
    -
  • Artificial Intelligence Blog
    -
  • Lupi on Software
    -
  • Thingiverse Blog
    -


Previous Entries

  • ►  2024 (1)
    • ►  Dec 2024 (1)
  • ►  2023 (1)
    • ►  Aug 2023 (1)
  • ►  2021 (9)
    • ►  Dec 2021 (2)
    • ►  May 2021 (1)
    • ►  Apr 2021 (3)
    • ►  Mar 2021 (3)
  • ►  2020 (13)
    • ►  Dec 2020 (4)
    • ►  Oct 2020 (2)
    • ►  May 2020 (2)
    • ►  Apr 2020 (2)
    • ►  Mar 2020 (2)
    • ►  Jan 2020 (1)
  • ►  2019 (111)
    • ►  Dec 2019 (2)
    • ►  Nov 2019 (3)
    • ►  Oct 2019 (2)
    • ►  Sep 2019 (1)
    • ►  Aug 2019 (8)
    • ►  Jul 2019 (8)
    • ►  Jun 2019 (30)
    • ►  May 2019 (35)
    • ►  Apr 2019 (20)
    • ►  Mar 2019 (1)
    • ►  Feb 2019 (1)
  • ►  2018 (94)
    • ►  Dec 2018 (1)
    • ►  Nov 2018 (1)
    • ►  Oct 2018 (4)
    • ►  Sep 2018 (4)
    • ►  Aug 2018 (1)
    • ►  Jul 2018 (6)
    • ►  Jun 2018 (8)
    • ►  May 2018 (6)
    • ►  Apr 2018 (16)
    • ►  Mar 2018 (12)
    • ►  Feb 2018 (11)
    • ►  Jan 2018 (24)
  • ►  2017 (294)
    • ►  Dec 2017 (17)
    • ►  Nov 2017 (16)
    • ►  Oct 2017 (12)
    • ►  Sep 2017 (17)
    • ►  Aug 2017 (25)
    • ►  Jul 2017 (26)
    • ►  Jun 2017 (29)
    • ►  May 2017 (30)
    • ►  Apr 2017 (25)
    • ►  Mar 2017 (39)
    • ►  Feb 2017 (25)
    • ►  Jan 2017 (33)
  • ▼  2016 (435)
    • ►  Dec 2016 (31)
    • ►  Nov 2016 (31)
    • ▼  Oct 2016 (27)
      • Lensless Imaging with Compressive Ultrafast Sensing
      • Practical Learning of Deep Gaussian Processes via ...
      • Thesis: Spectral Inference Methods on Sparse Graph...
      • A Greedy Blind Calibration Method for Compressed S...
      • Sunday Morning Insight: We're the Barbarians
      • Job: PhD Studentships, TU Delft / Online Optimizat...
      • Paris Machine Learning, Hors série #3: Mathématiqu...
      • Random Projections for Scaling Machine Learning in...
      • Thesis: Fast Randomized Algorithms for Convex Opti...
      • Sketching Meets Random Projection in the Dual: A P...
      • Job: Postdoc at Ecole Normale Supérieure (ENS Pari...
      • Sunday Morning Insight: Machine Learning in Paris ...
      • MPI-FAUN: An MPI-Based Framework for Alternating-U...
      • Hybrid computing using a neural network with dynam...
      • Paris Machine Learning Meetup #2 Season 4: Emotion...
      • Gaussian graphical models with skggm - implementat...
      • Jobs: Two Postdocs, Statistical Mechanics of Learn...
      • HyperNetworks - implementation -
      • Input Convex Neural Networks - implementation -
      • Approximate Sparse Linear Regression
      • Course: Stat212b: Topics Course on Deep Learning b...
      • The Famine of Forte: Few Search Problems Greatly F...
      • Universal microbial diagnostics using random DNA p...
      • Thesis: Faster algorithms for convex and combinato...
      • Fastfood Dictionary Learning for Periocular-Based ...
      • CSjob redux: Postdoc, Signal processing & inverse ...
      • Nuit Blanche in Review (September 2016)
    • ►  Sep 2016 (39)
    • ►  Aug 2016 (36)
    • ►  Jul 2016 (33)
    • ►  Jun 2016 (41)
    • ►  May 2016 (60)
    • ►  Apr 2016 (32)
    • ►  Mar 2016 (35)
    • ►  Feb 2016 (38)
    • ►  Jan 2016 (32)
  • ►  2015 (497)
    • ►  Dec 2015 (38)
    • ►  Nov 2015 (34)
    • ►  Oct 2015 (49)
    • ►  Sep 2015 (41)
    • ►  Aug 2015 (41)
    • ►  Jul 2015 (34)
    • ►  Jun 2015 (48)
    • ►  May 2015 (42)
    • ►  Apr 2015 (45)
    • ►  Mar 2015 (44)
    • ►  Feb 2015 (31)
    • ►  Jan 2015 (50)
  • ►  2014 (536)
    • ►  Dec 2014 (52)
    • ►  Nov 2014 (43)
    • ►  Oct 2014 (38)
    • ►  Sep 2014 (41)
    • ►  Aug 2014 (48)
    • ►  Jul 2014 (52)
    • ►  Jun 2014 (43)
    • ►  May 2014 (56)
    • ►  Apr 2014 (47)
    • ►  Mar 2014 (44)
    • ►  Feb 2014 (35)
    • ►  Jan 2014 (37)
  • ►  2013 (454)
    • ►  Dec 2013 (43)
    • ►  Nov 2013 (38)
    • ►  Oct 2013 (38)
    • ►  Sep 2013 (33)
    • ►  Aug 2013 (36)
    • ►  Jul 2013 (43)
    • ►  Jun 2013 (29)
    • ►  May 2013 (38)
    • ►  Apr 2013 (40)
    • ►  Mar 2013 (29)
    • ►  Feb 2013 (47)
    • ►  Jan 2013 (40)
  • ►  2012 (488)
    • ►  Dec 2012 (44)
    • ►  Nov 2012 (39)
    • ►  Oct 2012 (46)
    • ►  Sep 2012 (28)
    • ►  Aug 2012 (52)
    • ►  Jul 2012 (20)
    • ►  Jun 2012 (38)
    • ►  May 2012 (60)
    • ►  Apr 2012 (41)
    • ►  Mar 2012 (50)
    • ►  Feb 2012 (29)
    • ►  Jan 2012 (41)
  • ►  2011 (465)
    • ►  Dec 2011 (47)
    • ►  Nov 2011 (49)
    • ►  Oct 2011 (47)
    • ►  Sep 2011 (36)
    • ►  Aug 2011 (24)
    • ►  Jul 2011 (25)
    • ►  Jun 2011 (47)
    • ►  May 2011 (50)
    • ►  Apr 2011 (56)
    • ►  Mar 2011 (39)
    • ►  Feb 2011 (17)
    • ►  Jan 2011 (28)
  • ►  2010 (357)
    • ►  Dec 2010 (47)
    • ►  Nov 2010 (35)
    • ►  Oct 2010 (32)
    • ►  Sep 2010 (28)
    • ►  Aug 2010 (30)
    • ►  Jul 2010 (33)
    • ►  Jun 2010 (26)
    • ►  May 2010 (27)
    • ►  Apr 2010 (28)
    • ►  Mar 2010 (28)
    • ►  Feb 2010 (19)
    • ►  Jan 2010 (24)
  • ►  2009 (274)
    • ►  Dec 2009 (22)
    • ►  Nov 2009 (23)
    • ►  Oct 2009 (24)
    • ►  Sep 2009 (25)
    • ►  Aug 2009 (25)
    • ►  Jul 2009 (23)
    • ►  Jun 2009 (20)
    • ►  May 2009 (16)
    • ►  Apr 2009 (25)
    • ►  Mar 2009 (27)
    • ►  Feb 2009 (21)
    • ►  Jan 2009 (23)
  • ►  2008 (302)
    • ►  Dec 2008 (20)
    • ►  Nov 2008 (23)
    • ►  Oct 2008 (28)
    • ►  Sep 2008 (28)
    • ►  Aug 2008 (22)
    • ►  Jul 2008 (17)
    • ►  Jun 2008 (28)
    • ►  May 2008 (22)
    • ►  Apr 2008 (31)
    • ►  Mar 2008 (32)
    • ►  Feb 2008 (25)
    • ►  Jan 2008 (26)
  • ►  2007 (180)
    • ►  Dec 2007 (23)
    • ►  Nov 2007 (21)
    • ►  Oct 2007 (15)
    • ►  Sep 2007 (18)
    • ►  Aug 2007 (13)
    • ►  Jul 2007 (13)
    • ►  Jun 2007 (9)
    • ►  May 2007 (11)
    • ►  Apr 2007 (9)
    • ►  Mar 2007 (22)
    • ►  Feb 2007 (19)
    • ►  Jan 2007 (7)
  • ►  2006 (30)
    • ►  Dec 2006 (4)
    • ►  Nov 2006 (4)
    • ►  Oct 2006 (2)
    • ►  Sep 2006 (2)
    • ►  Aug 2006 (2)
    • ►  Jul 2006 (2)
    • ►  Jun 2006 (1)
    • ►  Mar 2006 (2)
    • ►  Feb 2006 (2)
    • ►  Jan 2006 (9)
  • ►  2005 (88)
    • ►  Dec 2005 (1)
    • ►  Nov 2005 (6)
    • ►  Oct 2005 (3)
    • ►  Sep 2005 (12)
    • ►  Aug 2005 (1)
    • ►  Jul 2005 (7)
    • ►  Jun 2005 (4)
    • ►  May 2005 (12)
    • ►  Apr 2005 (7)
    • ►  Mar 2005 (12)
    • ►  Feb 2005 (8)
    • ►  Jan 2005 (15)
  • ►  2004 (214)
    • ►  Dec 2004 (18)
    • ►  Nov 2004 (8)
    • ►  Oct 2004 (20)
    • ►  Sep 2004 (44)
    • ►  Aug 2004 (29)
    • ►  Jul 2004 (13)
    • ►  Jun 2004 (13)
    • ►  May 2004 (18)
    • ►  Apr 2004 (10)
    • ►  Mar 2004 (22)
    • ►  Feb 2004 (8)
    • ►  Jan 2004 (11)
  • ►  2003 (12)
    • ►  Dec 2003 (10)
    • ►  Nov 2003 (2)

Books Wish List

My Amazon.com Wish List

Focused Interest

  • Compressed Sensing / Compressive Sampling / Compressive Sensing
  • Mapping all blog entries on Compressed Sensing
  • Cognition - Machine Learning
  • Space
  • Search and Rescue
  • Compressive Sensing Technology Watch
  • Compressive Sensing: The Big Picture
  • Compressive Sensing Hardware
  • Compressed Sensing Videos
  • Compressive Sensing Calendar
  • Compressive Sensing Jobs
  • Local Compressed Sensing Codes
  • CS LinkedIn Group
  • Recent links on the Blog in CS
  • Compressive Sensing 2.0 Community
  • Compressive Sensing 2.0: blogs and webpages
  • Saturday Morning Cartoons
  • Sherpa/Romeo, Publisher copyright policies & self-archiving

Categories/Subjects of Interest

  • CS (2631)
  • compressive sensing (1627)
  • compressed sensing (1615)
  • compressive sampling (1589)
  • ML (1113)
  • MF (796)
  • implementation (537)
  • Applied Math (209)
  • SaturdayMorningVideos (195)
  • MatrixFactorization (194)
  • space (165)
  • CSHardware (151)
  • CSjobs (137)
  • AMP (125)
  • random projections (122)
  • calibration (109)
  • RandNLA (101)
  • RandomFeatures (98)
  • MLParis (96)
  • MLVideos (92)
  • nonlinearCS (87)
  • meetup (82)
  • phaseretrieval (80)
  • ParisMachineLearning (79)
  • sketching (79)
  • SundayMorningInsight (76)
  • BlindDeconvolution (74)
  • CS Community (72)
  • QuantCS (71)
  • hyperspectral (71)
  • tensor (70)
  • NuitBlancheReview (69)
  • TheGreatConvergence (69)
  • CSCommunity (66)
  • CSjob (62)
  • CSVideo (61)
  • python (61)
  • phasediagrams (60)
  • nuclear (59)
  • technology (59)
  • grouptesting (55)
  • MLHardware (54)
  • Meetups (54)
  • thesis (54)
  • MichaelMahoney (51)
  • 1bit (50)
  • synbio (49)
  • cognition (48)
  • CSmeeting (47)
  • publishing (47)
  • Algorithm (45)
  • graphlab (38)
  • Csstats (37)
  • LIghtOn (37)
  • darpa (37)
  • AI (35)
  • mapmaker (33)
  • search and rescue (33)
  • weather modeling (33)
  • hash (32)
  • remote sensing (32)
  • wow (31)
  • bayes (29)
  • business (29)
  • phaserecovery (29)
  • MappingMLtoHardware (28)
  • data fusion (28)
  • machine learning (28)
  • A2I (27)
  • hashing (27)
  • streaming (27)
  • EdoLiberty (25)
  • jim gray (24)
  • AlexSmola (23)
  • ICLR (23)
  • Overviews (23)
  • nanopore (23)
  • neuroscience (23)
  • autonomous (22)
  • dimensionality reduction (22)
  • Frank-Wolfe (19)
  • Kaczmarz (19)
  • MetaLearning (19)
  • geocam (19)
  • space debris (19)
  • space situational awareness (19)
  • NIPS (18)
  • medical (18)
  • randomization (18)
  • ChristophStuder (17)
  • ImagingWithNature (17)
  • ManifoldSignalProcessing (17)
  • RandomForest (17)
  • SAHD (17)
  • cosparsity (17)
  • maps (17)
  • mishap (17)
  • sleep (17)
  • CAI (16)
  • CSCalendar (16)
  • LenkaZdeborova (16)
  • MMDS (16)
  • NMF (16)
  • These Technologies Do Not Exist (16)
  • monday morning algorithm (16)
  • quantum (16)
  • transport (16)
  • aroundtheblogs (15)
  • autoML (15)
  • energy (15)
  • PierreVandergheynst (14)
  • videos (14)
  • AnomalyDetection (13)
  • france (13)
  • hasp (13)
  • life (13)
  • superresolution (13)
  • ADMM (12)
  • CfP (12)
  • PatrickGill (12)
  • PredictingTheFuture (12)
  • causality (12)
  • darpa urban challenge (12)
  • icml2015 (12)
  • sudoku (12)
  • CSDiscussion (11)
  • HammingsTime (11)
  • ICLR2015 (11)
  • LearningToLearn (11)
  • TRL (11)
  • book (11)
  • qa (11)
  • thermal engineering (11)
  • GPU (10)
  • VowpalWabbit (10)
  • challenge (10)
  • fft (10)
  • sie (10)
  • Computational Neuroscience (9)
  • MultiplicativeNoise (9)
  • StarTracker (9)
  • random lens imaging (9)
  • ELM (8)
  • EarthMovers (8)
  • GenomeTV (8)
  • What Is It Good For ? (8)
  • collaborative task manager (8)
  • exploration (8)
  • manopt (8)
  • situational awareness (8)
  • sparsity (8)
  • wavelet (8)
  • GreatThoughtsFriday (7)
  • collaborative work (7)
  • complexity vizualisation (7)
  • innovation (7)
  • julia (7)
  • maxent (7)
  • mems (7)
  • CSCartoons (6)
  • CitingNuitBlanche (6)
  • CompressibleWGN (6)
  • UQ (6)
  • accidentalcamera (6)
  • coded aperture (6)
  • startups (6)
  • thedip (6)
  • transit (6)
  • DataDrivenSensorDesign (5)
  • HighlyTechnicalReferencePage (5)
  • ICML (5)
  • LightOnAIMeetup (5)
  • RMM (5)
  • genomics (5)
  • muscle (5)
  • tex-mems (5)
  • BP (4)
  • British Petroleum (4)
  • CompressiveSensingWhatIsItGoodFor (4)
  • HusHambug (4)
  • No Comment (4)
  • ReproducibleResearch (4)
  • aggregators (4)
  • dataset (4)
  • disruptive technology (4)
  • google maps (4)
  • hypergeocam (4)
  • internet traffic (4)
  • jionc (4)
  • microsystems (4)
  • nips2015 (4)
  • scaling (4)
  • sensor network (4)
  • technologie (4)
  • COLT (3)
  • Deepwater Horizon (3)
  • financement de la recherche (3)
  • google (3)
  • meeting (3)
  • radiation detection (3)
  • recherche (3)
  • sketch (3)
  • AWGN (2)
  • Columbia (2)
  • CompanyX (2)
  • DC law (2)
  • LLM (2)
  • LowRank (2)
  • MLCourse (2)
  • MLZurich (2)
  • NO-C-WE (2)
  • QIS (2)
  • R (2)
  • SKA (2)
  • TheNuitBlancheChronicles (2)
  • UAV (2)
  • anecdote (2)
  • diet (2)
  • iot (2)
  • kinect hacks (2)
  • microcontroller (2)
  • nonlinCS (2)
  • notebynotecooking (2)
  • theano (2)
  • BaltiAndBioinformatics (1)
  • Blogger (1)
  • CS; MF (1)
  • CT (1)
  • GAN (1)
  • ICLR2019 (1)
  • ICML2016 (1)
  • JOTRSOI (1)
  • Keras (1)
  • LTE (1)
  • Leonardo (1)
  • MF tensor (1)
  • MagicWeek (1)
  • Newsletter (1)
  • PyTorch (1)
  • RMT (1)
  • SaturdayMorningCartoons (1)
  • SensorsTheSizeOfAPlanet (1)
  • SundayMorningScienceVideos (1)
  • TensorFlow (1)
  • YouAreNotPayingAttention (1)
  • advice (1)
  • aha (1)
  • art (1)
  • biographies (1)
  • complex (1)
  • control (1)
  • crowdfunding (1)
  • csoped (1)
  • donoho-tao (1)
  • extremesampling (1)
  • fairness (1)
  • herschel (1)
  • hushamburg (1)
  • impl (1)
  • invariant (1)
  • inverse problems (1)
  • itwist (1)
  • jacques devooght (1)
  • lfe (1)
  • lua (1)
  • memory (1)
  • mindmaps (1)
  • nanopre (1)
  • octopus (1)
  • oped (1)
  • privacy (1)
  • reference (1)
  • request (1)
  • rr (1)
  • seinfeld (1)
  • solver (1)
  • wonderingstar (1)
  • youkeepusingthatword (1)

Other sites of interest / Blogroll

  • The Natural Language Blog of Hal Daume III
  • The Polylog Blog of Andrew McGregor
  • Eric Tramel's Espace Vide Blog on Compressed Sensing
  • Lianlin Li's Compressive Sensing blog (in Chinese)
  • Space Engineering Research Center
  • Space Engineering Blog
  • Frank Nielsen's Information Geometry blog
  • David Brady's Blog
  • Le Petit Chercheur Illustre
  • Chaotic Pearls (in Indonesian)
  • De Rerum Natura
  • Michele Guieu's blog
  • An Ergodic Walk
  • Laurent Duval's site
  • Laurent Duval's blog
  • Thesilog
  • Diffusion des savoirs - Ecole Normale Superieure
  • What's New, Terry Tao
  • Statistical Modeling, Causal Inference, and Social Science, Andrew Gelman, Aleks Jakulin, Masanao Yajima
  • Machine Learning etc, Yaroslav Bulatov
  • My slice of Pizza, Muthu Muthukrishnan
  • The Geomblog by Piotr Indyk and Suresh
  • Machine Learning (Theory), John Langford
  • Lemonodor by John Wiseman
  • Yet another Machine Learning blog, Pierre Dangauthier
  • Make Magazine blog
  • Theses en ligne
  • Neurevolution blog
  • Pedro Davalos website
  • Damaris' blog
  • Olivier's blog
  • Julie's blog
  • Michele Guieu's site
