Friday, March 31, 2017

Scaling the Scattering Transform: Deep Hybrid Networks - implementation - / The Shattered Gradients Problem: If resnets are the answer, then what is the question?

Two items this morning: one about wavelets helping out faster Machine Learning models (a little bit like the spatial transformers bit, see gvnn: Neural Network Library for Geometric Computer Vision - implementation -) and one examining why ResNets do well. Enjoy !

David makes a reference to the wavelet comeback: Scaling the Scattering Transform: Deep Hybrid Networks by Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko

We use the scattering network as a generic and fixed initialization of the first layers of a supervised hybrid deep network. We show that early layers do not necessarily need to be learned, providing the best results to-date with pre-defined representations while being competitive with Deep CNNs. Using a shallow cascade of 1x1 convolutions, which encodes scattering coefficients that correspond to spatial windows of very small sizes, permits to obtain AlexNet accuracy on the imagenet ILSVRC2012. We demonstrate that this local encoding explicitly learns invariance w.r.t. rotations. Combining scattering networks with a modern ResNet, we achieve a single-crop top 5 error of 11.4% on imagenet ILSVRC2012, comparable to the Resnet-18 architecture, while utilizing only 10 layers. We also find that hybrid architectures can yield excellent performance in the small sample regime, exceeding their end-to-end counterparts, through their ability to incorporate geometrical priors. We demonstrate this on subsets of the CIFAR-10 dataset and by setting a new state-of-the-art on the STL-10 dataset.

The implementation of the Fast Scattering Transform with CuPy/PyTorch: https://github.com/edouardoyallon/pyscatwave

and the experiments run in the paper are here: https://github.com/edouardoyallon/scalingscattering 
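For a quick sense of the API, here is a minimal sketch of computing scattering coefficients with pyscatwave; the Scattering signature below is my recollection of the README and should be treated as an assumption rather than the definitive interface:

```python
import torch
from scatwave.scattering import Scattering

# Scattering transform for 32x32 images with J=2 scales. The resulting
# fixed (non-learned) coefficients can then feed a shallow cascade of
# 1x1 convolutions or a small ResNet, as in the paper's hybrid networks.
scat = Scattering(M=32, N=32, J=2).cuda()
x = torch.randn(1, 3, 32, 32).cuda()  # a batch of one RGB image
S = scat(x)                           # scattering coefficients
print(S.size())
```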

Olivier mentioned it this morning on his twitter feed: The Shattered Gradients Problem: If resnets are the answer, then what is the question? by David Balduzzi, Marcus Frean, Lennox Leary, JP Lewis, Kurt Wan-Duo Ma, Brian McWilliams

A long-standing obstacle to progress in deep learning is the problem of vanishing and exploding gradients. The problem has largely been overcome through the introduction of carefully constructed initializations and batch normalization. Nevertheless, architectures incorporating skip-connections such as resnets perform much better than standard feedforward architectures despite well-chosen initialization and batch normalization. In this paper, we identify the shattered gradients problem. Specifically, we show that the correlation between gradients in standard feedforward networks decays exponentially with depth, resulting in gradients that resemble white noise. In contrast, the gradients in architectures with skip-connections are far more resistant to shattering, decaying sublinearly. Detailed empirical evidence is presented in support of the analysis, on both fully-connected networks and convnets. Finally, we present a new "looks linear" (LL) initialization that prevents shattering. Preliminary experiments show the new initialization allows training very deep networks without the addition of skip-connections.
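To make the "looks linear" idea concrete, here is a minimal toy sketch (my own construction in PyTorch, not the authors' code) of a mirrored, CReLU-based initialization: each layer starts out computing an exactly linear map, so gradients cannot shatter at initialization, while the ReLUs are free to become nonlinear during training.

```python
import torch
import torch.nn as nn

def crelu(x):
    # Concatenated ReLU: [relu(x), relu(-x)]; note relu(x) - relu(-x) == x.
    return torch.cat([torch.relu(x), torch.relu(-x)], dim=1)

class LLLinear(nn.Module):
    """A layer with a 'looks linear' (LL) mirrored initialization."""
    def __init__(self, d_in, d_out):
        super().__init__()
        w = torch.empty(d_out, d_in)
        nn.init.orthogonal_(w)
        # Mirrored block [W, -W]: applied to crelu(h) it computes exactly
        # W @ h, so a stack of such layers is linear at initialization.
        self.linear = nn.Linear(2 * d_in, d_out, bias=False)
        with torch.no_grad():
            self.linear.weight.copy_(torch.cat([w, -w], dim=1))

    def forward(self, h):
        return self.linear(crelu(h))
```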


 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, March 29, 2017

Random Features for Compositional Kernels

Random features used in a hierarchical manner, I like it !


We describe and analyze a simple random feature scheme (RFS) from prescribed compositional kernels. The compositional kernels we use are inspired by the structure of convolutional neural networks and kernels. The resulting scheme yields sparse and efficiently computable features. Each random feature can be represented as an algebraic expression over a small number of (random) paths in a composition tree. Thus, compositional random features can be stored compactly. The discrete nature of the generation process enables de-duplication of repeated features, further compacting the representation and increasing the diversity of the embeddings. Our approach complements and can be combined with previous random feature schemes.
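As a toy illustration of the sparsity and compactness claims (my own sketch, not the authors' general scheme), consider random features for the product kernel k(x, y) = (x·y)^p: each feature is a product over p randomly chosen coordinates with random signs, so it can be stored as just p indices and p signs, and duplicate features are trivial to detect.

```python
import numpy as np

def sample_product_feature(d, p, rng):
    """One random feature for the degree-p product kernel (x . y)^p,
    stored compactly as p random coordinate indices and p random signs."""
    idx = rng.integers(0, d, size=p)
    sgn = rng.choice([-1.0, 1.0], size=p)
    return idx, sgn

def apply_feature(x, feat, d):
    idx, sgn = feat
    # A sqrt(d) factor per coordinate makes E[f(x) f(y)] equal (x . y)^p.
    return (np.sqrt(d) ** len(idx)) * np.prod(sgn * x[idx])

rng = np.random.default_rng(0)
d, p, m = 8, 2, 100000
x, y = rng.standard_normal(d), rng.standard_normal(d)
feats = [sample_product_feature(d, p, rng) for _ in range(m)]
est = np.mean([apply_feature(x, f, d) * apply_feature(y, f, d) for f in feats])
print(est, (x @ y) ** p)  # the empirical average approximates the kernel value
```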




Evolution Strategies as a Scalable Alternative to Reinforcement Learning - implementation -

We already mentioned it earlier with a different implementation. Since OpenAI has done the work of making it available, explaining it through a blog entry and a video, and publishing a GitHub repository ( https://github.com/openai/evolution-strategies-starter ), I am featuring it again.

Evolution Strategies as a Scalable Alternative to Reinforcement Learning by Tim Salimans, Jonathan Ho, Xi Chen, Ilya Sutskever

We explore the use of Evolution Strategies, a class of black box optimization algorithms, as an alternative to popular RL techniques such as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show that ES is a viable solution strategy that scales extremely well with the number of CPUs available: By using hundreds to thousands of parallel workers, ES can solve 3D humanoid walking in 10 minutes and obtain competitive results on most Atari games after one hour of training time. In addition, we highlight several advantages of ES as a black box optimization technique: it is invariant to action frequency and delayed rewards, tolerant of extremely long horizons, and does not need temporal discounting or value function approximation.
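The core update is simple enough to sketch in a few lines of NumPy. Below is a minimal version of the ES gradient estimator with antithetic sampling (a sketch of the general technique only; the OpenAI starter code adds rank-based fitness shaping, virtual batch normalization and massive parallelization):

```python
import numpy as np

def es_step(theta, reward_fn, sigma=0.1, lr=0.01, n_pop=50, rng=None):
    """One Evolution Strategies update: perturb the parameters with Gaussian
    noise and move in the direction of perturbations that scored well."""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((n_pop, theta.size))
    eps = np.concatenate([eps, -eps])      # antithetic (mirrored) sampling
    rewards = np.array([reward_fn(theta + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    grad = eps.T @ rewards / (len(eps) * sigma)  # reward-gradient estimate
    return theta + lr * grad
```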





Tuesday, March 28, 2017

Preconditioned Data Sparsification for Big Data with Applications to PCA and K-means - implementation -



Stephen just sent me the following:

Hi Igor,
Hope all is going well and that springtime is coming to Paris.
My student Farhad Pourkamali-Anaraki and I have a recent paper that may be interesting for the blog. It was just published in the IEEE Transactions on Information Theory (https://doi.org/10.1109/TIT.2017.2672725) but we also have a free version at https://arxiv.org/abs/1511.00152, as well as open source code at https://github.com/stephenbeckr/SparsifiedKMeans. We discuss several applications in the paper, and we think that applying our method to K-means leads to one of the fastest big-data K-means algorithms.
Best,
Stephen

Yes, Stephen, springtime is coming to Paris ! Thanks for the heads-up.


We analyze a compression scheme for large data sets that randomly keeps a small percentage of the components of each data sample. The benefit is that the output is a sparse matrix and therefore subsequent processing, such as PCA or K-means, is significantly faster, especially in a distributed-data setting. Furthermore, the sampling is single-pass and applicable to streaming data. The sampling mechanism is a variant of previous methods proposed in the literature combined with a randomized preconditioning to smooth the data. We provide guarantees for PCA in terms of the covariance matrix, and guarantees for K-means in terms of the error in the center estimators at a given step. We present numerical evidence to show both that our bounds are nearly tight and that our algorithms provide a real benefit when applied to standard test data sets, as well as providing certain benefits over related sampling approaches.

The implementation is here: https://github.com/stephenbeckr/SparsifiedKMeans
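As a rough sketch of the two ingredients in the abstract, here is my reading of the scheme (using scipy's explicit Hadamard matrix for clarity; in practice one would use a fast Walsh-Hadamard transform, and the authors' repository is the reference implementation):

```python
import numpy as np
from scipy.linalg import hadamard

def precondition_and_sparsify(X, keep_frac=0.1, rng=None):
    """(1) Mix each sample with a randomized Hadamard transform so that no
    single entry dominates, then (2) keep a random subset of entries per
    sample, rescaled so that inner products between distinct samples stay
    unbiased. Assumes the number of columns is a power of 2 (pad otherwise)."""
    rng = rng or np.random.default_rng()
    n, d = X.shape
    H = hadamard(d) / np.sqrt(d)             # orthonormal Hadamard matrix
    signs = rng.choice([-1.0, 1.0], size=d)  # random diagonal sign flips
    Xp = (X * signs) @ H                     # preconditioned ("smoothed") data
    k = max(1, int(keep_frac * d))
    Xs = np.zeros_like(Xp)
    for i in range(n):
        idx = rng.choice(d, size=k, replace=False)
        Xs[i, idx] = Xp[i, idx] * (d / k)    # rescale for unbiased cross terms
    return Xs                                # sparse: only k entries per row
```

The output matrix is sparse, which is why downstream PCA or K-means iterations become significantly faster.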

CSjob: Three Postdocs, SAMP-Lab, Technion

[FYI, all jobs posted here on Nuit Blanche can be found under the CSjob tag.]

Liat just sent me the following:


Dear Igor
I was wondering if you could post some postdoc positions on your great blog:
 
Postdoc in Signal Processing of Medical Imaging
 
Host Professor: Yonina Eldar
Position Description: The Signal Acquisition Measurement and Processing Lab at the Technion invites applications for a postdoctoral position with a focus on medical imaging. We are looking for thought leaders in the fields of MRI, CT, Ultrasound and PET who can develop new areas and applications in medical imaging systems. The candidate is expected to have a good background in advanced signal processing, image processing, computational mathematics and statistics. The applicant should be very comfortable with the physical concepts of at least one medical imaging modality. The balance of work between theory and practice will vary on a project basis, and a successful candidate should be proficient in both aspects. The candidate should have an excellent publication record in leading medical imaging or image processing journals; the ability to lead and interface with large teams comprising theoreticians and hardware engineers is a plus. Excellent written and presentation skills in English are an advantage.
 
About SAMPL: The Lab focuses on sampling, modeling and processing of continuous-time and discrete-time signals and on new design paradigms in which sampling and processing are designed jointly in order to exploit signal properties already in the sampling stage. This approach has the potential to drastically reduce the sampling and processing rates well below the Nyquist rate, typically considered as the ultimate limit for analog to digital conversion. The laboratory facilitates the transition from pure theoretical research to the development, design and implementation of prototype systems in areas ranging from bioimaging through communications, laser optics, cognitive radio and radar systems. Research projects are available in the areas of phase retrieval, super-resolution imaging, radar systems, medical imaging, deep networks, sampling over graphs, blind deconvolution, and communication systems. The group works closely with several major hospitals in Israel.
 
Contact Details:
For more details please visit Prof. Eldar’s website.
To submit your application, please send an updated CV with a list of publications, 3 letters of recommendation and a cover letter to yonina@ee.technion.ac.il
 
Postdoc in Radar Signal Processing
 
Host Professor: Yonina Eldar
Position Description: The Signal Acquisition Measurement and Processing Lab at the Technion invites applications for one postdoctoral position with a focus on radar system design and radar signal processing. We are looking for thought leaders in radar who can develop new areas and applications in radar and remote sensing. The candidate is expected to have a good background in advanced signal processing, electromagnetic theory, computational mathematics and statistics. The applicant should be very comfortable with radar hardware design. The balance of work between theory and hardware will vary on a project basis, and a successful candidate should be proficient in both aspects. The candidate should have an excellent publication record in leading radar journals; the ability to lead and interface with large teams comprising theoreticians and hardware engineers is a plus. Excellent written and presentation skills in English are an advantage.
 
About SAMPL: The Lab focuses on sampling, modeling and processing of continuous-time and discrete-time signals and on new design paradigms in which sampling and processing are designed jointly in order to exploit signal properties already in the sampling stage. This approach has the potential to drastically reduce the sampling and processing rates well below the Nyquist rate, typically considered as the ultimate limit for analog to digital conversion. The laboratory facilitates the transition from pure theoretical research to the development, design and implementation of prototype systems in areas ranging from bioimaging through communications, laser optics, cognitive radio and radar systems. Research projects are available in the areas of phase retrieval, super-resolution imaging, radar systems, ultrasound and MRI imaging, sampling over graphs, blind deconvolution, and communication systems.
 
Contact Details:
For more details please visit Prof. Eldar’s website.
To submit your application, please send an updated CV with a list of publications, 3 letters of recommendation and a cover letter to yonina@ee.technion.ac.il
 
 
 
Postdoc in Signal Processing on Graphs
 
Host Professor: Yonina Eldar
Position Description: The Signal Acquisition Measurement and Processing Lab at the Technion welcomes applications for a postdoctoral position with a focus on signal processing on graphs. Signal processing on graphs is a new field of study that combines traditional signal processing with signals defined over graphs. Applicants are expected to have a good background in graph theory. Excellent written and presentation skills in English are an advantage.
 
About SAMPL: The Lab focuses on sampling, modeling and processing of continuous-time and discrete-time signals and on new design paradigms in which sampling and processing are designed jointly in order to exploit signal properties already in the sampling stage. This approach has the potential to drastically reduce the sampling and processing rates well below the Nyquist rate, typically considered as the ultimate limit for analog to digital conversion. The laboratory facilitates the transition from pure theoretical research to the development, design and implementation of prototype systems in areas ranging from bioimaging through communications, laser optics, cognitive radio, radar systems and graph signal processing.
Research projects are available in the areas of phase retrieval, super-resolution imaging, radar systems, medical imaging, deep networks, sampling over graphs, blind deconvolution, and communication systems.
 
Contact Details:
For more details please visit Prof. Eldar’s website.
To submit your application, please send an updated CV with a list of publications, 3 letters of recommendation and a cover letter to yonina@ee.technion.ac.il
 
Thanks,
 
 
 
Regards,
 
Liat Eilat
Professor assistant - Research support






Monday, March 27, 2017

CSjobs: Postdocs, Signal and Image Processing Institute, University of Southern California

Justin just sent me the following:
Hi Igor, 
We're currently looking for postdocs with a strong background in computational imaging/compressive sensing to work on a major new funded project. Would you be willing to advertise this on your blog? A description of the position can be found below.
Thanks, and much appreciated!
--
Justin Haldar
Assistant Professor
Signal and Image Processing Institute
Departments of Electrical Engineering
and Biomedical Engineering
University of Southern California

Sure Justin !

A press release for the project can be found at: ISI Selected to Participate in New IARPA (RAVEN) Program

The job description is given below:

Post-Doctoral Research Associate
Signal and Image Processing Institute
University of Southern California

Several Postdoctoral Research Associate positions are available immediately for an exciting new government-funded project with the goal of 3D coherent x-ray imaging of silicon integrated circuits at less than 10 nm resolution. Successful candidates will work as part of an interdisciplinary multi-institution team (including USC, Northwestern University, Argonne National Labs and the Paul Scherrer Institute), with a focus on system modeling, simulation, image analysis and computational image reconstruction from sparsely-sampled data. The position will involve algorithm development and analysis, software implementations, evaluation on experimental data, and preparation of research articles.

Required Qualifications: PhD in Electrical Engineering, Statistics, Computer Science, or Physics. Programming experience, preferably including Matlab, Python, C++. Experience and publications in at least one of the following areas: computational imaging, 3D tomographic image reconstruction, inverse problems, low-dimensional signal representations (sparsity, low-rank, etc.), numerical optimization, diffractive optics, optical simulation, coherent diffraction imaging, phaseless imaging, and ptychography.

Successful applicants will join the Signal and Image Processing Institute in the Department of Electrical Engineering and work with a team of faculty including Richard Leahy, Anthony Levi, Justin Haldar and Mahdi Soltanolkotabi.

The University of Southern California strongly values diversity and is committed to equal opportunity in employment.  Women and men, and members of all racial and ethnic groups, are encouraged to apply.

Send applications to:
Richard M. Leahy, Ph.D.
Professor and Director
Signal and Image Processing Institute
3740 McClintock Ave, EEB400
University of Southern California
Los Angeles, CA 90089-2564
http://neuroimage.usc.edu
leahy@sipi.usc.edu


Data Science Summer School, Ecole Polytechnique, France, August 28th- September 1st, 2017

Julie tells me of this event:

The data science initiative of Ecole Polytechnique organizes a Data Science Summer School from August 28 to September 1, 2017: http://www.ds3-datascience-polytechnique.fr/ The primary focus of the event is to provide a series of courses and talks covering the latest advances in the field, presented by leading experts of the area, with a special session on Data Science for Smart Grids and several networking opportunities. The event is targeted at MSc2 and PhD students, postdocs, academics, members of public institutions, and professionals.
Courses:
  • Yoshua BENGIO: deep learning
  • Pradeep RAVIKUMAR: graphical models
  • Peter RICHTÁRIK: optimization models
  • Csaba SZEPESVÁRI: bandits
Talks: 
  • Cédric ARCHAMBEAU 
  • Olivier BOUSQUET 
  • Damien ERNST 
  • Laura GRIGORI
  • Sean MEYN 
  • Sebastian NOWOZIN 
  • Stuart RUSSELL 
Key Dates: 
  • Application deadline: Apr. 20, 2017. 
  • Notification of acceptance by May 7, 2017. 
  • Event: Monday, Aug. 28 - Friday, Sept. 1, 2017 






CSjob: Internship (Spring/Summer/Fall 2017), IFP Energies nouvelles, France

Laurent just sent me the following:

Dear Igor


I have a late internship proposal at IFP Energies nouvelles. I would be delighted if you could advertise it. The up-to-date page is:
http://www.laurent-duval.eu//lcd-2017-intern-sparse-regression-dim-reduction.html
and the pdf file is here:
http://www.laurent-duval.eu//Documents/IFPEN_2017_SUBJ_Robust-sparse-regression.pdf


A text version (same as the webpage, if you need HTML code):
Sparse regression and dimension reduction for sensor measurements and data normalization

The instrumental context is that of multiple 1D data or measurements y_m related to the same phenomenon x, corrupted by random effects n_m and a different scaling parameter a_m, due to uncontrolled sensor calibrations or measurement variability. The model is thus:
y_m(k) = a_m x(k) + n_m(k).

The aim of the internship is to robustly estimate the scaling parameters a_m (with confidence bounds) in the presence of missing data or outliers, for potentially small, real-life signals x with large amplitude variations. The estimation should be as automated as possible, based on data properties and priors (e.g. sparsity, positivity), so that it can be used by non-expert users. Signals under study are, for instance, vibration, analytical chemistry or biological data. Of particular interest for this internship is the study and performance assessment of robust loss or penalty functions (around the l2,1-norm), such as R1-PCA or low-rank decompositions.


Best


Sure Laurent !
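As a toy illustration of the estimation problem in Laurent's posting (my own sketch, not the internship's intended method), a classical robust baseline estimates each a_m by l1 regression via iteratively reweighted least squares, which downweights outliers:

```python
import numpy as np

def estimate_scale_l1(y, x, n_iter=50, eps=1e-8):
    """Robustly fit a in the model y(k) = a * x(k) + n(k) by minimizing
    the l1 loss with iteratively reweighted least squares (IRLS)."""
    a = np.dot(y, x) / np.dot(x, x)          # ordinary least-squares start
    for _ in range(n_iter):
        w = 1.0 / (np.abs(y - a * x) + eps)  # small residuals get big weights
        a = np.sum(w * x * y) / np.sum(w * x * x)
    return a
```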


Saturday, March 25, 2017

Saturday Morning Video: #NIPS2016 Symposium, Recurrent Neural Networks and Other Machines that Learn Algorithms

From the page of the symposium:



Program
Full session videos are available here: Session 1, Session 2, Session 3.
Individual videos and slides are linked below. You can also watch the Playlist.
  • 2:00 - 2:20 Jürgen Schmidhuber: Introduction to Recurrent Neural Networks and Other Machines that Learn Algorithms (Slides, Video)
  • 2:20 - 2:40 Paul Werbos: Deep Learning in Recurrent Networks: From Basics To New Data on the Brain (Slides, Video)
  • 2:40 - 3:00 Li Deng: Three Cool Topics on RNN (Slides, Video)
  • 3:00 - 3:20 Risto Miikkulainen: Scaling Up Deep Learning through Neuroevolution (Slides, Video)
  • 3:20 - 3:40 Jason Weston: New Tasks and Architectures for Language Understanding and Dialogue with Memory (Slides, Video)
  • 3:40 - 4:00 Oriol Vinyals: Recurrent Nets Frontiers (Video; slides unavailable)
  • 4:00 - 4:30 Coffee Break
  • 4:30 - 4:50 Mike Mozer: Neural Hawkes Process Memories (Slides, Video)
  • 4:50 - 5:10 Ilya Sutskever: Meta Learning in the Universe (Slides, Video)
  • 5:10 - 5:30 Marcus Hutter: Asymptotically fastest solver of all well-defined problems (Slides, Video; he unfortunately could not come, so J. Schmidhuber stood in for him)
  • 5:30 - 5:50 Nando de Freitas: Learning to Learn, to Program, to Explore and to Seek Knowledge (Slides, Video)
  • 5:50 - 6:10 Alex Graves: Differentiable Neural Computer (Slides, Video)
  • 6:30 - 7:30 Light dinner break / Posters
  • 7:30 - 7:50 Nal Kalchbrenner: Generative Modeling as Sequence Learning (Slides, Video)
  • 7:50 - 9:00 Panel Discussion on "The future of machines that learn algorithms". Panelists: Ilya Sutskever, Jürgen Schmidhuber, Li Deng, Paul Werbos, Risto Miikkulainen, Sepp Hochreiter. Moderator: Alex Graves (Video)

Friday, March 24, 2017

Around The Blogs In 78 Hours


Yes, this is an issue, and the blogs are helping to see through some of it:

Jort and his team have released AudioSet

Thomas
Bob

Muthu
Laurent
Sanjeev

Mitya

Felix

Adrian

Ferenc

Francois 
Thibaut
Terry

Here is an 'old' blog entry from Dustin on some of Yves' work in compressed sensing
Dustin