Alex Dimakis, who sometimes guest blogs on An Ergodic Walk, reminded me of a discussion we had a while back:
Hi Igor,
I thought you might find this
http://ergodicity.net/2010/06/23/explicit-constructions-of-expanders-and-ldpc-codes/
interesting-- related to a conversation we had a while ago about checking RIP and expansion.
I had already featured this entry, but it is worth re-reading. Other blogs featured the following entries of interest:
Suresh
Hal Daume III
Alex
Andrew Gelman
David Brady
who talks about Ramesh Raskar's new project NETRA. I'll come back to this later.
In other news:
The technical report by Ewout van den Berg and Michael Friedlander entitled Sparse optimization with least-squares constraints is out.
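For context, the best-known special case of the least-squares-constrained problem in the title is the basis pursuit denoise program: minimize ||x||_1 subject to ||Ax - b||_2 <= sigma. As a rough illustration only (this is not the solver in the report), here is a minimal iterative soft-thresholding sketch in Python for the penalized cousin of that program; for a matching penalty weight lam the two formulations share a solution:

import numpy as np

def ista(A, b, lam, n_iter=500):
    # Minimal sketch: solves min 0.5*||Ax - b||^2 + lam*||x||_1,
    # the penalized cousin of min ||x||_1 s.t. ||Ax - b||_2 <= sigma.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L      # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x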
On arXiv, we saw the following three papers:
GraphLab: A New Framework for Parallel Machine Learning by Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, Joseph M. Hellerstein. The abstract reads:
Designing and implementing efficient, provably correct parallel machine learning (ML) algorithms is challenging. Existing high-level parallel abstractions like MapReduce are insufficiently expressive while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. By targeting common patterns in ML, we developed GraphLab, which improves upon abstractions like MapReduce by compactly expressing asynchronous iterative algorithms with sparse computational dependencies while ensuring data consistency and achieving a high degree of parallel performance. We demonstrate the expressiveness of the GraphLab framework by designing and implementing parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and Compressed Sensing. We show that using GraphLab we can achieve excellent parallel performance on large scale real-world problems.
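To make the abstraction concrete, here is a toy, purely sequential Python sketch of the vertex-update model the abstract describes: data lives on a sparse graph, update functions act on a vertex's neighborhood, and a dynamic scheduler decides what runs next. All names here are illustrative, this is not GraphLab's actual API, and a real implementation runs these updates in parallel under a consistency model:

from collections import deque

def run(graph, data, update, schedule):
    # graph: {vertex: [neighbors]}; data: {vertex: value}.
    # update(v, data, graph) returns (new value, vertices to reschedule).
    queue, queued = deque(schedule), set(schedule)
    while queue:
        v = queue.popleft()
        queued.discard(v)
        data[v], again = update(v, data, graph)
        for u in again:                    # dynamic scheduling: only
            if u not in queued:            # affected vertices rerun
                queue.append(u)
                queued.add(u)
    return data

def average_update(v, data, graph, tol=1e-6):
    # Stand-in for an iterative algorithm such as belief propagation:
    # move toward the neighborhood average, reschedule neighbors on change.
    new = sum(data[u] for u in graph[v]) / len(graph[v])
    return new, (graph[v] if abs(new - data[v]) > tol else [])

graph = {0: [1], 1: [0, 2], 2: [1]}
print(run(graph, {0: 0.0, 1: 1.0, 2: 2.0}, average_update, [0, 1, 2]))

The point of the abstraction is that the scheduler, not the algorithm author, owns the parallelism and consistency concerns.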
Compressive Direction Finding Based on Amplitude Comparison by Ruiming Yang, Yipeng Liu, Qun Wan, Wanlin Yang. The abstract reads:
This paper exploits recent developments in compressive sensing (CS) to efficiently perform direction finding via amplitude comparison. The new method is based on the unimodal characteristic of the antenna pattern and the sparse property of the received data. Unlike conventional methods based on peak-searching and symmetric constraints, the sparse reconstruction algorithm requires fewer pulses and takes advantage of CS. Simulation results validate that the proposed method performs better than the conventional methods.
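As a toy illustration of the idea (again, not the authors' exact algorithm): the amplitudes measured as a unimodal beam is steered to a handful of angles are linear in a direction-sparse scene, so the direction can be recovered by sparse reconstruction on a grid of candidate angles instead of by peak search. A minimal Python sketch, with the Gaussian pattern shape and all parameters invented for illustration:

import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-60, 60, 241)           # candidate directions (deg)
beams = np.linspace(-60, 60, 8)            # a few beam-pointing angles
width = 20.0                               # illustrative beamwidth

# Dictionary: amplitude response of each steered beam (row) to a unit
# source at each candidate direction (column).
A = np.exp(-((beams[:, None] - grid[None, :]) / width) ** 2)

true_dir = 17.5                            # unknown target direction
y = np.exp(-((beams - true_dir) / width) ** 2) + 0.01 * rng.standard_normal(8)

# For a single source the scene is 1-sparse, so sparse recovery reduces
# to the first step of orthogonal matching pursuit: pick the dictionary
# column best correlated with the measurements.
scores = A.T @ y / np.linalg.norm(A, axis=0)
print("estimated direction:", grid[np.argmax(scores)])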
Fundamental Tradeoffs for Sparsity Pattern Recovery by Galen Reeves and Michael Gastpar. The abstract reads:
Recovery of the sparsity pattern (or support) of a sparse vector from a small number of noisy linear samples is a common problem that arises in signal processing and statistics. In the high dimensional setting, it is known that recovery with a vanishing fraction of errors is impossible if the sampling rate and per-sample signal-to-noise ratio (SNR) are finite constants independent of the length of the vector. In this paper, it is shown that recovery with an arbitrarily small but constant fraction of errors is, however, possible, and that in some cases a computationally simple thresholding estimator is near-optimal. Upper bounds on the sampling rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector for two different estimators. The tightness of the bounds in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing necessary bounds. Near optimality is shown for a wide variety of practically motivated signal models.
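To see what a "computationally simple thresholding estimator" for support recovery can look like, here is a minimal Python sketch; the dimensions, noise level, and estimator details are mine, chosen only for illustration. It correlates each column of the sampling matrix with the noisy samples, keeps the k largest scores, and reports the fraction of support errors:

import numpy as np

rng = np.random.default_rng(1)
n, m, k = 1000, 300, 20                    # vector length, samples, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # sampling matrix
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = rng.choice([-1.0, 1.0], size=k)
y = A @ x + 0.1 * rng.standard_normal(m)   # noisy linear samples

scores = np.abs(A.T @ y)                   # per-coordinate correlations
estimate = np.argsort(scores)[-k:]         # keep the k largest scores
errors = len(set(support) ^ set(estimate)) # misses plus false alarms
print("fraction of support errors:", errors / (2 * k))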
Finally, there are postdoc positions in France:
Open Postdoc Positions in Bandits and Reinforcement Learning at INRIA Lille
The project team SEQUEL (Sequential Learning) of INRIA Lille, France, sequel.lille.inria.fr/ is seeking to appoint several Postdoctoral Fellows. We welcome applicants with a strong mathematical background who are interested in theory and applications of reinforcement learning and bandit algorithms.
The research will be conducted under the supervision of Remi Munos, Mohammad Ghavamzadeh and/or Daniil Ryabko, depending on the chosen topics.
The positions are research only and are for one year, with the possibility of extension.
The starting date is flexible, from Fall 2010 to Spring 2011.
INRIA is France's leading institution in Computer Science, employing over 2800 scientists, around 250 of whom are in Lille. Lille is the capital of the north of France, a metropolis of 1 million inhabitants with excellent train connections to Brussels (30 min), Paris (1h), and London (1h30).
The SEQUEL lab is a dynamic lab at INRIA with over 25 researchers (including PhD students) that covers several aspects of machine learning, from theory to applications, including statistical learning, reinforcement learning, and sequential learning.
The positions will be funded by the EXPLO-RA project (Exploration-Exploitation for efficient Resource Allocation), a project in collaboration with ENS Ulm (Gilles Stoltz), Ecole des Ponts (Jean Yves Audibert), INRIA team TAO (Olivier Teytaud), Univ. Paris Descartes (Bruno Bouzy), and Univ. Paris Dauphine (Tristan Cazenave).
See: sites.google.com/site/anrexplora/ for some of our activities.
Possible topics include:
* In Reinforcement learning: RL in high dimensions. Sparse representations, use of random projections in RL.
* In Bandits: Bandit algorithms in complex environments. Contextual bandits, Bandits with dependent arms, Infinitely many arms bandits. Links between the bandit and other learning problems.
* In hierarchical bandits / Monte-Carlo Tree Search: Analysis and development of MCTS / hierarchical bandit algorithms, planning with MCTS for solving MDPs.
* In Statistical learning: Compressed learning, use of random projections, link with compressed sensing.
* In sequential learning: Sequential prediction of time series.
Candidates must have a Ph.D. degree (by the starting date of the position) in machine learning, statistics, or related fields, possibly with a background in reinforcement learning, bandits, or optimization.
To apply, please send a CV and a proposed research topic to remi.munos@inria.fr, mohammad.ghavamzadeh@inria.fr, or daniil.ryabko@inria.fr.
If you are planning to go to ICML / COLT this year, we could set up an appointment there.
Read more: http://scholarshipdb.com/postdoc-positions-bandits-reinforcement-learning-inria-lille.html
Credit: NASA, The Aurora Australis as seen from the ISS on May 29.
If you are interested in discussing GraphLab further, I was asking about it on MetaOptimize.