We already mentioned this work earlier with a different implementation. Since OpenAI has done the work of making it accessible through a blog entry, a video (see below), and a GitHub repository ( https://github.com/openai/evolution-strategies-starter ), I am featuring it again.
Evolution Strategies as a Scalable Alternative to Reinforcement Learning by Tim Salimans, Jonathan Ho, Xi Chen, Ilya Sutskever
We explore the use of Evolution Strategies, a class of black box optimization algorithms, as an alternative to popular RL techniques such as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show that ES is a viable solution strategy that scales extremely well with the number of CPUs available: By using hundreds to thousands of parallel workers, ES can solve 3D humanoid walking in 10 minutes and obtain competitive results on most Atari games after one hour of training time. In addition, we highlight several advantages of ES as a black box optimization technique: it is invariant to action frequency and delayed rewards, tolerant of extremely long horizons, and does not need temporal discounting or value function approximation.
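To make the idea concrete, here is a minimal single-process sketch of the kind of ES gradient estimator the abstract describes: sample Gaussian perturbations of the parameters, evaluate the black-box reward on each, and move the parameters toward the perturbations that scored well. The function names and hyperparameters (sigma, alpha, npop) are illustrative assumptions, not the paper's settings; in the paper, it is these per-sample reward evaluations that get farmed out to hundreds or thousands of parallel workers.

```python
import numpy as np

def evolution_strategies(f, theta0, sigma=0.1, alpha=0.01, npop=50, iterations=300):
    """Illustrative ES loop: estimate a search gradient from Gaussian
    perturbations of theta weighted by the reward they obtain, then
    take an ascent step. Hyperparameters are placeholders."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iterations):
        # Sample npop perturbations and evaluate the black-box reward f
        noise = np.random.randn(npop, theta.size)
        rewards = np.array([f(theta + sigma * eps) for eps in noise])
        # Standardize rewards so the step size is insensitive to reward scale
        advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        # Stochastic estimate of the gradient of E[f(theta + sigma * eps)]
        grad = noise.T @ advantages / (npop * sigma)
        theta += alpha * grad
    return theta

# Toy usage: maximize a quadratic "reward" whose optimum is at [3, -2]
if __name__ == "__main__":
    target = np.array([3.0, -2.0])
    reward = lambda w: -np.sum((w - target) ** 2)
    print(evolution_strategies(reward, theta0=np.zeros(2)))
```

Note that, as the abstract stresses, nothing here backpropagates through the policy or environment: only reward values are needed, which is why the method tolerates long horizons and sparse, delayed rewards.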
An in-depth OpenAI blog entry on the subject by Andrej Karpathy is here. There is also a video of Ilya Sutskever on stage at EmTech Digital.