Scalable Lifelong Reinforcement Learning

Date: Jul 31, 2017

Authors: Yusen Zhan, Haitham Bou-Ammar, Matthew E. Taylor

Lifelong reinforcement learning provides a successful framework for agents to learn multiple tasks in sequence. Current methods, however, suffer from scalability issues when the agent has to solve a large number of tasks. In this paper, we remedy this drawback and propose a novel scalable technique for lifelong reinforcement learning. We derive an algorithm that assumes the availability of multiple processing units and computes shared repositories and local policies using only local information exchange. We then show that the algorithm attains a linear convergence rate, improving on current lifelong policy search methods. Finally, we evaluate our technique on a set of benchmark dynamical systems and demonstrate learning speed-ups and reduced running times.
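The abstract describes the approach only at a high level: policy knowledge is split into a shared repository and per-task (local) components, and multiple processing units update these using only local information exchange. As a rough, hypothetical illustration of that general idea, and not the paper's actual algorithm, the Python sketch below has each worker take a gradient step on a factored model of its own task's policy parameters and then average its copy of the shared repository with its neighbours. All names, the ring topology, the learning rates, and the synthetic per-task targets are assumptions made for the sketch.

```python
import numpy as np

# Assumed sizes for illustration: d policy parameters, k shared components, m workers.
d, k, m = 8, 3, 4
rng = np.random.default_rng(0)

# Each worker keeps a local copy of the shared repository L (d x k) and
# task-specific coefficients s (k,) for the task it is responsible for.
L_local = [rng.normal(size=(d, k)) for _ in range(m)]
S_local = [rng.normal(size=k) for _ in range(m)]

# Ring topology: each worker exchanges information only with its two neighbours.
neighbours = {i: [(i - 1) % m, (i + 1) % m] for i in range(m)}

def local_gradient_step(L, s, theta_target, lr=0.1, lam=0.01):
    """One gradient step fitting the factored policy L @ s to a locally
    estimated parameter vector theta_target, with L2 regularisation."""
    residual = L @ s - theta_target
    grad_L = np.outer(residual, s) + lam * L
    grad_s = L.T @ residual + lam * s
    return L - lr * grad_L, s - lr * grad_s

# Synthetic per-task policy parameters each worker has estimated locally
# (stand-ins for the output of a policy-gradient step on that worker's task).
theta = [rng.normal(size=d) for _ in range(m)]

for _ in range(200):
    # 1) Local computation: each worker updates its copy of L and its s
    #    using only its own task data.
    for i in range(m):
        L_local[i], S_local[i] = local_gradient_step(L_local[i], S_local[i], theta[i])

    # 2) Local information exchange: each worker averages its copy of the
    #    shared repository with its neighbours' copies (consensus step).
    L_local = [
        np.mean([L_local[i]] + [L_local[j] for j in neighbours[i]], axis=0)
        for i in range(m)
    ]

# After enough iterations the local copies of the repository agree, and each
# worker reconstructs its policy parameters as L @ s.
print("max disagreement between local repositories:",
      max(np.linalg.norm(L_local[i] - L_local[0]) for i in range(1, m)))
```

The point of the sketch is only the communication pattern: each iteration uses purely local computation plus neighbour-to-neighbour averaging, so no central node needs to hold all tasks' data.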
