
Publication Details


Reference Type: Conference Proceedings
Author(s): Peters, J.; Vijayakumar, S.; Schaal, S.
Year: 2003
Title: Scaling reinforcement learning paradigms for motor learning
Journal/Conference/Book Title: Proceedings of the 10th Joint Symposium on Neural Computation (JSNC 2003)
Keywords: Reinforcement learning, neurodynamic programming, actor-critic methods, policy gradient methods, natural policy gradient
Abstract: Reinforcement learning offers a general framework to explain reward-related learning in artificial and biological motor control. However, current reinforcement learning methods rarely scale to high-dimensional movement systems and mainly operate in discrete, low-dimensional domains such as game playing and artificial toy problems. This drawback makes them unsuitable for application to human or bio-mimetic motor control. In this poster, we look at promising approaches that can potentially scale and suggest a novel formulation of the actor-critic algorithm which takes steps towards alleviating the current shortcomings. We argue that methods based on greedy policies are unlikely to scale to high-dimensional domains, as they are problematic when used with function approximation, a necessity when dealing with continuous domains. We instead adopt direct policy-gradient-based policy improvements, since these avoid the destabilizing dynamics encountered in traditional value-iteration-based updates. While regular policy gradient methods have demonstrated promising results in the domain of humanoid motor control, we demonstrate that these methods can be significantly improved by using the natural policy gradient instead of the regular policy gradient. Based on this, it is proved that Kakade's 'average natural policy gradient' is indeed the true natural gradient. A general algorithm for estimating the natural gradient, the Natural Actor-Critic algorithm, is introduced. This algorithm converges with probability one to the nearest local minimum of the cost function in Riemannian space. The algorithm outperforms non-natural policy gradients by far in a cart-pole balancing evaluation, and offers a promising route for the development of reinforcement learning for truly high-dimensional, continuous state-action systems.
Place Published: Irvine, CA, May 2003
Short Title: Scaling reinforcement learning paradigms for motor learning
URL(s): http://www-clmc.usc.edu/publicatons/P/peters-JSNC2003.pdf
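The key idea in the abstract above, rescaling the vanilla policy gradient by the inverse Fisher information matrix to obtain the natural gradient, can be made concrete with a small sketch. The following is a minimal toy example, not the paper's Natural Actor-Critic algorithm: a REINFORCE-style score-function estimator for a one-parameter Gaussian policy on a one-step problem, with the noise level sigma, the reward target, the sample size, and the learning rate all chosen purely for illustration.

    # Minimal sketch (assumed toy setup, not the authors' implementation):
    # natural-gradient ascent for a 1-D Gaussian policy pi(a) = N(a; theta, sigma^2)
    # on a one-step reward r(a) = -(a - 2)^2, maximized at a = 2.
    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 1.0   # fixed exploration noise (illustrative choice)
    theta = 0.0   # policy mean, the only learned parameter

    for step in range(200):
        actions = theta + sigma * rng.standard_normal(500)
        rewards = -(actions - 2.0) ** 2
        # Score function for the Gaussian mean: d log pi / d theta = (a - theta) / sigma^2
        scores = (actions - theta) / sigma ** 2
        # Vanilla (regular) policy gradient with a mean-reward baseline
        grad = np.mean(scores * (rewards - rewards.mean()))
        # Empirical Fisher information E[score^2]; here it approximates 1 / sigma^2
        fisher = np.mean(scores ** 2)
        # Natural gradient = inverse Fisher times vanilla gradient (a scalar division in 1-D)
        theta += 0.1 * grad / fisher

    print(f"learned mean: {theta:.3f} (optimum is 2.0)")

In higher-dimensional parameter spaces the division becomes a linear solve against the full Fisher matrix; this preconditioning, which makes the update invariant to how the policy is parameterized, is the improvement over the regular policy gradient that the abstract reports.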
