Mixed-Reality Deep Reinforcement Learning for a Reach-to-grasp Task

Artificial Neural Networks and Machine Learning - ICANN 2019, pages 611-623, doi:10.1007/978-3-030-30487-4_47, Sep 2019.
Associated documents: ICANN_2019_MixedReality_RL.pdf [2.3 MB]   http://dx.doi.org/10.1007/978-3-030-30487-4_47
Deep Reinforcement Learning (DRL) has become successful across various robotic applications. However, DRL methods are not sample-efficient and require long learning times. We present an approach for online continuous deep reinforcement learning for a reach-to-grasp task in a mixed-reality environment: a human places targets for the robot in a physical environment; DRL for reaching these targets is carried out in simulation before the resulting actions are executed in the physical environment. We extend previous work on a modified Deep Deterministic Policy Gradient (DDPG) algorithm with an architecture for online learning and evaluate different strategies to accelerate learning while ensuring learning stability. Our approach provides a neural inverse kinematics solution whose execution time improves over training while focusing on those areas of the Cartesian space where targets are often placed by the human operator, thus enabling efficient learning. We evaluate reward shaping and augmented targets as strategies for accelerating deep reinforcement learning and analyze the learning stability.
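The abstract mentions reward shaping as one strategy for accelerating learning. A minimal sketch of one common dense shaping scheme for a reach task is shown below; the distance-based form, the threshold, and the bonus value are illustrative assumptions, not the reward function from the paper:

```python
import numpy as np

def shaped_reward(ee_pos, target_pos, threshold=0.05):
    """Dense shaped reward for a reach task: the negative Euclidean
    distance from the end effector to the target, plus a fixed bonus
    once the end effector is within `threshold` of the target.
    Values here are illustrative, not taken from the paper."""
    dist = float(np.linalg.norm(np.asarray(ee_pos) - np.asarray(target_pos)))
    bonus = 1.0 if dist < threshold else 0.0
    return -dist + bonus

# A closer end effector receives a strictly higher reward,
# giving the agent a learning signal on every step rather than
# only at task completion.
far = shaped_reward([0.40, 0.0, 0.2], [0.0, 0.0, 0.2])
near = shaped_reward([0.04, 0.0, 0.2], [0.0, 0.0, 0.2])
```

Compared with a sparse success/failure reward, such a dense signal typically shortens learning time, which matches the paper's motivation of accelerating DRL, though shaping must be designed carefully to avoid biasing the learned policy.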


@inproceedings{BZKW19,
  author       = "Beik-Mohammadi, Hadi and Zamani, Mohammad Ali and Kerzel, Matthias and Wermter, Stefan",
  title        = "Mixed-Reality Deep Reinforcement Learning for a Reach-to-grasp Task",
  booktitle    = "Artificial Neural Networks and Machine Learning - ICANN 2019",
  pages        = "611-623",
  month        = "Sep",
  year         = "2019",
  doi          = "10.1007/978-3-030-30487-4_47",
  url          = "https://www2.informatik.uni-hamburg.de/wtm/publications/2019/BZKW19/ICANN_2019_MixedReality_RL.pdf"
}
