
Title Hierarchical policy design for sample-efficient learning of robot table tennis through self-play
Degree PhD
Discipline/Department Computer Science
Degree Level doctoral
University/Publisher University of Texas at Austin
Abstract Training robots with physical bodies requires new methods and action representations that allow learning agents to explore the space of policies efficiently. This work studies sample-efficient learning of complex policies in the context of robot table tennis. It incorporates learning into a hierarchical control framework that combines a model-free strategy layer (which requires complex reasoning about opponents that is difficult to capture in a model-based way), model-based prediction of external objects (which are difficult to control directly with analytic control methods, yet governed by learnable and relatively simple laws of physics), and analytic controllers for the robot itself. Human demonstrations are used to train dynamics models, which together with the analytic controller allow any physically capable robot to play table tennis without training episodes. Using only about 7000 demonstrated trajectories, a striking policy can hit ball targets with about 20 cm of error. Self-play is then used to train cooperative and adversarial strategies on top of the model-based striking skills learned from human demonstrations. After only about 24000 strikes in self-play, the agent learns to exploit the human dynamics models for longer cooperative games. Further experiments show that more flexible variants of the policy can discover new strikes not demonstrated by humans and achieve higher performance, at the expense of lower sample efficiency. Experiments are carried out in a virtual reality environment using sensory observations that are obtainable in the real world. The high sample efficiency demonstrated in the evaluations shows that the proposed method is suitable for learning directly on physical robots, without transferring models or policies from simulation.
Subjects/Keywords Robotics; Table tennis; Self-play; Reinforcement learning; Hierarchical policy
Contributors Miikkulainen, Risto (advisor); Levine, Sergey (committee member); Sentis, Luis (committee member); Niekum, Scott (committee member); Mok, Aloysius (committee member)
Language en
Country of Publication us
Record ID handle:2152/72812
Repository texas
Date Indexed 2019-09-12
Grantor The University of Texas at Austin
Issued Date 2019-02-01
Note [department] Computer Sciences