Internal model construction by observing others and policy acquisition through self-learning

A02 Motor control and behaviors

The number of studies on big-data analysis for image processing and speech recognition continues to rise, driven by the rapid growth in available data and the rapid improvement of computer performance. On the other hand, how to exploit such analysis for robot control remains unclear, since acquiring large-scale data with actual robots is not easy. In our group, we aim to develop an algorithm that efficiently constructs an internal simulation model of humanoid robot dynamics from observed human behavior. Because a robot can interact with environments virtually through this internal simulation model, large amounts of data can be generated and high-performance computational resources can be fully exploited to acquire a control policy for the humanoid robot efficiently. We focus on the following two research topics:

1. Learning internal models by observing others
To efficiently derive an internal simulation model from observed human behavior, we combine a parameter estimation method for humanoid robots with a model-identification method that extracts a compact representation of humanoid dynamics for motor control (a simplified identification sketch is given after this list).
2. Designing interaction between internal model learning and reinforcement learning systems
We continuously update the internal model by collecting data from the real environment. The learned internal model is then used to update policies within a reinforcement-learning framework. We design a stable and efficient interaction between these two learning systems for motor learning from a computational-neuroscience point of view (a Dyna-style sketch of such an interaction follows this list).
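
As an illustration of the model-identification step in topic 1, the following is a minimal sketch, assuming a toy low-dimensional linear model x_{t+1} ≈ A x_t + B u_t fitted by least squares from observed state-action trajectories. The state and control dimensions, the "true" matrices, and the noise level are arbitrary placeholders, not the group's actual representation of humanoid dynamics.

```python
# Minimal sketch: least-squares identification of a compact linear model
# x_{t+1} ~ A x_t + B u_t from observed trajectories (toy placeholder data).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" low-dimensional dynamics standing in for observed motion
# (4 state dimensions and 2 control dimensions are arbitrary choices).
A_true = np.array([[1.0, 0.05, 0.0, 0.0],
                   [0.0, 1.0,  0.0, 0.0],
                   [0.0, 0.0,  1.0, 0.05],
                   [0.0, 0.0, -0.1, 1.0]])
B_true = np.array([[0.0,  0.0],
                   [0.05, 0.0],
                   [0.0,  0.0],
                   [0.0,  0.05]])

def collect_trajectory(steps=200):
    """Generate one observed trajectory with random excitation."""
    x = rng.normal(size=4)
    xs, us, xs_next = [], [], []
    for _ in range(steps):
        u = rng.normal(size=2)
        x_next = A_true @ x + B_true @ u + 0.01 * rng.normal(size=4)
        xs.append(x); us.append(u); xs_next.append(x_next)
        x = x_next
    return np.array(xs), np.array(us), np.array(xs_next)

def fit_linear_model(xs, us, xs_next):
    """Least-squares fit of [A B] from stacked (x, u, x') transitions."""
    Z = np.hstack([xs, us])                       # (T, nx+nu) regressor matrix
    W, *_ = np.linalg.lstsq(Z, xs_next, rcond=None)
    AB = W.T                                      # (nx, nx+nu)
    return AB[:, :4], AB[:, 4:]                   # estimated A, B

xs, us, xs_next = collect_trajectory()
A_hat, B_hat = fit_linear_model(xs, us, xs_next)
print("A estimation error:", np.linalg.norm(A_hat - A_true))
```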
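As an illustration of how the two learning systems in topic 2 can interact, the following is a Dyna-style sketch, assuming a toy chain MDP in place of the real robot and environment. The tabular Q-learning update and the dictionary-based model are illustrative stand-ins, not the group's algorithm: each real transition updates both the policy and the internal model, and additional policy updates are then generated cheaply from the learned model.

```python
# Minimal Dyna-style sketch: real transitions update a learned model, and
# extra policy (Q) updates are replayed through that model (toy chain MDP).
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 10, 2          # toy chain: move left/right, reward at the right end
GAMMA, ALPHA = 0.95, 0.1
PLANNING_STEPS = 20                  # simulated updates per real step

def step(s, a):
    """Toy environment standing in for the real robot and its surroundings."""
    s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == N_STATES - 1 else 0.0
    return s_next, reward

Q = np.zeros((N_STATES, N_ACTIONS))
model = {}                           # learned internal model: (s, a) -> (s', r)

s = 0
for t in range(2000):
    # act epsilon-greedily in the real environment
    a = rng.integers(N_ACTIONS) if rng.random() < 0.1 else int(np.argmax(Q[s]))
    s_next, r = step(s, a)

    # (i) direct RL update from the real transition
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])

    # (ii) update the internal model from the same transition
    model[(s, a)] = (s_next, r)

    # (iii) planning: extra updates from transitions generated by the model
    keys = list(model.keys())
    for _ in range(PLANNING_STEPS):
        ps, pa = keys[rng.integers(len(keys))]
        ps_next, pr = model[(ps, pa)]
        Q[ps, pa] += ALPHA * (pr + GAMMA * Q[ps_next].max() - Q[ps, pa])

    s = 0 if s_next == N_STATES - 1 else s_next

print("greedy policy:", np.argmax(Q, axis=1))
```

In this sketch the ratio of simulated to real updates (PLANNING_STEPS) is the knob that corresponds to exploiting the internal model and abundant computation to reduce the amount of costly real-robot data; how to keep such interleaved updates stable when the model itself is still being learned is the design question raised in topic 2.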

Researcher

  • Jun Morimoto

    Project Leader

    ATR Brain Information Communication Research Laboratory Group
