Neural Representation of Internal Model Based Decision Making

A different environment requires a different code of conduct, but when you are placed into a completely unfamiliar environment, how do you learn to act accordingly? Reinforcement learning theory tells us that there are two powerful ways to solve a decision-making task: model-free and model-based strategies. When an animal enters an unfamiliar environment, it initially has no knowledge of the environmental dynamics. Nevertheless, the animal must find food or water to survive. To gain reward, it should rely on the model-free mechanism at the beginning, learning action values directly from reward prediction errors. As the animal learns the dynamics of the environment, it can use them to infer action values and survive in a changing yet predictable environment.
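The model-free mechanism described above can be sketched as a toy Q-learning update, in which the reward prediction error directly drives the action-value estimate. The function and state names below are hypothetical illustrations, not part of this project's methods:

```python
def q_learning_step(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One model-free update: move Q(state, action) toward the observed return.

    The reward prediction error (RPE) is the difference between what the
    animal received (plus the discounted value of where it ended up) and
    what it currently expects.
    """
    rpe = reward + gamma * max(q[next_state].values()) - q[state][action]
    q[state][action] += alpha * rpe  # learn directly from the RPE
    return rpe

# Toy world: two states, two actions; the agent starts with no knowledge.
q = {s: {"left": 0.0, "right": 0.0} for s in ("start", "food")}
rpe = q_learning_step(q, "start", "right", reward=1.0, next_state="food")
```

After one rewarded step, only the value of the taken action is nudged upward; repeated experience gradually shapes the whole value table without any model of the environment.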
The environmental dynamics change as time and place change. In this ever-changing world, adaptively combining model-based and model-free strategies is crucial for the survival of both the brain and machine intelligence. In our previous efforts in this Brain-AI research area, we showed that mice are a good animal model for studying "internal-model"-based decision making: they learn that the number of rewards is limited. In this research, we aim to elucidate the neural mechanism of decision making that transitions between model-based and model-free mechanisms, using two-photon calcium imaging in the frontal cortex of mice.
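One common way to formalize the adaptive combination of the two strategies is a weighted arbitration between a model-based value (computed by looking ahead with a learned world model) and a cached model-free value. The sketch below is a minimal illustration under toy assumptions; the names, the one-step lookahead, and the fixed weight `w` are simplifications and not the mechanism this project proposes:

```python
def combined_value(q_mf, model, state, action, w, gamma=0.9):
    """Blend model-based and model-free action values with arbitration weight w.

    q_mf:  cached model-free action values, q_mf[state][action]
    model: learned one-step world model, model[(state, action)] = (next_state, reward)
    w:     weight on the model-based estimate (0 = purely model-free)
    """
    next_state, reward = model[(state, action)]
    # Model-based value: one-step lookahead through the learned model
    q_mb = reward + gamma * max(q_mf[next_state].values())
    return w * q_mb + (1.0 - w) * q_mf[state][action]

# Toy example: the model predicts that "go" from "start" yields reward 0.5
# and leads to the "food" state.
q_mf = {"start": {"go": 0.2}, "food": {"go": 1.0}}
model = {("start", "go"): ("food", 0.5)}
v = combined_value(q_mf, model, "start", "go", w=0.5)
```

In richer formulations the weight `w` itself is adapted online, e.g. according to the relative reliability of the two systems, which is one way the transitions studied here could be modeled.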


  • Kosuke Hamaguchi

    Project Leader

    Kyoto University Graduate School of Medicine, Department of Biological Sciences

    Senior Lecturer