A study of Space/Sound Perception by using a Generative Deep Learning approach

A “cognitive map” is represented by place cells in the hippocampus of the rat brain. In recent years, the relationship between cognitive maps and place cells has been studied intensively, and the dynamics of cognitive-map self-organization have gradually been revealed. In this project, using recent generative deep learning techniques such as DCGAN (Deep Convolutional Generative Adversarial Network), we analyze the relationship between the autonomous movement of a subject and the cognitive map it generates. Using the visual images collected during movement as the GAN’s training data, we show that the temporal transitions of those images are encoded in the latent space in non-trivial ways. By combining an autoencoder with a GAN, we have obtained results suggesting that place-cell behavior can be reconstructed by the generative model.

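To make the “autoencoder + GAN” idea concrete, here is a minimal sketch, not the project’s actual code: a convolutional encoder maps each frame of a movement sequence to a latent code (giving a latent trajectory), a decoder reconstructs the frames, and a discriminator is applied to the latent codes. The frame size (64×64 RGB), the 32-dimensional latent space, and the choice of putting the discriminator on the latent side are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the authors' code): autoencoder + GAN
# for encoding a sequence of first-person visual frames as a latent trajectory.
import torch
import torch.nn as nn

LATENT_DIM = 32  # assumed latent size

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # 16 -> 8
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT_DIM, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Tanh(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.net(h)

class LatentDiscriminator(nn.Module):
    """Adversarially shapes the latent space (the GAN part of AE + GAN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

if __name__ == "__main__":
    enc, dec, disc = Encoder(), Decoder(), LatentDiscriminator()
    frames = torch.randn(16, 3, 64, 64)   # stand-in for 16 consecutive visual frames
    z = enc(frames)                       # latent trajectory, shape (16, LATENT_DIM)
    recon = dec(z)                        # reconstructed frames
    realism = disc(z)                     # discriminator score per latent code
    print(z.shape, recon.shape, realism.shape)
```

In such a setup, the reconstruction loss and the adversarial loss would be trained jointly; analyzing how consecutive frames land in latent space is one way to look for the non-trivial temporal encoding described above.
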
In addition, we developed a system that generates multi-scale sound using the same approach. We are also interested in the “vicarious trial and error” (VTE) phenomenon observed in rats, and in the relationship between VTE and the replay and preplay processes of place cells. By studying these phenomena, we hope to clarify the differences between biological and artificial cognitive phenomena. Parts of this work were presented at the annual meetings of the Japanese Society for Artificial Intelligence and the Physical Society of Japan, as well as at the European Conference on Artificial Life (ECAL) 2017.

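The source does not detail how the multi-scale sound system works; the following is only a speculative sketch of one way a generative latent space could be reused for sound: latent trajectories are sampled at several temporal resolutions and each code is decoded into a short waveform segment. The 1-D decoder architecture, the latent dimension, and the interpolation scheme are all assumptions for illustration.

```python
# Illustrative sketch only (assumed design, not the project's published method):
# decoding latent trajectories into audio at several temporal scales.
import torch
import torch.nn as nn

LATENT_DIM = 32

class SoundDecoder(nn.Module):
    """Maps one latent code to a short waveform segment."""
    def __init__(self, segment_len=1024):
        super().__init__()
        self.segment_len = segment_len
        self.fc = nn.Linear(LATENT_DIM, 64 * (segment_len // 8))
        self.net = nn.Sequential(
            nn.ConvTranspose1d(64, 32, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, 16, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, self.segment_len // 8)
        return self.net(h)  # (batch, 1, segment_len)

def latent_trajectory(z_start, z_end, steps):
    """Linear interpolation between two latent codes; more steps = finer time scale."""
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)
    return (1 - alphas) * z_start + alphas * z_end

if __name__ == "__main__":
    dec = SoundDecoder()
    z0, z1 = torch.randn(1, LATENT_DIM), torch.randn(1, LATENT_DIM)
    for steps in (4, 16, 64):                    # three temporal scales
        traj = latent_trajectory(z0, z1, steps)  # (steps, LATENT_DIM)
        audio = dec(traj).reshape(-1)            # concatenate segments into one waveform
        print(steps, audio.shape)
```
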
Researcher

  • Takashi Ikegami, Project Leader

    Professor, University of Tokyo

