Poster No. Title Poster

Learning long-term motor timing and patterns using modular reservoir computing

Yuji Kawai, Jihoon Park, Ichiro Tsuda, and Minoru Asada


The brain generates a large variety of spatiotemporal patterns with specific timings based on both input-induced and spontaneous activity, which is essential for motor learning and temporal processing. In a motor timing task, an animal actively generates motions at a given interval, ranging from tens of milliseconds to tens of seconds, after an onset cue; this ability mainly involves the basal ganglia and cerebellum. To accomplish the motor timing task using a random neural network, that is, reservoir computing, the neural activity induced by the onset signal must be sustained during the interval without any inputs. The reservoir network must therefore generate activity spontaneously; however, such a high-gain network tends to be chaotic and exhibits orbital instability, which prevents successful learning. Intermediate dynamics between the chaotic state and the stable equilibrium state, that is, limit cycles and tori, are required, and these are known to be obtained by reducing the network size. We propose a simple system that learns an arbitrary time series as a linear sum (readout) of the stable trajectories produced by several small network modules. The modules, whose recurrent connection weights are random, operate independently and in parallel. One output unit is randomly chosen from each module, and the system output is obtained as the readout of these module-output trajectories. The output weights are trained with recursive least squares to minimize the error between the output and a target signal. An important finding in our simulations is that the module-output trajectories were orthogonal to each other, creating a dynamic orthogonal basis with high representational capacity. As a result, the system was able to learn the timing of extremely long intervals, such as tens of seconds with a millisecond computation unit. In addition, it replicated the chaotic time series of the Lorenz attractor with high accuracy.
This self-sustained system satisfies both the stability and orthogonality requirements and thus provides a new neurocomputing framework and a new perspective on the neural mechanisms of motor learning. Future work includes a mathematical theory of why the trajectories become orthogonal, as well as further applications.
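The scheme described above can be sketched as follows; this is a minimal illustration with assumed settings (20-unit modules, a Gaussian-bump timing target), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_module(n=20, gain=1.2):
    # One small reservoir module with random recurrent weights
    W = rng.normal(0.0, gain / np.sqrt(n), (n, n))
    w_in = rng.normal(0.0, 1.0, n)
    return W, w_in

def run_module(W, w_in, T, onset_steps=5):
    # Drive the module with a brief onset pulse, then let it run without input
    x = np.zeros(W.shape[0])
    trace = np.zeros(T)
    for t in range(T):
        u = 1.0 if t < onset_steps else 0.0   # onset cue only at the start
        x = np.tanh(W @ x + w_in * u)
        trace[t] = x[0]                       # one output unit per module
    return trace

T, n_modules = 300, 30
Z = np.stack([run_module(*make_module(), T) for _ in range(n_modules)])

# Target: produce a bump of output at a fixed interval after the onset cue
target = np.exp(-0.5 * ((np.arange(T) - 200) / 10.0) ** 2)

# Train only the readout weights with recursive least squares (a few passes)
w = np.zeros(n_modules)
P = np.eye(n_modules) * 100.0
for _ in range(5):
    for t in range(T):
        z = Z[:, t]
        k = P @ z / (1.0 + z @ P @ z)
        w += k * (target[t] - w @ z)
        P -= np.outer(k, z @ P)

mse = np.mean((w @ Z - target) ** 2)
```

Only the readout is trained; the module trajectories themselves are fixed, self-sustained responses to the onset cue.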
Acknowledgments: This work was supported by JST, CREST (JPMJCR17A4) and the project, JPNP16007, commissioned by the New Energy and Industrial Technology Development Organization (NEDO).


Quasiparametric Networks with Fast Weight Memory for Spatial Search

Alex Baranski, Jun Tani


Most artificial neural networks (ANNs) are either discrete-time dynamical systems (such as LSTMs) or continuous-time dynamical systems (such as CTRNNs). In both cases, time is implicit. However, explicit representations of time (and space) are known to exist in the brain, and to be important for behavior and cognition. ANNs that are parametric functions (such as SIRENs) permit explicit representation of space and time, but have no state-dependency. To combine the benefits of state-dependency and parameter-dependency, we introduce quasiparametric networks (QPNs), which make a state prediction at a parametric (space and/or time) displacement relative to the input state. This allows us to make spatiotemporally distant predictions in a single forward pass through the network, avoiding backpropagation through time (BPTT). We demonstrate the utility of this system in a spatially continuous maze, where the agent must search for an object. We use space-parametric QPNs to model grid cells, and show how they can be used to continuously index a fast weight memory system that bidirectionally links sensory data to spatial locations using a simple Hebbian learning rule. The spatial hierarchy of the grid cells allows us to perform hierarchical search of the memory system. We additionally use time-parametric QPNs to model the consequences of actions taken by the agent. The properties of QPNs allow us to directly optimize actions to achieve a given temporally distant outcome.
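The interface idea, predicting a state at a continuous displacement in one forward pass, can be illustrated with a toy stand-in; a least-squares polynomial model on a known linear system replaces the actual QPN architecture here, and all sizes and dynamics are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dynamics: x(t + dt) = x(t) * exp(-dt). A quasiparametric predictor maps
# (current state, displacement dt) -> future state in one forward pass.
X = rng.uniform(-1.0, 1.0, 2000)       # sampled states
D = rng.uniform(0.0, 0.5, 2000)        # sampled temporal displacements
Y = X * np.exp(-D)                     # ground-truth future states

# Minimal stand-in for the network: least squares on state-displacement features
Phi = np.stack([X, X * D, X * D**2, X * D**3], axis=1)
w, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

def predict(x, dt):
    """Predict the state dt into the future without unrolling intermediate steps."""
    return np.dot(w, [x, x * dt, x * dt**2, x * dt**3])

err = abs(predict(0.8, 0.3) - 0.8 * np.exp(-0.3))
```

Because the displacement is an explicit continuous input, no intermediate time steps are simulated and no BPTT is needed to train such a predictor.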


Goal-Directed Planning by Reinforcement Learning and Active Inference

Dongqi Han, Kenji Doya, Jun Tani


What is the difference between goal-directed and habitual behavior? We propose a novel computational framework for decision making with Bayesian inference, in which all components are integrated into a single neural network model. The model learns to predict environmental state transitions through self-exploration and generates motor actions by sampling stochastic internal states z. Habitual behavior, obtained from the prior distribution of z, is acquired by reinforcement learning. Goal-directed behavior is determined from the posterior distribution of z by planning, using active inference, which optimizes the past, current, and future z by minimizing the variational free energy for the desired future observation, constrained by the observed sensory sequence. We demonstrate the effectiveness of the proposed framework in experiments on a sensorimotor navigation task with camera observations and continuous motor actions.


Stability and Scalability of Node Perturbation Learning

Naoki Hiratani, Yash Mehta, Timothy P Lillicrap, Peter E Latham


To survive, animals must adapt synaptic weights based on external stimuli and rewards. And they must do so using local, biologically plausible, learning rules — a highly nontrivial constraint. One possible approach is to perturb neural activity (or use intrinsic, ongoing noise to perturb it), determine whether performance increases or decreases, and use that information to adjust the weights. This algorithm — known as node perturbation — has been shown to work on simple problems, but little is known about either the stability of its learning dynamics or its scalability with respect to network size. We investigate these issues both analytically, in deep linear networks, and numerically, in deep nonlinear ones. We show analytically that in deep linear networks, both learning time and performance depend very weakly on hidden layer size. However, unlike stochastic gradient descent, when there is model mismatch, node perturbation is always unstable. The instability is triggered by weight diffusion, which eventually leads to very large weights. This instability can be suppressed by weight normalization, at the cost of bias in the learning rule. We confirm numerically that a similar instability, and to a lesser extent scalability, exist in deep nonlinear networks trained on both a motor control task and an image classification task. Our study highlights the limitations and potential of node perturbation as a biologically plausible learning rule in the brain.
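A minimal sketch of the node-perturbation rule on a single linear layer; the teacher-student setup, step size, and noise scale are illustrative assumptions, not the authors' experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Teacher: a target linear map; the student learns it from perturbations alone
n_in, n_out = 10, 5
W_teacher = rng.normal(size=(n_out, n_in))
W = np.zeros((n_out, n_in))

sigma, lr = 0.1, 0.01
losses = []
for step in range(3000):
    x = rng.normal(size=n_in)
    y_target = W_teacher @ x

    # Baseline and perturbed forward passes
    y = W @ x
    xi = sigma * rng.normal(size=n_out)      # perturbation of the output units
    y_pert = y + xi

    loss = np.sum((y - y_target) ** 2)
    loss_pert = np.sum((y_pert - y_target) ** 2)

    # Move the weights along the perturbation if it reduced the loss
    # (the loss difference times the noise is an unbiased gradient estimate)
    W -= lr * (loss_pert - loss) / sigma**2 * np.outer(xi, x)
    losses.append(loss)

improvement = np.mean(losses[:100]) / np.mean(losses[-100:])
```

The update uses only a global scalar (the change in loss) and locally available quantities, which is what makes the rule biologically plausible.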


On the Effect of Pre-training for Transformer in Different Modality on Offline Reinforcement Learning

Shiro Takagi


We empirically investigate how pre-training on data of different modalities, such as language and vision, affects the fine-tuning of Transformer-based models on Mujoco offline reinforcement learning tasks. Analysis of the internal representations reveals that the pre-trained Transformers acquire largely different representations before and after pre-training, but acquire less information about the data during fine-tuning than randomly initialized ones. A closer look at the parameter changes of the pre-trained Transformers reveals that their parameters do not change much, and that the catastrophic performance of the model pre-trained on image data stems from excessively large gradients. To study what information the Transformer pre-trained on language data utilizes, we fine-tune this model with no context provided, finding that it learns efficiently even without context information. Follow-up analysis supports the hypothesis that pre-training on language data leads the Transformer to acquire context-like information and utilize it to solve the downstream task.


Detecting danger in gridworlds using Gromov’s Link Condition

Thomas F Burns, Robert Tang


Gridworlds have long been utilised in AI research, particularly in reinforcement learning, as they provide simple yet scalable models for many real-world applications such as robot navigation, emergent behaviour, and operations research. We initiate a study of gridworlds using the mathematical framework of reconfigurable systems and state complexes due to Abrams, Ghrist & Peterson. State complexes represent all possible configurations of a system as a single geometric space, thus making them conducive to study using geometric, topological, or combinatorial methods. The main contribution of this work is a modification to the original Abrams, Ghrist & Peterson setup which we introduce to capture agent braiding and thereby more naturally represent the topology of gridworlds. With this modification, the state complexes may exhibit geometric defects (failure of Gromov's Link Condition). Serendipitously, we discover that these failures occur exactly where undesirable or dangerous states appear in the gridworld. Our results therefore provide a novel method for seeking guaranteed safety limitations in discrete task environments with single or multiple agents, and offer useful safety information (in geometric and topological forms) for incorporation in or analysis of machine learning systems. More broadly, our work introduces tools from geometric group theory and combinatorics to the AI community and demonstrates a proof-of-concept for this geometric viewpoint of the task domain through the example of simple gridworld environments.


Nys-Newton: Nyström-approximated Newton-sketch for Stochastic Optimization

Hardik Tankaria, Dinesh Singh, and Makoto Yamada


Brain MRI is the most standard test for the diagnosis of various brain diseases. Given the complexity of the diagnosis process, researchers are shifting towards deep neural networks. First-order optimizers are the most common choice in deep learning. However, with limited sample sizes, it is difficult to train a stable and well-generalized model with a large number of parameters using first-order optimizers. Second-order optimization methods are widely used for convex optimization problems and are known for their robustness. However, limited effort has been devoted to second-order optimization methods for non-convex problems such as deep neural networks. In this study, we propose an approximate Newton-sketch-based stochastic optimization algorithm for deep neural networks. Specifically, we compute a partial column Hessian of size d × m, with m ≪ d uniformly randomly selected variables (where d is the number of parameters), and then use the Nyström method to approximate the full Hessian matrix. To further reduce the computational complexity per iteration, we directly update the iterate Δw without computing or storing the full Hessian or its inverse. We then integrate our approximated Hessian with stochastic gradient descent and stochastic variance-reduced gradient methods. Numerical experiments on both convex and non-convex problems showed that our approach obtains a better approximation of Newton's method, exhibiting performance competitive with that of state-of-the-art first-order and stochastic quasi-Newton methods. Furthermore, we provide a theoretical convergence analysis for convex problems.
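The column-sampling step can be sketched as follows; a synthetic positive-definite matrix stands in for the true Hessian, and the sizes are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 50, 10
# Synthetic SPD matrix standing in for the true Hessian (low rank + small ridge)
A = rng.normal(size=(d, m))
H = A @ A.T + 1e-6 * np.eye(d)

# Partial column Hessian: m uniformly sampled columns (d x m, with m << d)
idx = rng.choice(d, size=m, replace=False)
C = H[:, idx]               # sampled columns
W = H[np.ix_(idx, idx)]     # m x m core block

# Nystrom approximation of the full Hessian: H ~= C W^+ C^T
H_nys = C @ np.linalg.pinv(W) @ C.T

rel_err = np.linalg.norm(H - H_nys) / np.linalg.norm(H)
```

In the method itself, the approximation is never formed explicitly; the low-rank factors are used to compute the update Δw directly within SGD- and SVRG-style iterations.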


Making information representations of deep neural networks more brain-like via models for brain-response prediction

Kiichi Kawahata, Jiaxin Wang, Antoine Blanc, Shinji Nishimoto, and Satoshi Nishida


Deep neural networks (DNNs) have recently achieved pattern-recognition performance comparable with or, in some cases, better than that of humans. However, the information representations of such DNNs still differ non-negligibly from those of the human brain with regard to, for example, visual object processing (Raman & Hosoya, 2020; Xu & Vaziri-Pashkam, 2021). This may give rise to behavioral characteristics in pattern recognition that are distinct from human ones (Goodfellow et al., 2015; Geirhos et al., 2018). Therefore, to improve the humanness of DNNs' recognition behavior, an important factor for realizing human-centered artificial intelligence, we should make DNNs' information representations closer to those of the brain. We address this issue by introducing a method for transforming DNNs' internal representations into brain-like ones via models for brain-response prediction. In our method, computational models are constructed to predict brain responses to arbitrary audiovisual or linguistic inputs from the combination of the DNN's activation patterns for those inputs and the history of preceding brain responses. Importantly, once brain measurements with functional MRI lasting only several hours have been used for model construction, no additional measurement is required. To validate this method, we performed a variety of audiovisual and linguistic pattern-recognition tasks using DNN representations transformed into brain-response representations via the models. We then compared these recognition patterns with those obtained without the transformation, in terms of their similarity to the recognition patterns obtained directly from measured brain responses (i.e., brain decoding); if the representations are closer to those of the brain, the recognition patterns should be more similar to those of brain decoding.
Our results showed that the similarity to brain decoding was higher when the DNN representations were transformed than when they were not, in both the audiovisual and linguistic tasks. This finding suggests that the transformation into brain-response representations successfully brings DNN representations closer to those of the brain, independently of input modality. Our method thus has great potential to improve the humanness of DNN behavior in various types of pattern recognition.
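A toy stand-in for the brain-response prediction model, using ridge regression on synthetic data in place of the authors' models and fMRI recordings (the sizes and the first-order history model are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

T, n_feat, n_vox = 400, 30, 8
F = rng.normal(size=(T, n_feat))            # DNN activations for each stimulus
B = rng.normal(size=(n_feat, n_vox)) * 0.5  # hidden feature-to-voxel mapping

# Synthetic brain response: driven by DNN features plus its own history
resp = np.zeros((T, n_vox))
for t in range(1, T):
    resp[t] = 0.5 * resp[t - 1] + F[t] @ B + 0.1 * rng.normal(size=n_vox)

# Predict the response from the DNN activations and the preceding response
X = np.hstack([F[1:], resp[:-1]])
Y = resp[1:]
lam = 1.0                                   # ridge penalty (assumed value)
Wmap = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

pred = X @ Wmap
r = np.corrcoef(pred[:, 0], Y[:, 0])[0, 1]  # prediction accuracy for one voxel
```

The predicted responses, rather than the raw DNN activations, then serve as the brain-like representation for downstream recognition tasks.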


Data Science and Imaging Informatics of Precision Medicine

Yuichi Motai


Research in data science is a multi-disciplinary field concerned with the generation of information and knowledge from heterogeneous data sources. This presentation provides illustrative examples of precision medicine from three clinical areas: 1) radiology, 2) radiation oncology, and 3) surgery. The PI employs a variety of quantitative and computational methods for classification and prediction on large-scale databases, specifically: 1) cloud CT colonography; 2) personalized radiation therapy for lung tumors; and 3) heterogeneous analysis for image-guided biopsy. Findings from these studies have a direct impact on patient care, cost savings, and efficiency in cancer diagnosis and treatment. The poster also covers recent artificial-intelligence technologies addressing broad medical testbeds and future software-engineering directions.


Development of a Data-driven Prediction Model for the Evolution of White Matter Hyperintensities using Deep Learning

Muhammad Febrian Rachmadi, Maria del C. Valdés-Hernández, Stephen Makin, Joanna M. Wardlaw, Taku Komura, Henrik Skibbe


White matter hyperintensities (WMHs) are neuroradiological features often seen in T2-FLAIR brain MRI, characteristic of small vessel disease (SVD), and associated with stroke and dementia progression. Clinical studies indicate that the volume of WMHs in a patient may shrink, remain unchanged, or grow over time; we call this the "evolution of WMHs". Predicting the evolution of WMHs is challenging because the rate and direction of evolution vary considerably across individuals and clinical studies.

In this study, we introduce our work on developing deep learning models for predicting these dynamic changes of WMHs, and the challenges ahead. To benchmark our models, we predicted the evolution of WMHs from a single brain MRI scan sequence per patient. We used brain MRI data from stroke patients enrolled in a study of stroke mechanisms, imaged with a GE 1.5T scanner at three time points (baseline, about 3 months later, and a year later). We tested different input modalities, including a T2-FLAIR MRI, a probability map (i.e., the output of a deep learning model for WMH segmentation), and an irregularity map (i.e., the output of an unsupervised WMH segmentation model called LOTS-IM).

Our studies have shown that (1) incorporating risk factors of WMH evolution and (2) modeling uncertainty are important for improving the prediction of WMH evolution.
(1) Clinical studies have indicated several factors associated with the evolution of WMHs, such as the baseline WMH volume and the presence of stroke lesions. We therefore incorporated such factors as additional model inputs.
(2) Furthermore, our study showed that predicting the evolution of WMHs involves some level of uncertainty, especially when predicting areas of shrinking and growing WMHs. This uncertainty stems from the difficulty of distinguishing the textures/intensities of shrinking and growing WMHs in the T2-FLAIR brain MRI sequence.

Our recent model, equipped with a conditional variational autoencoder to model the uncertainty and additional inputs to incorporate clinical risk factors, achieved the best performance among the deep learning models compared, including a vanilla U-Net.


YAB - Yet Another Brain: A New Framework for Brain Simulation

Minoru Owada, Carlos Enrique Gutierrez, Atsuya Tange, Kazutaka Izumi, Yuko Ishiwaka


Brain simulation is becoming an important tool in neuroscience and medicine, since it is useful for testing hypotheses and developing new theories. Here we introduce YAB, a new framework for brain simulation, which organizes neural networks as graphs in a database.

YAB's basic unit is the node. A node stores attributes and methods (programs) that define its dynamics. Nodes can be used to model neural systems at different resolutions. For example, several nodes can implement methods modeling the dendritic, somatic, and axonal dynamics of the Hodgkin-Huxley equations; alternatively, a single node can describe the membrane potential of a spiking neuron, or the dynamics of a neuronal ensemble. While current simulators support specific levels of modeling detail, YAB enables the implementation of methods at different complexities, depending on the requirements.

Another YAB component is an edge. Edges connect nodes, arranging the network connectivity and supporting the hierarchical structure of biologically inspired brain models. Nodes and edges use a unique key-value data structure.
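The node-and-edge design can be sketched in Python; this is a hypothetical mini-version of the idea, not YAB's actual API:

```python
# Each node is a key-value record holding both attributes (state) and a method
# (program) defining its dynamics; edges carry key-value attributes as well.
def leaky_step(node, inputs, dt=1.0):
    """Leaky-integrator dynamics stored with the node (illustrative only)."""
    node["v"] += dt * (-node["v"] / node["tau"] + sum(inputs))
    return node["v"]

nodes = {
    "n1": {"v": 0.0, "tau": 10.0, "method": leaky_step},
    "n2": {"v": 0.0, "tau": 10.0, "method": leaky_step},
}
edges = [("n1", "n2", {"weight": 0.8})]

def step(drive):
    # Gather weighted inputs along edges from the current state, then let each
    # node update itself by running its own stored method
    inbox = {key: [] for key in nodes}
    for src, dst, attr in edges:
        inbox[dst].append(attr["weight"] * nodes[src]["v"])
    inbox["n1"].append(drive)          # external drive into n1
    for key, node in nodes.items():
        node["method"](node, inbox[key])

for _ in range(50):
    step(1.0)
```

Because each node carries its own method, nodes of very different complexity can coexist in the same graph.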

YAB models are automatically stored in NDS (Neuro Data Store), a dedicated graph database. This makes it possible to modify and examine models at run time: neurons and connections can be created and deleted during a simulation, and users can search and extract subsets of the data for analysis. After a simulation, the graph can be reloaded from NDS into memory. Furthermore, in our framework, the model scale is no longer constrained by the computer's physical memory size.

Moreover, simulations of combined models, such as brain and body, may require different tools, and integrating such tools can be complicated and technically difficult. In YAB, integration between models is natural, since nodes implement different methods and interact within a common graph.

Our framework supports parallel processing and aims to contribute to the advancement of brain simulation in computational neuroscience and related fields.


Deployment of spiking neural networks on a graph database: a new framework for brain simulation

Carlos Gutierrez, Minoru Owada, Kenji Doya, Yuko Ishiwaka


Models of the brain at the cellular level are commonly built as networks of interacting differential equations describing the changes of neuronal features such as membrane potentials. Simulations of such networks treat programs and generated data as separate components. Here we introduce the deployment and simulation of spiking neural networks on a database, using a new framework for brain simulation called YAB (Yet Another Brain). The framework organizes neural networks as graphs stored in a database. Neurons are represented by nodes that integrate data and programs, such as fixed parameters, variables, and methods. Edges, in turn, define the network connectivity and support the hierarchical structure of biologically inspired models. We created template nodes for spiking neuron models and built alpha-function-based synapses. We then constructed spatially organized neural populations, assembled them into neural networks using connection routines, and performed systematic simulations. For parameter initialization, we used the specifications of sample models from SNNbuilder, a compatible data-driven brain modeling tool that organizes anatomical and physiological features reported in scientific papers. Furthermore, for efficient signal propagation over the graph, neuronal action potentials were treated as events triggered by the source node and transmitted to target nodes after an axonal delay, so that nodes need not read presynaptic signals at every step of the simulation. The framework allows building different neuron and synapse models by developing methods of diverse complexity at the graph nodes. Moreover, nodes running different models, such as physical, sensory, and mechanical systems, can be integrated for combined and more realistic simulations.
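The event-driven propagation scheme can be sketched as follows; the neuron model, weights, and delays are illustrative assumptions, not YAB's implementation:

```python
import heapq

# A spike is an event (arrival_time, target, weight) pushed with its axonal
# delay, so target nodes never poll presynaptic nodes at every simulation step.
neurons = {0: {"v": 0.0, "thresh": 1.0}, 1: {"v": 0.0, "thresh": 1.0}}
synapses = {0: [(1, 0.6, 5.0)]}            # pre -> [(post, weight, delay)]

events = []                                 # priority queue ordered by time
heapq.heappush(events, (0.0, 0, 0.7))       # two external inputs to neuron 0
heapq.heappush(events, (1.0, 0, 0.7))
spikes = []

while events:
    t, nid, w = heapq.heappop(events)
    n = neurons[nid]
    n["v"] += w                             # integrate only on event arrival
    if n["v"] >= n["thresh"]:
        n["v"] = 0.0                        # reset and emit a spike
        spikes.append((t, nid))
        for post, weight, delay in synapses.get(nid, []):
            heapq.heappush(events, (t + delay, post, weight))
```

Here neuron 0 crosses threshold on the second input at t = 1.0, and its spike reaches neuron 1 only at t = 6.0, after the 5.0 axonal delay.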

Conference Center lobby

Elucidation of neural mechanisms underlying behavioral variability induced by activation of a single sensory neuron in C. elegans

Hironori J. Matsuyama, Shunji Nakano and Ikue Mori



Unlike computers operating on deterministic logic, animals respond to their surrounding environments in a stochastic manner. Identifying the neural basis of this stochasticity in information processing, which differentiates nervous systems from computers, provides insight into the logic behind biological computation in living organisms. The nematode Caenorhabditis elegans (C. elegans) is a suitable model system for revealing a fundamental form of stochastic computation in living organisms at the single-cell level, owing to its nervous system of 302 identifiable neurons with fully mapped connectivity (White et al., Phil. Trans. R. Soc. Lond. B, 1986; Cook et al., Science, 2019). Previous studies of sensory-evoked behavior and neural activity in C. elegans reported that mechanosensory signals elicit variable behavior depending on the animal's behavioral context (Liu et al., eLife, 2018), and that a class of interneurons shows probabilistic responses to odor stimuli depending on the state of the network dynamics (Gordus et al., Cell, 2015). However, how the variability in sensory-evoked behavior emerges and is controlled in the circuitry remains to be elucidated. To address this question, we investigate how stimulation of a single sensory neuron is converted into multiple behavioral responses, using C. elegans thermosensation as a model paradigm. To subject animals to uniform sensory stimuli across experimental trials, we stimulated the AFD thermosensory neurons optogenetically. We found that, in immobilized animals, stimulation of the AFD thermosensory neurons via the blue-light-gated cation channel CoChR (Klapoetke et al., Nat. Methods, 2014; Schild and Glauser, Genetics, 2015) induced deterministic neural activity in AFD. When animals were subjected to the same photostimulation of the single AFD neuron under freely moving conditions, they exhibited variability in their behavioral responses.
This result suggests that a deterministic sensory signal in the AFD thermosensory neuron is converted into stochastic behavioral responses by the C. elegans nervous system. We are currently conducting an experiment in which we simultaneously monitor AFD photoactivation-evoked behavior and AFD calcium responses while tracking a freely moving animal.


Evidence and reward bias representation in mouse auditory cortex during perceptual decision making

Kotaro Ishizu, Shosuke Nishimoto, Akihiro Funamizu


When making decisions, subjects choose the option with the highest expected reward, or action value. In a two-alternative forced-choice task, a recent study shows that the action value is estimated not only from the amount of reward (prior knowledge) but also from the certainty of sensory cues. This study investigates how the integration of prior knowledge and sensory inputs is carried out in the brain.
Six wild-type CBA/J mice were subjected to a tone-frequency discrimination task in which a high- or low-frequency tone was associated with a water reward at the left or right spout. The task changed the reward amounts every 90 to 120 trials (a block) and presented either a long or short tone in each trial. Consistent with our previous study (Funamizu, iScience, 2021), mice changed their choices based on both the reward amounts and the tone durations: choices were less accurate and more biased by the asymmetric reward blocks in short- than in long-stimulus trials.
During the task, we electrophysiologically recorded neuronal activity in the auditory cortex with Neuropixels 1.0. Analyses of single-unit activity showed that auditory cortical neurons modulated their firing rates according to the stimulus evidence, indicating sound encoding. Some neurons also modulated their activity according to the asymmetric reward blocks. Because this reward modulation occurred in both the positive and negative directions with respect to the preferred sound tuning, the population-level sound encoding remained stable across blocks. So far, our results suggest that the auditory cortex represents sensory cues and transmits them to downstream areas, and that the integration of prior knowledge and sensory inputs is performed outside the auditory cortex. We are currently recording the activity of the medial prefrontal cortex and the secondary motor cortex to investigate whether these two areas integrate the prior knowledge of reward with sound inputs.


Contribution of mouse dorsal cortex in tone frequency discrimination task

Kakuma Hata, Akihiro Funamizu


When sensory inputs are uncertain, the ability to infer the environmental state using prior knowledge of reward and stimulus probabilities is essential for adaptive behavior. Our previous study (Funamizu, 2021) found that mice integrate sensory inputs and prior knowledge to optimize their choices, and change their behavior based on sensory uncertainty. Although some studies report that multiple regions of the cerebral neocortex (e.g., the posterior parietal cortex, entorhinal cortex, and frontal cortices) are involved in reward- and sensory-based decision making, it is still unclear how these regions are orchestrated to produce Bayesian action selection. Here we use the photostimulation approach of a previous study (Guo et al., 2014) to inactivate each region of the dorsal neocortex in VGat-ChR2 mice during a behavioral task. Because VGat-ChR2 mice express channelrhodopsin-2 (ChR2) in GABAergic interneurons, a blue laser produces local inactivation of the cortex. Our behavioral task required head-fixed mice to choose between the right and left spouts, associated with high and low tone frequencies. Compared to our previous task (Funamizu, 2021), we used only a 0.6-s sound stimulus and did not bias the reward amounts of the left and right choices. Consistent with the previous study (Guo et al., 2014), lateralized inactivation of the anterior lateral motor cortex (ALM) made mice choose the side ipsilateral to the inactivation. Inactivation of the auditory cortex (AC) had a similar effect on choices, suggesting that both the ALM and the AC are involved in contralateral choices. Further experiments with various sound durations and reward biases in our task will reveal how the neocortex is involved in the integration of sensory inputs and prior knowledge.


Separation of inference-based and model-free strategy in head-fixed mice based on a behavioral task

Shuo Wang, Kotaro Ishizu, Akihiro Funamizu


Humans and animals not only make habitual decisions based on direct experience but also infer hidden contexts from sensory inputs to make flexible decisions. The inference-based strategy has the advantage of using context transitions for action selection, whereas the model-free strategy responds rapidly to the current environment. How the two strategies are integrated in the brain is unclear. Here, we established a tone-frequency discrimination task in head-fixed mice to investigate their ability to select between inference-based and model-free strategies. The task had zigzag and repeat conditions, in which the tone frequency of the current trial was controlled by a transition probability from the previous trial. Mice selected the left or right spout depending on the low or high frequency of the sound stimulus to obtain a water reward. We found that mice biased their choices depending on the stimulus transition probability in both the zigzag and repeat conditions. Also, mice learned faster in the repeat condition than in the zigzag condition, suggesting that they used different strategies in the two conditions. Further electrophysiological experiments in our task may reveal the neuronal mechanisms of inference-based and model-free strategies.


Modeling mouse behavior during decision-making task by reservoir computing

Yutaro Ueoka, Shuo Wang, Kotaro Ishizu, Akihiro Funamizu


How do animals adapt their behavior to survive in a constantly changing environment? One approach to answering this question is to model the brain processes that generate animals' movements. In decision-making paradigms, recent studies have found that brain activity is strongly influenced by whole-body movement. We therefore used reservoir computing to model the brain activity that drives whole-body movements during a decision-making task. Reservoir computing is a type of recurrent neural network that handles real-time inputs and outputs; it requires training only the output weights, not the weights of the internal recurrent network.

A tone-frequency discrimination task was performed with water-restricted mice. Mice chose the left or right spout after listening to sound stimuli composed of various frequencies. Mice received a water reward when they licked the right spout for a high-frequency stimulus or the left spout for a low-frequency stimulus. During the task, the whole-body movement of the mouse was captured by four cameras located at the front, back, left, and right of the mouse. First, body movements in these videos were analyzed with DeepLabCut (a deep learning-based body-tracking software) to extract the trajectories of 68 major body coordinates. We then performed principal component analysis on the 68 coordinates and found that 20 principal components (PCs) explained approximately 90% of the variance in the trajectories. We trained a reservoir computing model to predict the 20 PCs from the environmental inputs of sounds, spout movements, and rewards.
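A minimal NumPy sketch of such a reservoir model is shown below. The network sizes, spectral radius, and ridge readout are illustrative assumptions, and the inputs and targets are random stand-ins for the actual task inputs and the 20 movement PCs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 4, 200, 20    # inputs, reservoir units, PCs (sizes illustrative)
T = 500                            # time steps

# Fixed random weights; only the readout W_out is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

u = rng.normal(0, 1, (T, n_in))          # stand-in for sounds, spouts, rewards
y_target = rng.normal(0, 1, (T, n_out))  # stand-in for the 20 movement PCs

x = np.zeros(n_res)
X = np.zeros((T, n_res))
for t in range(T):                       # tanh reservoir state update
    x = np.tanh(W @ x + W_in @ u[t])
    X[t] = x

# Ridge-regression readout: W_out = Y^T X (X^T X + lambda I)^-1
lam = 1e-2
W_out = y_target.T @ X @ np.linalg.inv(X.T @ X + lam * np.eye(n_res))
y_pred = X @ W_out.T
```

Because the recurrent weights `W` stay fixed, training reduces to a single linear regression, which is what makes the approach attractive for fitting high-dimensional behavioral targets.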

Our preliminary results showed that the reservoir computing model successfully reproduced the average trajectory of mouse body movement. However, the model did not follow trial-by-trial changes in body movement. Further analyses are needed to construct a brain-like network model that generates behavior.


Exploring to decode auditory spatial attention by eye-metrics

Hsin-I Liao, Shigeto Furukawa


Our previous study implied that different aspects of auditory spatial attention processing can be inferred from different eye responses (Liao, Fujihira, Yamagishi, & Furukawa, 2019). Specifically, pupillary responses reflect sustained goal-planning processing during auditory attentional shifts, and microsaccades reflect a transient orienting response to the attended auditory object. However, it is unclear to what extent the focus of auditory spatial attention can be accurately predicted at the single-trial level for a new observer. In the current study, we adopted a linear discriminant analysis (LDA) approach to estimate the attended auditory object’s direction from various eye-metric measurements, including pupil size, microsaccades, gaze positions, and their combination. The LDA model was trained and tested with a leave-one-out cross-validation procedure. Results showed that when the model was trained on all eye-metric measurements, mean classification accuracy was around 60%, significantly higher than the chance level but not optimal. Depending on the conditions, pupil size alone or gaze position alone could reach a similar degree of prediction accuracy. In contrast with our previous finding, microsaccades alone did not yield reliable prediction accuracy on a single-trial basis. We discuss possible limitations and methods to improve the prediction accuracy.

Reference: Liao, H.-I., Fujihira, H., Yamagishi, S., & Furukawa, S. (2019). Microsaccades and pupillary responses represent the focus of auditory attention. Journal of Vision, 19(10):273b. doi:
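The classification pipeline can be sketched with scikit-learn as follows; the features here are synthetic stand-ins with an injected gaze-position signal (the real predictors are the measured eye metrics):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_trials = 80
# Hypothetical per-trial eye-metric features: pupil size, microsaccade rate,
# gaze x, gaze y (synthetic values; real features come from eye tracking).
y = rng.integers(0, 2, n_trials)      # attended direction: 0 = left, 1 = right
X = rng.normal(0, 1, (n_trials, 4))
X[:, 2] += 0.8 * (2 * y - 1)          # inject a weak gaze-position signal

# Leave-one-out cross-validation: each trial is predicted from all the others.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                      cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```

Training on single metrics is then just a matter of restricting `X` to one column, which mirrors the single-metric comparisons in the abstract.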


Effect of the changes in the arm physical parameters on the minimum torque-change trajectories of human reaching movements

Kotaro Muramatsu, Naomichi Ogihara


The minimum torque-change model is a computational model describing the formation of point-to-point reaching movements in humans (Uno et al., 1989). Using the arm physical parameters provided in the original paper, this model predicts roughly straight end-point trajectories with bell-shaped velocity profiles that are very similar to actual human reaching trajectories. However, the minimum torque-change criterion is essentially a dynamic quantity, and the calculated trajectories could be, at least to some extent, affected by changes in the physical parameters of the arm. In the present study, we investigated how changes in the arm physical parameters alter the optimal arm trajectories calculated with the minimum torque-change criterion. Some optimal trajectories became largely curved when the arm physical parameters were altered so that the arm model was biomechanically more similar to an actual human arm. Our results suggest that the human reaching trajectory may not be planned according to the minimum torque-change criterion.
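The criterion of Uno et al. (1989) minimizes the time integral of squared torque change summed over joints:

```latex
C_T = \frac{1}{2} \int_{0}^{t_f} \sum_{i=1}^{n} \left( \frac{d\tau_i}{dt} \right)^2 \, dt
```

where $\tau_i$ is the torque at joint $i$ and $t_f$ is the movement duration. Because $d\tau_i/dt$ depends on the arm's inertial parameters through the equations of motion, the optimal trajectory is inherently sensitive to those parameters, which motivates the present analysis.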


Rapid learning of predictive maps with STDP and theta phase precession

Tom M George, William de Cothi, Kimberly Stachenfeld, Caswell Barry


The predictive map hypothesis is a promising candidate principle for hippocampal function. A favoured formalisation of this hypothesis, called the successor representation, proposes that each place cell encodes the expected state occupancy of its target location in the near future. This predictive framework is supported by behavioural as well as electrophysiological evidence and has desirable consequences for both the generalisability and efficiency of reinforcement learning algorithms. However, it is unclear how the successor representation might be learnt in the brain. Error-driven temporal difference learning, commonly used to learn successor representations in artificial agents, is not known to be implemented in hippocampal networks. Instead, we demonstrate that spike-timing dependent plasticity (STDP), a form of Hebbian learning, acting on temporally compressed trajectories known as “theta sweeps”, is sufficient to rapidly learn a close approximation to the successor representation. The model is biologically plausible – it uses spiking neurons modulated by theta-band oscillations, diffuse and overlapping place cell-like state representations, and experimentally matched parameters. We show how this model maps onto known aspects of hippocampal circuitry and explains substantial variance in the true successor matrix, consequently giving rise to place cells that demonstrate experimentally observed successor representation-related phenomena including backwards expansion on a 1D track and elongation near walls in 2D. Finally, our model provides insight into the observed topographical ordering of place field sizes along the dorsal-ventral axis by showing this is necessary to prevent the detrimental mixing of larger place fields, which encode longer timescale successor representations, with more fine-grained predictions of spatial location.
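The successor representation itself has a closed form: for discount factor γ and behaviour-dependent transition matrix T, M = Σₜ γᵗTᵗ = (I − γT)⁻¹. A minimal NumPy sketch for a rightward-running 1D track (illustrative, not the paper's spiking STDP model) reproduces the backward skew mentioned above:

```python
import numpy as np

# Successor representation for a 1D track traversed left to right.
n_states, gamma = 10, 0.8
T = np.zeros((n_states, n_states))
for s in range(n_states - 1):
    T[s, s + 1] = 1.0            # deterministic rightward step (illustrative policy)
T[-1, -1] = 1.0                  # absorbing end of track

# M = sum_t gamma^t T^t = (I - gamma * T)^-1
M = np.linalg.inv(np.eye(n_states) - gamma * T)

# The SR "place field" of state j is column j of M: under rightward travel,
# states *before* j predict occupancy of j, so the field is skewed backwards,
# matching the experimentally observed backwards expansion of place fields.
field = M[:, 5]
```

Here `field[s]` equals γ^(5−s) for states s ≤ 5 and zero beyond, i.e. the field tail extends opposite to the direction of travel.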


Coupling modulation facilitates accurate prediction generation in cortical substrate

Gaston Sivori and Tomoki Fukai


Many neural theories have been proposed to explain how evidence shapes an internal model of the external world, but how predictions are computed and how errors are propagated at the cellular level remain elusive. Here we show that a biologically realistic Hebbian plasticity rule that reports spiking activity back to inputs is the key computational requirement, and, motivated by evidence from postsynaptic calcium, we further model Ca²⁺ dynamics as the error message. Our results suggest that pyramidal cells (PCs) compute accurate predictions based on their current synaptic structure and recent somatic activity, and propagate internal errors by modulating the somato-dendritic coupling.


Semi-supervised contrastive learning for semantic segmentation of ISH gene expression in the marmoset brain

Charissa Poon, Muhammad Febrian Rachmadi, Michal Byra, Tomomi Shimogori, Henrik Skibbe


Gene expression brain atlases such as the Allen Mouse Brain Atlas are widely used in neuroscience research. Such atlases of lower-order model organisms have led to great research achievements, but interspecies differences in brain structure and function point to the need to characterize gene expression in the primate brain. The Marmoset Gene Atlas, created by the Brain Mapping by Integrated Neurotechnologies for Disease Studies (Brain/MINDS) project in Japan, is an in situ hybridization (ISH) database of gene expression in the marmoset brain. The goal of our work is to create a deep learning model that automatically segments gene expression in ISH images of the adult marmoset brain.

Expression patterns of over 2000 different genes can be labelled and visualized using ISH. To characterize gene expression in brain images, ISH signals must be labelled and segmented. Expression intensity and localization can then be analyzed using image processing methods. Deep learning techniques have been widely applied for the segmentation of images on a per-pixel level, known as semantic segmentation. Supervised architectures such as the U-Net have led to impressive segmentation results, but require large labelled training datasets, which are expensive to obtain. Furthermore, in histological images, image variations caused by factors such as tissue preparation and image acquisition methods have been found to profoundly influence outputs from deep learning models, at times more than the signal itself. The ideal model for gene segmentation of the ISH marmoset brain data would require minimal to no labelling and produce consistent segmentations regardless of changes in image hue, brightness, or contrast.  

We use a contrastive-learning-based self-supervised framework to create semantic segmentations of gene expression in the adult marmoset brain. In contrastive learning, the model is trained in latent space, for example by maximizing agreement between the features of different augmented views of the same unlabeled image, or between the features of a labelled image and the model’s encoded representations of its unlabeled equivalent. We first create a small labelled ‘champion’ dataset of easily segmented gene expression brain images, which is then used to train a model to segment more difficult images, such as those in which the background signal intensity is nonuniform. We propose using a wide range of augmentations to generate strongly perturbed images, accounting for a range of differences in image profiles. We show an example of a gene that has been fully segmented and mapped to a common 3D template of the marmoset brain. We hope that this work can be used for the segmentation of fine-detailed structures in biomedical images and assist in advancing primate brain research. This work was supported by the Brain/MINDS project from the Japan Agency for Medical Research and Development AMED (JP21dm0207001).
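The "maximizing agreement between augmented views" objective can be illustrated with a SimCLR-style NT-Xent loss. This is a generic NumPy sketch under assumed batch size, embedding dimension, and temperature, not the authors' actual framework:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style NT-Xent loss on two batches of embeddings, where
    (z1[i], z2[i]) are the features of two augmented views of image i."""
    z = np.concatenate([z1, z2])                      # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
views = rng.normal(0, 1, (8, 16))
# Loss is low when paired views agree, high when they are unrelated.
aligned = nt_xent(views, views + 0.01 * rng.normal(0, 1, (8, 16)))
random_ = nt_xent(views, rng.normal(0, 1, (8, 16)))
```

Minimizing this loss pulls the two augmented views of an image together in latent space while pushing apart views of different images, which is what makes strong perturbations (hue, brightness, contrast) yield invariant features.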


Artificial intelligence to analyze brain proteomics data from living psychiatric patients

Zacharie Taoufiq, Tomasz M Rutkowski, Mihoko Otake-Matsuura, Tomoyuki Takahashi


Understanding the underlying molecular mechanisms in human diseases is important for precision diagnosis and treatment design of complex conditions such as brain disorders.
     Brain sampling in living individuals is key but unethical. However, we have recently developed a non-invasive, harmless dissection of brain synapses from living psychiatric patients by combining stem cell reprogramming technology with proteomics approaches. We call this new technology ‘Personalized Synapse Proteomics’ (PSP). For the first time, PSP provides access to molecular information from inside the brains of living patients, generating quantitative data on over 4,000 identified synaptic proteins.
     AI/machine learning can discern patterns in complex data that humans and traditional statistics cannot, and can handle datasets with high dimensionality in both samples and features (e.g., proteins, genes). We therefore combined AI and proteomics to explore our patients’ data. Here, we performed unsupervised clustering using the Uniform Manifold Approximation and Projection (UMAP) algorithm for dimensionality reduction on six PSP datasets generated from schizophrenia patients and healthy siblings. Although this preliminary study involved a small number of patients, we demonstrated that AI can already distinguish healthy from schizophrenia individuals based on our synapse proteomics data. Moreover, AI demonstrates the reproducibility of the quantitative proteomics data through how closely the projections of replicates cluster.
We are now planning to expand the PSP and AI analyses to 50 new patients. In the near future, we hope to understand disease ‘molecular directionality’ and ‘trends’ across patients. We believe this work will be a gateway toward the development of next-generation personalized and precision therapies for various brain disorders.


Behavioral and Neural Correlates of Two-stage Sleep in Cephalopods

Kazumichi Shimizu, Aditi Pophale, Tomoyuki Mano, Leenoy Meshulam, Sam Reiter


Human sleep can be divided into two stages, rapid eye movement (REM) and slow wave (SW) sleep, each with distinct behavioral and neural correlates as well as proposed functions. Two-stage sleep has been shown to be present in other mammals, as well as birds, reptiles, and fish. This suggests that it evolved very early in vertebrate evolution, has been maintained over the hundreds of millions of years separating these diverse animal groups, and therefore is of fundamental importance functionally. We recently found that octopuses, which evolved large brains and complex behaviors independently of the vertebrate lineage, also possess two stages of sleep (‘active’ and ‘passive’). Each stage has a range of behavioral and neural correlates which we are comparing with vertebrate REM and SW sleep. Octopus active sleep is characterized by the rapid transitioning through a series of brain-controlled skin patterns. Through high-resolution filming and computational analysis, we are attempting to relate octopus waking and sleeping skin patterns, and thus decode the evolving contents of octopus sleep. The possibility that two similar stages of sleep evolved convergently suggests a comparative approach may reveal general principles of sleep function.


Asymmetry in Representations of Semantic Symmetry in the Human Brain

Jiaxin Wang,  Kiichi Kawahata, Antoine Blanc, Shinji Nishimoto, Satoshi Nishida


Previous studies have reported that semantic representations associated with thousands of words are distributed across various regions of the human brain (Huth et al., 2012, Neuron; Huth et al., 2016, Nature; Nishida et al., 2021, PLOS Comput. Biol.). Using fMRI-based voxelwise models (Naselaris et al., 2011, NeuroImage), these studies investigated brain representations based on the structure of semantic similarity between words. However, for properties other than semantic similarity, such as semantic symmetry, the corresponding brain representations cannot be visualized with existing similarity-based models. It thus remains unclear how semantically symmetric information is represented in the brain. We addressed this issue by using voxelwise modeling to predict fMRI responses to naturalistic movie scenes from 30 semantic labels assigned to the scenes. The 30 labels formed 15 symmetric pairs (e.g., “intelligent” and “stupid”, “urban” and “rural”, “amusing” and “gloomy”) and were manually rated for each scene on a 5-point scale. We confirmed that the time series of manual ratings showed significantly negative correlations for most symmetric pairs. However, the voxelwise model weights, which reflect the cortical representation of each semantic label, did not exhibit such negative correlations even within the same symmetric pairs. Further analyses suggested a possible explanation for this discrepancy. First, the distributions of voxels predictable by the voxelwise models for each symmetric pair showed little overlap (on average, 20.8% of predictable voxels were shared between paired labels). Second, the model weights mapped onto the cortex revealed that the positive/negative and linear/nonlinear coding of each symmetric pair was highly mixed and distributed across the cortex.
These results indicate that the cortical representations of symmetric pairs are highly distributed and heterogeneous, providing new insight into the cortical representation of semantically symmetric information.
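The voxelwise modeling step can be sketched as a regression from scene labels to voxel responses; ridge regression is a common choice for such encoding models, though the abstract does not specify the estimator, and all data below are synthetic stand-ins:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_scenes, n_labels, n_voxels = 300, 30, 50   # illustrative sizes

# Hypothetical ratings: 15 symmetric pairs, with columns 2k and 2k+1
# negatively correlated, as reported for the manual ratings.
base = rng.normal(0, 1, (n_scenes, 15))
labels = np.empty((n_scenes, n_labels))
labels[:, 0::2] = base
labels[:, 1::2] = -base + 0.3 * rng.normal(0, 1, (n_scenes, 15))

# Synthetic "fMRI" responses generated from a linear model plus noise.
true_w = rng.normal(0, 1, (n_labels, n_voxels))
bold = labels @ true_w + 0.5 * rng.normal(0, 1, (n_scenes, n_voxels))

# Voxelwise model: one regression per voxel (fit jointly here); model.coef_
# (n_voxels x n_labels) plays the role of the weight maps whose correlations
# are compared between symmetric pairs.
model = Ridge(alpha=1.0).fit(labels, bold)
weights = model.coef_
```

Comparing `weights[:, 2k]` against `weights[:, 2k + 1]` across voxels is then the analogue of the pairwise weight-correlation analysis described above.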


Testing a hierarchy of neural processes in EEG from a three-phase inattentional blindness paradigm with a massive collection of time-series features

Yota Kawashima, Angus Leung, Naotsugu Tsuchiya


To understand the physical basis of consciousness, researchers search for the neural correlates of consciousness (NCCs). However, NCCs can be confounded with the neural correlates of reports. To disentangle NCCs from these confounds, Shafto & Pitts (2015, J. Neurosci.) developed a three-phase inattentional blindness paradigm incorporating EEG recordings. In Phase 1 (P1), participants perform a distractor task during which target and control stimuli (face and non-face random stimuli) are presented; here, participants are inattentionally blind to the target. After being queried about the target stimulus, they repeat the task in Phase 2 (P2); due to the queries, they are now aware of the target stimulus. Finally, in Phase 3 (P3), they perform a task on the target stimulus itself. This paradigm assumes a hierarchy of neural processes across the three phases: 1) P1 data capture unconscious processing of the target stimulus, 2) P2 data capture both the P1 processes and conscious processing of the target stimulus, and 3) P3 data capture report-related processes in addition to the above two. Here, we test this critical assumption. We employ a toolbox for highly comparative time-series analysis (hctsa) that provides 7,702 univariate time-series features from various scientific fields such as physics, statistics, and neuroscience, and apply it to the EEG data from Shafto & Pitts (2015). If features discriminate between face and non-face stimuli in P1, they should capture unconscious processes and should therefore also discriminate faces from non-face stimuli in P2 and P3. Excluding such features, features discriminative in P2 should capture conscious processes and should also discriminate in P3. Finally, features discriminative only in P3 should capture report-related processes. To test this, we evaluate the discrimination performance of each feature in each phase.
In our pilot results (N=1, 216 trials, 4 channels), we found many features that discriminated in both P2 and P3, consistent with these predictions. However, we also found some features that discriminated in both P1 and P3, but not in P2, which should not occur under the processing-hierarchy assumption. These tentative results raise questions about the hitherto widely held assumption of a hierarchy of conscious and unconscious processing.
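One simple way to score per-feature discrimination is sketched below with synthetic data, using AUC as the performance measure; hctsa itself (a MATLAB toolbox) and the actual discrimination statistic used in the analysis are not reproduced here:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_trials, n_features = 216, 100    # 100 stand-in features (hctsa provides 7,702)
is_face = rng.integers(0, 2, n_trials)   # trial labels: face vs non-face

feats = rng.normal(0, 1, (n_trials, n_features))
feats[:, 0] += 1.0 * is_face       # feature 0 carries a face/non-face signal

# Per-feature discrimination: AUC of each univariate feature for face vs
# non-face trials, the quantity compared across phases P1-P3.
auc = np.array([roc_auc_score(is_face, feats[:, j]) for j in range(n_features)])
discriminative = np.where(np.abs(auc - 0.5) > 0.15)[0]
```

Repeating this per phase and intersecting the `discriminative` sets would implement the P1/P2/P3 exclusion logic described above (a real analysis would also need a permutation or cross-validation test for significance).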