Josh Tenenbaum
Massachusetts Institute of Technology
Building Machines That See, Think and Learn Like People
Recent advances in deep learning with neural networks have transformed both AI technologies and computational neuroscience, and sparked new excitement in connecting the science and engineering of intelligence. But while deep neural networks excel primarily at pattern recognition and function approximation, human intelligence goes much deeper. I will talk about prospects and approaches for reverse-engineering intelligence based on the idea of modeling the world: the computations, algorithms, data structures and neural mechanisms that enable our minds to explain and understand what we see, or imagine things we have never seen; to set goals, make plans and solve problems as we try to make them real; and to build new models as we learn more about the world, learning from both our successes and our failures.
I will focus on human capacities for common-sense scene understanding, perceiving the world in terms of physical objects, intentional agents, and their causal interactions, and on learning the generative models that support these inferences. I will introduce some of the technical concepts we use to model these cognitive capacities, based on probabilistic programs, simulation engines for intuitive physics and intuitive psychology, and Bayesian program induction, as well as some of the experimental methods we use to test these models behaviorally. I will also talk about how these models can connect to and build on the recent neural network revolution: the role of deep learning systems in making inference and learning more efficient, as part of an integrated neuro-symbolic-probabilistic modeling and inference toolkit, and first steps in relating the models we have built to brain systems in human and non-human primates.
Josh Tenenbaum is Professor of Computational Cognitive Science in the Department of Brain and Cognitive Sciences at MIT, and a member of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Center for Brains, Minds and Machines (CBMM). He received his PhD from MIT in 1999, and was an Assistant Professor at Stanford University from 1999 to 2002 before returning to MIT. His long-term goal is to reverse-engineer how intelligence arises in the human mind and brain, and to draw on these insights to engineer more human-like machine intelligence. In cognitive science, he is best known for developing accounts of cognition as probabilistic inference in structured generative models, and for applying this approach to the study of human concept learning, causal reasoning, language acquisition, visual perception, intuitive physics, and theory of mind. In AI, he and his group have developed widely used models for nonlinear dimensionality reduction, probabilistic programming, and Bayesian unsupervised learning and structure discovery. His current work focuses on commonsense visual scene understanding and its neural basis, the development of common sense in infants and young children, and models of learning as program induction. He and his students have received best paper or best student paper awards or honorable mentions at leading conferences in Cognitive Science, Computer Vision, Neural Information Processing Systems, Reinforcement Learning and Decision Making, Robotics, and the Society for Philosophy and Psychology. He is the recipient of the Distinguished Scientific Award for Early Career Contribution to Psychology from the American Psychological Association (2008), the Troland Research Award from the National Academy of Sciences (2012), the Howard Crosby Warren Medal from the Society of Experimental Psychologists (2015), the R&D Magazine Innovator of the Year award (2018), and a MacArthur Fellowship (2019).
He is a fellow of the Cognitive Science Society and the Society of Experimental Psychologists, and a member of the American Academy of Arts and Sciences.