NC unit secretary
2020-10-06 15:19:00

This is a discussion channel for “Active inference and artificial curiosity” by Prof. Karl Friston (University College London). The link to the talk is below. Please do not share the link with anyone outside of this Slack workspace. The access passcode can be found in the announcement channel.

URL: https://vimeo.com/471275612 (25 minutes)

✅ Kan Sakamoto, Hiroshi Yamakawa
👏 Tomasz Rutkowski, Taiki Miyagawa
Jafphd
2020-10-11 01:17:06

Thank you for your very interesting talk. I have a couple of questions: 1) I have seen free energy (F) sometimes defined as accuracy minus complexity (or even evidence minus bound), but at other times as energy minus entropy. Could you please let me know what the difference is between these definitions? 2) Minimization of free energy works well for awake perception, but seems to have some unclear issues during sleep states. We can think of a change of entropy when the brain transitions from the awake to the sleep state, and because of that there is a change of energy that is measurable as a function of how active or inactive the neurons are. It is unclear, though, how the sleeping brain can form a probability distribution over the perceived world and compare it with another probability distribution from its internal model during sleep. Thus my question is: can we understand the free energy principle in terms of how active the brain is during sleep, rather than in terms of a probability distribution over the "dreamed state of the world"?
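
For reference on question 1, my current understanding of the standard decompositions (using generic notation, with o the observations, s the hidden states, and q the approximate posterior; please correct me if this is not the intended reading) is:

```
F = \mathbb{E}_{q(s)}[\ln q(s) - \ln p(o,s)]
  = \mathbb{E}_{q(s)}[-\ln p(o,s)] - H[q(s)]                        % energy minus entropy
  = D_{KL}[q(s) \,\|\, p(s)] - \mathbb{E}_{q(s)}[\ln p(o \mid s)]   % complexity minus accuracy
  = -\ln p(o) + D_{KL}[q(s) \,\|\, p(s \mid o)]                     % negative evidence plus bound
```

so "accuracy minus complexity" and "evidence minus bound" would then describe the negative free energy (the ELBO), while "energy minus entropy" is F itself — is that sign convention the only difference?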

Pau
2020-10-11 04:53:30

Thank you for the talk; it is interesting to see such a wide range of interpretations. I have a couple of questions: • What predictions can we draw from it? I am particularly wondering how to avoid overgeneralizing; an alternative formulation might be: what behaviour/action could not follow from the free energy principle? • When citing the Landauer principle, is it meant as an analogy or as the actual physical principle? If taken in the literal sense, the orders of magnitude of the energy involved are minuscule, so how can/could we measure it? And even if it were possible, wouldn't the cost of running the electrical circuitry away from full equilibrium (Wolpert, 2019) be too large for the Landauer contribution to be drawn out?
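
For a sense of scale, my back-of-the-envelope estimate (assuming body temperature and a purely illustrative bit-erasure rate; the numbers are my own, not from the talk) is:

```python
import math

k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 310.0                    # approximate body temperature, K

# Landauer bound: minimum heat dissipated per bit of information erased
e_bit = k_B * T * math.log(2)
print(f"Landauer bound per bit: {e_bit:.2e} J")       # ~3e-21 J

# Even at a (hypothetical) erasure rate of 1e15 bits/s, the bound is only
# a few microwatts -- tiny compared with the brain's ~20 W metabolic budget.
print(f"Power at 1e15 bits/s: {e_bit * 1e15:.2e} W")
```

Hence my doubt that the Landauer contribution could ever be separated from the much larger cost of running the circuitry out of equilibrium.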

Hiroaki Gomi
2020-10-11 15:57:02

Thank you very much for a very stimulating talk about a unified principle of brain computation. This unified scheme is quite attractive to me for understanding the many aspects of brain processing. According to your theory, all human brain computation can, in some sense, be described within this framework.

So, is it possible or helpful to use this theory to create an AI robot that can obey the many rules defined by human society? Optimal control theory seems able to design such rules explicitly as cost functions, but how can we design those rules explicitly within the free-energy principle? Should we introduce artificial priors to restrict policy selection?
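
My (possibly naive) understanding is that such rules would enter as prior preferences over observations, p(o | C), which shape policy selection through the expected free energy, roughly:

```
G(\pi) = \underbrace{D_{KL}\big[ q(o \mid \pi) \,\|\, p(o \mid C) \big]}_{\text{risk: deviation from preferred outcomes}}
       + \underbrace{\mathbb{E}_{q(s \mid \pi)}\big[ H[p(o \mid s)] \big]}_{\text{ambiguity}},
\qquad q(\pi) \propto e^{-G(\pi)}
```

If that reading is correct, designing a societal rule would amount to designing the preference prior p(o | C) rather than a cost function, which is what motivates my question.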

As a second question, I would like to ask about the interpretation of prediction error in motor control. In some articles, you have suggested that the motor command should be interpreted as a prediction rather than a command, and that the ascending signal should be interpreted as a prediction error. This notion is quite puzzling to me, because the motor command must be coded with the dynamics of the limb and environment taken into account. The motor command must change across different dynamic environments even when the movement trajectory is identical. I do not deny predictive coding, but I think this kind of 'prediction' should take place at higher levels of brain processing, and the transformation from prediction to motor command should be carried out (at least partially) before the command is sent to the muscles, according to the dynamics of the limb and environment. It would be very helpful for understanding your theory if you could give a rational explanation.

Romuald Janik
2020-10-11 16:19:36

Thank you for the talk! I would like to ask a couple of (very elementary) questions on the Free Energy Principle (FEP), in order to better understand the general context:

  1. At what level of granularity would you consider the FEP to be a good description of brain computations?
  2. How complex would the relevant priors in the FEP be, as applied to the brain? What is their origin? Should they be completely fixed, or should they be considered learnable/modifiable in some way, e.g. on some longer timescale?
  3. A variational autoencoder (VAE) optimizes the ELBO and thus realizes the Free Energy Principle (for perception) with a simple independent Gaussian prior — a rough sketch of the correspondence I have in mind is given below. Would you see a place for similar models in neuroscience contexts?
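
For concreteness, that correspondence looks roughly like the following (an illustrative PyTorch-style sketch with a Gaussian encoder and a standard normal prior, not meant as a canonical implementation):

```python
import torch
import torch.nn.functional as F

def vae_free_energy(x, x_recon, mu, logvar):
    """Negative ELBO for a VAE with an independent N(0, I) prior on the latent z.

    reconstruction term ~ (negative) accuracy
    KL term             ~ complexity
    """
    recon = F.mse_loss(x_recon, x, reduction="sum")                 # -accuracy (up to constants)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())    # complexity: KL[q(z|x) || N(0, I)]
    return recon + kl                                               # free energy = complexity - accuracy
```
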
Naoto Yoshida
2020-10-11 22:47:47

Thank you very much for a very inspiring talk. I have a question, maybe related to yesterday's discussion. Do you think that the free energy principle is one possible realization of intelligence under a single unified objective? Or do you assume some limitations on the applicability of this theory?