Dear all,

We have a new paper out in which errors are propagated forwards (and not backwards):

"For an autonomous agent, the inputs are the sensory data that inform the agent of the state of the world, and the outputs are its actions, which act on the world and consequently produce new sensory inputs. The agent only knows of its own actions via their effect on future inputs; therefore desired states, and error signals, are most naturally defined in terms of the inputs. Most machine learning algorithms, however, operate in terms of desired outputs. For example, backpropagation takes target output values and propagates the corresponding error backwards through the network in order to change the weights. In closed-loop settings, however, it is far more obvious how to define desired sensory inputs than desired actions. To train a deep network using errors defined in the input space would call for an algorithm that can propagate those errors forwards through the network, from input layer to output layer, in much the same way that activations are propagated. In this article, we present a novel learning algorithm which performs such 'forward-propagation' of errors. We demonstrate its performance, first on a simple line follower and then in a first-person shooter game."

https://journals.sagepub.com/doi/10.1177/1059712319851070

The PDF of the accepted submission version can be found on my personal homepage:
https://www.berndporr.me.uk/Porr_Miller_FCL_2019_Adaptive_Behaviour.pdf
https://www.berndporr.me.uk/publications.php

The code is available here:
https://github.com/glasgowneuro/feedforward_closedloop_learning

A video clip of the learning behaviour of a simple FPS agent is here:
https://www.youtube.com/watch?v=QLVDBdlAQLY

Best,
/Bernd Porr & Paul Miller

--
www: http://www.tinnitustailor.tech
     http://www.attys.tech
     http://www.glasgowneuro.tech
     http://www.berndporr.me.uk
Mobile: +44 (0)7840 340069
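
P.S. For readers who would like a quick feel for the idea before opening the paper: below is a minimal, illustrative NumPy sketch, not the FCL implementation from the repository above. It assumes a two-layer tanh network and a Hebbian-style local update (both assumptions made here purely for illustration) to show how an error defined on the inputs can be sent forwards through the same weights as the activations; the actual learning rule is defined in the paper and in the repository.

import numpy as np

class ForwardErrorNet:
    """Toy two-layer network that propagates an input-space error
    forwards through the same weights as the activations. This is
    only a sketch of the idea described in the abstract, not the
    algorithm from the paper or the repository."""

    def __init__(self, n_in=3, n_hidden=5, n_out=2, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.lr = lr

    def step(self, x, err_in):
        # Forward pass of the activations.
        h = np.tanh(self.W1 @ x)
        y = np.tanh(self.W2 @ h)

        # Forward pass of the error, layer by layer, just like the activations.
        e_hidden = self.W1 @ err_in   # error arriving at the hidden layer
        e_out = self.W2 @ e_hidden    # error arriving at the output layer

        # Local, Hebbian-style weight change (an assumption of this sketch):
        # presynaptic activity times the forward-propagated error.
        self.W1 += self.lr * np.outer(e_hidden, x)
        self.W2 += self.lr * np.outer(e_out, h)

        return y  # actions acting on the world, which closes the loop

# In a closed loop the desired state is defined on the inputs, so the
# error is simply the difference between desired and actual input.
net = ForwardErrorNet()
x = np.array([0.2, -0.1, 0.4])        # current sensory input
err = np.zeros(3) - x                 # e.g. "drive all inputs to zero"
action = net.step(x, err)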