Stochastic Surprisal in Neural Networks

Personnel: Mohit Prabhushankar

Goal: To enable actions during inference in trained neural networks.

Challenges: The free-energy principle conjectures that surprise in any scenario can only be minimized through actions taken by the perceiving agent. For instance, when presented with an unfamiliar 3D object, we move ourselves to find a viewing angle that allows us to recognize it. We conjecture that machine learning algorithms trained on pristine images suffer from the same phenomenon when presented with noisy images. However, what does taking an action mean in a trained neural network, and how would it take one?

Our Work: This work conjectures and validates a framework that allows for action during inference in supervised neural networks. Supervised neural networks are constructed with the objective of maximizing their performance metric on a given task, which is accomplished by reducing free energy during training. However, the bottom-up inference of supervised networks is a passive process that renders them vulnerable to noise. In this work, we provide a thorough background on supervised neural networks, both generative and discriminative, and discuss their functionality from the perspective of the free-energy principle. We then provide a framework for introducing action during inference. We introduce a new measurement called stochastic surprisal, which is a parameter gradient and is hence a function of the network, the input, and an action; a sketch of this computation is given below. Stochastic surprisal is validated on two applications: Image Quality Assessment and Recognition under noisy conditions. We show that, while noise characteristics are ignored to achieve robust recognition, they are analyzed to estimate image quality scores. Applied across these two applications, three datasets, and as a plug-in on twelve networks, stochastic surprisal provides a statistically significant increase across all measures. We also discuss the implications of stochastic surprisal in other areas of cognitive psychology, including expectancy-mismatch and abductive reasoning.
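Since stochastic surprisal is described above as a parameter gradient that depends on the network, the input, and an action, its computation can be illustrated concretely. Below is a minimal PyTorch sketch under the assumption that a candidate class label plays the role of the action and that the parameter gradient is summarized by its L2 norm; the function name, the cross-entropy loss, and the norm aggregation are illustrative choices, not necessarily the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def stochastic_surprisal(model, x, action_label):
        """Parameter-gradient response of a trained network to a hypothesized action.

        model:        a trained classification network (torch.nn.Module)
        x:            a batched input tensor of shape [1, ...], possibly noisy
        action_label: an integer class index treated as the candidate action
        """
        model.zero_grad()                    # clear any stale gradients
        logits = model(x)                    # passive bottom-up inference on the input
        # Hypothesize the action: ask the network to account for this label
        target = torch.tensor([action_label], device=logits.device)
        loss = F.cross_entropy(logits, target)
        loss.backward()                      # populates p.grad for every parameter
        # Summarize the gradient, a function of network, input, and action, as one scalar
        grad_sq = sum((p.grad ** 2).sum() for p in model.parameters() if p.grad is not None)
        return grad_sq.sqrt().item()

At inference, each candidate action can be scored this way, so that, for example, recognition could act on the least surprising label while quality estimation could aggregate scores over all actions; the exact decision rules used in the paper's two applications may differ from this sketch.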

References:

  1. M. Prabhushankar and G. AlRegib, "Stochastic Surprisal: An Inferential Measurement of Free Energy in Neural Networks," Frontiers in Neuroscience – Perception Science, submitted Apr. 22, 2022.