Introspective Learning in Neural Networks

 Personnel: Mohit Prabhushankar

Goal:
To differentiate the inference process in neural networks into two stages: a fast sensing stage and a slower but more reliable reflection stage.

Challenges: Recognition in humans is a feed-forward process that occurs in less than 80 ms. However, when the visual scene or its interpretation is complicated, humans are unsure of their inference and reflect on their decisions. In machine vision, such complicated decisions often arise due to domain-shifted test data. Existing techniques overcome domain differences by augmenting additional data either at the input or at the representation stage.

Our work: To overcome domain differences, we advocate for two stages in a neural network's decision-making process. The first is the existing feed-forward inference framework, where patterns in the given data are sensed and associated with previously learned patterns. The second is a slower reflection stage, where the network is asked to reflect on its feed-forward decision by considering and evaluating all available choices. Together, we term the two stages introspective learning. We use the gradients of trained neural networks as a measurement of this reflection. A simple three-layer Multi-Layer Perceptron (MLP) serves as the second stage and predicts based on all extracted gradient features. We perceptually visualize the post-hoc explanations from both stages to provide a visual grounding for introspection. For the application of recognition, we show that an introspective network is 4% more robust and 42% less prone to calibration errors when generalizing to noisy data. We also illustrate the value of introspective networks in downstream tasks that require generalizability and calibration, including active learning, out-of-distribution detection, and uncertainty estimation. Finally, we ground the proposed machine introspection in human introspection for the application of image quality assessment.
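
The sketch below illustrates one way the two stages could be wired together in PyTorch. It is a minimal illustration rather than the paper's implementation: the gradients here are taken with respect to the final linear layer of a pretrained classifier, using the cross-entropy between the feed-forward logits and each candidate class, and the names (gradient_features, IntrospectionMLP, the ResNet-18 backbone, and single-image processing) are assumptions made for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class IntrospectionMLP(nn.Module):
    """Second stage: a three-layer MLP that predicts from gradient features."""

    def __init__(self, feat_dim: int, num_classes: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def gradient_features(backbone: nn.Module, x: torch.Tensor, num_classes: int):
    """Reflection: for each candidate class, measure the gradient of the loss
    between the feed-forward prediction and that class, taken with respect to
    the classifier's final linear layer (assumed to be `backbone.fc`)."""
    logits = backbone(x)  # Stage 1: fast feed-forward sensing
    feats = []
    for c in range(num_classes):
        target = torch.full((x.size(0),), c, dtype=torch.long, device=x.device)
        loss = F.cross_entropy(logits, target)
        grad = torch.autograd.grad(loss, backbone.fc.weight, retain_graph=True)[0]
        feats.append(grad.flatten())
    return torch.cat(feats).unsqueeze(0), logits


# Usage sketch on a single random image with a ResNet-18 backbone.
backbone = torchvision.models.resnet18(num_classes=10).eval()
x = torch.randn(1, 3, 32, 32)
feats, feedforward_logits = gradient_features(backbone, x, num_classes=10)
introspective_head = IntrospectionMLP(feat_dim=feats.shape[1], num_classes=10)
introspective_logits = introspective_head(feats)  # Stage 2: reflective decision
```

Looping over every candidate class mirrors the idea of reflecting on all available choices; the exact loss, layer, and feature construction used in the published method are given in the reference below.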

References:

  1. M. Prabhushankar and G. AlRegib, "Introspective Learning: A Two-Stage Approach for Inference in Neural Networks," Advances in Neural Information Processing Systems (NeurIPS), Nov. 29 - Dec. 1, 2022.