Tackling Degradation of Machine Confidence by Simulating Human Label Uncertainty

Personnel: Chen Zhou, Mohit Prabhushankar, Ghassan AlRegib

Goal: To alleviate, through label dilution, the deterioration of uncertainty quantification in neural networks trained with noisy labels.
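Uncertainty quantification is typically scored with calibration metrics such as expected calibration error (ECE), which measures the gap between a model's confidence and its accuracy. Below is a minimal sketch of binned ECE in NumPy; the binning scheme and the synthetic example are common illustrative choices, not necessarily the evaluation protocol of this project.

```python
import numpy as np

def expected_calibration_error(confidences, correct, num_bins=15):
    """Binned ECE: occupancy-weighted average of |accuracy - confidence|
    over equal-width confidence bins."""
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Synthetic example: an overconfident model whose accuracy lags its
# confidence by 0.1 yields an ECE of roughly that size.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=10_000)
correct = (rng.random(10_000) < conf - 0.1).astype(float)
print(round(expected_calibration_error(conf, correct), 3))
```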

Challenges: Increasing label noise rates to balance them across classes can improve a model's generalization. Nevertheless, higher label noise rates impair the model's capability for uncertainty quantification. Instead of synthesizing label noise, utilizing uncertain human labels alleviates the degradation of machine confidence. However, collecting human annotations is challenging: gathering decisions from multiple independent annotators is usually inconvenient and expensive. This motivates objective approaches to simulating human label uncertainty.
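For concreteness, the synthetic label noise mentioned above is commonly injected by flipping each training label, with probability equal to the noise rate, to a different class chosen uniformly at random. A minimal NumPy sketch of this symmetric noise model follows; the helper name and the setup are illustrative assumptions, not taken from this project.

```python
import numpy as np

def inject_symmetric_label_noise(labels, noise_rate, num_classes, seed=0):
    """Flip each label to a uniformly chosen different class with
    probability `noise_rate` (symmetric/uniform label noise)."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(labels.shape[0]) < noise_rate
    # Draw an offset in [1, num_classes - 1] so the new class always differs.
    offsets = rng.integers(1, num_classes, size=flip.sum())
    noisy[flip] = (labels[flip] + offsets) % num_classes
    return noisy

# Example: corrupt 20% of CIFAR-10-style labels.
clean = np.random.default_rng(1).integers(0, 10, size=50_000)
noisy = inject_symmetric_label_noise(clean, noise_rate=0.2, num_classes=10)
print((noisy != clean).mean())  # close to 0.2
```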

Our Work: In this work, we show that simulating human uncertain labels during training alleviates the degradation of neural network confidence caused by random label noise. We first demonstrate that the model's uncertainty quantification performance degrades in the presence of training label noise. We then empirically find that this confidence degradation can be alleviated by human uncertain labels. To avoid the challenges of subjective human label collection, we simulate human label uncertainty in an objective manner. Specifically, our framework first measures the uncertainty of samples via natural scene statistics (NSS), which relate to human perception, and then associates the uncertain samples with multiple labels during training, as sketched below. We demonstrate that 1) the influence of NSS-oriented uncertain labels cannot be replicated by diluting labels with random noise, and 2) NSS-oriented label uncertainty, without model pre-training, achieves uncertainty quantification performance comparable to machine scene statistics (MSS)-oriented label uncertainty.
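A minimal PyTorch sketch of the training side of such a framework is given below, assuming an NSS-based scalar uncertainty score per sample is already available. The threshold and the amount of probability mass redistributed are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def soften_uncertain_labels(hard_labels, uncertainty, num_classes,
                            threshold=0.5, mass=0.3):
    """For samples whose NSS-based uncertainty exceeds `threshold`,
    move `mass` probability from the annotated class to a uniform
    spread over all classes, emulating disagreement among annotators."""
    one_hot = F.one_hot(hard_labels, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    soft = (1.0 - mass) * one_hot + mass * uniform
    is_uncertain = (uncertainty > threshold).float().unsqueeze(1)
    return is_uncertain * soft + (1.0 - is_uncertain) * one_hot

def soft_cross_entropy(logits, target_dist):
    """Cross-entropy against soft target distributions."""
    return -(target_dist * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Example: two samples, the second deemed uncertain by its NSS score.
labels = torch.tensor([3, 7])
nss_uncertainty = torch.tensor([0.1, 0.9])
targets = soften_uncertain_labels(labels, nss_uncertainty, num_classes=10)
logits = torch.randn(2, 10)
loss = soft_cross_entropy(logits, targets)
```

In this scheme, each uncertain sample effectively carries multiple labels weighted by the redistributed mass, while samples deemed certain keep their one-hot labels.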

References: 

  1. M. Prabhushankar and G. AlRegib, "Introspective Learning: A Two-Stage Approach for Inference in Neural Networks," in Advances in Neural Information Processing Systems (NeurIPS), New Orleans, LA, Nov. 29 - Dec. 1, 2022.

  2. R. Benkert, M. Prabhushankar, and G. AlRegib, "Reliable Uncertainty Estimation for Seismic Interpretation with Prediction Switches," in International Meeting for Applied Geoscience & Energy (IMAGE), Houston, TX, Aug. 28 - Sept. 1, 2022.

  3. J. Lee and G. AlRegib, "Gradients as a Measure of Uncertainty in Neural Networks," in IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, Oct. 2020.

  4. D. Temel, M. Prabhushankar, and G. AlRegib, "UNIQUE: Unsupervised Image Quality Estimation," in IEEE Signal Processing Letters, vol. 23, no. 10, pp. 1414-1418, Oct. 2016.

  5. M. Prabhushankar, D. Temel, and G. AlRegib, "MS-UNIQUE: Multi-Model and Sharpness-Weighted Unsupervised Image Quality Estimation," in Image Quality and System Performance XIV, part of IS&T Electronic Imaging, San Francisco, CA, Jan. 29, 2017.

  5. M. Prabhushankar, D. Temel, and G. AlRegib, "MS-UNIQUE: Multi-Model and Sharpness-Weighted Unsupervised Image Quality Estimation," in Image Quality and System Performance XIV, part of IS&T Electronic Imaging, San Francisco, CA, Jan. 29 2017. [PDF][Code]