Gradient-based representations for handling anomalous inputs
Personnel: Jinsol Lee
Goal: Detect anomalous inputs, such as out-of-distribution or corrupted samples, and handle inputs belonging to unknown classes
Challenges: Neural networks rely heavily on the implicit closed-world assumption that any input presented at inference time belongs to one of the classes in the training data. Limited to the knowns defined by the training set, they classify every input image as one of the known classes, even when the input differs significantly from the training data.
Our Work: We utilize gradient-based representations for out-of-distribution detection and open-set recognition. Out-of-distribution detection concerns identifying samples drawn from a distribution far from the in-distribution/training samples. Open-set recognition concerns handling samples of unknown classes differently from those of known classes. Rather than relying solely on the learned features of a trained model, we utilize gradients to capture the amount of adjustment to the model's parameters that would be required to represent a given input accurately; anomalous inputs tend to require larger adjustments than familiar ones, as illustrated in the sketch below.
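The following is a minimal sketch of one way to compute such a gradient-based score, assuming a PyTorch classifier. The confounding all-ones target, the binary cross-entropy loss, the aggregation of squared gradient L2 norms across layers, and the toy CNN in the usage example are illustrative assumptions, not a faithful reproduction of the methods in the referenced papers.

import torch
import torch.nn as nn


def gradient_score(model: nn.Module, x: torch.Tensor) -> float:
    """Scalar score reflecting how much the model's parameters would need
    to change to account for input x (larger suggests more anomalous)."""
    model.eval()
    model.zero_grad()

    logits = model(x)  # shape: (1, num_classes)

    # Backpropagate against a confounding all-ones target so that the
    # gradient magnitude reflects how unfamiliar the input is to the model.
    confounding = torch.ones_like(logits)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, confounding)
    loss.backward()

    # Aggregate squared L2 norms of the parameter gradients across layers.
    score = 0.0
    for p in model.parameters():
        if p.grad is not None:
            score += p.grad.pow(2).sum().item()
    return score


if __name__ == "__main__":
    # Toy usage: a small CNN scored on a random 32x32 RGB input.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
    )
    x = torch.randn(1, 3, 32, 32)
    print(f"gradient score: {gradient_score(model, x):.4f}")

In practice, a detection threshold on such a score (or a lightweight detector over per-layer gradient norms) can be calibrated on in-distribution validation data, so that inputs exceeding the threshold are flagged as out-of-distribution or unknown.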
References:
J. Lee and G. AlRegib, "Open-Set Recognition with Gradient-Based Representations," in IEEE International Conference on Image Processing (ICIP), Anchorage, AK, Sep. 19-22 2021.
J. Lee and G. AlRegib, "Gradients as a Measure of Uncertainty in Neural Networks," in IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, Oct. 2020.
J. Lee, M. Prabhushankar, and G. AlRegib, "Gradient-Based Adversarial and Out-of-Distribution Detection," in International Conference on Machine Learning (ICML) Workshop on New Frontiers in Adversarial Machine Learning, Baltimore, MD, Jul. 2022.