Despite the remarkable performance of neural networks across a variety of fields, the viability of these models at deployment remains a concern. Knowing when a model’s prediction can be trusted is critical. Neural networks behave unpredictably under unseen, challenging conditions and are susceptible to distribution shift at inference. These challenges degrade a model’s generalizability, which manifests across multiple applications including adversarial and noise robustness, domain generalization, calibration, active learning, and out-of-distribution detection. At OLIVES, we develop novel techniques to enhance generalizability. Beyond generalizability, interpretability of neural networks is essential for garnering human trust. In our group, we generate contextual and relevant explanations for a model’s decisions by asking counterfactual and contrastive questions.
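As a concrete illustration of the contrastive question “Why class P, rather than class Q?”, the sketch below computes a Grad-CAM-style heatmap by backpropagating the margin between two class logits instead of a single class score. This is a minimal sketch under illustrative assumptions, not the exact formulation from our papers: the model (torchvision ResNet-18), target layer, and class indices are placeholders.

```python
# Minimal, hypothetical sketch of a contrastive Grad-CAM-style heatmap:
# "Why class P, rather than class Q?" is probed by backpropagating the
# logit margin (P - Q) and weighting the activations of a chosen conv layer.
import torch
import torch.nn.functional as F
from torchvision import models


def contrastive_cam(model, layer, image, class_p, class_q):
    """Return an [H, W] heatmap highlighting evidence for P over the contrast Q."""
    feats = {}

    def fwd_hook(_, __, output):
        feats["a"] = output                      # activations at the chosen layer

    handle = layer.register_forward_hook(fwd_hook)
    logits = model(image)                        # [1, num_classes]
    handle.remove()

    # Contrastive objective: evidence for class P over the contrast class Q.
    score = logits[0, class_p] - logits[0, class_q]
    grads = torch.autograd.grad(score, feats["a"])[0]     # same shape as activations

    # Grad-CAM-style weighting: global-average-pool the gradients per channel,
    # weight the activations, keep positive evidence, and upsample to input size.
    weights = grads.mean(dim=(2, 3), keepdim=True)         # [1, C, 1, 1]
    cam = F.relu((weights * feats["a"]).sum(dim=1))        # [1, h, w]
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                        mode="bilinear", align_corners=False).squeeze().detach()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)


if __name__ == "__main__":
    model = models.resnet18(weights=None).eval()           # placeholder model
    image = torch.randn(1, 3, 224, 224)                    # stand-in for a real input
    heatmap = contrastive_cam(model, model.layer4, image, class_p=207, class_q=208)
    print(heatmap.shape)                                    # torch.Size([224, 224])
```

Backpropagating the difference of logits, rather than a single class score, is what shifts the explanation from “Why P?” to “Why P, rather than Q?”; a counterfactual question can be probed analogously by changing the quantity that is backpropagated.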

1. Explanatory Paradigms in Neural Networks - SPM work