While neural networks generalize well in complex applications, they are notorious for failure modes that are difficult to interpret. Specifically, estimating uncertainty in predictions, producing accurate confidence values (calibration), and, ultimately, quantifying deployment risks remain open research problems in academia and industry. At OLIVES, we view the problem from two angles. First, we expand current research efforts with novel paradigms to effectively estimate uncertainty, calibrate confidence scores, and assess risk. Second, we adapt our approaches to difficult settings (e.g., seismic interpretation) that face application-specific challenges.
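As a concrete illustration of what calibration means, the sketch below computes the Expected Calibration Error (ECE), a standard metric that compares a model's reported confidence to its empirical accuracy within confidence bins. This is a generic example rather than code from OLIVES; the function name and synthetic data are hypothetical.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """Bin predictions by confidence, then average the gap between
    each bin's mean confidence and its empirical accuracy."""
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            accuracy = (predictions[mask] == labels[mask]).mean()
            confidence = confidences[mask].mean()
            ece += mask.mean() * abs(accuracy - confidence)  # weight by bin size
    return ece

# Synthetic example: a model that reports high confidence (0.7-1.0)
# but is only ~80% accurate, i.e., it is overconfident.
rng = np.random.default_rng(0)
confidences = rng.uniform(0.7, 1.0, size=1000)
labels = rng.integers(0, 2, size=1000)
predictions = np.where(rng.uniform(size=1000) < 0.8, labels, 1 - labels)
print(f"ECE: {expected_calibration_error(confidences, predictions, labels):.3f}")
```

A well-calibrated model yields an ECE near zero: among predictions made with, say, 90% confidence, roughly 90% should be correct.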

1. Tackling Degradation of Machine Confidence by Simulating Human Label Uncertainty