Dartmouth Events

Uncertainty and information for ML-driven decision making

The speaker proposes algorithms to learn probabilities that are indistinguishable from the true probabilities and shows that they provably enable accurate risk assessment and better decision outcomes.

2/28/2022
11:30 am – 12:30 pm
Zoom - contact Susan Cable
Intended Audience(s): Public
Categories: Lectures & Seminars

 

Abstract: Prediction models should know what they do not know if they are to be trusted with important decisions. A prediction model would accurately capture its uncertainty if it could predict the true probability of the outcome of interest, such as the true probability of a patient's illness given their symptoms. While outputting these probabilities exactly is impossible in most cases, I show that it is surprisingly possible to learn probabilities that are “indistinguishable” from the true probabilities for large classes of decision-making tasks. I propose algorithms to learn these indistinguishable probabilities and show that they provably enable accurate risk assessment and better decision outcomes. In addition to learning probabilities that capture uncertainty, the talk will discuss how to acquire information that reduces uncertainty in ways that optimally improve decision making. Empirically, these methods lead to prediction models that enable better and more confident decisions in applications such as medical diagnosis and policy making.
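As a rough, hypothetical illustration of what "indistinguishable for a class of decision-making tasks" can mean in practice (this is not the algorithm presented in the talk), the Python sketch below compares, for a family of threshold decision rules, the loss a model expects under its own predicted probabilities with the loss actually incurred on realized outcomes; when the two agree across the whole family, a decision maker using those rules cannot tell the predictions apart from the true probabilities. The function names, cost parameters, and synthetic data are illustrative assumptions.

import numpy as np

def decision_losses(probs, outcomes, threshold, c_fp=1.0, c_fn=1.0):
    """Expected loss of the act-if-risk-above-threshold rule, computed two ways:
    once from the model's predicted probabilities, once from realized outcomes."""
    act = probs >= threshold  # decision rule: act when predicted risk is high
    # cost c_fp for acting when the outcome does not occur, c_fn for not acting when it does
    loss_from_probs = np.mean(np.where(act, c_fp * (1 - probs), c_fn * probs))
    loss_from_outcomes = np.mean(np.where(act, c_fp * (1 - outcomes), c_fn * outcomes))
    return loss_from_probs, loss_from_outcomes

def indistinguishability_gap(probs, outcomes, thresholds=np.linspace(0.05, 0.95, 19)):
    """Largest discrepancy, over a family of threshold decision rules, between the
    loss the model predicts for itself and the loss actually incurred.
    A small gap means the probabilities look indistinguishable for these tasks."""
    gaps = [abs(a - b) for a, b in (decision_losses(probs, outcomes, t) for t in thresholds)]
    return max(gaps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_p = rng.uniform(size=5000)        # synthetic "true" risks
    outcomes = rng.binomial(1, true_p)     # realized binary outcomes
    print(indistinguishability_gap(true_p, outcomes))                        # small gap: faithful risks
    print(indistinguishability_gap(np.clip(true_p + 0.3, 0, 1), outcomes))   # large gap: biased risks

In this toy check, the faithful probabilities yield nearly identical predicted and realized losses at every threshold, while the biased probabilities do not, which is one way a downstream decision maker could "distinguish" them from the truth.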

Bio: Shengjia is a PhD candidate in the Department of Computer Science at Stanford University. His research interests include probabilistic deep learning, uncertainty quantification, experimental design, and ML for science.

 

For more information, contact:
Susan Cable

Events are free and open to the public unless otherwise noted.