Dartmouth Events

Local vs Global Structures in Machine Learning Generalization

In this talk, I will present several recent results on “generalization metrics” for measuring ML models.

Wednesday, March 2, 2022
11:30am – 12:30pm
Zoom - contact Susan Cable
Intended Audience(s): Public
Categories: Lectures & Seminars

Abstract. Machine learning (ML) models are increasingly deployed in safety-critical applications, making their generalization and reliability a problem of urgent societal importance. To date, our understanding of ML remains limited because (i) the narrow problem settings considered in studies and the often cherry-picked results lead to incomplete or conflicting conclusions about the failures of ML; and (ii) a focus on low-dimensional intuitions yields a limited understanding of the global structure of ML problems.

In this talk, I will present several recent results on “generalization metrics” for measuring ML models. I will show that (i) generalization metrics such as the connectivity between local minima can quantify global structures of optimization loss landscapes, leading to more accurate predictions of test performance than existing metrics; and (ii) carefully measuring and characterizing the different phases of loss-landscape structure in ML can provide a more complete picture of generalization. In particular, I will show that different phases of learning require different ways of addressing failures in generalization.

Furthermore, most conventional generalization metrics focus on the so-called generalization gap, which is indirect and of limited practical value. I will discuss novel metrics, referred to as “shape metrics,” that allow us to predict test accuracy directly rather than the generalization gap, and I will show that shape metrics can also be used to improve the compression and out-of-distribution robustness of ML models. I will discuss theoretical results and present large-scale empirical analyses spanning different quantities and qualities of data, different model architectures, and different optimization hyperparameter settings to provide a comprehensive picture of ML generalization. I will conclude with practical applications of these generalization metrics to improve the training, efficiency, and robustness of ML models.


Bio. Yaoqing Yang is a postdoctoral researcher in the RISE Lab at UC Berkeley. He received his PhD from Carnegie Mellon University and his B.S. from Tsinghua University, China. His research focuses on machine learning, and his main contributions concern improving reliability and generalization in the face of uncertainty, both in the data and in the compute platform. His PhD thesis laid the foundation for an exciting field of research -- coded computing -- in which information-theoretic techniques are developed to address unreliability in computing platforms. His work was a best paper finalist at ICDCS, and he has published multiple papers at NeurIPS, in IEEE Transactions on Information Theory, and at ISIT. He has worked as a research intern at Microsoft, MERL, and Bell Labs, and two of his joint CVPR papers with MERL have each received more than 300 citations. He is also the recipient of the 2015 John and Claire Bertucci Fellowship.

For more information, contact:
Susan Cable

Events are free and open to the public unless otherwise noted.