Xue Chen will describe a line of work on designing robust algorithms with provable guarantees for learning signals that have sparse representations in the Fourier domain.
Abstract: A fundamental goal in machine learning is to find succinct explanations for large volumes of data. A popular paradigm is to posit a probabilistic model and infer the set of parameters that best fits the given data. However, this approach is often brittle to noise and not robust to errors and corruptions of various kinds. While there is a large body of work proposing practical methods for making algorithms robust, we have very little theoretical understanding of when and how one can design robust algorithms for learning. In this talk, I will describe a line of work on designing robust algorithms with provable guarantees for learning signals that have sparse representations in the Fourier domain, and present several connections to other well-studied problems in learning theory.
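For readers unfamiliar with the setting, the following is a minimal illustrative sketch of the problem the abstract refers to, not the algorithms presented in the talk: a signal whose Fourier spectrum has only a few nonzero coefficients can often have its dominant frequencies recovered from noisy samples by thresholding its discrete Fourier transform. All parameter values below are hypothetical.

```python
# Minimal sketch (not the talk's algorithm): recover the support of a
# Fourier-sparse signal from noisy samples by keeping the largest-magnitude
# coefficients of its discrete Fourier transform.
import numpy as np

rng = np.random.default_rng(0)
n, k = 256, 3                                   # signal length and sparsity (hypothetical)

# Build a signal whose Fourier spectrum has only k nonzero coefficients.
freqs = rng.choice(n, size=k, replace=False)
spectrum = np.zeros(n, dtype=complex)
spectrum[freqs] = np.exp(2j * np.pi * rng.random(k))   # unit-magnitude coefficients
signal = np.fft.ifft(spectrum)

# Corrupt the time-domain samples with additive complex Gaussian noise.
noisy = signal + 0.005 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Naive recovery: keep the k largest-magnitude Fourier coefficients.
estimate = np.fft.fft(noisy)
top_k = np.argsort(np.abs(estimate))[-k:]

print(sorted(freqs.tolist()), sorted(top_k.tolist()))  # dominant frequencies should agree
```

This naive thresholding works when the noise is mild and spread across frequencies; the talk concerns stronger, provable guarantees under more adversarial errors and corruptions.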
Bio: Xue Chen is broadly interested in randomized algorithms and the use of randomness in computation. Specific areas include big-data algorithms for the Fourier transform and sparse recovery, the foundations of machine learning, and derandomization and pseudorandomness. He obtained his Ph.D. at the University of Texas at Austin under the supervision of David Zuckerman. He is currently a postdoctoral fellow at Northwestern University.
Events are free and open to the public unless otherwise noted.