Self-Supervised Learning, Yann LeCun, Facebook AI Research

Dartmouth Events

After a brief presentation of the state of the art in deep learning, some promising principles & methods for self-supervised learning will be discussed.

Wednesday, October 24, 2018
3:30pm-4:30pm
Carpenter 013
Intended Audience(s): Public
Categories: Lectures & Seminars

Deep learning has enabled significant progress in computer perception, natural language understanding, and control. But almost all of these successes rely largely on supervised learning, where the machine is required to predict human-provided annotations, or on model-free reinforcement learning, where the machine learns actions that maximize rewards. Supervised learning requires a large number of labeled samples, making it practical only for certain tasks. Reinforcement learning requires a very large number of interactions with the environment (and many failures) to learn even simple tasks. In contrast, animals and humans seem to learn vast amounts of task-independent knowledge about how the world works through mere observation and occasional interactions. Learning new tasks or skills requires very few samples or interactions with the world: we learn to drive or to fly a plane in about 30 hours of practice with no fatal failures. What learning paradigm do humans and animals use to learn so efficiently? I will propose the hypothesis that self-supervised learning of predictive world models is an essential missing ingredient of current approaches to AI. With such models, one can predict outcomes and plan courses of action. One could argue that prediction is the essence of intelligence. Good predictive models may be the basis of intuition, reasoning, and "common sense", allowing us to fill in missing information: predicting the future from the past and present, or inferring the state of the world from noisy percepts.
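The central idea of the abstract, filling in missing information by predicting one part of the data from another, can be illustrated with a toy sketch. The example below is an assumption of this listing's editor, not material from the talk: a linear model is trained to predict the next sample of a signal from a window of its recent past, so the training targets come from the signal itself rather than from human annotation.

```python
import numpy as np

# Illustrative self-supervised setup: the "labels" are simply future
# values of the observed signal, so no human annotation is needed.
rng = np.random.default_rng(0)

# Observed data: a noisy sine wave.
t = np.arange(500)
signal = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)

# Build (past window -> next value) training pairs from the signal alone.
window = 10
X = np.stack([signal[i:i + window] for i in range(len(signal) - window)])
y = signal[window:]

# Fit a linear predictor of the next sample from the past window.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The fitted model now "predicts the future from the past"; the residual
# error is roughly the variance of the injected noise.
pred = X @ w
mse = np.mean((pred - y) ** 2)
```

A sinusoid is exactly linearly predictable from its past samples, so the only irreducible error here is the observation noise; richer models (and richer prediction targets) generalize this same recipe.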

Bio

Yann LeCun is Director of AI Research at Facebook and Silver Professor of Data Science, Computer Science, Neural Science, and Electrical Engineering at New York University, affiliated with the NYU Center for Data Science, the Courant Institute of Mathematical Sciences, the Center for Neural Science, and the Electrical and Computer Engineering Department.

His current interests include AI, machine learning, computer perception, mobile robotics, and computational neuroscience. He has published over 180 technical papers and book chapters on these topics, as well as on neural networks, handwriting recognition, image processing and compression, and dedicated circuits and architectures for computer perception. The character recognition technology he developed at Bell Labs is used by several banks around the world to read checks, and was reading between 10 and 20% of all the checks in the US in the early 2000s. His image compression technology, called DjVu, is used by hundreds of web sites and publishers and millions of users to access scanned documents on the Web. Since the late 1980s he has been working on deep learning methods, particularly the convolutional network model, which is the basis of many products and services deployed by companies such as Facebook, Google, Microsoft, Baidu, IBM, NEC, AT&T and others for image and video understanding, document recognition, human-computer interaction, and speech recognition.

LeCun has been on the editorial board of IJCV, IEEE PAMI, and IEEE Trans. Neural Networks, was program chair of CVPR'06, and was chair of ICLR 2013 and 2014. He is on the science advisory board of the Institute for Pure and Applied Mathematics, and has advised many large and small companies about machine learning technology, including several startups he co-founded. He is the lead faculty at NYU for the Moore-Sloan Data Science Environment, a $36M initiative in collaboration with UC Berkeley and the University of Washington to develop data-driven methods in the sciences. He is the recipient of the 2014 IEEE Neural Network Pioneer Award.

For more information, contact:
Sandra Hall

Events are free and open to the public unless otherwise noted.