Dartmouth Events

Leveraging Human Input to Enable Robust AI Systems

In this talk I will discuss recent progress towards using human input to enable safe and robust autonomous systems.

2/18/2022
11:30 am – 12:30 pm
Zoom - contact Susan Cable
Intended Audience(s): Public
Categories: Lectures & Seminars

Abstract:
In this talk I will discuss recent progress towards using human input to enable safe and robust autonomous systems. Much work on robust machine learning and control seeks to be resilient to, or to completely remove the need for, human input. By contrast, my research seeks to directly and efficiently incorporate human input into the study of robust AI systems. One problem that arises when robots and other AI systems learn from human input is that there is often a large amount of uncertainty over the human’s true intent and the corresponding desired robot behavior. To address this problem, I will discuss prior and ongoing research on three main topics: (1) how to enable AI systems to efficiently and accurately maintain uncertainty over human intent, (2) how to generate risk-averse behaviors that are robust to this uncertainty, and (3) how robots and other AI systems can efficiently query for additional human input to actively reduce uncertainty and improve their performance. My talk will conclude with a discussion of my long-term vision for safe and robust AI systems, including learning from multi-modal human input, interpretable and verifiable robustness, and developing techniques for human-in-the-loop robust machine learning that generalize beyond reward function uncertainty.

Bio:
Daniel Brown is a postdoctoral scholar at UC Berkeley, advised by Anca Dragan and Ken Goldberg. His research focuses on safe and robust AI systems, with an emphasis on robot learning, human-robot interaction, and value alignment. He evaluates his research across a range of applications, including autonomous driving, service robotics, and dexterous manipulation. Daniel received his Ph.D. in computer science from the University of Texas at Austin in 2020. Prior to starting his Ph.D., Daniel was a research scientist at the Air Force Research Lab's Information Directorate, where he studied bio-inspired swarms and multi-agent systems. Daniel’s research has been nominated for two best-paper awards, and he was selected as a 2021 Robotics: Science and Systems Pioneer.

For more information, contact:
Susan Cable

Events are free and open to the public unless otherwise noted.