In this talk, Adam will describe a new parallel algorithm called the Fast Adaptive Sequencing Technique (FAST) for maximizing a monotone submodular function under a cardinality constraint k.
Abstract
Across machine learning, social network analysis, and algorithms, many of the fundamental objectives we care to optimize are submodular: influence, innovation diffusion, clustering, mutual information, feature selection, and data summarization. In these domains, typical applications involve problem instances defined on massive data sets. However, maximizing a submodular function subject to a cardinality constraint is NP-hard, and current state-of-the-art serial algorithms obtain an optimal approximation only after a number of queries that grows linearly in the size of the data.
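For concreteness, the serial baseline the abstract refers to is well captured by the classic greedy algorithm, which attains the optimal 1 - 1/e approximation but issues on the order of n·k oracle queries one after another. Below is a minimal Python sketch; the coverage objective and all names are illustrative examples, not code from the talk.

```python
# Minimal sketch of the classic serial greedy algorithm for monotone
# submodular maximization under a cardinality constraint k.
# The coverage objective and the toy instance are illustrative only.

def coverage(sets_by_element, chosen):
    """Example monotone submodular function: number of items covered."""
    covered = set()
    for e in chosen:
        covered |= sets_by_element[e]
    return len(covered)

def greedy(ground_set, f, k):
    """Add, k times, the element with the largest marginal gain.
    Requires on the order of n * k sequential oracle queries."""
    S = []
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for e in ground_set:
            if e in S:
                continue
            gain = f(S + [e]) - f(S)  # one oracle query per candidate
            if gain > best_gain:
                best, best_gain = e, gain
        S.append(best)
    return S

# Toy usage: elements cover sets of items; f counts covered items.
sets_by_element = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6},
    "d": {1, 6},
}
f = lambda chosen: coverage(sets_by_element, chosen)
print(greedy(list(sets_by_element), f, k=2))  # -> ['a', 'c'] on this toy instance
```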
Recently, there has been a great deal of research on parallel algorithms whose theoretical runtime on an idealized parallel machine is exponentially faster than that of the algorithms used for submodular maximization over the past 40 years. However, these algorithms are computationally infeasible in practice, even on small data sets, because the non-asymptotic number of iterations and queries they require depends on large constants and high-degree polynomials in the precision and confidence of their approximations.
In this talk, I will describe a new parallel algorithm called the Fast Adaptive Sequencing Technique (FAST) for maximizing a monotone submodular function under a cardinality constraint k. The design principles behind FAST are a significant departure from those of recent theoretically fast algorithms. They yield theoretical guarantees that are competitive with those algorithms, together with practical performance that is orders of magnitude faster than the state-of-the-art algorithms used in practice. Specifically, FAST obtains an approximation arbitrarily close to the optimal 1 - 1/e guarantee in an asymptotic parallel runtime (adaptivity) of O(log(n) log^2(log k)) using O(n log log(k)) total queries. In experiments on large data sets, we show that FAST's practical parallel runtime is orders of magnitude faster than that of any known algorithm for submodular maximization, including hyper-optimized parallel versions of state-of-the-art serial algorithms.
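To make the notion of adaptivity concrete: an algorithm's adaptivity counts the number of sequential rounds it needs when, within each round, its oracle queries may be issued in parallel. The sketch below is not FAST itself; it only illustrates the basic primitive of one parallel round of marginal-gain queries, with all names (f, S, candidates, threshold) assumed for illustration.

```python
# Illustration of the parallel primitive behind low-adaptivity algorithms:
# within one adaptive round, all marginal-gain queries are independent,
# so they can be issued concurrently. This is NOT the FAST algorithm,
# only a sketch of a single round of parallel oracle queries.

from concurrent.futures import ThreadPoolExecutor

def marginal_gain(f, S, e):
    """Marginal value of adding element e to the current solution S."""
    return f(S + [e]) - f(S)

def one_adaptive_round(f, S, candidates, threshold, max_workers=8):
    """One round of parallel oracle queries: evaluate every candidate's
    marginal gain concurrently and keep those clearing the threshold."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        gains = list(pool.map(lambda e: marginal_gain(f, S, e), candidates))
    return [e for e, g in zip(candidates, gains) if g >= threshold]
```

Even the greedy baseline parallelized this way still needs one such round per selected element, i.e., adaptivity on the order of k, which is why the polylogarithmic adaptivity quoted in the abstract is a substantial improvement.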
Bio
Adam Breuer is Harvard’s first double PhD candidate in the departments of Government and Computer Science. His political science dissertation focuses on how algorithms and new technologies are reshaping collective political behaviors via social media, social networks, and information/disinformation. His computer science dissertation (defended November 2020) designs new algorithms for machine learning applications such as large-scale social network analysis, data summarization, and feature selection. Parts of this dissertation have been published in top AI/ML venues such as ICML and NeurIPS/NIPS.
Adam is also an academic collaborator with Facebook Research, where he designs machine learning algorithms to help detect billions of fake social media accounts.
Adam’s research is driven by the following questions: How can we leverage algorithms, AI, and the data revolution to learn about fundamental political and societal problems; how are these new technologies changing political and economic behaviors such as democratic elections and protests; and how can we design better algorithms with these problems in mind?
His research is supported by the NSF Graduate Research Fellowship, the NSF Dissertation Award in Political Science, the Harvard Computer Science Research Fellowship, and Facebook AI.
Events are free and open to the public unless otherwise noted.