Thursday, February 14, 2019 at 3:00pm
Many methods in optimization are inherently sequential and consequently cannot be efficiently parallelized. In this talk, I will describe a novel approach called adaptive sampling that yields algorithms whose parallel running time is exponentially faster than that of any previous algorithm for submodular optimization. Maximizing a submodular function is the algorithmic engine behind a growing number of machine learning applications, such as speech and document summarization, recommendation systems, clustering, feature selection, and network analysis.
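The sequential bottleneck is easiest to see in the classical greedy algorithm for submodular maximization: each of its k rounds depends on all previous choices, so it requires k adaptive rounds no matter how many machines are available. A minimal sketch on a set-coverage objective (the function names here are illustrative, not from the talk):

```python
def coverage(selected, sets):
    """Submodular coverage function: number of distinct items covered
    by the chosen sets."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy_max(sets, k):
    """Classical greedy: k sequential rounds. Each round's choice
    depends on every previous choice, so adaptivity grows with k."""
    chosen = []
    for _ in range(k):
        base = coverage(chosen, sets)
        best_i, best_gain = None, -1
        for i in range(len(sets)):
            if i in chosen:
                continue
            gain = coverage(chosen + [i], sets) - base
            if gain > best_gain:
                best_i, best_gain = i, gain
        chosen.append(best_i)
    return chosen

# Example: pick 2 of 4 sets to cover the most items.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {6, 7}]
solution = greedy_max(sets, 2)
```

Greedy achieves a (1 - 1/e)-approximation for monotone submodular functions, but the loop over rounds is inherently serial; this is the limitation that adaptive sampling circumvents.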
The speedups are in the adaptive complexity model, which is an information-theoretic abstraction for parallelism. The concept of adaptivity plays an important role in a broader agenda of understanding the limitations and possibilities of data-driven decision making. In this talk, I will first introduce adaptivity, then describe the adaptive sampling algorithms, and finally present experimental results from various application domains.
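The core idea behind adaptive sampling is that, instead of choosing one element per round, each round evaluates many random sets in parallel and uses those evaluations to discard many low-value elements at once. The following is only a toy sketch of one such filtering round, written to convey the idea; the actual algorithms in the talk differ in their thresholds, sample sizes, and guarantees, and all names here are illustrative:

```python
import random

def coverage(selected, sets):
    """Submodular coverage function: number of distinct items covered."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def adaptive_sampling_round(solution, candidates, sets, threshold, samples=30):
    """One filtering round (toy version). For each candidate x, estimate its
    expected marginal gain relative to a random sample of other candidates.
    All the sampled function evaluations within a round are independent, so
    they could run in parallel; only the rounds themselves are sequential."""
    sample_size = max(1, len(candidates) // 4)
    surviving = []
    for x in candidates:
        others = [c for c in candidates if c != x]
        gains = []
        for _ in range(samples):
            R = random.sample(others, min(sample_size, len(others)))
            base = coverage(solution + R, sets)
            gains.append(coverage(solution + R + [x], sets) - base)
        # Keep x only if its estimated marginal gain meets the threshold.
        if sum(gains) / len(gains) >= threshold:
            surviving.append(x)
    return surviving
```

Because each round eliminates a constant fraction of the candidates in expectation, this style of algorithm terminates after roughly logarithmically many adaptive rounds, which is the source of the exponential speedup over greedy's linear adaptivity.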
Eric Balkanski is a fifth-year Ph.D. student in computer science at Harvard University, where he is advised by Yaron Singer. Before his Ph.D., he received his B.S. from Carnegie Mellon University. He is the recipient of a Google Ph.D. Fellowship, a Smith Family Graduate Science and Engineering Fellowship, and a Best Paper Award at the 18th Conference on Implementation and Application of Automata in 2013, and he is an Andrew Carnegie Society Scholar. His research interests include machine learning, optimization, algorithms, networks, and mechanism design.