Friday, March 9, 2018 at 3:30pm
Bayesian optimization methods are designed to optimize objective functions that take a long time or are expensive to evaluate. Such objective functions arise in aerospace engineering, hyperparameter tuning of deep neural networks, materials science, and A/B-testing-based design of mobile apps and online marketplaces. These methods use Gaussian process regression from machine learning to build a surrogate for the objective, and value-of-information analysis from Bayesian decision theory to select the points at which to evaluate it. The now well-established expected improvement method performs well under the standard problem structure assumed in Bayesian optimization: that we evaluate a noise-free objective one point at a time, subject to a constraint on the number of evaluations. This method, however, does not generalize easily to the many more exotic problems that do not fit this form. After introducing the area, we discuss new methods for problems with more exotic structure: optimization of integrals and sums of expensive-to-evaluate integrands; optimization with multiple fidelities and information sources; optimization with derivatives; and greybox optimization.
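To make the standard setup concrete, here is a minimal sketch of the loop the abstract describes: fit a Gaussian process surrogate to the evaluations so far, then pick the next point by maximizing expected improvement. This is an illustrative implementation assuming a one-dimensional toy objective, a fixed squared-exponential kernel, and a grid search over the acquisition function; it is not the speaker's code, and a real expensive objective would of course replace the cheap quadratic used here.

```python
import numpy as np
from scipy.stats import norm

# Stand-in for an expensive objective (cheap here for illustration);
# the true maximum is at x = 2.
def f(x):
    return -(x - 2.0) ** 2

def rbf(a, b, lengthscale=0.5):
    # Squared-exponential kernel between two sets of 1-D points.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    # Gaussian process regression: posterior mean and variance at Xs
    # given noise-free observations (X, y), via a Cholesky solve.
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xs, Xs)) - np.sum(v ** 2, axis=0)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    # EI for maximization: E[max(f(x) - best, 0)] under the GP posterior.
    sigma = np.sqrt(var)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

# Bayesian optimization loop: two initial evaluations, then repeatedly
# evaluate wherever expected improvement is largest on a grid over [0, 4].
grid = np.linspace(0.0, 4.0, 401)
X = np.array([0.5, 3.5])
y = f(X)
for _ in range(8):
    mu, var = gp_posterior(X, y, grid)
    ei = expected_improvement(mu, var, y.max())
    x_next = grid[np.argmax(ei)]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

print(X[np.argmax(y)])  # best point found so far, close to the optimum
```

The noise-free, one-point-at-a-time structure assumed above is exactly what the "exotic" variants in the talk relax: batch or multi-fidelity evaluations change what a single "query" is, and observed derivatives or integrand values change what the Gaussian process conditions on.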
Peter Frazier is an Associate Professor at Cornell ORIE. He works at the intersection of operations research and machine learning, on Bayesian optimization, multi-armed bandits, Bayesian sequential experimental design, and optimization via simulation. He is also a Staff Data Scientist at Uber, where he works on pricing and marketplace design. He received a Ph.D. in Operations Research and Financial Engineering from Princeton University in 2009. He is the recipient of an AFOSR Young Investigator Award and an NSF CAREER Award, and is an associate editor for Operations Research, ACM TOMACS, and IISE Transactions.