Friday, February 22, 2019 at 3:30pm
Spurred by recent advances, machine learning methods are beginning to prescribe decisions in high-stakes domains, including hiring and medical diagnoses. The standard machine learning paradigm that optimizes average-case performance, however, often yields models that perform poorly on tail subpopulations, such as underrepresented demographic groups. We present methods that optimize the worst-case subpopulation performance, thereby guaranteeing a uniform level of performance over all subpopulations. Our procedures are convex and computationally efficient. We prove finite-sample minimax upper and lower bounds, showing that uniform subpopulation performance comes at a cost in convergence rates. Empirically, our procedure improves performance on tail subpopulations, and provides a uniform level of performance through time by controlling latent minority proportions.
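As an illustration of the worst-case subpopulation idea, the following is a minimal sketch in which group labels are observed and a linear classifier is trained by subgradient descent on the maximum of the per-group average losses. This is a simplified stand-in, not the speaker's convex procedure (which in particular handles latent, unobserved subpopulations); all variable names and the synthetic data are invented for the example.

```python
import numpy as np

def worst_group_logistic_loss(w, X, y, groups):
    """Average logistic loss within each group; return the worst group's
    loss and its average gradient (a subgradient of the max over groups)."""
    z = X @ w
    losses = np.log1p(np.exp(-y * z))                 # per-example logistic loss
    grads = (-y / (1 + np.exp(y * z)))[:, None] * X   # per-example gradients
    group_ids = np.unique(groups)
    group_losses = np.array([losses[groups == g].mean() for g in group_ids])
    worst = group_ids[np.argmax(group_losses)]
    mask = groups == worst
    return group_losses.max(), grads[mask].mean(axis=0)

# Synthetic data: a minority group (about 10% of examples) whose feature
# distribution is shifted relative to the majority.
rng = np.random.default_rng(0)
n, d = 400, 5
groups = (rng.random(n) < 0.1).astype(int)
X = rng.normal(size=(n, d))
X[groups == 1] += 1.0
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))

# Subgradient descent on the worst group's loss: each step follows the
# gradient of whichever group is currently doing worst.
w = np.zeros(d)
for t in range(500):
    loss, g = worst_group_logistic_loss(w, X, y, groups)
    w -= 0.5 / np.sqrt(t + 1) * g

print(round(loss, 3))
```

Minimizing the average loss instead would let the 10% minority group lag behind; stepping on the worst group's gradient drives both groups' losses down together, at the cost of slower (subgradient-rate) convergence, echoing the rate trade-off in the abstract.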
Hongseok Namkoong is a Ph.D. candidate in the Department of Management Science & Engineering at Stanford University, where he is jointly advised by John Duchi and Peter Glynn. His research focuses on developing robust and reliable machine learning methods that can maintain a uniform level of performance under operation. He received his M.S. in statistics from Stanford, and B.S. in industrial engineering and mathematics from KAIST. Hong is a recipient of a number of awards and fellowships, including best paper awards at the Neural Information Processing Systems conference and the International Conference on Machine Learning (runner-up), and the best student paper award from the INFORMS Applied Probability Society.