Cornell University

CAM Colloquium: Madeleine Udell (ORIE) - Low memory convex optimization

Friday, February 8, 2019 at 3:30pm

Frank H. T. Rhodes Hall, 655

Yousef Saad, originally scheduled to give today's talk, is unfortunately unable to visit due to a flight cancellation. Madeleine Udell has kindly agreed at the last minute to give a talk in his place. Thank you, Madeleine!

Abstract:
Convex optimization provides robust convergence in theory and practice, provable certificates of optimality, and elegant analytical tools. Yet for several classes of large-scale problems, including low-rank matrix optimization and submodular optimization, convex optimization is widely considered impractical due to its high memory requirements. In this talk, we discuss a few important problems in convex optimization and new low-memory (or even memory-optimal) solution methods. These methods draw on elegant ideas in convex optimization and linear algebra, including duality, complementary slackness, and random projections, and build on classical algorithms such as Kelley's cutting plane method for unconstrained minimization and the Frank-Wolfe method for smooth minimization on a polytope. These practical convex methods deliver all the benefits of convex optimization without burdensome memory requirements.
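
For readers unfamiliar with the Frank-Wolfe method mentioned above, the following is a minimal sketch, not drawn from the talk, of why it suits low-rank matrix optimization: when minimizing a smooth loss over a nuclear-norm ball, each iteration's linear minimization oracle returns a rank-one matrix built from the top singular pair of the gradient, which is the kind of cheap update that low-memory variants exploit. The problem data, step size, and the SciPy svds call are illustrative assumptions.

# Frank-Wolfe sketch: minimize f(X) = 0.5 * ||X - M||_F^2 over {X : ||X||_* <= tau}.
# Illustrative only; sizes, tau, and the loss are assumptions, not from the talk.
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
m, n, tau = 60, 50, 5.0
# Target: approximately low-rank matrix plus a little noise.
M = rng.standard_normal((m, 8)) @ rng.standard_normal((8, n)) \
    + 0.1 * rng.standard_normal((m, n))

def grad(X):
    # Gradient of f(X) = 0.5 * ||X - M||_F^2.
    return X - M

X = np.zeros((m, n))
for k in range(200):
    G = grad(X)
    # Linear minimization oracle over the nuclear-norm ball:
    # argmin_{||S||_* <= tau} <G, S> = -tau * u1 v1^T,
    # where (u1, v1) is the top singular pair of G.
    u, s, vt = svds(G, k=1)
    S = -tau * np.outer(u[:, 0], vt[0, :])   # rank-one atom
    gamma = 2.0 / (k + 2.0)                  # classical Frank-Wolfe step size
    X = (1.0 - gamma) * X + gamma * S        # convex-combination update

print("final objective:", 0.5 * np.linalg.norm(X - M) ** 2)

Because the iterate is always a convex combination of rank-one atoms, memory-conscious implementations can store or sketch those atoms rather than the full dense matrix.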


Bio:
Madeleine Udell is Assistant Professor of Operations Research and Information Engineering and Richard and Sybil Smith Sesquicentennial Fellow at Cornell University. She studies optimization and machine learning for large-scale data analysis and control, with applications in marketing, demographic modeling, medical informatics, engineering system design, and automated machine learning. Her research in optimization centers on detecting and exploiting novel structures in optimization problems, with a particular focus on convex and low-rank problems. These structures lead the way to automatic proofs of optimality, better complexity guarantees, and faster, more memory-efficient algorithms.


Madeleine completed her PhD in Computational & Mathematical Engineering at Stanford University in 2015 under the supervision of Stephen Boyd, followed by a one-year postdoctoral fellowship at Caltech in the Center for the Mathematics of Information, hosted by Professor Joel Tropp. She received a B.S. degree in Mathematics and Physics, summa cum laude, with honors in mathematics and in physics, from Yale University.
