This is a past event. Its details are archived for historical purposes.
The contact information may no longer be valid.
Friday, February 8, 2019 at 12:00pm to 1:30pm
Learning Machines Seminar Series
What: LMSS: Percy Liang (Stanford)
When: Friday, February 8, 12:15 p.m. (pizza served at 12:00 p.m.)
Where: Room 165, Bloomberg Center, Cornell Tech
"Can Language Robustify Learning?"
Machine learning is facing a robustness crisis. Despite reaching human-level performance on a wide range of benchmarks, state-of-the-art systems can be easily fooled by seemingly small perturbations that do not affect humans. The problem is perhaps fundamental to learning: fitting huge low-bias models makes it easy to overfit the superficial statistics of the benchmarks. In this talk, we explore natural language as a way to address this problem by providing stronger supervision: instead of requiring the machine learning algorithm to infer a function from raw examples, we specify the function directly using language. We present two initial forays into this space: first, we convert natural language explanations into a function that labels unlabeled data, which can then be used to train a predictor; second, users interactively teach high-level concepts using natural language definitions.
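To make the first idea in the abstract concrete, here is a minimal sketch of the general weak-supervision pattern it describes: compiling a natural-language explanation (e.g., "label positive if the text contains 'great'") into a labeling function, then combining labeling functions by majority vote to produce weak labels for unlabeled data. The function names and the keyword-matching rule are illustrative assumptions, not the speaker's actual system.

```python
# Hypothetical sketch: natural-language explanations -> labeling functions
# -> weak labels for unlabeled data (which could then train a predictor).

def explanation_to_lf(keyword, label):
    """Compile an explanation like 'label POS if the text contains "great"'
    into a labeling function that returns `label` on a match and None
    (abstain) otherwise."""
    def lf(text):
        return label if keyword in text.lower() else None
    return lf

# Labeling functions derived from two toy explanations.
lfs = [
    explanation_to_lf("great", "POS"),
    explanation_to_lf("terrible", "NEG"),
]

def weak_label(text):
    """Majority vote over the labeling functions that do not abstain."""
    votes = [lf(text) for lf in lfs if lf(text) is not None]
    if not votes:
        return None  # every labeling function abstained
    return max(set(votes), key=votes.count)

unlabeled = ["A great movie", "Terrible acting", "It was fine"]
labels = [weak_label(t) for t in unlabeled]
```

The resulting weak labels (here `POS`, `NEG`, and an abstention) could serve as training data for any ordinary classifier, which is the "train a predictor" step the abstract mentions.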
Percy Liang is an Assistant Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research spans machine learning and natural language processing, with the goal of developing trustworthy agents that can communicate effectively with people and improve over time through interaction. Specific topics include question answering, dialogue, program induction, interactive learning, and reliable machine learning. His awards include the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).