Friday, September 27, 2019 at 12:00pm to 1:30pm
Bloomberg Center, Room 301
Learning Machines Seminar Series
What: LMSS @ Cornell Tech: Sam Bowman (New York University)
When: Friday, September 27, 12:15 p.m. (lunch served at 12pm)
Where: Room 301, Bloomberg Center, Cornell Tech
"Task-Independent Language Understanding"
This talk deals with the goal of task-independent language understanding: building machine learning models that can learn to do most of the hard work of language understanding before they see a single example of the task they're meant to solve, in service of making the best modern NLP systems both better and more data-efficient. I'll survey the (dramatic!) progress that the NLP research community has made toward this goal in the last year. In particular, I'll dwell on GLUE and SuperGLUE—two open-ended shared-task competitions that measure progress toward this goal for sentence understanding tasks—and I'll preview a few recent analysis papers that attempt to offer some perspective on this progress.
Sam Bowman has been on the faculty at NYU since 2016, when he completed his PhD with Chris Manning and Chris Potts at Stanford. At NYU, Sam is jointly appointed between the new school-level Center for Data Science, which focuses on machine learning, and the Department of Linguistics; he is also a co-PI of the CILVR machine learning lab and an affiliate member of the Courant Institute's Department of Computer Science. Sam's research focuses on data, evaluation techniques, and modeling techniques for sentence and paragraph understanding in natural language processing, and on applications of machine learning to scientific questions in linguistic syntax and semantics. Sam organized a twenty-three-person research team at JSALT 2018 and received a 2015 EMNLP Best Resource Paper Award and a 2017 Google Faculty Research Award.