Friday, January 31, 2020 at 12:00pm to 1:30pm
Cornell Tech, Bloomberg Center, Room 165
Learning Machines Seminar Series
What: LMSS @ Cornell Tech: Danqi Chen (Princeton University)
When: Friday, January 31st, 12:15 p.m. (lunch served at 12:00 p.m.)
Where: Room 165, Bloomberg Center, Cornell Tech
"Advancing Textual Question Answering"
In this talk, I will discuss my recent work on advancing textual question answering: enabling machines to answer questions based on a passage of text and, more realistically, on a very large collection of documents (a.k.a. “machine reading at scale”). In the first part, I will examine the importance of pre-trained language representations (e.g., BERT, RoBERTa) to state-of-the-art QA systems. In particular, I will introduce a span-based pre-training method that is designed to better represent and predict spans of text, and that demonstrates superior performance on a wide range of QA tasks. Although these models have already matched or surpassed human performance on some standard benchmarks, a large gap remains when they are scaled up to the open-domain setting. In the second part, I will present two new directions: one is to replace the traditional keyword-based retrieval component with fully dense embeddings for passage retrieval, and the other is to answer questions based on a structured graph of text passages. Both approaches show promise for future textual QA systems.
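The dense-retrieval direction mentioned above can be made concrete with a toy sketch: embed the query and each passage as vectors in one space, then rank passages by inner product. The `embed` function below is a hypothetical stand-in (hash-based pseudo-random word vectors); a real system would use trained neural encoders, and this sketch only illustrates the retrieval step itself.

```python
import hashlib
import numpy as np

DIM = 16  # embedding dimensionality for this toy example

def embed(text):
    """Toy dense encoder: map each word to a deterministic pseudo-random
    vector (seeded by a hash of the word) and average. A real dense
    retriever would use a trained neural encoder instead."""
    words = [w.strip(".,?!") for w in text.lower().split()]
    vec = np.zeros(DIM)
    for w in words:
        seed = int.from_bytes(hashlib.md5(w.encode()).digest()[:8], "little")
        vec += np.random.default_rng(seed).standard_normal(DIM)
    return vec / max(len(words), 1)

def retrieve(query, passages, k=1):
    """Rank passages by inner product with the query embedding,
    returning the top-k passages."""
    q = embed(query)
    scores = [float(q @ embed(p)) for p in passages]
    order = sorted(range(len(passages)), key=lambda i: -scores[i])
    return [passages[i] for i in order[:k]]

passages = [
    "Paris is the capital of France.",
    "The mitochondrion is the powerhouse of the cell.",
    "Dense retrieval embeds queries and passages in one vector space.",
]
top = retrieve("What is the capital of France?", passages, k=2)
```

In a production system, passage embeddings are precomputed and the inner-product search is done with an approximate nearest-neighbor index rather than the exhaustive scan shown here.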
Danqi Chen is an Assistant Professor of Computer Science at Princeton University and co-leads the Princeton NLP Group. Danqi’s research focuses on deep learning for natural language processing, with an emphasis on the intersection of text understanding and knowledge representation/reasoning, and on applications such as question answering and information extraction. Before joining Princeton, Danqi worked as a visiting scientist at Facebook AI Research in Seattle. She received her Ph.D. from Stanford University (2018) and her B.E. from Tsinghua University (2012), both in Computer Science. She is a recipient of a Facebook Fellowship and a Microsoft Research Women’s Fellowship, as well as paper awards at ACL’16 and EMNLP’17.