Yejin Choi
Brett Helsel Professor at the University of Washington and Senior Research Manager at AI2
Title: David vs. Goliath: the Art of Leaderboarding in the Era of Extreme-Scale Neural Models
Keynote 2 Tuesday 20 September 08:30 KST at Grand Ballroom
Abstract
Scale appears to be the winning recipe in today's leaderboards. And yet, extreme-scale neural models are still brittle: they make errors that are often nonsensical and even counterintuitive. In this talk, I will argue for the importance of knowledge, especially commonsense knowledge, as well as inference-time algorithms, and demonstrate how smaller models developed in academia can still have an edge over larger industry-scale models when powered with knowledge or algorithms.
First, I will introduce "symbolic knowledge distillation", a new framework to distill larger neural language models into smaller commonsense models, which leads to a machine-authored KB that wins, for the first time, over a human-authored KB in all criteria: scale, accuracy, and diversity.
Next, I will highlight how we can make better lemonade out of neural language models by shifting our focus to unsupervised, inference-time algorithms. I will demonstrate how unsupervised models powered with algorithms can match or even outperform supervised approaches on hard reasoning tasks such as nonmonotonic reasoning (e.g., counterfactual and abductive reasoning), as well as on complex language generation tasks that require logical constraints.
Finally, I will introduce a new (and experimental) conceptual framework, Delphi, toward machine norms and morality, so that the machine can learn to reason that “helping a friend” is generally a good thing to do, but “helping a friend spread fake news” is not.
Bio
Yejin Choi is the Brett Helsel Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and a senior research manager at AI2, overseeing the project Mosaic. Her research investigates a wide range of problems, including commonsense knowledge and reasoning, neuro-symbolic integration, multimodal representation learning, and AI for social good. She is a co-recipient of the ACL Test of Time Award in 2021, the CVPR Longuet-Higgins Prize in 2021, a NeurIPS Outstanding Paper Award in 2021, the AAAI Outstanding Paper Award in 2020, the Borg Early Career Award in 2018, the inaugural Alexa Prize Challenge in 2017, IEEE AI's 10 to Watch in 2016, and the ICCV Marr Prize in 2013.