Podcast: The Gradient Podcast
Episode: Christopher Manning: Linguistics and the Development of NLP

Category: Technology
Duration: 01:11:35
Publish Date: 2022-09-08 15:31:01
Description: In episode 41 of The Gradient Podcast, Daniel Bashir speaks to Christopher Manning.

Chris is the Director of the Stanford AI Lab and an Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). He is an ACM Fellow, an AAAI Fellow, and a past President of the ACL. His current work focuses on applying deep learning to natural language processing; his past contributions include tree-recursive neural networks, GloVe, neural machine translation, and computational linguistic approaches to parsing, among other topics.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:

(00:00) Intro

(02:40) Chris’s path to AI through computational linguistics

(06:10) Human language acquisition vs. ML systems

(09:20) Grounding language in the physical world, multimodality and DALL-E 2 vs. Imagen

(26:15) Chris’s Linguistics PhD, splitting time between Stanford and Xerox PARC, corpus-based empirical NLP

(34:45) Rationalist and Empiricist schools in linguistics, Chris’s work in 1990s

(45:30) GloVe and Attention-based Neural Machine Translation, global and local context in language

(50:30) Different Neural Architectures for Language, Chris’s work in the 2010s

(58:00) Large-scale Pretraining, learning to predict the next word helps you learn about the world

(1:00:00) mBERT’s Internal Representations vs. Universal Dependencies Taxonomy

(1:01:30) The Need for Inductive Priors for Language Systems

(1:05:55) Courage in Chris’s Research Career

(1:10:50) Outro (yes Daniel does have a new outro with ~ music ~)

Links:

Chris’s webpage

Papers (1990s-2000s):

Distributional Phrase Structure Induction

Fast Exact Inference with a Factored Model for Natural Language Parsing

Accurate Unlexicalized Parsing

Corpus-Based Induction of Syntactic Structure

Foundations of Statistical Natural Language Processing

Papers (2010s):

Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank

GloVe

Effective Approaches to Attention-based Neural Machine Translation

Stanford’s Graph-based Neural Dependency Parser

Papers (2020s):

ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

Finding Universal Grammatical Relations in Multilingual BERT

Emergent linguistic structure in artificial neural networks trained by self-supervision

Get full access to The Gradient at thegradientpub.substack.com/subscribe