Podcast: The Gradient Podcast
Episode: Joel Lehman: Open-Endedness and Evolution through Large Models

Category: Technology
Duration: 01:38:53
Publish Date: 2022-09-22 15:30:24
Description: Have suggestions for future podcast guests (or other feedback)? Let us know here!

In episode 44 of The Gradient Podcast, Daniel Bashir speaks to Joel Lehman.

Joel is a machine learning scientist interested in AI safety, reinforcement learning, and creative open-ended search algorithms. He has spent time at Uber AI Labs and OpenAI and is the co-author of the book Why Greatness Cannot Be Planned: The Myth of the Objective.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:

(00:00) Intro

(01:40) From game development to AI

(03:20) Why evolutionary algorithms

(10:00) Abandoning Objectives: Evolution Through the Search for Novelty Alone

(24:10) Measuring a desired behavior post-hoc vs optimizing for that behavior

(27:30) NeuroEvolution of Augmenting Topologies (NEAT), Evolving a Diversity of Virtual Creatures

(35:00) Humans are an inefficient solution to evolution’s objectives

(47:30) Is embodiment required for understanding? Today’s LLMs as practical thought experiments in disembodied understanding

(51:15) Evolution through Large Models (ELM)

(1:01:07) ELM: Quality Diversity Algorithms, MAP-Elites, bootstrapping training data

(1:05:25) Dimensions of Diversity in MAP-Elites, what is “interesting”?

(1:12:30) ELM: Fine-tuning the language model

(1:18:00) Results of invention in ELM, complexity in creatures

(1:20:20) Future work building on ELM, key challenges in open-endedness

(1:24:30) How Joel’s research affects his approach to life and work

(1:28:30) Balancing novelty and exploitation in work

(1:34:10) Intense competition in AI, Joel’s advice for people considering ML research

(1:38:45) Daniel isn’t the worst interviewer ever

(1:38:50) Outro

Links:

Joel’s webpage

Evolution through Large Models: The Tweet

Papers:

Abandoning Objectives: Evolution Through the Search for Novelty Alone

Evolving a Diversity of Virtual Creatures Through Novelty Search and Local Competition

Designing Neural Networks Through Neuroevolution

Evolution through Large Models

Resources for (aspiring) ML researchers!

Cohere for AI

ML Collective

Get full access to The Gradient at thegradientpub.substack.com/subscribe