Podcast: Machine Learning Street Talk
Episode:

Unlocking the Brain's Mysteries: Chris Eliasmith on Spiking Neural Networks and the Future of Human-Machine Interaction

Category: Technology
Duration: 01:49:36
Publish Date: 2023-04-10 18:07:39
Description:

Patreon: https://www.patreon.com/mlst

Discord: https://discord.gg/ESrGqhf5CB

Twitter: https://twitter.com/MLStreetTalk


Chris Eliasmith is a renowned interdisciplinary researcher, author, and professor at the University of Waterloo, where he holds the prestigious Canada Research Chair in Theoretical Neuroscience. As the Founding Director of the Centre for Theoretical Neuroscience, Eliasmith leads the Computational Neuroscience Research Group in exploring the mysteries of the brain and its complex functions. His groundbreaking work, including the Neural Engineering Framework, the Neural Engineering Objects (Nengo) software environment, and the Semantic Pointer Architecture, has led to the development of Spaun, the most advanced functional brain simulation to date. Among his numerous achievements, Eliasmith has received the 2015 NSERC Polanyi Award and authored two influential books, "How to Build a Brain" and "Neural Engineering."


Chris' homepage:

http://arts.uwaterloo.ca/~celiasmi/


Interviewers: Dr. Tim Scarfe and Dr. Keith Duggar


TOC:

[00:00:00] Intro to Chris

[00:06:49] The Advantages of Continuous Representation in Biologically Plausible Neural Networks

[00:14:36] Legendre Memory Unit and Spatial Semantic Pointer

[00:20:30] Exploring the Relevance of Large Contexts and Data in Language Models

[00:24:38] Spatial Semantic Pointers and Continuous Representations in Vector Spaces

[00:30:12] Understanding the Intuition Behind Auto Convolution

[00:36:33] Exploring Abstractions and the Continuity in Cognitive Representations

[00:42:52] Exploring Compression, Sparsity, and Representations in the Brain

[00:48:05] Addressing Continual Learning and Real-World Interactions in Brain Models

[00:56:11] Robust Generalization in Large Language Models and the Role of Priors in Learning Emergentist Frameworks

[01:00:41] Chip design

[01:04:02] Debating the Computational Power of Neural Networks and Recursion

[01:13:07] Understanding Spiking Neural Networks and Their Applications in a Perfect World

[01:22:43] Limits of empirical learning

[01:25:35] Philosophy of mind, consciousness, etc.

[01:41:28] Future of human machine interaction

[01:45:06] Future research and advice to young researchers


Refs:

http://compneuro.uwaterloo.ca/publications/dumont2023.html 

http://compneuro.uwaterloo.ca/publications/voelker2019lmu.html 

http://compneuro.uwaterloo.ca/publications/voelker2018.html

http://compneuro.uwaterloo.ca/publications/lu2019.html 

https://www.youtube.com/watch?v=I5h-xjddzlY
