Podcast: Data Science at Home
Episode:

Compressing deep learning models: distillation (Ep.104)

Category: Technology
Duration: 00:22:19
Publish Date: 2020-05-20 01:04:10
Description:

Running large deep learning models on limited hardware or edge devices is often prohibitive. However, there are methods that compress large models by orders of magnitude while maintaining similar accuracy at inference time.

In this episode I explain one of the first such methods: knowledge distillation.
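The episode covers knowledge distillation, where a small "student" model is trained to match the temperature-softened output distribution of a large "teacher". A minimal sketch of the distillation loss (following Hinton et al.'s formulation; the function names and temperature value here are illustrative, not from the episode):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about wrong classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between the softened teacher and student
    # distributions, scaled by T^2 so gradients keep a comparable
    # magnitude as T changes.
    p = softmax(teacher_logits, T)  # teacher (soft targets)
    q = softmax(student_logits, T)  # student
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (T ** 2) * kl

# The loss is zero when the student reproduces the teacher's logits
# and grows as their softened distributions diverge.
print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
print(distillation_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0]))
```

In practice this term is combined with the ordinary cross-entropy on the hard labels, weighted by a mixing coefficient.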

Come join us on Slack



Some more Podcasts by Francesco Gadaleta: Data Science at Home