
Podcast: Machine Learning – Software Engineering Daily
Episode: OpenAI: Compute and Safety with Dario Amodei

Category: Technology
Duration: 01:03:13
Publish Date: 2018-06-04 04:00:28
Description:

Applications of artificial intelligence are permeating our everyday lives. We notice it in small ways: improvements to speech recognition, better product recommendations, and goods and services that have become cheaper because of more intelligent production.

But what can we quantitatively say about the rate at which artificial intelligence is improving? How fast are models advancing? Do the different fields of artificial intelligence all advance together, or do they improve separately? In other words, if the accuracy of a speech recognition model doubles, does that mean the accuracy of image recognition will double as well?

It’s hard to know the answer to these questions.

The largest machine learning training runs today consume roughly 300,000 times as much compute as the largest runs in 2012. That does not necessarily mean today's models are 300,000 times better: today's training algorithms could simply be less efficient than yesterday's, consuming more compute for the same improvement.

We can observe from empirical data that models tend to get better with more data. Models also tend to get better with more compute. How much better do they get? That varies from application to application, from speech recognition to language translation. But models do seem to improve with more compute and more data.

Dario Amodei works at OpenAI, where he leads the AI safety team. In a post called “AI and Compute,” Dario observed that the amount of compute consumed by the largest machine learning training runs is increasing exponentially, doubling every 3.5 months. In this episode, Dario discusses the implications of this growing compute consumption in the training process.
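To get a feel for how quickly a 3.5-month doubling time compounds, here is a minimal back-of-the-envelope sketch. The doubling period comes from the “AI and Compute” post; the specific horizons printed below are illustrative, not figures from the episode.

```python
# Back-of-the-envelope: how a 3.5-month doubling time in training compute compounds.
# The 3.5-month figure is from OpenAI's "AI and Compute" post; horizons are illustrative.

DOUBLING_MONTHS = 3.5

def compute_multiplier(months: float) -> float:
    """Factor by which training compute grows after `months`, given the doubling time."""
    return 2 ** (months / DOUBLING_MONTHS)

for years in (1, 2, 4, 6):
    months = years * 12
    print(f"{years} year(s): ~{compute_multiplier(months):,.0f}x more compute")

# A factor of ~300,000x corresponds to a bit over five years of growth at this rate,
# roughly the span between the 2012 and 2018 training runs discussed in the post.
```

One year at this rate is only about an 11x increase, but between five and six years it compounds into a factor in the hundreds of thousands, which is how the roughly 300,000x figure arises.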

Dario’s focus is AI safety. AI safety encompasses both the prevention of accidents and the prevention of deliberately malicious applications of AI.

Today, people are dying in autonomous car crashes: these are accidents. The reward functions of social networks are being exploited by botnets and fake, salacious news: this is malice. The dangers of AI are already affecting our lives along both axes, accidents and malice.

There will be more accidents, and more malicious applications–the question is what to do about it. What general strategies can be devised to improve AI safety? After Dario and I talk about the increased consumption of compute by training algorithms, we explore the implications of this increase for safety researchers.

Transcript

Transcript provided by We Edit Podcasts. Software Engineering Daily listeners can go to weeditpodcasts.com/sed to get 20% off the first two months of audio editing and transcription services. Thanks to We Edit Podcasts for partnering with SE Daily. Please click here to view this show’s transcript.

Sponsors


Azure Container Service simplifies the deployment, management and operations of Kubernetes. Check out the Azure Container Service at aka.ms/sedaily.


Stack Overflow for Teams is a private, secure home for your team’s questions and answers. Try it today, with your first 14 days free. Go to s.tk/daily.


VictorOps is THE incident management tool you need. Head to victorops.com/sedaily to see how VictorOps can help you. Be victorious with VictorOps!


GoCD is a continuous delivery tool created by ThoughtWorks. It’s great to see the continued progress on GoCD with the new Kubernetes integrations–and you can check it out for yourself at gocd.org/sedaily.

