Should apocalyptic AI scenarios be taken seriously?
Abstract:
Can it be taken for granted that humans will remain in control in a situation where a breakthrough in artificial intelligence (AI) has led to our no longer being the foremost creatures on our planet in terms of general intelligence? This question lies at the heart of arguments put forth in recent years by philosopher Nick Bostrom, computer scientist Stuart Russell, physicist Max Tegmark and others -- arguments that raise dire concerns about such scenarios. Others claim that such concerns are a useless (or even dangerous) distraction. I will attempt a cool-headed and balanced evaluation of whether apocalyptic AI scenarios are worth paying attention to.
Keywords: artificial intelligence, superintelligence