The starkest statement, signed by all those figures and many more, is a 22-word declaration released two weeks ago by the Center for AI Safety (CAIS), a research organization based in San Francisco. It proclaims: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
The wording is deliberate. "If we had wanted a Rorschach-type statement, we would have said 'existential risk,' because that can mean many things to many different people," says CAIS director Dan Hendrycks. But they wanted to be clear: this was not about, say, crashing the economy. "That's why we went with 'extinction risk,' even though many of us are concerned about various other risks as well," says Hendrycks.
We've been here before: AI doom follows AI hype. But this time feels different. The Overton window has shifted. Views that were once extreme are now mainstream talking points, grabbing not only headlines but also the attention of world leaders. "The chorus of voices raising concerns about artificial intelligence has simply become too loud to ignore," says Jenna Burrell, director of research at Data & Society, an organization that studies the social implications of technology.
What is going on? Has artificial intelligence really become (more) dangerous? And why are the people who introduced this technology now sounding the alarm?
It is true that these views divide the field. Last week, Yann LeCun, Meta's chief scientist and joint recipient with Hinton and Bengio of the 2018 Turing Award, called doomerism "absurd." Aidan Gomez, CEO of the AI firm Cohere, said it was "an absurd use of our time."
Others scoff too. "There is no more evidence now than there was in 1950 that AI is going to pose these existential risks," says Meredith Whittaker, president of Signal and co-founder and former director of the AI Now Institute, a research lab that studies the social and political implications of artificial intelligence. "Ghost stories are contagious; it's really exciting and stimulating to be scared."
"It's also a way to skip over everything that's happening in the present," says Burrell. "It suggests that we haven't yet seen real or serious harm."
An old fear
Concerns about runaway, self-improving machines have been around since the days of Alan Turing. Futurists like Vernor Vinge and Ray Kurzweil popularized these ideas with talk of the so-called Singularity, a hypothetical point at which artificial intelligence surpasses human intelligence and machines take over.