
Bill Gates isn’t too scared of artificial intelligence


The billionaire business magnate and philanthropist made his case in a post published today on his personal blog, GatesNotes. “I want to acknowledge the concerns I hear and read most often, many of which I share, and explain how I think about them,” he writes.

According to Gates, AI is “the most transformative technology any of us will see in our lifetime.” That ranks it above the internet, smartphones, and personal computers, technologies he did more than most to bring to the world. (It also suggests that nothing else will be invented in the coming decades to rival it.)

Gates was one of dozens of high-profile figures to sign a statement released a few weeks ago by the Center for AI Safety in San Francisco, which reads in full: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

But there is no scaremongering in today’s blog post. In fact, existential risk gets no mention at all. Instead, Gates frames the debate as one that pits “long-term” against “immediate” risk, and chooses to focus on “risks that are already there or will soon be.”

“Gates has been plucking the same string for some time,” says David Leslie, director of ethics and responsible innovation research at the Alan Turing Institute in the UK. Gates was one of many public figures who spoke out about the existential risk of AI a decade ago, when deep learning first took off, says Leslie: “He was more concerned about superintelligence a long time ago. It looks like that may have been watered down a bit.”

Gates does not entirely dismiss existential risk. He wonders what might happen “when” (not if) “we develop an artificial intelligence capable of learning any subject or task,” often referred to as artificial general intelligence, or AGI.

He writes: “Whether we reach that point in a decade or a century, society will need to grapple with profound questions. What if a super AI establishes its own goals? What if they conflict with humanity’s? Should we even make a super AI at all? But thinking about these longer-term risks should not come at the expense of the more immediate ones.”

Gates stakes out something of a middle ground between deep-learning pioneer Geoffrey Hinton, who quit Google and went public with his fears about AI in May, and others like Yann LeCun and Joelle Pineau at Meta AI, who think talk of existential risk is “preposterously ridiculous” and “unhinged,” or Meredith Whittaker at Signal, who thinks the fears shared by Hinton and others are “ghost stories.”
