Geoffrey Hinton explains why he’s now afraid of the technology he helped build

It took until the 2010s for the power of backpropagation-trained neural networks to really make an impact. Working with a couple of graduate students, Hinton showed that his technique was better than any other at getting a computer to identify objects in images. They also trained a neural network to predict the next letters in a sentence, a precursor to today’s large language models.
One of those graduate students was Ilya Sutskever, who went on to cofound OpenAI and spearhead the development of ChatGPT. “We had early inklings that this stuff might be amazing,” Hinton says. “But it took a long time to sink in that it had to be done at a huge scale to be good.” Back in the 1980s, neural networks were a joke. The dominant idea at the time, known as symbolic AI, was that intelligence involved processing symbols, such as words or numbers.
But Hinton wasn’t convinced. He had been working on neural networks: software abstractions of brains in which neurons and the connections between them are represented by code. By changing how those neurons are connected, by changing the numbers used to represent them, the neural network can be rewired on the fly. In other words, it can be made to learn.
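That idea, learning as nothing more than nudging the numbers on the connections, can be sketched with a toy example. This is a minimal illustration under stated assumptions (a single linear neuron trained by gradient descent on the OR function), not any of Hinton’s actual models; every name here is invented for the sketch.

```python
import random

random.seed(0)

# A single artificial neuron: two input connections plus a bias.
# "Rewiring" the network means changing these three numbers.
w1, w2, bias = random.random(), random.random(), random.random()

def predict(x1, x2):
    return w1 * x1 + w2 * x2 + bias

# Teach the neuron the OR function by adjusting connection strengths.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
lr = 0.1  # learning rate: how big each nudge is

for _ in range(1000):
    for (x1, x2), target in data:
        error = predict(x1, x2) - target
        # Gradient descent on squared error: push each weight
        # against the direction of its contribution to the error.
        w1 -= lr * error * x1
        w2 -= lr * error * x2
        bias -= lr * error

for (x1, x2), target in data:
    print(x1, x2, "->", round(predict(x1, x2)))
```

Nothing in the loop stores or manipulates symbols; the behavior of the neuron changes only because the strengths of its connections change, which is the point of Hinton’s crow analogy below.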
“My father was a biologist, so I thought in biological terms,” Hinton says. “And symbolic reasoning is clearly not at the heart of biological intelligence.
“Crows can solve puzzles, and they have no language. They don’t do it by storing strings of symbols and manipulating them. They do it by changing the strengths of the connections between neurons in their brain. And so it must be possible to learn complicated things by changing the strengths of the connections in an artificial neural network.”
A new intelligence
For 40 years, Hinton saw artificial neural networks as a poor attempt to mimic biological ones. Now he thinks that has changed: in trying to mimic what biological brains do, he believes, we’ve come up with something better. “It’s scary when you see that,” he says. “It’s a sudden reversal.”
Hinton’s fears will strike many as the stuff of science fiction. But here is his case.
As the name suggests, large language models are made up of huge neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” Hinton says. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it actually has a much better learning algorithm than ours.”
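The scale gap behind that argument is easy to make concrete. Using only the round figures quoted above (exact parameter counts for specific models are not public):

```python
# Figures as quoted in the text, not measured values.
brain_connections = 100e12       # ~100 trillion synaptic connections
llm_connections_high = 1e12      # "a trillion at most"
llm_connections_low = 0.5e12     # "up to half a trillion"

# The brain has 100x to 200x more connections than the largest models.
print(brain_connections / llm_connections_high)  # 100.0
print(brain_connections / llm_connections_low)   # 200.0
```

So by these numbers the brain is at least two orders of magnitude larger, which is why Hinton reads the models’ breadth of knowledge as evidence of a more efficient learning algorithm rather than sheer size.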