But now there’s a problem: the Turing test may be all but passed, if it hasn’t been already. The latest generation of large language models, systems that generate text with a coherence that just a few years ago would have seemed magical, are on the verge of passing it.
So where does that leave the test? And more importantly, where does it leave us?
The truth is, I think we’re in a time of genuine confusion (or, perhaps more charitably, debate) about what’s really going on. Even if the Turing test falls, it doesn’t leave us much clearer about where we are with AI, or what it can actually achieve. It doesn’t tell us what impact these systems will have on society or help us understand how this will play out.
We need something better. Something adapted to this new phase of AI. So in my forthcoming book The Coming Wave, I propose the Modern Turing Test, one fit for the next generation of AIs. What an AI can say or generate is one thing. But what it can achieve in the world, what kinds of concrete actions it can take, is quite another. In my test, we don’t want to know whether the machine is intelligent per se; we want to know whether it is capable of having a significant impact in the world. We want to know what it can do.
Simply put, to pass the Modern Turing Test, an AI would need to successfully act on this instruction: "Go make $1 million on a retail web platform in a few months with just a $100,000 investment." To do this, it would have to go far beyond defining a strategy and writing copy, which current systems like GPT-4 are already adept at. It would need to research and design products, interface with manufacturers and fulfillment centres, negotiate contracts, and create and manage marketing campaigns. It would need, in short, to string together a series of complex real-world goals with minimal oversight. You’d still need a human to approve various points, open a bank account, actually sign on the dotted line. But all the work would be done by an artificial intelligence.
Something like this may be just two years away. Many of the ingredients are in place. The generation of images and text is, of course, already well advanced. Services such as AutoGPT can iterate and chain together various activities performed by the current generation of LLMs. Frameworks like LangChain, which let developers build apps using LLMs, are helping make these systems capable of doing things. While the transformer architecture underpinning LLMs has garnered a tremendous amount of attention, the growing capabilities of reinforcement learning agents should not be overlooked. Bringing the two together is now a major focus. Likewise, the APIs that would allow these systems to connect with the wider internet, and with banking and manufacturing systems, are under development.
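The chaining that tools like AutoGPT and LangChain perform follows a simple pattern: a model proposes the next action toward a goal, a controller executes it, and the result is fed back into the next model call. A minimal sketch of that loop is below. The model and tool calls are stubs I've invented for illustration; a real system would call an LLM API and external services (search, email, payment platforms) in their place.

```python
# Sketch of the "agent loop" pattern behind systems like AutoGPT.
# stub_llm and run_tool are hypothetical stand-ins, not real APIs.

def stub_llm(goal: str, history: list) -> str:
    """Stand-in for an LLM call: given the goal and what has been
    done so far, propose the next action (or "finish")."""
    steps = ["research_products", "contact_manufacturer", "launch_campaign"]
    done = len(history)
    return steps[done] if done < len(steps) else "finish"

def run_tool(action: str) -> str:
    """Stand-in for executing a tool or API call in the real world."""
    return f"result of {action}"

def agent_loop(goal: str, max_steps: int = 10) -> list:
    """Repeatedly ask the model for the next action, execute it,
    and feed the outcome back, until the model declares completion."""
    history = []
    for _ in range(max_steps):
        action = stub_llm(goal, history)
        if action == "finish":
            break
        history.append((action, run_tool(action)))
    return history

trace = agent_loop("make $1 million on a retail web platform")
```

The loop is what turns a text generator into something that acts: each iteration closes the feedback cycle between proposing and doing, with a step cap standing in for the human oversight the test still permits.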