How do you solve a problem like out-of-control AI?

This story originally appeared in The Algorithm, our weekly AI newsletter. To get stories like this in your inbox first, sign up here.
Google revealed last week that it is betting everything on generative AI. At its annual I/O conference, the company announced plans to embed AI tools into virtually all of its products, from Google Docs to coding and online search. (Read my story here.)
Google’s announcement is a big deal. Billions of people will now have access to powerful and cutting-edge AI models to help them perform all kinds of tasks, from generating text to answering questions to writing and debugging code. As MIT Technology Review editor-in-chief Mat Honan writes in his analysis of I/O, it is clear that artificial intelligence is now Google’s main product.
Google’s approach is to introduce these new features into its products gradually. But it will most likely only be a matter of time before things start to go wrong. The company hasn’t fixed any of the common problems with these AI models. They still invent things. They are still easy to manipulate into breaking their own rules. They are still vulnerable to attack. There is little stopping them from being used as tools for disinformation, scams, and spam.
Because these types of AI tools are relatively new, they still operate in a largely unregulated space. But that does not feel sustainable. Calls for regulation are growing louder as the post-ChatGPT euphoria fades and regulators start asking tough questions about the technology.
US regulators are trying to find a way to govern powerful AI tools. This week, OpenAI CEO Sam Altman will testify in the US Senate (after a cozy “educational” dinner with politicians the night before). The hearing follows a meeting last week between Vice President Kamala Harris and the CEOs of Alphabet, Microsoft, OpenAI, and Anthropic.
In a statement, Harris said the companies have an “ethical, moral and legal responsibility” to ensure their products are safe. Senate Majority Leader Chuck Schumer of New York has put forward a legislative proposal to regulate artificial intelligence, which could include a new agency to enforce the rules.
“Everyone wants to be seen doing something. There is a lot of social anxiety about where all of this is going,” says Jennifer King, a data and privacy researcher at the Stanford Institute for Human-Centered Artificial Intelligence.
Getting bipartisan support for a new AI law will be difficult, says King: “It will depend on how much [generative AI] is seen as a real social threat.” But Federal Trade Commission chair Lina Khan came out “guns blazing,” she adds. Earlier this month, Khan wrote an op-ed calling for AI regulation now, to avoid repeating the mistakes that came from being too lenient with the tech sector in the past. She signaled that in the US, regulators are more likely to reach for laws already in their toolkit to regulate AI, such as antitrust and commercial practices laws.
Meanwhile, in Europe, lawmakers are edging closer to a final deal on the AI Act. Last week, members of the European Parliament signed off on a draft regulation that calls for a ban on facial recognition technology in public places. It also bans predictive policing, emotion recognition, and the indiscriminate scraping of biometric data online.
The EU is also set to impose more rules on generative AI, and the parliament wants companies that create large AI models to be more transparent. The proposed measures include labeling AI-generated content, publishing summaries of the copyrighted data used to train a model, and putting in place safeguards that prevent models from generating illegal content.
But here’s the catch: the EU is still a long way from implementing rules on generative AI, and many of the proposed elements of the AI Act won’t make it into the final version. Difficult negotiations remain between the parliament, the European Commission, and EU member countries. It will be years before the AI Act takes effect.
As regulators struggle to agree, prominent tech voices are starting to push the Overton window. Speaking at an event last week, Microsoft chief economist Michael Schwarz said we should wait until we see “significant harm” from AI before regulating it. He likened it to driver’s licenses, which were introduced after many dozens of people were killed in accidents. “There has to be at least a little damage in order to see what the real problem is,” Schwarz said.
This statement is outrageous. The harm caused by AI has been well documented for years. There has been bias and discrimination, AI-generated fake news, and scams. Other AI systems have led to innocent people being arrested, people being trapped in poverty, and tens of thousands of people being wrongfully accused of fraud. These harms are likely to grow exponentially as generative AI is integrated more deeply into our society, thanks to announcements like Google’s.
The question we should be asking ourselves is: How much harm are we willing to see? I’d say we’ve seen enough.
Deeper Learning
The open source AI boom builds on the handouts of Big Tech. How long will it last?
New large open source language models, alternatives to Google’s Bard or OpenAI’s ChatGPT that researchers and app developers can study, develop and modify, are falling like candy from a piñata. These are smaller, cheaper versions of the best AI models created by large companies that (almost) match them in terms of performance and are shared for free.
The future of how AI is created and used is at a crossroads. On the one hand, greater access to these models has helped drive innovation. It can also help catch their flaws. But this open-source boom is precarious. Most open-source releases still stand on the shoulders of giant models put out by big companies with deep pockets. If OpenAI and Meta decide to shut up shop, a boomtown could become a backwater. Read more from Will Douglas Heaven.
Bits and Bytes
Amazon is working on a secret home robot with similar functionality to ChatGPT
Leaked documents show plans for an updated version of the Astro robot that can remember what it has seen and understood, allowing people to ask it questions and give it commands. But Amazon has a lot of problems to solve before these models are safe enough to roll out inside people’s homes at scale. (Insider)
Stability AI has released a text-to-animation model
The company that created the open-source text-to-image model Stable Diffusion has launched yet another tool, one that lets people create animations using text prompts, images, and videos. Copyright issues aside, these tools could become powerful aids for creatives, and the fact that they’re open source makes them accessible to more people. It’s also a stopgap before the inevitable next step: open-source text-to-video. (Stability AI)
Artificial intelligence is getting sucked into the culture wars: see the Hollywood writers’ strike
One of the disputes between the Writers Guild of America and Hollywood studios is whether people should be allowed to use AI to write film and TV scripts. With wearying predictability, the US culture-war brigade has entered the fray. Online trolls are gleefully telling striking writers that AI will replace them. (New York Magazine)
Watch: An AI-generated trailer for Lord of the Rings … but make it Wes Anderson
This was nice.