
We need to bring consent to AI


This story originally appeared in The Algorithm, our weekly AI newsletter. To get stories like this in your inbox first, sign up here.

The big news this week is that Geoffrey Hinton, a VP and Engineering Fellow at Google and a deep learning pioneer who developed some of the most important techniques underpinning modern artificial intelligence, is leaving the company after 10 years.

But first we need to talk about consent in AI.

Last week, OpenAI announced it is introducing an “incognito” mode that does not save users’ conversation history or use it to improve its AI language model, ChatGPT. The new feature lets users turn off chat history and training, and allows them to export their data. This is a welcome move to give people more control over how their data is used by a tech company.

OpenAI’s decision to let people opt out comes as the company faces increasing pressure from European data protection regulators over how it collects and uses data. OpenAI had until yesterday, April 30, to accede to Italy’s requests that it comply with the GDPR, the EU’s strict data protection regime. Italy restored access to ChatGPT in the country after OpenAI introduced a user opt-out form and the ability to object to personal data being used in ChatGPT. The regulator had argued that OpenAI harvested people’s personal data without their consent and gave them no control over how it is used.

In an interview last week with my colleague Will Douglas Heaven, OpenAI’s chief technology officer, Mira Murati, said the company had been working toward the incognito mode iteratively for a couple of months and that it had been requested by ChatGPT users. OpenAI told Reuters its new privacy features were unrelated to the EU’s GDPR investigations.

“We want to put users in the driver’s seat when it comes to how their data is being used,” says Murati. OpenAI says it will still store user data for 30 days to monitor for abuse and misuse.

But despite what OpenAI claims, Daniel Leufer, a senior policy analyst at digital rights group Access Now, believes that the GDPR and EU pressure have played a role in forcing the company to comply with the law. In the process, it has made the product better for everyone around the world.

“Good data protection practices make products safer [and] better [and] give users real agency over their data,” he said on Twitter.

Many people see the GDPR as a burden that stifles innovation. But as Leufer points out, the law shows companies how they can make things better when they are forced to. It’s also the only tool we have right now that gives people some control over their digital existence in an increasingly automated world.

Other AI experiments aimed at giving users more control show that there is clear demand for such features.

Since late last year, individuals and companies have been able to opt out of having their images included in the open-source LAION dataset that was used to train the image-generating AI model Stable Diffusion.

Since December, some 5,000 people and several large online art and image platforms, such as ArtStation and Shutterstock, have asked to have more than 80 million images removed from the dataset, says Mat Dryhurst, who cofounded Spawning, the organization developing the opt-out feature. This means those images will not be used in the next version of Stable Diffusion.

Dryhurst thinks people should have the right to know whether their work has been used to train AI models, and that they should be able to say whether they want to be part of the system to begin with.

“Our ultimate goal is to create a consent layer for AI, because it just doesn’t exist,” he says.

Deeper learning

Geoffrey Hinton explains why he’s now afraid of the technology he helped build

Geoffrey Hinton is a deep learning pioneer who helped develop some of the most important techniques underpinning modern artificial intelligence, but after a decade at Google, he’s stepping down to focus on the new concerns he now has about AI. MIT Technology Review senior AI editor Will Douglas Heaven met with Hinton at his north London home just four days before the bombshell announcement that he was leaving Google.

Astounded by the capabilities of new large language models like GPT-4, Hinton wants to raise awareness of the serious risks he now believes may accompany the technology he helped usher in.

And boy, did he have a lot to say. “I have suddenly changed my mind about whether these things are going to be smarter than us. I think they’re very close to it now, and in the future they will be much smarter than us,” he told Will. “How do we survive?” Read more from Will Douglas Heaven here.

Even deeper learning

A chatbot that asks questions could help you spot when it doesn’t make sense

AI chatbots like ChatGPT, Bing, and Bard often present falsehoods as fact and have inconsistent logic that can be hard to spot. One way around this, a new study suggests, is to change the way AI presents information.

Virtual Socrates: A team of researchers from MIT and Columbia University found that getting a chatbot to ask users questions instead of presenting information as statements helped people notice when the AI’s logic didn’t add up. A system that asked questions also made people feel more responsible for decisions made with the AI, and the researchers say it could reduce the risk of over-reliance on AI-generated information. Read more from me here.

Bits and bytes

Palantir wants the military to use language models to fight wars
The controversial tech company has launched a new platform that uses existing open-source AI language models to let users control drones and plan attacks. This is a terrible idea. AI language models often make things up and are ridiculously easy to hack. Rolling out these technologies in one of the highest-stakes sectors is a disaster waiting to happen. (Vice)

Hugging Face has launched an open source alternative to ChatGPT
HuggingChat works the same way as ChatGPT, but it’s free and lets people build their own products on top of it. Open-source versions of popular AI models are on a roll: earlier this month Stability.AI, the creator of the image generator Stable Diffusion, also launched an open-source AI chatbot, StableLM.

How Microsoft’s Bing chatbot came about and where it’s going
Here’s a nice behind-the-scenes look at the birth of Bing. I found it interesting that, to generate answers, Bing doesn’t always use OpenAI’s GPT-4 language model but Microsoft’s own models, which are cheaper to run. (Wired)

AI Drake just set an impossible legal trap for Google
My social media feeds have been flooded with AI-generated songs that copy the styles of well-known artists like Drake. But as this piece points out, this is just the beginning of a thorny copyright battle over AI-generated music, scraping data from the internet, and what constitutes fair use. (The Verge)
