
Humans are more likely to believe disinformation generated by artificial intelligence


This credibility gap, while small, is worrying given that the problem of AI-generated disinformation looks set to grow significantly, says Giovanni Spitale, the University of Zurich researcher who led the study, which was published in Science Advances.

“The fact that AI-generated disinformation is not only cheaper and faster, but also more effective, gives me nightmares,” he says. He believes that if the team repeated the study with OpenAI’s latest large language model, GPT-4, the difference would be even greater given how much more powerful GPT-4 is.

To test our susceptibility to different types of text, the researchers chose common disinformation topics, including climate change and covid. Then they asked OpenAI’s GPT-3 large language model to generate 10 true and 10 false tweets, and collected a random sample of true and false tweets from Twitter.

Next, they recruited 697 people to complete an online quiz to judge whether the tweets were AI-generated or culled from Twitter, and whether they were accurate or contained misinformation. They found that participants were 3% less likely to believe human-written fake tweets than those written by AI.

Researchers aren’t sure why people might be more likely to believe tweets written by the AI. But the way GPT-3 orders information may have something to do with it, according to Spitale.

“GPT-3 text tends to be a little more structured than organic [human-written] text,” he says. “But it’s also condensed, so it’s easier to process.”

The boom in generative AI puts powerful and accessible AI tools in the hands of everyone, including bad actors. Models like GPT-3 can generate convincing-looking false text, which could be used to produce false narratives quickly and cheaply for conspiracy theorists and disinformation campaigns. The tools to combat the problem, AI text detection systems, are still in the early stages of development, and many aren’t entirely accurate.

OpenAI is aware that its AI tools could be weaponized to produce large-scale disinformation campaigns. Although such use violates its policies, the company released a report in January warning that it is “almost impossible to guarantee that large language models are never used to generate disinformation.” OpenAI did not immediately respond to a request for comment.
