AI succeeds in combating conspiracy theories

Arguing with a conspiracy theorist that the moon landing wasn’t staged is usually a futile effort, but ChatGPT might have better luck, according to new research by Cornell, American University and Massachusetts Institute of Technology psychologists.

In a new paper, “Durably Reducing Conspiracy Beliefs Through Dialogues With AI,” which publishes Sept. 13 in Science, the researchers show that conversations with large language models can effectively reduce individuals’ belief in conspiracy theories – and that these reductions last for at least two months – a finding that offers new insights into the psychological mechanisms behind the phenomenon, as well as potential tools to fight conspiracies’ spread.

The persistence of conspiracy theories in the face of counter-evidence has led many researchers to conclude that they fulfill deep-seated psychological needs, rendering them impervious to facts and logic. But for researchers Thomas Costello, lead author and assistant professor at American University; David Rand, MIT Sloan School of Management professor; and Gordon Pennycook, associate professor of psychology and Himan Brown Faculty Fellow in Cornell’s College of Arts and Sciences, who have conducted extensive research on the spread and uptake of misinformation, that conclusion didn’t ring true. Instead, they suspected a simpler explanation was at play – that people just hadn’t been exposed to convincing-enough evidence.

Effectively debunking conspiracy theories, in other words, would require two things: personalized arguments and access to vast quantities of information – both now readily available through generative AI. 

To test their theory, Pennycook, Costello and Rand harnessed the power of GPT-4 Turbo, OpenAI’s most advanced large language model, to engage more than 2,000 conspiracy believers in personalized, evidence-based dialogues. Participants were first asked to identify and describe a conspiracy theory they believed in using their own words, along with the evidence supporting their belief.

GPT-4 Turbo then used this information to generate a personalized summary of the participant's belief and initiate a dialogue. The AI was instructed to persuade users that their beliefs were untrue, adapting its strategy based on each participant’s unique arguments and evidence.
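The paper’s actual code is not reproduced here, but the setup the researchers describe can be sketched with OpenAI’s Python client. In the sketch below, the model name, system-prompt wording and the helper functions (start_debunking_dialogue, ai_turn) are illustrative assumptions, not the study’s implementation.

```python
# Minimal sketch (not the study's code) of a personalized debunking
# dialogue using the OpenAI Python client. Prompt wording, model choice
# and function names are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def start_debunking_dialogue(belief: str, evidence: str) -> list[dict]:
    """Seed the conversation with the participant's own description
    of their conspiracy belief and the evidence they cite for it."""
    return [
        {
            "role": "system",
            "content": (
                "A participant believes the following conspiracy theory: "
                f"{belief}\nTheir supporting evidence: {evidence}\n"
                "Politely persuade them that this belief is untrue, using "
                "specific factual counter-evidence, and adapt to the "
                "particular arguments they raise."
            ),
        }
    ]


def ai_turn(messages: list[dict]) -> str:
    """Run one model turn and append it to the dialogue history."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # the study used GPT-4 Turbo
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply
```

In use, each participant reply would be appended to the history as a "user" message and ai_turn called again, repeating for however many rounds the conversation runs.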

These conversations, lasting an average of 8.4 minutes, allowed the AI to directly address and refute the specific evidence supporting each individual’s conspiratorial beliefs, an approach that was impossible to test at scale prior to the technology’s development. 

The results of the intervention were striking. On average, the conversations reduced participants’ belief in their chosen conspiracy theory by about 20%, and about one in four participants – all of whom believed the conspiracy beforehand – disavowed it after the conversation.

The AI conversation’s effectiveness was not limited to specific types of conspiracy theories. It successfully challenged beliefs across a wide spectrum, including conspiracies that potentially hold strong political and social salience, like those involving COVID-19 and fraud during the 2020 election. 

“This research indicates that evidence matters much more than we thought it did – so long as it is actually related to people’s beliefs,” Pennycook said. “This has implications far beyond just conspiracy theories: Any number of beliefs based on poor evidence could, in theory, be undermined using this approach.”

While the intervention was less successful among participants who reported that the conspiracy was central to their worldview, it did still have an impact, with little variance across demographic groups. 

“I was quite surprised at first, but reading through the conversations made me much less skeptical. The AI provided page-long, highly detailed accounts of why the given conspiracy was false in each round of conversation – and was also adept at being amiable and building rapport with the participants,” Costello said.

Notably, the impact of the AI dialogues extended beyond mere changes in belief. Participants also demonstrated shifts in their behavioral intentions related to conspiracy theories. They reported being more likely to unfollow people espousing conspiracy theories online, and more willing to engage in conversations challenging those conspiratorial beliefs.

The researchers note the need for continued responsible AI deployment, since the technology could potentially be used to convince users to believe in conspiracies as well as to abandon them.

Nevertheless, the potential for positive applications of AI to reduce belief in conspiracies is significant. For example, AI tools could be integrated into search engines to offer accurate information to users searching for conspiracy-related terms. 

“Although much ink has been spilled over the potential for generative AI to supercharge disinformation, our study shows that it can also be part of the solution,” Rand said. “Large language models like GPT-4 have the potential to counter conspiracies at a massive scale.”

The team has developed a website where visitors can try out the software.
