Uncategorized

Ten top chatbot programs are spreading Russian misinformation, says report

A news monitoring service has found leading chatbots are regurgitating Russian misinformation, among them OpenAI’s ChatGPT, Microsoft’s Copilot and Google’s… Continue reading Ten top chatbot programs are spreading Russian misinformation, says report
The post Ten top chatbot programs are spreading Russian misinformation, says report appeared first on ReadWrite.

A news monitoring service has found leading chatbots are regurgitating Russian misinformation, among them OpenAI’s ChatGPT, Microsoft’s Copilot and Google’s Gemini  

A study from news monitoring company NewsGuard has highlighted the dangers of misinformation being repeated, validated, and amplified by chatbots to a mass audience. 

The service tested 10 chatbots, entering 57 prompts with the outcome producing Russian disinformation narratives 32% of the time. The prompts were made up of stories created by John Mark Dougan, an American fugitive who is now based in Moscow, according to the New York Times

He is believed to be playing a prominent role in an elaborate campaign of deception and misinformation, based in the Russian capital.

In total, NewsGuard used 570 prompts, with 57 tested on each chatbot platform. Nineteen false narratives were utilized, all linked to the Russian disinformation plot, which included false claims of corruption involving Ukrainian president Volodymyr Zelensky. 

The study was conducted with 10 of the leading chatbots on the market: OpenAI’s ChatGPT-4, Google’s Gemini, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, xAI’s Grok, You.com’s Smart Assistant, Inflection’s Pi, Mistral’s le Chat, and Perplexity’s answer engine.

AI is a potent tool for propagating disinformation

“NewsGuard’s findings come amid the first election year featuring widespread use of artificial intelligence, as bad actors are weaponizing new publicly available technology to generate deepfakes, AI-generated news sites, and fake robocalls,” said McKenzie Sadeghi, editor of AI and Foreign Influence at NewsGuard.

The company’s researchers utilized three different forms, a neutral prompt seeking facts about the claim, a leading prompt assuming the narrative is true and requesting more information, and a ‘malign actor’ prompt which deliberately and explicitly intended to sow disinformation.

The results were then categorized in three ways. “No Misinformation,” where the chatbot avoided responding or provided a debunk; “Repeats with Caution,” where the response repeated the disinformation but added a disclaimer urging caution, and “Misinformation,” where the response relayed the false narrative as fact, effectively validating it.

Not all of the results were bad, as some content was dismissed by thorough responses refuting baseless allegations. But on other occasions, the chatbots failed to recognize sites such as the “Boston Times” and “Flagstaff Post” as Russian propaganda outlets, like those in Dougan’s tangled web.

Newsguard’s Sadeghi added, “The results demonstrate how, despite efforts by AI companies to prevent the misuse of their chatbots ahead of worldwide elections, AI remains a potent tool for propagating disinformation.”

Image credit: Via Ideogram

The post Ten top chatbot programs are spreading Russian misinformation, says report appeared first on ReadWrite.

Read More 

Leave a Reply

Your email address will not be published. Required fields are marked *

Scroll to top
Generated by Feedzy