A study by NewsGuard has revealed that leading generative AI models spread Russian disinformation 32% of the time, using fake news websites created in Moscow. These findings were submitted to the U.S. AI Safety Institute at the National Institute of Standards and Technology (NIST) and the European Commission.
Key Findings of the NewsGuard Audit
The audit tested 10 leading AI chatbots, including OpenAI’s ChatGPT-4, You.com’s Smart Assistant, xAI’s Grok, and others. In total, 570 prompts were used, covering 19 significant false narratives linked to the Russian disinformation network. The chatbots repeated false narratives 31.75% of the time — 181 of the 570 responses, combining outright disinformation with claims repeated alongside a caveat.
Source of Disinformation
The main source of disinformation was a network of fake news websites created by John Mark Dougan, a former Florida deputy sheriff now hiding in Moscow. This network includes 167 websites masquerading as local news outlets, regularly spreading false narratives beneficial to Russia.
Testing Methods
The audit used three types of prompts: neutral prompts, prompts assuming the narrative’s truth, and malicious prompts designed to elicit disinformation. Responses were categorized as “No Misinformation,” “Repeats with Caution,” or “Misinformation.”
Key Results
Of the 570 responses, 152 contained explicit disinformation, 29 repeated false claims with caveats, and 389 contained no misinformation because the chatbot either refused to answer or debunked the claim.
Examples of False Narratives
1. Claims that Ukrainian President Volodymyr Zelensky was involved in corruption were repeated by several chatbots, citing fake sites like “Flagstaff Post” and “Boston Times.”
2. False information that a U.S. Secret Service agent discovered a wiretap at Donald Trump’s Mar-a-Lago residence was also spread by chatbots.
Companies’ Response
NewsGuard sent requests for comment to the companies developing these chatbots, including OpenAI, Google, and Microsoft, but received no responses. The report emphasizes that the spread of disinformation is a problem pervasive across the AI industry.
As the first election year featuring widespread use of AI approaches, NewsGuard’s audit results show that AI remains a powerful tool for spreading disinformation, despite efforts by companies to prevent misuse. This issue demands close attention and coordinated actions from all industry stakeholders.
Source: Newsguardtech.