Russian Propaganda Has Now Infected Western AI Chatbots — New Study

Forbes - Mar 10th, 2025

A new analysis by NewsGuard reveals that many leading Western AI models are inadvertently spreading Russian propaganda. The study found that, nearly 33% of the time, AI chatbots repeated narratives from a Moscow-based disinformation network known as 'Pravda.' The network floods the web with pro-Kremlin falsehoods at a scale designed to be picked up by AI systems, having published 3.6 million articles in 2024 alone. The research underscores that manipulating AI models is a growing tactic in Russian influence operations, with seven of the ten evaluated chatbots directly citing Pravda sites as credible sources.

The implications of this study are profound: as AI tools become more embedded in everyday life, they risk serving as channels for disinformation. NewsGuard highlights the difficulty AI companies face in blocking Pravda's sprawling network without impairing their own systems, since the network continuously expands with new domains. The report stresses the need for AI companies to adopt more stringent verification practices, warning that the risk of misinformation extends beyond political narratives into critical areas such as finance and health. Users are also encouraged to cross-check AI-generated information against tools like NewsGuard's Misinformation Fingerprints.
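To see why blocking such a network is hard in practice, here is a minimal sketch in Python, assuming a hypothetical static blocklist of network domains (all domain names are invented for illustration, and this is not NewsGuard's or any AI company's actual methodology): the filter catches only domains it already knows about, so each newly registered site slips through until the list is updated.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of known disinformation domains. In practice the
# network registers new domains faster than any static list can track.
KNOWN_DISINFO_DOMAINS = {
    "news-pravda.example",
    "pravda-en.example",
}

def flag_citations(cited_urls):
    """Split a chatbot's cited URLs into flagged and unflagged lists.

    A static blocklist catches only domains it already knows about;
    a newly registered domain passes through unflagged.
    """
    flagged, unflagged = [], []
    for url in cited_urls:
        domain = urlparse(url).netloc.lower()
        # Flag the domain itself or any of its subdomains.
        if any(domain == d or domain.endswith("." + d)
               for d in KNOWN_DISINFO_DOMAINS):
            flagged.append(url)
        else:
            unflagged.append(url)
    return flagged, unflagged

citations = [
    "https://news-pravda.example/story-123",        # on the blocklist
    "https://pravda-new-domain.example/story-456",  # new domain, unlisted
]
flagged, unflagged = flag_citations(citations)
print(flagged)    # ['https://news-pravda.example/story-123']
print(unflagged)  # ['https://pravda-new-domain.example/story-456']
```

However such a list is maintained, each new domain enjoys a window of unfiltered reach before it is identified and added, which is the dynamic the report describes.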

Story submitted by Fairstory

RATING

7.2
Fair Story
Consider it well-founded

The article effectively highlights the critical issue of AI systems being manipulated by Russian disinformation networks, with a strong factual basis supported by NewsGuard's research. It is timely and relevant, addressing significant public interest concerns about the integrity of information in the digital age. However, the article could benefit from a more balanced presentation by including a wider range of perspectives, such as responses from AI companies or independent experts. While the language is clear and the structure logical, further explanation of technical terms and methodologies would enhance transparency and readability. Overall, the article serves as an important contribution to discussions on AI ethics and information security, but its impact could be strengthened with more comprehensive coverage of the issue.

RATING DETAILS

8
Accuracy

The article presents a well-supported claim that AI models are being manipulated by Russian disinformation networks, specifically the Pravda network. This is backed by research from NewsGuard, which provides a credible basis for the claims. The article accurately describes the operations of the Pravda network and its impact on AI chatbots, citing specific examples of the misinformation spread, such as the false claim that Ukrainian President Zelensky banned Truth Social. However, some claims, like the exact number of articles published by Pravda, require further verification to ensure precision.

6
Balance

The article primarily presents the perspective of NewsGuard and its analysts, focusing on the threat posed by Russian disinformation. While it effectively highlights the issue, it lacks a diversity of viewpoints, such as responses from AI companies or Russian sources, which could provide a more balanced understanding. The absence of these perspectives might suggest a bias towards emphasizing the threat without exploring potential counterarguments or solutions from those directly involved.

8
Clarity

The article is well-structured and uses clear language to convey the complex issue of AI manipulation by disinformation networks. It logically progresses from identifying the problem to explaining its implications and potential solutions. The use of examples, such as the false claim about Zelensky, helps illustrate the issue effectively. However, some technical terms, like 'LLM grooming,' could be better explained for general readers.

7
Source quality

The article relies heavily on NewsGuard, a reputable source known for its work in identifying misinformation. This lends credibility to the claims made. However, the article could benefit from additional sources to corroborate the findings, such as statements from AI companies or independent experts in AI and cybersecurity. The reliance on a single primary source limits the depth of the analysis and introduces the potential for bias.

7
Transparency

The article clearly attributes its findings to NewsGuard and provides insights into the methodology used, such as the analysis of AI chatbots and the concept of 'LLM grooming.' However, it lacks detailed explanations of how the data was collected and analyzed, which would enhance transparency. The article also does not disclose any potential conflicts of interest, whether NewsGuard's or the reporting outlet's, a disclosure that is crucial for assessing impartiality.

Sources

  1. https://www.bangkokpost.com/world/2976768/russian-disinformation-infects-ai-chatbots-researchers-warn
  2. https://techxplore.com/news/2025-03-russian-disinformation-infects-ai-chatbots.html
  3. https://abc3340.com/news/nation-world/russian-propaganda-being-spread-through-popular-ai-chatbots-report-russian-network-exploits-chatbots-newsguard-disinformation-warning
  4. https://www.axios.com/2025/03/06/exclusive-russian-disinfo-floods-ai-chatbots-study-finds
  5. https://www.infodocket.com/2025/03/06/axios-russian-disinformation-floods-ai-chatbots-study-finds/