Computer scientists, data engineers, and generative model researchers across the private sector and academia should be aware of how anti-democratic actors are poisoning their AI models and fundamentally altering the landscape of the internet. We're not far from living in AI-induced internet ouroboros, when LLMs trained on AI-generated content, publish even more content online, and on and on. Soon, the most widely available content on the internet could be disinformation-riddled AI slop. In my latest coauthored piece for the Bulletin of the Atomic Scientists, I argue that AI developers should take ownership of the risks of LLM grooming before they become a more widespread problem. This crucially includes the implementation of guardrails on LLM training datasets to ensure that pro-Russia propaganda--such as that churned out by the Pravda network--does not end up being regurgitated by their models. The American Sunlight Project's work on the Pravda network shows that hostile actors can and will construct vast networks of harmful content to taint the future outputs of LLMs such as AI chatbots. The Pravda network suggests a merely rudimentary phase of LLM grooming, however, and model at the center of this particular information operation could be copied by other more sophisticated anti-democratic actors. Unless we want the internet as we know it to break forever, we must collectively act now to prevent LLM grooming from flooding the internet with authoritarian disinformation at an exponential rate. https://lnkd.in/eX3u9n-W
Russian Impact on AI Training Data
Explore top LinkedIn content from expert professionals.
-
-
🔴 AI IS UNDER ATTACK—AND IT’S COMING FROM INSIDE THE TRAINING SET. When we talk about AI trust, we can no longer focus just on prompts, ethics, or post-production fact-checking. Ivan Savov, FARPI CRPS latest investigation into Russia’s manipulation of Wikipedia citations to influence AI training pipelines is a clear sign of what we’ve been warning about: ⚠️ This isn’t misinformation. ⚠️ This is epistemic sabotage. By inserting false narratives into widely scraped public sources, hostile actors aren’t hacking machines—they’re rewriting what machines believe to be true. And when AI becomes the reference point for global truth? That’s a national security risk. Once contaminated, this information becomes normalized. Scraped. Ingested. And repeated without flags or context—because the system never knew it was poisoned. We congratulate Ivan Savov for keeping this critical issue front and center. This is exactly why we began pushing months ago for: * Provenance tracking at the training level * Strategic red teaming for epistemic defense * And recursive truth validation, not just reactive moderation Trust in AI begins at the source. We either secure that layer—or we let machines manufacture reality. Linda Restrepo
-
Ask #ChatGPT about a topic sensitive to Russia, and you might not get a straight answer. Not because #AI models are inherently biased, but because they’re being deliberately manipulated. Russian disinformation networks aren’t just trying to influence people anymore—they’re trying to influence AI itself. They flood the internet with fabricated content, ensuring that language models trained on public data absorb and repeat Kremlin-friendly narratives. And it’s working. A NewsGuard audit found that top AI models—ChatGPT, Claude, Gemini, and Copilot —repeated Russian disinformation 33% of the time. The Florida Man who Hacked AI: This isn’t an accident. It’s part of a coordinated effort by the #Pravda network, a Kremlin-backed disinformation machine that operates in 49 countries, publishing fake stories in dozens of languages across 150 domains. A former Florida sheriff’s deputy turned Moscow propagandist named John Mark Dougan. From Florida to Moscow: The Strange Case of John Mark Dougan. Dougan, a Russian asset? not always. Once a U.S. Marine and Palm Beach County cop, he leaked early in the early 2010s internal law enforcement files. That got him on the FBI’s radar. In 2016, when federal agents raided his home in connection with a hacking case, he fled to Moscow. There, Dougan reinvented himself. He didn’t just become an exile—he became a voice for Russian propaganda, blending real-world grievances with conspiracy theories, all wrapped in the veneer of “independent journalism.” Then, at a Russian government-backed conference, he said: “By pushing these Russian narratives, we can actually change worldwide AI.” And he tried. Months before Russia invaded Ukraine, Dougan helped plant the lie that the U.S. was running bioweapons labs in Ukraine—a claim that the Kremlin later used as justification for war. His RT-financed videos spread across YouTube, circumventing platform bans by being rebranded under different names. For a while, it worked. Then NewsGuard caught on. A deep-dive investigation uncovered his ties to Russian intelligence and RT’s covert funding of his content. YouTube, after dragging its feet, finally removed his videos. That’s when things took a darker turn. Dougan didn’t just protest—he went after his critics. He doxxed NewsGuard co-founder Steven Brill, posting aerial footage of his home. He impersonated an FBI agent, attempting to frame Brill in a fabricated scandal. He incited online threats, triggering an FBI counterterrorism investigation. It was the kind of escalation you’d expect from a cornered propagandist. Yet despite the growing heat, Dougan kept working. Dougan isn’t just a loud voice in exile—he’s allegedly well-connected. Reports link him to: 🔹 Yury Khoroshenky (aka Yury Khoroshevsky) – a GRU operative from Unit 29155, Russia’s notorious covert sabotage unit. https://lnkd.in/dbDbEeWw 🔹 Valery Korovin, head of the Kremlin-backed Center for Geopolitical Expertise.
- +1
-
Russian propaganda may be influencing certain answers from AI chatbots, including OpenAI’s ChatGPT and Meta’s Meta AI, according to a new report. NewsGuard, a company that develops rating systems for news and information websites, claims to have found evidence that a Moscow-based network named “Pravda” is publishing false claims to affect the responses of AI models. Pravda has flooded search results and web crawlers with pro-Russian falsehoods, publishing 3.6 million misleading articles in 2024 alone, per NewsGuard, citing statistics from the nonprofit American Sunlight Project. NewsGuard’s analysis, which probed 10 leading chatbots, found that the chatbots collectively repeated false Russian disinformation narratives, like that the U.S. operates secret bioweapons labs in Ukraine, 33% of the time. According to NewsGuard, the Pravda network’s effectiveness in infiltrating AI chatbot outputs can be largely attributed to its techniques, which involve search engine optimization strategies to boost the visibility of its content. This may prove to be an intractable problem for chatbots heavily reliant on web engines. #AI #Russia #Pravda #influenceoperations #datapoisoning