    OpenAI, Meta, and TikTok Combat Stealth Influence Operations, Some Fueled by AI

    On Thursday, OpenAI disclosed that it had disrupted five covert influence operations (IO) originating in China, Iran, Israel, and Russia. These operations sought to misuse its AI tools to sway public opinion and political outcomes online while concealing their true identities.

    Detected over the past quarter, these activities employed AI models to generate brief comments and extensive articles in various languages, fabricate names and bios for social media profiles, perform open-source research, debug simple code, and translate and proofread texts.

    OpenAI revealed that two of the networks were linked to Russian actors, including a previously unknown operation dubbed Bad Grammar. This operation primarily used at least a dozen Telegram accounts to target audiences in Ukraine, Moldova, the Baltic States, and the United States (U.S.) with low-quality content in Russian and English.

    “The network leveraged our models and Telegram accounts to establish a comment-spamming pipeline,” OpenAI stated. “Initially, the operators used our models to debug code designed to automate posting on Telegram. They then generated comments in Russian and English in response to specific Telegram posts.”

    The operators also used the models to create comments under various fictitious personas representing different demographics from across the U.S. political spectrum.

    Another Russian-linked operation, the prolific Doppelganger network (aka Recent Reliable News), was sanctioned by the U.S. Treasury Department’s Office of Foreign Assets Control (OFAC) in March for engaging in cyber influence operations.

    This network used OpenAI’s models to generate comments in English, French, German, Italian, and Polish, which were shared on X and 9GAG. They also translated and edited articles from Russian to English and French, posting them on fraudulent websites maintained by the group, generating headlines, and converting news articles into Facebook posts.

    “This activity targeted audiences in Europe and North America, focusing on generating content for websites and social media,” OpenAI noted. “Most of the published content centered on the war in Ukraine, portraying Ukraine, the U.S., NATO, and the EU negatively while casting Russia in a positive light.”

    The other three activity clusters are:

    1. Spamouflage – A Chinese-origin network using AI models to research public social media activity; generate texts in Chinese, English, Japanese, and Korean for posting across X, Medium, and Blogger; propagate content criticizing Chinese dissidents and highlighting abuses against Native Americans in the U.S.; and debug code for managing databases and websites.
    2. International Union of Virtual Media (IUVM) – An Iranian operation using AI models to generate and translate long-form articles, headlines, and website tags in English and French for subsequent publication on iuvmpress[.]co.
    3. Zero Zeno – An Israeli network run by a for-hire threat actor tracked as STOIC, using AI models to generate and disseminate anti-Hamas, anti-Qatar, pro-Israel, anti-BJP, and pro-Histadrut content across Instagram, Facebook, X, and affiliated websites, targeting users in Canada, the U.S., India, and Ghana.

    “The Zero Zeno operation also used our models to create fictional personas and bios for social media based on variables such as age, gender, and location, and to conduct research into people in Israel who commented publicly on the Histadrut trade union,” OpenAI added, emphasizing that its models did not supply personal data in response to these prompts.

    OpenAI stressed in its first IO threat report that none of these campaigns “meaningfully increased their audience engagement or reach” by exploiting its services.

    These developments highlight growing concerns that generative AI (GenAI) tools could enable malicious actors to create realistic text, images, and video content, complicating the detection and response to misinformation and disinformation operations.

    “So far, the situation is evolution, not revolution,” stated Ben Nimmo, principal investigator of intelligence and investigations at OpenAI. “That could change. It’s important to keep watching and sharing.”

    Meta Highlights STOIC and Doppelganger

    Separately, Meta’s quarterly Adversarial Threat Report detailed STOIC’s influence operations, noting the removal of nearly 500 compromised and fake Facebook and Instagram accounts used to target users in Canada and the U.S.

    “This campaign demonstrated relative discipline in maintaining operational security, including leveraging North American proxy infrastructure to anonymize its activity,” Meta reported.

    Meta also removed hundreds of accounts from Bangladesh, China, Croatia, Iran, and Russia for engaging in coordinated inauthentic behavior (CIB) to influence public opinion and push political narratives on topical events.

    The China-linked malign network mainly targeted the global Sikh community, using dozens of Instagram and Facebook accounts to spread manipulated imagery and posts related to a non-existent pro-Sikh movement and criticism of the Indian government.

    While Meta observed no novel or sophisticated uses of GenAI-driven tactics, it highlighted instances of AI-generated video news readers documented by Graphika and GNET, suggesting that, despite these campaigns being largely ineffective, threat actors are actively experimenting with the technology.

    Doppelganger’s “smash-and-grab” efforts have evolved, including text obfuscation to evade detection (e.g., using “U. kr. ai. n. e” instead of “Ukraine”) and abandoning the practice of linking to typosquatted domains masquerading as news media outlets.
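
    Platform defenders can blunt this kind of obfuscation with simple text normalization before keyword matching. The following Python sketch is illustrative only (the watchlist and function names are assumptions, not tooling described by any of the companies above); it collapses intra-word separators so that a string like “U. kr. ai. n. e” matches its plain form:

    ```python
    import re

    # Illustrative watchlist; a real system would use larger, curated term lists.
    WATCHLIST = {"ukraine"}

    def normalize_obfuscated(text: str) -> str:
        """Collapse periods and spaces inserted inside words
        (e.g. "U. kr. ai. n. e" -> "ukraine") before matching."""
        collapsed = re.sub(r"(?<=\w)[.\s]+(?=\w)", "", text)
        return collapsed.lower()

    def matches_watchlist(text: str) -> bool:
        """Return True if any watchlist term appears in the normalized text.
        Note: aggressive collapsing trades some false positives for recall."""
        normalized = normalize_obfuscated(text)
        return any(term in normalized for term in WATCHLIST)

    print(matches_watchlist("U. kr. ai. n. e"))   # True
    print(matches_watchlist("The weather today"))  # False
    ```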

    “The campaign is supported by a network of two types of news websites: typosquatted legitimate media outlets and independent news websites,” Sekoia noted in a report on the pro-Russian adversarial network.

    “Disinformation articles are published on these websites and then disseminated and amplified via inauthentic social media accounts, especially on video-hosting platforms like Instagram, TikTok, Cameo, and YouTube.”

    These social media profiles, created in waves, leverage paid ad campaigns on Facebook and Instagram to direct users to propaganda websites. The Facebook accounts, described as burner accounts, share a single article and are then abandoned.

    The French cybersecurity firm described these industrial-scale campaigns as multi-layered, leveraging the social botnet to initiate a redirection chain passing through intermediate websites to lead users to the final page.
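
    Analysts can reconstruct such chains by following each redirect manually instead of letting the HTTP client resolve them silently, recording every intermediate URL along the way. Below is a minimal, hypothetical sketch using Python's requests library (the starting URL is a placeholder; no specific Doppelganger infrastructure is implied):

    ```python
    import requests
    from urllib.parse import urljoin

    def trace_redirect_chain(url: str, max_hops: int = 10) -> list[str]:
        """Follow HTTP redirects one hop at a time, returning every
        URL visited so intermediate sites can be inspected."""
        chain = [url]
        for _ in range(max_hops):
            resp = requests.get(chain[-1], allow_redirects=False, timeout=10)
            location = resp.headers.get("Location")
            if location is None:
                break  # no further redirect: final page reached
            # Resolve relative Location headers against the current URL.
            chain.append(urljoin(chain[-1], location))
        return chain

    # Placeholder URL for illustration only.
    for hop in trace_redirect_chain("https://example.com/"):
        print(hop)
    ```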

    Doppelganger and another pro-Russian propaganda network, Portal Kombat, have amplified content from a nascent influence network dubbed CopyCop, showing a concerted effort to project Russia in a favorable light.

    Recorded Future reported that CopyCop, likely operated from Russia, uses inauthentic media outlets in the U.S., the U.K., and France to promote narratives undermining Western policies, spreading content on the Russo-Ukrainian war and the Israel-Hamas conflict.

    “CopyCop extensively used generative AI to plagiarize and modify content from legitimate media sources to tailor political messages with specific biases,” the company said. “This included content critical of Western policies and supportive of Russian perspectives on international issues.”

    TikTok Disrupts Covert Influence Operations

    Earlier in May, ByteDance-owned TikTok reported uncovering and dismantling several networks on its platform since the start of the year, tracing them back to Bangladesh, China, Ecuador, Germany, Guatemala, Indonesia, Iran, Iraq, Serbia, Ukraine, and Venezuela.

    Even as it faces scrutiny in the U.S. over a law that could force its parent company to sell the platform or face a ban, TikTok has become a preferred platform for Russian state-affiliated accounts in 2024, according to a Brookings Institution report.

    TikTok has also served as a breeding ground for Emerald Divide, a complex influence campaign orchestrated by Iran-aligned actors that has targeted Israeli society since 2021.

    “Emerald Divide is noted for its dynamic approach, swiftly adapting its influence narratives to Israel’s evolving political landscape,” Recorded Future stated.

    “It leverages modern digital tools like AI-generated deepfakes and strategically operated social media accounts to target diverse and often opposing audiences, effectively stoking societal divisions and encouraging actions such as protests and anti-government messages.”
