    WormGPT: New AI Tool Enables Cybercriminals to Launch Advanced Cyber Attacks

    With the rising popularity of generative artificial intelligence (AI), it comes as no surprise that malicious actors have repurposed the technology to their advantage, opening up avenues for accelerated cybercrime.

    According to findings from SlashNext, a new generative AI cybercrime tool called WormGPT has been advertised on underground forums as a means for adversaries to execute sophisticated phishing and business email compromise (BEC) attacks.

    “This tool presents itself as a blackhat alternative to GPT models, specifically designed for malicious activities,” said security researcher Daniel Kelley. “Cybercriminals can utilize this technology to automate the creation of highly convincing fake emails, personalized to the recipient, thereby increasing the success rate of the attack.”

    The software’s author described it as the “biggest enemy of the well-known ChatGPT” that “lets you do all sorts of illegal stuff.”

    In the wrong hands, tools like WormGPT can become powerful weapons, especially at a time when OpenAI’s ChatGPT and Google’s Bard are tightening their safeguards against the abuse of large language models (LLMs) to fabricate persuasive phishing emails and generate malicious code.

    “Consequently, it is much easier to generate malicious content using Bard’s capabilities,” Check Point stated in a recent report, noting that Bard’s anti-abuse restrictions in the cybersecurity realm are weaker than ChatGPT’s.

    Earlier this year, the Israeli cybersecurity firm disclosed how cybercriminals were circumventing ChatGPT’s restrictions by exploiting its API. They were also trading stolen premium accounts and selling brute-force software to hack into ChatGPT accounts using extensive lists of email addresses and passwords.

    The fact that WormGPT operates without ethical boundaries underscores the threat posed by generative AI, enabling even novice cybercriminals to launch attacks swiftly and on a large scale, without requiring extensive technical expertise.

    Furthermore, threat actors are promoting “jailbreaks” for ChatGPT: specialized prompts and inputs engineered to manipulate the tool into disclosing sensitive information, producing inappropriate content, or generating harmful code.

    “Generative AI can create emails with impeccable grammar, making them seem legitimate and reducing the likelihood of being flagged as suspicious,” Kelley stated.

    “The use of generative AI democratizes the execution of sophisticated BEC attacks. Even attackers with limited skills can employ this technology, making it an accessible tool for a broader spectrum of cybercriminals.”

    This disclosure comes as researchers from Mithril Security “surgically” modified an existing open-source AI model, GPT-J-6B, to spread disinformation and then uploaded it to Hugging Face, a public model repository, where it could be pulled into other applications, a technique known as LLM supply chain poisoning.

    The success of this technique, dubbed PoisonGPT, hinges on uploading the modified model under a name that impersonates a reputable publisher, in this case a typosquatted variant of EleutherAI, the research group behind GPT-J.
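
    As a defensive illustration, a minimal sketch follows. It is not drawn from the SlashNext or Mithril Security reports, and the pinned revision and expected digest below are placeholder assumptions: the idea is to pin an exact model revision and verify a known-good checksum before loading weights from a public repository, so that a typosquatted or silently swapped upload fails the check.

    # Hypothetical defensive sketch: verify a pinned Hugging Face model
    # artifact against a checksum recorded when the model was audited.
    import hashlib
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub

    REPO_ID = "EleutherAI/gpt-j-6b"          # check the publisher, not a lookalike
    REVISION = "<pinned-commit-sha>"         # placeholder: commit audited in advance
    EXPECTED_SHA256 = "<known-good-digest>"  # placeholder: digest recorded at audit

    # Download exactly the pinned revision; a later, tampered upload under
    # the same repository name will not be fetched.
    path = hf_hub_download(repo_id=REPO_ID, filename="pytorch_model.bin",
                           revision=REVISION)

    # Hash the file in chunks (model weights are large) and compare.
    sha = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            sha.update(chunk)
    if sha.hexdigest() != EXPECTED_SHA256:
        raise RuntimeError("Model checksum mismatch; refusing to load weights.")

    Pinning a revision and digest shifts trust away from the repository name, which PoisonGPT shows can be spoofed, and onto a cryptographic fingerprint of the artifact that was actually reviewed.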

    The threat posed by generative AI underscores the need for robust security controls and user vigilance to protect against the growing landscape of AI-assisted phishing, malware, and the exploitation of software vulnerabilities.
