
New AI Tool WormGPT: Cybercriminals Launch Cyber Attacks


The field of generative AI has seen phenomenal growth in recent years. Unfortunately, the technology’s widespread adoption has not escaped the attention of malicious actors, and a corresponding rise in cybercrime is anticipated. SlashNext recently reported disturbing news: cybercriminals are being drawn to WormGPT, a new generative AI cybercrime tool that promises to let them launch highly sophisticated phishing and business email compromise (BEC) attacks.

Security researcher Daniel Kelley shed light on the tool, characterizing it as a “blackhat” variant of GPT models developed for criminal purposes. Cybercriminals can use it to automate the creation of persuasive fake emails tailored to a specific target, increasing the effectiveness of their attacks. The tool’s creator brazenly promotes it as a superior alternative to the widely known ChatGPT, touting its ability to carry out a wide range of illegal activities.

Even though projects like OpenAI’s ChatGPT and Google’s Bard actively work to curb the misuse of large language models (LLMs) for producing convincing phishing emails and malicious code, tools like WormGPT have become formidable weapons in the wrong hands. According to a recent report by Check Point, Bard’s anti-abuse restrictions in the cybersecurity domain are considerably weaker than ChatGPT’s, making it easier to generate dangerous content with Bard’s capabilities.


The Israeli cybersecurity company discovered in January of this year that cybercriminals had already broken ChatGPT’s security measures: they used its API for malicious purposes, sold lists of usernames and passwords, and traded stolen premium accounts with one another. WormGPT’s lack of ethical constraints highlights the fundamental danger of generative AI: it allows attackers of all skill levels to launch widespread attacks quickly and easily.

As if that weren’t bad enough, threat actors are promoting “jailbreaks” for ChatGPT: specially crafted prompts and inputs designed to trick the tool into revealing private information, generating offensive material, or producing malicious code. As Kelley points out, generative AI can write emails with flawless grammar, making them appear legitimate and less likely to be flagged as suspicious. Sophisticated BEC attacks are thereby accessible to a much wider audience. “This technology is so simple to use that even attackers with limited skill sets can take advantage of it,” Kelley said.
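Since defenders can no longer rely on spotting bad grammar, other signals matter more. Below is a minimal, illustrative Python sketch of one common BEC defense (not a technique described in the article or attributed to any vendor): flagging sender domains that sit within a small edit distance of domains an organization actually corresponds with. The domain list and threshold are assumptions chosen for the example.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hypothetical allow-list of domains this organization legitimately emails with.
KNOWN_DOMAINS = {"example.com", "example-corp.com"}

def looks_like_spoof(sender_domain: str) -> bool:
    """True if the domain is *near* a known one but not an exact match."""
    if sender_domain in KNOWN_DOMAINS:
        return False
    # Threshold of 2 edits is an assumption; tune for your domain set.
    return any(edit_distance(sender_domain, d) <= 2 for d in KNOWN_DOMAINS)

print(looks_like_spoof("examp1e.com"))  # True: one character off "example.com"
```

A check like this catches lookalike domains regardless of how polished the email body is, which is exactly the gap AI-written phishing opens up.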

These findings align with a separate disclosure made around the same time by Mithril Security. Its researchers “surgically” altered an existing open-source AI model, GPT-J-6B, to spread false information. The modified model, dubbed PoisonGPT, was released to a public repository (Hugging Face) to demonstrate how the LLM supply chain can be poisoned. For the attack to work, the altered model had to be uploaded under a name impersonating a legitimate publisher; in this case, the offending repository name was a misspelled form of GPT-J’s developer, EleutherAI.
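One practical takeaway from the PoisonGPT demonstration is to verify a model’s provenance before loading weights from a public hub. The sketch below is a hypothetical defensive check, not anything Mithril Security published: it uses the real huggingface_hub library to exact-match the publisher name (so a lookalike such as “EleuterAI” is rejected) and to fetch the commit hash you would then pin when downloading the model.

```python
from huggingface_hub import model_info

# Allow-list of publishers we trust; anything else is rejected outright.
TRUSTED_OWNERS = {"EleutherAI"}

def verify_model_source(repo_id: str, revision: str = "main") -> str:
    """Check the publisher name exactly and return the commit hash to pin."""
    owner = repo_id.split("/")[0]
    # Exact string match: a typosquat like "EleuterAI/gpt-j-6b" fails here.
    if owner not in TRUSTED_OWNERS:
        raise ValueError(f"Untrusted model publisher: {owner!r}")
    info = model_info(repo_id, revision=revision)
    return info.sha  # pin this hash, e.g. from_pretrained(..., revision=sha)

sha = verify_model_source("EleutherAI/gpt-j-6b")
print(f"Vetted commit to pin: {sha}")
```

Pinning an exact commit hash also guards against a trusted repository being silently updated with tampered weights after it was first vetted.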

In conclusion, the proliferation of generative AI tools like WormGPT gives cybersecurity fresh cause for concern. With these resources, fraudsters can easily automate the production of convincing phishing emails and launch targeted attacks. The broad use of generative AI, with its increasingly murky ethical bounds, presents formidable obstacles in the fight against cybercrime. Researchers, organizations, and security specialists must remain vigilant and develop effective countermeasures to protect individuals and enterprises in this fast-moving environment.
