Researchers at security firm SentinelOne have discovered a large-scale spam campaign that used an OpenAI chatbot to generate unique messages, evading spam filters and flooding more than 80,000 websites over a period of four months. Once again, criminals are exploiting AI for illicit profit.
Ars Technica reports that SentinelOne's SentinelLabs researchers have revealed that spammers abused an OpenAI chatbot to launch a massive spam campaign targeting more than 80,000 websites. The findings, published in a blog post on Wednesday, show that the same capabilities that make large language models (LLMs) valuable for legitimate purposes can be turned to malicious activity with equal ease.
The spam campaign, orchestrated by a framework called AkiraBot, aimed to promote dubious search engine optimization (SEO) services to small and medium-sized websites. By leveraging the OpenAI chat API tied to the GPT-4o-mini model, AkiraBot generated unique messages tailored to each targeted website, effectively evading spam-detection filters that generally block identical content sent in bulk.
To achieve this, AkiraBot assigned the OpenAI chat API the role of a “helpful assistant that generates marketing messages.” It then submitted a templated prompt, instructing the LLM to replace placeholder variables with the target site's name at runtime. As a result, each message body included the name of the recipient's website and a concise description of its services, creating the illusion of a hand-curated message.
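The mechanism described above is ordinary prompt templating: a generic prompt with a placeholder is filled in with each target site's name before being sent to the chat API, so every request (and therefore every generated reply) is unique. A minimal sketch of that general pattern, with illustrative names that are assumptions rather than AkiraBot's actual code, might look like this:

```python
# Sketch of the prompt-templating pattern described in the report.
# PROMPT_TEMPLATE and build_chat_messages are illustrative assumptions,
# not AkiraBot's actual code.

PROMPT_TEMPLATE = (
    "Write a short marketing message for the website {site_name}, "
    "mentioning the site by name and briefly describing its services."
)

def build_chat_messages(site_name: str) -> list[dict]:
    """Substitute the target site's name into the template at runtime,
    producing a unique prompt per site."""
    return [
        {"role": "system",
         "content": "You are a helpful assistant that generates marketing messages."},
        {"role": "user",
         "content": PROMPT_TEMPLATE.format(site_name=site_name)},
    ]

# Each call yields a distinct user prompt, so the model's replies differ
# per site even though the template is fixed.
messages = build_chat_messages("example.com")
```

Because each generated message body differs, filters that fingerprint identical bulk content have nothing constant to match on, which is exactly the evasion SentinelLabs describes.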
SentinelLabs researchers Alex Delamotte and Jim Walter emphasized the emerging challenges AI poses for defending against spam attacks. They noted that the rotating set of domains used to promote the SEO offers was the easiest indicator to block, since the spam message content no longer followed a consistent pattern as in previous campaigns.
The campaign's scale was revealed through log files AkiraBot left on a server, which tracked successes and failures. The data showed that unique messages were successfully delivered to more than 80,000 websites between September 2024 and January 2025. By contrast, messages aimed at roughly 11,000 domains failed.
OpenAI acknowledged the researchers' findings and reiterated that such use of its chatbots violates its terms of service. The company revoked the spammers' account after receiving SentinelLabs' disclosure. However, the fact that the activity went unnoticed for four months highlights the reactive nature of enforcement rather than proactive measures to prevent abuse.
Read more in Ars Technica here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.