OpenAI Disrupts Over 20 Worldwide Cybercrime and Disinformation Campaigns

OpenAI said on Wednesday that, since the beginning of the year, it has disrupted more than 20 operations and deceptive networks worldwide that attempted to misuse its platform for malicious purposes.

These operations included debugging malware, writing articles for websites, generating biographies for social media accounts, and creating AI-generated profile pictures for fake accounts.

“Threat actors are constantly evolving and experimenting with our models, but so far we have not observed any significant breakthroughs in their ability to create new malware or gain massive audiences,” the artificial intelligence (AI) company stated.

OpenAI also said it disrupted activity that generated social media content about elections in the U.S., Rwanda, India, and the European Union, although none of these operations attracted viral engagement or sustained audiences.

One of the disrupted operations was run by an Israeli commercial company called STOIC (also tracked as Zero Zeno), which generated social media comments about the Indian elections, as Meta and OpenAI previously reported in May.

Some of the cyber operations identified by OpenAI include:

  • SweetSpecter, a suspected China-based adversary that used OpenAI's services for reconnaissance, vulnerability research, and scripting support, and that also unsuccessfully spear-phished OpenAI employees.
  • Cyber Av3ngers, a group linked to the Iranian Islamic Revolutionary Guard Corps (IRGC) that researched programmable logic controllers using OpenAI models.
  • Storm-0817, an Iranian threat actor that used OpenAI models to debug Android malware designed to harvest sensitive information and to build tooling for scraping Instagram profiles.

Additionally, OpenAI blocked several clusters of accounts engaged in influence operations, including the A2Z and Stop News networks, which generated content for websites and social media platforms.

OpenAI researchers noted that these networks relied on AI-generated images and articles to attract attention, with Stop News being particularly prolific in its use of imagery.

Two other networks, Bet Bot and Corrupt Comment, were also identified: Bet Bot generated conversations that steered users toward gambling sites, while Corrupt Comment manufactured fake comments for posting on social media.

Last month, OpenAI banned accounts associated with an Iranian influence operation called Storm-2035, which was using its models to generate content related to the U.S. presidential election.

OpenAI researchers emphasized that threat actors typically used its models in an intermediate phase of activity: after acquiring basic tools such as internet access and social media accounts, but before deploying finished products such as malware or social media posts.

In a recent report, cybersecurity company Sophos warned that generative AI could be exploited to spread tailored misinformation through targeted emails and other means.

Researchers at Sophos showed that AI models could be used to generate political campaign websites, fabricated personas, and targeted email messages, allowing misinformation to be produced and distributed at scale.

They also warned of the potential for such microtargeted misinformation to sway people's opinions and beliefs.
