Hackers from Russia, North Korea, and China Found Misusing AI Tool for Cyberattacks

OpenAI has taken action against three distinct clusters of cybercriminals who were exploiting its ChatGPT AI tool to facilitate malware development and other malicious activities. The company revealed that it disrupted these operations, which involved actors from Russia, North Korea, and China, who were using the AI chatbot to refine and develop malicious tools such as remote access trojans (RATs) and credential stealers.
The Russian-speaking group, in particular, was found to be using multiple ChatGPT accounts to prototype and troubleshoot components of malware, including code for obfuscation and data exfiltration via Telegram bots. While OpenAI’s models refused direct requests to generate malicious content, the attackers circumvented these safeguards by creating modular code that could be assembled into malicious workflows. OpenAI noted that these actors were technically competent but used a combination of high- and low-sophistication requests, including iterative debugging and automation of routine tasks like mass password generation and scripted job applications.
The North Korean cluster was linked to a campaign targeting South Korean diplomatic missions, using ChatGPT to develop malware and command-and-control (C2) infrastructure. The group was found to be working on macOS Finder extensions, configuring Windows Server VPNs, and converting Chrome extensions for Safari. They also used the AI tool to draft phishing emails, experiment with cloud services, and explore techniques such as DLL loading, in-memory execution, and credential theft.
Meanwhile, a Chinese group associated with the UNK_DropPitch hacking cluster (also known as UTA0388) was found to be using the AI tool to generate phishing content in English, Chinese, and Japanese. The group targeted investment firms, particularly those involved in the Taiwanese semiconductor industry, and used ChatGPT to assist in the development of a backdoor known as HealthKick (or GOVERSHELL). The actors also used the tool to search for information related to installing open-source tools like nuclei and fscan, which are commonly used in penetration testing.
In addition to these cyber threats, OpenAI also blocked accounts involved in scams and influence operations. For instance, suspected Chinese accounts asked ChatGPT to identify organizers of a petition in Mongolia and funding sources for an X account that criticized the Chinese government. OpenAI clarified that its models only returned publicly available information in these cases, without including any sensitive data.
One of the more intriguing findings was the use of ChatGPT by a China-linked influence network to request advice on social media growth strategies, including how to start a TikTok challenge and generate content around the #MyImmigrantStory hashtag. The group asked the AI to ideate, generate a transcript for a TikTok post, and provide recommendations for background music and images to accompany the content.
OpenAI also highlighted that threat actors were adapting their tactics to avoid detection, such as removing em-dashes (long dashes) from AI-generated text, a feature that had been discussed in online forums as a potential indicator of AI use. This suggests that cybercriminals are becoming increasingly aware of the digital footprints left by AI tools.
The findings come at a time when rival AI developers are also advancing tools for AI safety research. Anthropic, for instance, recently released an open-source auditing tool called Petri, designed to test AI systems for harmful behaviors such as deception, sycophancy, and encouragement of user delusion. This tool automates the testing of AI models through simulated user interactions, allowing researchers to quickly identify and analyze risky behaviors.
As AI continues to evolve, so too do the methods used by cybercriminals to exploit these technologies. OpenAI’s actions underscore the growing need for vigilance in the AI space, as well as the importance of ongoing research into AI safety and ethical use.




