OpenAI banned North Korean hackers from using ChatGPT

AI models can be dangerous in the hands of hackers




OpenAI closed ChatGPT accounts used by North Korean hackers

According to a new report issued by OpenAI, the AI developer banned several ChatGPT accounts linked to a North Korean threat group that was using the tool with malicious intent.

We banned accounts demonstrating activity potentially associated with publicly reported Democratic People’s Republic of Korea (DPRK)-affiliated threat actors. Some of these accounts engaged in activity involving TTPs consistent with a threat group known as VELVET CHOLLIMA (AKA Kimsuky, Emerald Sleet), while other accounts were potentially related to an actor that was assessed by a credible source to be linked to STARDUST CHOLLIMA (AKA APT38, Sapphire Sleet). We detected these accounts following a tip from a trusted industry partner.

The target accounts were reportedly identified thanks to information provided by a partner in the tech industry. The North Korean group was apparently leveraging ChatGPT to gather information on cryptocurrency-related topics, an area often tied to North Korean state-sponsored hacking groups.

Source: OpenAI

Additionally, the OpenAI report states that the wrongdoers were using ChatGPT as a coding assistant, seeking help with everything from debugging to developing open-source tools. For example, they asked for guidance on how to use open-source Remote Administration Tools (RATs) and even sought assistance in refining publicly available security tools and code that could be repurposed for Remote Desktop Protocol (RDP) brute force attacks.

OpenAI’s threat analysts discovered something interesting during their investigation: while debugging certain techniques, such as auto-start extensibility point (ASEP) locations and macOS attack methods, the North Korean hackers revealed staging URLs for malicious software that were previously unknown to security companies. OpenAI submitted these URLs to online scanning services, so they can now be detected, helping prevent possible attacks.
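OpenAI hasn’t said which scanning services it used, but as a rough illustration of how such a submission works, here is a minimal sketch of how VirusTotal’s v3 API identifies a URL: the unpadded base64url encoding of the URL string becomes its report ID. The staging URL below is a made-up placeholder, not one of the URLs from the report:

```python
import base64

def vt_url_id(url: str) -> str:
    """VirusTotal v3 identifies a URL by the unpadded
    base64url encoding of the URL string."""
    return base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")

# Hypothetical staging URL, for illustration only.
url = "http://example.com/stage/payload.bin"

# Fetching this endpoint with an API key would return the scan report.
report_endpoint = f"https://www.virustotal.com/api/v3/urls/{vt_url_id(url)}"
```

Once a URL has been submitted this way, any defender querying the same identifier sees the shared verdict, which is how a single tip can protect many targets.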

The North Korean hackers are probably not limited to ChatGPT, either. Plenty of other AI models go untracked, and that could have much deeper implications. There’s a cyber war out there, and AI tools can serve both sides; we can only hope ours wins.

We’ve learned about this story from Bleeping Computer.

More about the topics: ChatGPT, Cybersecurity, OpenAI
